7 comments

  • whakim 23 minutes ago
    Worth noting that the filtering implementation is quite restrictive if you want to avoid post-filtering: filters must be expressible as discrete smallints (ruling out continuous variables like timestamps or high cardinality filters like ids); filters must always be denormalized onto the table you're indexing (no filtering on attributes of parent documents, for example); and filters must be declared at index creation time (lots of time spent on expensive index builds if you want to add filters). Personally I would consider these caveats pretty big deal-breakers if the intent is scale and you do a lot of filtering.
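    For readers unfamiliar with the post-filtering problem this comment alludes to: if a filter is applied after the top-k vector search instead of inside the index, matching rows can be squeezed out of the candidate set entirely. A toy illustration (plain Python, no Postgres; the data and labels are made up):

    ```python
    # Toy illustration of post-filtering vs. index-level (pre-)filtering
    # in a top-k nearest-neighbor search. Items are (id, distance, label);
    # we want the 3 nearest items carrying label "a".

    items = [
        (1, 0.1, "b"), (2, 0.2, "b"), (3, 0.3, "b"),
        (4, 0.4, "a"), (5, 0.5, "a"), (6, 0.6, "a"),
    ]

    def post_filter(items, k, label):
        """Take top-k by distance first, then filter: may return fewer than k."""
        top_k = sorted(items, key=lambda t: t[1])[:k]
        return [t for t in top_k if t[2] == label]

    def pre_filter(items, k, label):
        """Filter during the search (what index-level filtering buys you)."""
        matching = [t for t in items if t[2] == label]
        return sorted(matching, key=lambda t: t[1])[:k]

    print(post_filter(items, 3, "a"))  # [] -- the 3 nearest items are all "b"
    print(pre_filter(items, 3, "a"))   # all three "a" items, nearest first
    ```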
  • ricw 8 hours ago
    I’ve been using this since early this year and it’s been great. It was what convinced me to just stick to Postgres rather than using a dedicated vector db.

    Only working with 100m or so vectors, but for that it does the job.

    • pqdbr 8 hours ago
      Are you using a dedicated pg instance for vectors, or do you keep all your data in a single pg instance (vector and non-vector)?
      • ComputerGuru 8 hours ago
        The biggest selling point of using Postgres over qdrant or whatever is that you can put all the data in the same db: use joins, CTEs, foreign keys and other constraints; get lower latency; eliminate what are effectively n+1 query patterns; and ensure data integrity.
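          As a sketch of the point above (hypothetical `documents` and `chunks` tables; `<=>` is pgvector's cosine-distance operator), a single query can combine the vector search with a join and a relational filter, instead of fetching ids from a vector db and looking up each row separately:

          ```python
          # Illustrative only: the SQL a combined vector + relational query might
          # use against hypothetical `documents`/`chunks` tables. Actually running
          # it requires a live Postgres with the pgvector extension installed.

          QUERY = """
          SELECT d.title, c.body, c.embedding <=> %(q)s::vector AS distance
          FROM chunks c
          JOIN documents d ON d.id = c.document_id  -- join instead of n+1 lookups
          WHERE d.tenant_id = %(tenant)s            -- relational filter, same query
          ORDER BY c.embedding <=> %(q)s::vector    -- pgvector cosine distance
          LIMIT 10;
          """

          def params(query_embedding, tenant_id):
              """Bind parameters for QUERY (psycopg-style named placeholders)."""
              return {"q": query_embedding, "tenant": tenant_id}
          ```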
        • dalberto 7 hours ago
          I generally agree that one database instance is ideal, but there are other reasons why Postgres everywhere is advantageous, even across multiple instances:

          - Expertise: it's just SQL for the most part
          - Ecosystem: same ORM, same connection pooler
          - Portability: all major clouds have managed Postgres

          I'd gladly take multiple Postgres instances even if I lose cross-database joins.

      • ricw 4 hours ago
        All in one of course. That’s the biggest advantage. And why postgres is great - it covers virtually all standard use cases.
    • esafak 8 hours ago
      What kind of performance do you observe with what setup?
      • ricw 4 hours ago
        Depends on the query, and I don't have exact numbers off the top of my head, but we're talking the low-100ms range for queries pgvector itself wasn't able to handle in a reasonable amount of time.
  • aunty_helen 7 hours ago
    Related discussion for pgvector perf: https://news.ycombinator.com/item?id=45798479
    • tacoooooooo 7 hours ago
      the main issue with pgvectorscale is that it's not available in RDS :(
      • mrinterweb 4 hours ago
        I'm considering hosting a separate pg db just to be able to access certain extensions. I am interested in this extension as well as https://wiki.postgresql.org/wiki/Incremental_View_Maintenanc... (also not available on RDS). Then use logical replication for the specific source data tables (I guess it would need to be DMS).
      • omg2864 6 hours ago
        Yes, RDS really seems to hold PG back on AWS, with all the interesting pg extensions getting released now (pg_lake). It's a shame I can't move to other PG vendors, because it's a pain in the ass to get all the privacy and legal docs in order.
        • calderwoodra 4 hours ago
          Yes, the InfoSec advantages of using RDS are very real, especially in B2B Enterprise SaaS.
  • jascha_eng 5 hours ago
    Combined with our other search extension for full text search these two extensions make postgres a really capable hybrid search engine: https://github.com/timescale/pg_textsearch
  • isoprophlex 7 hours ago
    The linked blogpost is an interesting read, too, comparing well-tuned pgvector to pinecone:

    https://www.tigerdata.com/blog/pgvector-vs-pinecone

  • dmarwicke 3 hours ago
    Does this actually fix metadata filtering during vector search? That's the thing that kills performance in pgvector. Weaviate had the same problem; ended up using qdrant instead.
  • mmmeff 7 hours ago
    This is still unsupported in RDS, right?