---
title: "Postgres vs. Qdrant: Why Postgres Wins for AI and Vector Workloads"
published: 2025-04-29T11:00:43.000-04:00
updated: 2025-04-30T12:10:04.000-04:00
excerpt: "Read how Postgres delivers high-performance vector search without splitting your stack—with the benchmark results to prove it."
tags: AI, Announcements & Releases, PostgreSQL, Blog
authors: James Blackwood-Sewell, Noah Hein
---

> **TimescaleDB is now Tiger Data.**

Today marks **day 2 of Timescale Launch Week**, and we’re bringing benchmarks.

There’s a belief in the AI infrastructure world that you need to abandon general-purpose databases to get great performance on vector workloads. The logic goes: Postgres is great for transactions, but when you need high-performance vector search, it’s time to bring in a specialized vector database like Qdrant.

That logic doesn’t hold—just like it didn’t when we benchmarked [pgvector vs. Pinecone](https://www.timescale.com/blog/pgvector-vs-pinecone).

Like everything in Launch Week, this is about **speed without sacrifice**. And in this case, Postgres delivers both.

We’re releasing a new benchmark that challenges the assumption that you can only scale with a specialized vector database. We compared Postgres (with [pgvector](https://github.com/pgvector/pgvector) and [pgvectorscale](https://github.com/timescale/pgvectorscale)) to Qdrant on a massive dataset of **50 million embeddings.** The results show that **Postgres not only holds its own but also delivers standout throughput and latency, even at production scale**.

This post summarizes the key takeaways, but it’s just the beginning. [Check out the full benchmark blog post](https://www.timescale.com/blog/pgvector-vs-qdrant) on query performance, developer experience, and operational experience.

Let’s dig into what we found and what it means for teams building production AI applications.

## The Benchmark: Postgres vs. Qdrant on 50M Embeddings

We tested Postgres and Qdrant on a level playing field:

-   **50 million embeddings**, each with 768 dimensions
-   **ANN-benchmarks**, the industry-standard benchmarking tool
-   Focused on **approximate nearest neighbor (ANN) search**, no filtering
-   All benchmarks run on identical AWS hardware

The takeaway? Postgres with pgvector and pgvectorscale showed significantly higher throughput while maintaining sub-100 ms latencies. Qdrant performed strongly on tail latencies and index build speed, but Postgres pulled ahead where it matters most for teams scaling to production workloads.

![Vector search query throughput at 99% recall (bar graph). Postgres with pgvector and pgvectorscale processes 471.57 queries per second vs. Qdrant's 41.47.](https://storage.ghost.io/c/6b/cb/6bcb39cf-9421-4bd1-9c9d-fa7b6755ba0e/content/images/2025/04/Postgres-vs.-Qdrant-Why-Postgres-Wins-for-AI-and-Vector-Workloads.png)

For the complete results, including detailed performance metrics, graphs, and testing configurations, [read the full benchmark blog post](https://www.timescale.com/blog/pgvector-vs-qdrant).

## Why It Matters: AI Performance Without the Rewrite

These results aren’t just a technical curiosity. They have **real implications** for how you architect your AI stack:

-   **Production-grade latency:** Postgres with pgvectorscale delivers the sub-100 ms p99 latencies needed to power real-time, responsive AI applications.
-   **Higher concurrency**: Postgres delivered significantly higher throughput, meaning you can support more simultaneous users without scaling out as aggressively.
-   **Lower complexity**: You don't need to manage and integrate a separate, specialized vector database.
-   **Operational familiarity**: You leverage the reliability, tooling, and operational practices you already have with Postgres.
-   **SQL-first development**: You can filter, join, and integrate vector search naturally with relational data, without learning new APIs or query languages.
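To make that last point concrete, here's what combining vector similarity with an ordinary relational filter looks like. This is a sketch assuming a hypothetical `documents` table with a pgvector `embedding` column (a toy 3-dimensional vector stands in for a real 768-dimensional embedding):

```sql
-- Hypothetical schema: documents(id, title, published_at, embedding vector(3))
-- Find the 10 most similar documents to a query embedding, restricted to
-- recent rows -- all in one SQL statement, no second database involved.
SELECT d.id,
       d.title,
       d.embedding <=> '[0.01, 0.02, 0.03]'::vector AS distance  -- pgvector cosine distance operator
FROM   documents d
WHERE  d.published_at > now() - interval '30 days'
ORDER  BY distance
LIMIT  10;
```

With a specialized vector database, the date filter would typically mean either pre-filtering in your application or learning that system's metadata-filtering API; here it's just a `WHERE` clause.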

Postgres with pgvector and pgvectorscale gives you the performance of a specialized vector database _without_ giving up the ecosystem, tooling, and developer experience that make Postgres the world’s most popular database.

You don’t need to split your stack to do vector search.

## What Makes It Work: Pgvectorscale and StreamingDiskANN

How can Postgres compete with (and outperform) purpose-built vector databases?

The answer lies in [pgvectorscale](https://github.com/timescale/pgvectorscale) (part of the [pgai](https://github.com/timescale/pgai) family), which implements the StreamingDiskANN index (a disk-based ANN algorithm built for scale) for pgvector. Combined with Statistical Binary Quantization (SBQ), [it balances memory usage and performance](https://www.timescale.com/blog/how-we-made-postgresql-as-fast-as-pinecone-for-vector-data) better than traditional in-memory HNSW (hierarchical navigable small world) implementations.

That means:

-   You can run large-scale vector search on standard cloud hardware.
-   You don’t need massive memory footprints or expensive GPU-accelerated nodes.
-   Performance holds steady even as your dataset grows to tens or hundreds of millions of vectors.

All while staying inside Postgres.
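As a rough sketch, turning this on for an existing pgvector column takes two statements (table and column names here are illustrative, not from the benchmark setup):

```sql
-- Enable pgvectorscale; CASCADE also installs pgvector if needed.
CREATE EXTENSION IF NOT EXISTS vectorscale CASCADE;

-- Build a StreamingDiskANN index on the embedding column.
-- The index's default storage layout applies SBQ compression,
-- which is what keeps the memory footprint down at scale.
CREATE INDEX documents_embedding_idx
    ON documents
 USING diskann (embedding vector_cosine_ops);
```

After that, ordinary `ORDER BY embedding <=> query LIMIT k` queries use the index automatically; see the pgvectorscale repository for tuning parameters.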

## When to Choose Postgres, and When Not To

To be clear: Qdrant is a capable system. It has faster index builds and lower tail latencies. It’s a strong choice if you’re not already using Postgres, or for specific use cases that need native scale-out and purpose-built vector semantics.

However, for many teams—especially those already invested in Postgres—**it makes no sense to introduce a new database** just to support vector search.

If you want high recall, high throughput, and tight integration with your existing stack, Postgres is more than enough.

## Want to Try It?

Pgvector and pgvectorscale are open source and available today:

-   [pgvector GitHub](https://github.com/pgvector/pgvector)
-   [pgvectorscale GitHub](https://github.com/timescale/pgvectorscale)
-   Or save time and access both by creating a [free Timescale Cloud account](https://timescale.com/signup)
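Once the extensions are available on your Postgres instance, a minimal quickstart is just a couple of statements (the `items` table below is an illustrative example, sized to match the benchmark's 768-dimensional embeddings):

```sql
-- Install both extensions into the current database.
CREATE EXTENSION IF NOT EXISTS vectorscale CASCADE;

-- A table with a 768-dimensional embedding column.
CREATE TABLE items (
    id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    contents  text,
    embedding vector(768)
);
```

From there, insert your embeddings and add a `diskann` index when you're ready to query at scale.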

Vector search in Postgres isn’t a hack or a workaround. It’s fast, it scales, and it works. If you’re building AI applications in 2025, you don’t have to sacrifice your favorite database to move fast.

## Up Next at Timescale Launch Week

_That’s it for day 2! Tomorrow, we’re taking Postgres even further: Learn how to_ [_stream external S3 data into Postgres with livesync for S3_](https://www.timescale.com/blog/connecting-s3-and-postgres-automatic-synchronization-without-etl-pipelines) _and work with S3 data in place using the pgai Vectorizer. Two powerful ways to seamlessly integrate external data from S3 directly into your Postgres workflows!_