By Matty Stratton

6 min read

Apr 01, 2026

PostgreSQL

Table of contents

01 What MongoDB is actually good at
02 What time-series data actually looks like
03 Where MongoDB struggles with this workload
04 Why moving to vanilla Postgres isn't automatically the fix
05 The decision framework
06 What to do next

Document Databases: Be Honest


MongoDB gets a bad reputation in certain engineering circles that it doesn't entirely deserve. It ships fast. Schema flexibility is real. The developer experience for document-shaped data is good. A lot of teams made a reasonable call when they chose it.

But there's a version of this story that ends badly, and it follows a recognizable pattern. The team picks MongoDB for a new system. The system works. Then the data starts looking less like documents and more like a stream of timestamped events. Queries start filtering by time range. Write volume climbs. Performance degrades in ways that feel familiar if you've read about this problem, and deeply confusing if you haven't.

This post isn't here to relitigate the MongoDB decision. It's here to help you figure out whether the pain you're feeling is a MongoDB problem, a document database problem, or a workload problem that would follow you to Postgres.

The answer matters because the fix is different in each case.

What MongoDB is actually good at

Flexible schema for variable data that's actually variable. Product catalogs where every SKU has different attributes. User profiles where fields vary by account type. Content management where article structure differs by category. These are real document shapes, and MongoDB handles them without the ceremony Postgres requires.

Rapid iteration without migration overhead. Early-stage products change their data model constantly. In Postgres, every schema change is an ALTER TABLE. In MongoDB, you just write different fields. For teams that are still figuring out the shape of their data, this is a real advantage.

Nested and hierarchical data. Some data is naturally a tree. A purchase order with line items with sub-components. A configuration object with nested sections. Postgres can model this with JSONB, but MongoDB's native document model fits it more naturally and queries it more cleanly.

Horizontal scaling for document reads. MongoDB's sharding model was designed for document workloads. For read-heavy document access at scale, it's a mature and well-understood architecture.

These aren't consolation prizes. They're real reasons MongoDB is the right choice for a lot of workloads.

The trouble starts when the data changes shape.

What time-series data actually looks like

Time-series data has a specific shape, and it's not a document shape. Every row is a measurement. It has a timestamp, a source identifier, and a value or set of values. The schema doesn't vary between rows. There's nothing hierarchical about it. The document model isn't adding anything.

What time-series data has instead: enormous volume, strict ordering requirements, queries that almost always filter by time range, and retention policies that drop entire time windows at once.

A wind turbine sensor reporting every five seconds doesn't produce documents. It produces a flat stream of readings: timestamp, sensor ID, RPM, temperature, vibration. A financial trade feed isn't a set of documents. It's a sequence of immutable events. An APM platform collecting metrics from a distributed system is generating hundreds of thousands of measurements per second, all with the same shape.

The test is simple. Look at your most-written collection. Does each document have a different structure? Or does every document look essentially the same, with a timestamp and some measurements?

If it's the latter, you're storing time-series data in a document database, and the document model is providing zero value while the storage engine works against you.
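The shape test above is mechanical enough to script. Here's a minimal sketch: sample documents from your hottest collection and check whether they all share the same flat field set and carry a timestamp. The field names (`ts`, `sensor_id`, and so on) are hypothetical, not anything from a real schema.

```python
# Sketch: does a sample of documents look like time-series data
# wearing a document costume? Field names are illustrative.

def looks_like_time_series(docs, timestamp_field="ts"):
    """True if every sampled document has the same field set
    and that set includes a timestamp field."""
    if not docs:
        return False
    first_keys = set(docs[0])
    same_shape = all(set(d) == first_keys for d in docs)
    return same_shape and timestamp_field in first_keys

# A sensor stream: identical shape, timestamped -> time-series
readings = [
    {"ts": 1712000000, "sensor_id": "t-7", "rpm": 1450, "temp_c": 61.2},
    {"ts": 1712000005, "sensor_id": "t-7", "rpm": 1452, "temp_c": 61.4},
]

# A product catalog: every SKU different -> genuinely documents
catalog = [
    {"sku": "A1", "color": "red", "sizes": ["S", "M"]},
    {"sku": "B9", "voltage": 230, "plug": "type-c"},
]
```

In practice you'd sample a few hundred documents rather than two, but the verdict rarely changes: time-series collections are monotonously uniform.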

Where MongoDB struggles with this workload

WiredTiger (MongoDB's default storage engine) uses a B-tree structure optimized for a workload that includes updates to existing documents. For high-frequency append-only writes, it faces a fundamental mismatch. Consider a single sensor reading: one document insert triggers a write to the primary collection, a write to the oplog, and a separate B-tree update for every index on that collection. Three indexes means five writes for one data point. At 10,000 inserts per second, that's 50,000 storage operations per second before you've run a single query. The engine was designed for mixed read-write workloads with in-place updates, not an endless append stream where no document is ever modified after creation.
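The arithmetic above is worth making explicit, because it's the whole argument in one multiplication. A back-of-envelope sketch, using the accounting from the paragraph (collection write, oplog write, one B-tree update per index):

```python
# Write amplification for a single insert, per the accounting above:
# 1 collection write + 1 oplog write + 1 B-tree update per index.

def storage_ops_per_second(inserts_per_sec: int, secondary_indexes: int) -> int:
    writes_per_insert = 1 + 1 + secondary_indexes
    return inserts_per_sec * writes_per_insert

# 10,000 inserts/sec with 3 indexes -> 50,000 storage ops/sec
print(storage_ops_per_second(10_000, 3))  # 50000
```

This is deliberately simplified (it ignores journaling, checkpoints, and page splits, all of which add more work, not less), but it shows why the cost scales with index count, not just insert rate.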

MongoDB has no native time-based partitioning. Postgres has declarative range partitioning. TimescaleDB automates it entirely with hypertables. MongoDB has no equivalent primitive. Teams end up implementing time-based collection bucketing manually: separate collections per day or week, application-level routing logic, custom cleanup scripts. It works, but it's the same operational burden as manual Postgres partitioning, without the tooling ecosystem that exists on the Postgres side.
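The application-level routing logic described above tends to look roughly like this sketch: a function mapping each timestamp to a per-day collection, plus a retention job that computes which collections have aged out. The `readings_YYYYMMDD` naming and function names are illustrative, not a MongoDB API.

```python
# Sketch of manual time-based collection bucketing: route each write
# to a daily collection; drop whole collections when they expire.
from datetime import datetime, timedelta, timezone

def collection_for(ts: datetime, prefix: str = "readings") -> str:
    """Map a timestamp to its daily bucket collection name."""
    return f"{prefix}_{ts:%Y%m%d}"

def expired_buckets(existing: list[str], now: datetime,
                    retention_days: int, prefix: str = "readings") -> list[str]:
    """Collections whose whole day falls outside the retention window."""
    cutoff = collection_for(now - timedelta(days=retention_days), prefix)
    return sorted(c for c in existing if c < cutoff)
```

Every team that builds this also ends up owning the edge cases: queries that span bucket boundaries, clock skew between writers, and a cleanup job that must never race an in-flight insert. That's the operational burden the paragraph is pointing at.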

MongoDB's aggregation pipeline is expressive. But for time-series workloads, the queries that matter are time-range aggregations: hourly averages, daily maximums, week-over-week comparisons. These queries scan large volumes of documents and aggregate across fields. Without columnar storage and purpose-built time-series compression, performance degrades with data volume in the same way it does in vanilla Postgres.
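To make the query shape concrete, here's an hourly-average aggregation sketched in plain Python over a flat stream of readings. The point isn't the code, which is trivial; it's that the computation must touch every reading in the time range, which is exactly the cost that columnar storage and time-series compression exist to shrink.

```python
# The query shape that matters: bucket a flat stream by hour, average
# each bucket. Every row in the range gets scanned.
from collections import defaultdict

def hourly_averages(readings):
    """readings: iterable of (epoch_seconds, value) -> {hour_start: avg}."""
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[ts - ts % 3600].append(value)
    return {hour: sum(vals) / len(vals) for hour, vals in buckets.items()}
```

A row-oriented B-tree engine answers this by reading whole documents to extract one field each; a columnar layout reads only the timestamp and value columns, which is why the same query diverges in cost as volume grows.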

MongoDB did add a native time-series collection type in 5.0. It's a real improvement for simple append-only use cases. But it doesn't support secondary indexes the same way regular collections do, restricts certain aggregation stages and update operations, and is still relatively new compared to the Postgres ecosystem. Worth knowing about. Not a full answer.

Why moving to vanilla Postgres isn't automatically the fix

This is the section most competitive content skips entirely. If you're evaluating a migration, you deserve the full picture.

If the workload is continuous high-frequency time-series ingestion with long retention and operational query requirements, vanilla Postgres has its own version of this problem. The MVCC overhead, write amplification, autovacuum contention, and index maintenance costs that create the Optimization Treadmill exist in Postgres too. The storage model is different from MongoDB's, but the outcome at scale is the same: performance degrades with data volume, maintenance overhead accumulates, and each optimization cycle buys time without changing the trajectory.

Moving from MongoDB to vanilla Postgres solves the schema flexibility problem (you probably don't need it for this workload anyway). You get a mature partitioning ecosystem, a better query planner, and a richer extension ecosystem. These are real improvements.

It doesn't solve the core time-series storage problem, because that problem lives in the storage model, not the database brand.

The question isn't MongoDB vs. Postgres. It's document store vs. purpose-built time-series storage. That's the actual axis the decision should sit on.

The decision framework

Your data is actually documents. Variable schema, nested structures, hierarchical relationships, read-heavy access patterns. MongoDB is the right tool. The pain you're feeling is probably a schema design or indexing problem, not a fundamental architectural mismatch. Fix the schema.

Your data is time-series but volume is modest. Sub-10K inserts per second, retention under 90 days, no hard operational latency requirements on the full retention window. Vanilla Postgres with good partitioning and indexing handles this fine. The Optimization Treadmill exists, but the ceiling is far enough away that standard tuning keeps you ahead of it. Move to Postgres, implement partitioning early, and monitor the warning signs.

Your data is time-series at sustained high volume. Continuous ingestion, long retention, operational query requirements, growing data volume. This is the workload that breaks both MongoDB and vanilla Postgres through the same class of mechanisms. Purpose-built time-series storage on Postgres (same SQL, same wire protocol, same tooling) is the right answer. Migration from MongoDB to TimescaleDB follows a well-documented path: you keep everything Postgres-compatible and gain the storage architecture that matches the workload.
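The three branches above condense into a small decision function. The thresholds are the ones named in this post (10K inserts per second, 90-day retention) and the returned strings are illustrative labels for the three paths, not product guidance:

```python
# The decision framework above as a sketch. Thresholds and labels
# are taken from this post and are illustrative.

def storage_recommendation(schema_is_variable: bool,
                           inserts_per_sec: int,
                           retention_days: int) -> str:
    if schema_is_variable:
        return "document store: fix schema and indexing, keep MongoDB"
    if inserts_per_sec < 10_000 and retention_days <= 90:
        return "vanilla Postgres with early partitioning"
    return "purpose-built time-series storage on Postgres"
```

Real decisions have more inputs than three (query latency targets, team expertise, existing tooling), but if you can't place your workload on one of these branches, that ambiguity is itself a signal to measure before migrating.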

What to do next

If you're reading this, MongoDB didn't fail you. Your workload evolved past what document storage was designed for. That's a different thing.

Most database choices are right at the time they're made and wrong eighteen months later when the system looks nothing like it did at launch. Sensor data that started as a feature became the core product. The document store that handled early prototyping became the production system for a time-series pipeline.

The question now is whether the fix is tuning, migration, or architecture. The framework above gives you a clear read on which one applies. If it's architecture, the good news is that moving from MongoDB to a Postgres-compatible time-series database is less disruptive than it sounds. Your application SQL stays the same. Your tooling stays the same. The storage engine underneath is the thing that changes.

That's the right scope for the change. Not the whole stack. Just the part that was always wrong for this workload.

Read the full technical breakdown of why vanilla Postgres hits these limits, or start a Tiger Cloud trial and see how TimescaleDB handles your workload directly.
