---
title: "The Best Time to Migrate Was at 10M Rows. The Second Best Time Is Now."
published: 2026-04-08T13:45:12.000-04:00
updated: 2026-04-08T13:45:12.000-04:00
excerpt: "Migration cost scales with data volume. The optimization tax you pay while waiting scales faster."
tags: PostgreSQL, PostgreSQL Performance
authors: Matty Stratton
---

> **TimescaleDB is now Tiger Data.**

There's a pattern that plays out across almost every team running high-volume append workloads on vanilla Postgres. I've watched it happen enough times that I can practically set a timer.

At 10M rows, everything is fine. Queries are fast. The team is shipping features. Nobody is thinking about the database.

At 50M, queries start getting slow. Someone opens a Jira ticket about dashboard latency. The fix is usually an index or two. Takes an afternoon.

At 100M, someone proposes partitioning. Or read replicas. Or bumping the instance size. These are reasonable ideas. They work for a while.

At 500M, the team is spending one to two days per sprint on database performance work. Not building product. Tuning `autovacuum_vacuum_cost_delay` and rewriting queries and having meetings about whether to re-partition.
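The tuning itself looks innocuous in isolation. A sketch of the kind of per-table knob-turning that fills those sprint days (table name and values are illustrative, not recommendations):

```sql
-- Illustrative per-table autovacuum tuning (hypothetical table, hypothetical values).
-- Each knob buys a little headroom; none of them addresses the data volume itself.
ALTER TABLE sensor_data SET (
    autovacuum_vacuum_cost_delay   = 2,    -- let vacuum run more aggressively
    autovacuum_vacuum_scale_factor = 0.01  -- trigger vacuum at ~1% dead tuples
);
```

Any one of these is a ten-minute change. The tax is that you revisit them every time the table grows past the last set of assumptions.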

At every stage, migration to a purpose-built solution feels like it can wait. The current optimizations are working. The pain is manageable. Next quarter has fewer deadlines. (Next quarter never has fewer deadlines.)

Then the table hits a billion rows, and migration is now a project, not a task. What would have been a weekend of `CREATE TABLE ... USING hypertable` and a data backfill is now a phased migration plan with rollback strategies and a project manager.

This post makes the case for migrating earlier than feels necessary. Not because your current setup is broken, but because the cost of waiting (what I'll call the optimization tax) compounds in ways that aren't visible until you're deep in them.

## The migration cost curve

Let's put real numbers on this. The technical steps of a migration don't change much as data grows. What changes is how long they take and how many people you need in the room.

**At 10M rows:** Migration is essentially `CREATE TABLE`, convert to hypertable, backfill data. A single engineer, a weekend. Downtime measured in minutes if you use a blue-green approach. Risk is near zero because you can run both instances in parallel and compare results before cutting over.

```sql
-- This is the scary migration at 10M rows (blue-green: new table alongside the old)
CREATE TABLE sensor_data_new (LIKE sensor_data_old INCLUDING ALL);
SELECT create_hypertable('sensor_data_new', 'ts');

INSERT INTO sensor_data_new SELECT * FROM sensor_data_old;
-- Go get coffee. You'll be done before it's cool enough to drink.
```

**At 100M rows:** Backfill takes hours, not minutes. You need to plan for write continuity during migration. Indexes need rebuilding. Compression policies need configuring. Testing is more involved because edge cases in the data are now visible. One engineer, one to two weeks.

**At 500M+ rows:** Full migration project. Data transfer takes days. You need a parallel write strategy (dual-write or CDC). Query compatibility testing across the application layer. A rollback plan. Stakeholder communication. Two to four engineers, four to eight weeks.

The work itself doesn't get harder. There's just more of it, and the blast radius of a mistake gets larger. The migration at 10M rows and the migration at 500M rows are the same technical steps. The difference is entirely in scale and coordination overhead.

That's worth repeating: you're not doing something different at 500M rows. You're doing the same thing, slower, with more people watching.

## The optimization tax (the part people don't calculate)

Migration has a visible cost. You can put it on a roadmap. Estimate it. Schedule it. Argue about it in sprint planning.

Staying has an invisible cost. I call it the optimization tax: the cumulative engineering time, infrastructure spend, and incident burden you pay every month to keep vanilla Postgres performing on a workload it wasn't designed for. And unlike migration, the optimization tax never stops. It just goes up.

**Engineering time on optimization.** How many hours per sprint does your team spend on query tuning, partition management, autovacuum configuration, index strategy reviews? Track it for a month. I've seen teams burning 15-20% of their engineering capacity on database maintenance work that wouldn't exist on a purpose-built system. Most of them didn't realize it until they measured. (If you're wondering whether your team has [signs that tuning won't fix the problem](https://www.tigerdata.com/blog/six-signs-postgres-tuning-wont-fix-performance-problems), I wrote about that too.)

**Instance cost escalation.** The progression from `db.r6g.xlarge` to `db.r6g.4xlarge` to `db.r6g.8xlarge` happens gradually enough that nobody raises a flag. Each upgrade is individually justified. "We need more memory for the working set." "The CPU is pegged during dashboard queries." "Read replicas need a bigger instance too." The aggregate cost curve is something else entirely. I wrote a whole post about [why vertical scaling buys time you can't afford](https://www.tigerdata.com/blog/vertical-scaling-buying-time-you-cant-afford). The short version: each instance upgrade gets you less headroom than the last.

**Opportunity cost.** Every sprint hour spent on database maintenance is a sprint hour not spent on product features. This compounds. The team spending 15% of engineering time on database operations ships 15% fewer features than the team that doesn't. Over 12 months, that gap is visible to customers.

**Incident burden.** Slow query alerts at 2 AM. Autovacuum blocking production writes. Replication lag during write spikes. These aren't catastrophic. They're erosive. They train the team to accept degraded baseline performance as normal. "Oh, the dashboard is slow on Mondays because of the weekly aggregation job." That sentence should make you uncomfortable.

Add these up over 12 months. That's your optimization tax. Compare it to the migration cost at your current data volume. The math almost always favors migrating now over migrating later. And the longer you wait, the more lopsided that comparison gets.

## Why teams delay (and why those reasons stop holding)

I want to be fair about this. The reasons teams wait are legitimate. I've used most of them myself. But each one has a shelf life.

**"We don't have time right now."** This is the most common one, and it contains a cruel irony. Migration time increases with data volume. Delaying to find a better window means the work itself gets bigger. The window never gets better. The task gets worse. The team that "doesn't have time" for a weekend migration at 10M rows will somehow need to find time for an eight-week migration at 500M rows.

**"The current optimizations are working."** They're working now. Each optimization has a ceiling. When you hit it, you need the next one. The sequence is predictable: indexes, then partitioning, then read replicas, then instance upgrades, then custom vacuum tuning. Each step buys less time than the last. You're running up a down escalator.
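The partitioning step in that sequence is a good example of what "working" costs. In vanilla Postgres it means recurring DDL like this, forever (table name and dates are illustrative):

```sql
-- Manual range partitioning: new DDL every quarter, plus matching indexes,
-- plus remembering to do it before the quarter starts. (Names illustrative.)
CREATE TABLE sensor_data_2026_q2 PARTITION OF sensor_data
    FOR VALUES FROM ('2026-04-01') TO ('2026-07-01');
CREATE INDEX ON sensor_data_2026_q2 (ts);
```

Miss one quarter's DDL and inserts start failing. That's the escalator.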

**"Migration is risky."** This is the one that sounds the most reasonable and holds up the least. Migration from Postgres to TimescaleDB is lower risk than most database migrations because TimescaleDB is Postgres. Same SQL. Same wire protocol. Same drivers. Same `pg_dump`. Your application code changes are minimal. The risk profile is closer to "adding an extension" than "switching databases." You're not leaving Postgres. You're giving it better tools.

```sql
-- Your existing queries still work. This isn't a rewrite.
SELECT time_bucket('1 hour', ts) AS hour,
       avg(temperature),
       max(temperature)
FROM sensor_data
WHERE ts > now() - interval '7 days'
GROUP BY hour
ORDER BY hour;
```

**"We'd need to convince stakeholders."** Quantify your optimization tax. That's the whole pitch. "We stop spending X hours per month on database maintenance and reclaim that for product work." If your team is spending two days per sprint on database performance, that's roughly 20% of engineering capacity. Put a dollar figure on it. The conversation gets short.

## Why migration risk is lower than you think

This is the part where I'm supposed to list seven migration steps and make you feel calm about them. I'll skip the list and give you the summary instead: install the extension, create hypertables, backfill data, configure compression and retention, update connection strings, validate. The [migration guide](https://docs2.tigerdata.com/docs/migrate) walks through each one. The [live migration tool](https://www.tigerdata.com/blog/postgresql-migration-made-easier) handles the hard parts at scale.
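Condensed into SQL, that summary looks roughly like this. A sketch, not a tested runbook; the table and column names are illustrative:

```sql
-- Steps 1-3 of the summary above (table/column names are illustrative)
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- New table with the same schema as the old one, converted to a hypertable
CREATE TABLE sensor_data_new (LIKE sensor_data INCLUDING ALL);
SELECT create_hypertable('sensor_data_new', 'ts');

-- Backfill
INSERT INTO sensor_data_new SELECT * FROM sensor_data;

-- The remaining steps (compression, retention, connection strings, validation)
-- are configuration and verification, not schema surgery.
```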

The core point is this: the steps are the same whether you have 10M rows or 500M rows. What changes is the logistics around them. At 10M rows, the backfill is a single `INSERT ... SELECT` that finishes while you're refilling your coffee. At 500M rows, it's a parallel `COPY` job that runs for days and needs monitoring, dual-write strategies, and a rollback plan.

The steps don't scale. The logistics do. And that's exactly why doing it earlier is the move.

## The compound benefit of migrating early

Everything above has been about the cost of staying. But there's a positive version of this story too.

**You skip the treadmill entirely.** The team that migrates at 10M rows never learns what autovacuum tuning feels like at 500M rows. They never have the "should we add ClickHouse?" meeting. They never build the CDC pipeline. They never debug replication lag during write spikes. Those problems simply don't exist in their world. That's the real dividend.

**Compression from the start.** TimescaleDB's native compression typically achieves 90-95% compression ratios on time-series data. One of our customers, Latitude, saves $12,000 per month on database costs from compression alone. Your storage costs grow at 5-10% of the raw data rate instead of 100%. Over 12 months on a high-ingest workload, that's a line item your finance team will notice.
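Enabling that compression is a few statements, not a project. A sketch, assuming a `sensor_data` hypertable; the segment-by column is illustrative and workload-dependent:

```sql
-- Columnar compression on a hypertable; segment by the column you filter on
ALTER TABLE sensor_data SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',  -- illustrative choice
    timescaledb.compress_orderby   = 'ts DESC'
);

-- Compress chunks automatically once they're older than 7 days
SELECT add_compression_policy('sensor_data', INTERVAL '7 days');
```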

**Fast dashboards without the custom plumbing.** Continuous aggregates give you incrementally updating materialized views that combine precomputed rollups with the newest raw data. FlightAware went from 6.4-second query times to 30 milliseconds. Your dashboards are fast from day one, not "fast after we spent two weeks building and maintaining custom materialized view refresh jobs." Hypertables handle partitioning automatically in the background, so you also never write another `CREATE TABLE sensor_data_2026_q2 PARTITION OF ...` DDL statement. One less thing.
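A continuous aggregate for the hourly dashboard query earlier in this post would look something like this (names are illustrative):

```sql
-- Continuous aggregate over the sensor_data hypertable (names illustrative)
CREATE MATERIALIZED VIEW sensor_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', ts) AS hour,
       avg(temperature) AS avg_temp,
       max(temperature) AS max_temp
FROM sensor_data
GROUP BY hour;

-- Keep it fresh automatically; no custom refresh jobs to build or babysit
SELECT add_continuous_aggregate_policy('sensor_hourly',
    start_offset      => INTERVAL '3 hours',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '30 minutes');
```

The refresh policy is the part that replaces the custom plumbing: it incrementally updates only the buckets that changed, on a schedule you set once.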

That's the real cost of waiting. The migration effort is the obvious part. The intermediate suffering you accumulate between now and when you eventually migrate anyway? That's the part nobody budgets for.

## So, about that timing

The best time to migrate was when the table was small and the effort was trivial. If that window has passed, the second best time is now, before the data volume makes the migration larger and the optimization tax makes the ROI case embarrassingly obvious.

Every month of delay increases both costs: the migration gets bigger and the tax keeps compounding. The crossover point where "just optimize" costs more than "migrate and stop optimizing" is earlier than most teams think.

If you're reading this and nodding, you probably already know what you need to do. The question isn't whether to migrate. It's whether you do it this quarter while it's a task, or next year when it's a project.

[Get started with the migration guide →](https://docs2.tigerdata.com/docs/migrate)