---
title: "Indexing Your Way into a Performance Bottleneck"
published: 2026-04-10T12:21:48.000-04:00
updated: 2026-04-10T12:21:48.000-04:00
excerpt: "Every index on a Postgres table multiplies your write I/O. Past 500M rows, that tax becomes a crisis. Learn to find zombie indexes and fix write amplification."
tags: PostgreSQL Performance, PostgreSQL Tips
authors: NanoHertz Solutions - Jake Hertz
---

> **TimescaleDB is now Tiger Data.**

Every developer knows the rush of fixing a slow query. You identify a sequential scan, add a B-tree index, and watch a three-minute execution time drop to five milliseconds. It feels like a win. But in high-ingestion environments, this win is often a high-interest loan. As your data volume crosses the 500M row mark, those same indexes that saved your read latency start to strangle your write throughput.

In this article, you will learn how to identify when your indexing strategy has turned into a performance liability. We will cover the mechanics of write amplification in B-tree structures and present a framework for detecting zombie indexes that consume resources without providing value. Finally, we will look at how specialized solutions like Tiger Data handle these scale requirements without the traditional indexing penalty.

## Why It Matters

When you operate a general-purpose database like PostgreSQL, every index you add creates a permanent tax on every insert. For a transactional database handling e-commerce checkouts, this is an acceptable tradeoff. For a system ingesting thousands of sensor readings or financial events per second, it becomes a crisis.

This tax manifests as ingestion lag. As the database spends more CPU and I/O cycles rebalancing B-tree structures, the front door for new data opens more slowly. You might see p95 write latency creep from 8ms to 45ms as your tables grow. Beyond latency, indexes consume massive amounts of storage. In time-series workloads, indexes often grow faster than the raw data itself because B-trees do not reclaim space efficiently for time-ordered entries. This [bloat](https://www.tigerdata.com/blog/preventing-silent-spiral-table-bloat) leads to a resource conflict in which autovacuum and index maintenance compete for the same disk I/O as your production queries.
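To see how much of your storage budget indexes already consume, you can compare heap size to total index size per table. This is a minimal sketch using only the standard statistics views; the tables it returns will be your own.

```sql
-- Compare heap size to total index size for the ten largest tables.
-- A ratio above 1.0 means indexes occupy more disk than the data itself,
-- a common pattern in time-series workloads.
SELECT
    relname AS table_name,
    pg_size_pretty(pg_table_size(relid))   AS heap_size,
    pg_size_pretty(pg_indexes_size(relid)) AS index_size,
    round(pg_indexes_size(relid)::numeric
          / nullif(pg_table_size(relid), 0), 2) AS index_ratio
FROM pg_stat_user_tables
ORDER BY pg_indexes_size(relid) DESC
LIMIT 10;
```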

## The Maintenance Cycle: B-Tree Rebalancing

To understand why writes slow down, you have to look at what happens inside the 8KB pages of a [PostgreSQL index](https://www.tigerdata.com/blog/mvcc-feature-youre-paying-for-but-not-using). A B-tree index is a balanced tree structure. When you insert a new row, the database must find the correct leaf page for that value. If that page is full, the database performs a page split. It allocates a new page, moves half the data over, and updates the parent page to point to the new location.

For [time-series data](https://www.tigerdata.com/learn/types-of-data-supported-by-postgresql-and-timescale), this is particularly painful. Because timestamps increase monotonically, every new insert targets the right-hand edge of the index. This creates a hot spot where the same few leaf pages are constantly splitting and rebalancing. If you have five indexes on a single table, one logical insert actually triggers six physical write operations: one for the heap and five for the B-trees. This is write amplification: the ratio of physical I/O operations to logical write operations, which grows linearly with the number of indexes on a table. You are multiplying your disk I/O requirements by the number of indexes you maintain.
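You can observe the fallout of this right-edge churn with the `pgstattuple` extension, which ships with Postgres. The index name below is a hypothetical placeholder; substitute one of your own.

```sql
-- Requires superuser or membership in the pg_stat_scan_tables role.
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- avg_leaf_density well below the ~90% default fillfactor and high
-- leaf_fragmentation both indicate heavy page splitting.
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('hardware_telemetry_ts_idx');  -- hypothetical index name
```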

## The Locking Penalty

Index operations are not free from concurrency issues. While PostgreSQL uses Multi-Version Concurrency Control ([MVCC](https://www.tigerdata.com/blog/mvcc-feature-youre-paying-for-but-not-using)) to allow readers and writers to coexist, the index structures require internal locks to maintain integrity. When a page split occurs, the database must take a short-term lock on that part of the tree. At high ingestion rates, these micro-locks add up.

If your ingestion is continuous, there is no quiet period for the database to catch up. Every insert competes for the same internal locks. This contention imposes a ceiling on the number of rows your instance can process per second. You can throw more CPU or RAM at the problem, but vertical scaling only masks the architectural friction for a few months. The underlying problem is that the B-tree was not designed for a write-heavy, append-only data stream.
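One way to confirm this contention on a live system is to sample wait events. A minimal sketch against the standard `pg_stat_activity` view:

```sql
-- Snapshot of what active backends are currently waiting on. Frequent
-- LWLock waits (e.g., WALInsert or buffer_content) during heavy ingest
-- point to internal contention rather than slow queries.
SELECT wait_event_type, wait_event, count(*) AS backends
FROM pg_stat_activity
WHERE state = 'active' AND wait_event IS NOT NULL
GROUP BY wait_event_type, wait_event
ORDER BY backends DESC;
```

Run it a few times during peak ingestion; a single snapshot can be noisy.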

## Usage Auditing: Finding Zombie Indexes

A zombie index is an index that is still being updated by every insert but is never used by the query planner. These often accumulate during development or after a schema change. Because every index carries a maintenance cost, keeping even one unused index can reduce your maximum ingestion rate by 10% to 15%.

You can detect these by querying the `pg_stat_user_indexes` view. Compare the `idx_scan` count (how many times the planner used the index) against the `idx_tup_read` count and the physical size of the index. If an index has zero scans but has grown to gigabytes, it is a prime candidate for removal.

## Auditing Your Index Cost

The following SQL script helps you identify which indexes are costing you the most in terms of write maintenance relative to their actual utility.

```sql
SELECT
    schemaname || '.' || relname AS table_name,
    indexrelname AS index_name,
    pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size,
    idx_scan AS total_scans,
    -- Identify indexes with high maintenance (size) but low utility (scans)
    CASE
        WHEN idx_scan = 0 THEN 'ZOMBIE - REMOVE'
        ELSE 'ACTIVE'
    END AS status
FROM
    pg_stat_user_indexes i
JOIN
    pg_index x ON i.indexrelid = x.indexrelid
WHERE
    relname = 'hardware_telemetry'
    AND NOT x.indisunique
ORDER BY
    pg_relation_size(i.indexrelid) DESC;
```

This query highlights non-unique indexes. Unique indexes are excluded because they serve a functional constraint role even if they are not used for queries. If you find a large index with zero scans, you have found a performance bottleneck that can be cleared immediately.
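Once you have confirmed a zombie, you can drop it without blocking ingestion. The index name below is hypothetical; use whatever name your audit surfaced.

```sql
-- CONCURRENTLY avoids the exclusive lock a plain DROP INDEX takes, so
-- writes continue during removal. Note that it cannot run inside a
-- transaction block.
DROP INDEX CONCURRENTLY IF EXISTS idx_hardware_telemetry_status;  -- hypothetical name
```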

## High-Ingestion Alternatives

If you find that you need those indexes for performance but cannot afford the write penalty, you are hitting the architectural limits described in the [Optimization Treadmill article](https://www.tigerdata.com/blog/postgres-optimization-treadmill). This is where specialized storage engines come into play.

Tiger Data handles this by using [columnar storage](https://www.tigerdata.com/blog/building-columnar-compression-in-a-row-oriented-database) and batched ingestion. Instead of updating a B-tree for every single row, it collects data into compressed chunks that are indexed by time range at a metadata level. When a query arrives, the system uses chunk exclusion: it reads each chunk's time-range metadata and skips entire partitions that fall outside the query window, without scanning a single row inside them. This provides the speed of an index without the row-by-row write amplification. By moving away from B-trees for time-series data, you can reduce WAL volume by up to 90%. Tiger Data's [Direct Compress](https://www.tigerdata.com/blog/introducing-direct-compress-up-to-40x-faster-leaner-data-ingestion-for-developers-tech-preview) feature takes this further, compressing data in-memory at ingest time to eliminate background compression jobs entirely.
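As a rough sketch of what this looks like in practice, the following converts a plain table into a hypertable and enables columnar compression. Table and column names are illustrative, not from the source:

```sql
CREATE TABLE hardware_telemetry (
    time      timestamptz NOT NULL,
    device_id text        NOT NULL,
    reading   double precision
);

-- Partition into time-based chunks; chunk metadata drives exclusion.
SELECT create_hypertable('hardware_telemetry', 'time');

-- Enable columnar compression, segmenting by device for better ratios.
ALTER TABLE hardware_telemetry SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id'
);
```

After this, inserts land in an uncompressed head chunk at B-tree-free speed, and older chunks are compressed columnar batches.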

## Next Step

Open your production database console and run the audit query provided above. Find one non-unique secondary index with a high update count that has not been scanned in the last 30 days. Drop that index in a staging environment to measure the immediate improvement in your ingestion latency.

If your ingestion pipeline is already showing the symptoms described here, Tiger Cloud lets you run a hypertable alongside your existing Postgres setup with no migration required. [Start a free trial](https://console.cloud.tigerdata.com/signup) and benchmark your write latency on the same workload within an afternoon.