---
title: "How Sequential UUIDv7 Boosts Ingestion Performance"
published: 2026-05-01T10:56:08.000-04:00
updated: 2026-05-01T10:56:08.000-04:00
excerpt: "Random UUIDv4 keys cause B-tree page splits and bloat indexes up to 35%. See how UUIDv7's time-ordered IDs keep Postgres ingestion fast at scale."
tags: PostgreSQL Performance, Database
authors: NanoHertz Solutions - Jake Hertz
---


Choosing a primary key can look like a minor implementation detail. For high-growth databases, however, this single decision dictates how your storage engine handles every write. Randomly generated identifiers eventually degrade write performance as tables scale. This guide explains how switching to UUIDv7 stabilizes ingestion and keeps indexes efficient.

## What You Will Learn

We break down the mechanical differences between random UUIDv4 and time-ordered UUIDv7. You will see how UUIDv7 keeps index size and write latency stable as tables scale beyond RAM. We will examine the relationship between ID selection and disk I/O efficiency to explain why sequential IDs keep your index responsive.

## Why It Matters

Database performance often hits a wall when tables grow beyond the size of available RAM. When you use random UUIDs, every new row lands in a random location within the [B-Tree](https://www.tigerdata.com/learn/postgresql-performance-tuning-optimizing-database-indexes) index. This randomness forces the database to perform frequent page splits. A page split occurs when the database must reorganize data to fit a new entry into a full storage block.

Random inserts create a "random walk" across your storage media. As the index grows, the database can no longer keep the entire B-Tree in memory. It must constantly swap index pages from the disk to RAM to find the correct insertion point. This process increases [write latency](https://www.tigerdata.com/blog/why-adding-more-indexes-eventually-makes-things-worse) and causes spikes in disk I/O. Sequential IDs solve this by ensuring new entries always target the "right-hand side" of the index. This locality of reference keeps the active portion of your index in memory, allowing high ingestion rates even as the dataset grows.

## Append-Only Behavior vs. Random Walk

Standard [B-Tree indexes thrive on sequential data](https://www.tigerdata.com/blog/why-adding-more-indexes-eventually-makes-things-worse). When identifiers are chronological, the database engine appends new entries to the end of the index structure. This "append-only" behavior keeps storage pages full and compact because the database does not need to hunt for an insertion point within existing data. In contrast, random UUIDs force a "random walk" across your storage media. This randomness requires the database to insert data into the middle of full pages, triggering [frequent page splits](https://www.tigerdata.com/blog/indexing-your-way-into-a-performance-bottleneck). These splits leave partially filled pages and fragmented storage, which wastes disk space and slows subsequent write operations.
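The contrast can be sketched with a toy simulation. This is a simplification, not Postgres's actual split algorithm: fixed-capacity "pages," half-page splits, plus a special case for the rightmost page that mimics the optimized rightmost split real B-trees perform.

```python
import bisect
import random

def simulate_inserts(keys, capacity=100):
    """Insert keys into fixed-capacity sorted 'pages'.

    A full page splits in half, except the rightmost page receiving its
    maximum key, which splits 'full left / overflow right' the way a
    B-tree optimizes append-only workloads. Returns (splits, avg fill).
    """
    pages = [[]]      # sorted pages, in key order
    highs = [None]    # highest key per page; None = unbounded last page
    splits = 0
    for key in keys:
        # Walk to the first page whose high key covers this key.
        i = 0
        while highs[i] is not None and key > highs[i]:
            i += 1
        bisect.insort(pages[i], key)
        if len(pages[i]) > capacity:               # page overflow -> split
            if i == len(pages) - 1 and key >= pages[i][-1]:
                mid = capacity                     # rightmost: keep left full
            else:
                mid = len(pages[i]) // 2           # interior: split in half
            left, right = pages[i][:mid], pages[i][mid:]
            pages[i:i + 1] = [left, right]
            highs[i:i + 1] = [left[-1], highs[i]]
            splits += 1
    fill = sum(len(p) for p in pages) / (len(pages) * capacity)
    return splits, fill

random.seed(42)
n = 10_000
sequential = list(range(n))            # UUIDv7-like: monotonically increasing
shuffled = random.sample(range(n), n)  # UUIDv4-like: random arrival order

seq_splits, seq_fill = simulate_inserts(sequential)
rnd_splits, rnd_fill = simulate_inserts(shuffled)
print(f"sequential: {seq_splits} splits, {seq_fill:.0%} average fill")
print(f"random:     {rnd_splits} splits, {rnd_fill:.0%} average fill")
```

Sequential keys only ever split the rightmost page and leave every earlier page completely full, while random keys split pages throughout the structure and leave them roughly two-thirds full.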

## The 48-bit Timestamp Prefix and Chronological Sorting

The technical advantage of UUIDv7 is its structured bit layout. While UUIDv4 relies on 122 bits of randomness, UUIDv7 allocates the first 48 bits to a Unix timestamp with millisecond precision. This prefix provides a natural chronological order to every identifier generated by your application. When these IDs arrive at the database, they follow a predictable, increasing sequence that the storage engine can process linearly. This ensures that the most significant bits of the identifier change in a steady, upward progression, which is the ideal state for maintaining sorted data structures.
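The bit layout is easy to see by assembling a UUIDv7 by hand. The sketch below follows the RFC 9562 field order (48-bit millisecond timestamp, 4-bit version, 12 random bits, 2 variant bits, 62 random bits); a built-in `uuid.uuid7()` only landed in very recent Python versions, so this constructs the value manually.

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Assemble a UUIDv7 by hand following the RFC 9562 bit layout."""
    ts_ms = time.time_ns() // 1_000_000                      # Unix time, ms
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF   # 12 random bits
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)
    value = (ts_ms & ((1 << 48) - 1)) << 80   # 48-bit timestamp prefix
    value |= 0x7 << 76                        # 4-bit version field = 7
    value |= rand_a << 64                     # 12 bits of randomness
    value |= 0b10 << 62                       # 2-bit variant marker
    value |= rand_b                           # remaining 62 random bits
    return uuid.UUID(int=value)

# IDs generated in different milliseconds sort in creation order.
ids = []
for _ in range(5):
    ids.append(uuid7())
    time.sleep(0.002)          # force distinct millisecond timestamps

print(ids[0])
timestamp_ms = ids[0].int >> 80   # the 48-bit prefix is recoverable
```

Because the timestamp occupies the most significant bits, comparing two UUIDv7 values as plain 128-bit integers compares their creation times first, which is exactly what a B-tree does when it sorts keys.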

## Locality of Reference and Buffer Cache Efficiency

Append-only behavior is half the win. Locality of reference is the other half. Locality of reference describes the tendency of a database to access the same storage locations repeatedly over a short period. Sequential IDs maximize this principle. Because new writes target the same few index pages at the "right-hand" edge of the B-Tree, those specific pages remain active in the database buffer cache. The system avoids the expensive task of fetching old index pages from slow disk storage into RAM. By keeping the "hot" portion of the index in memory, the database maintains [high ingestion rates and stable latency](https://www.tigerdata.com/blog/postgres-performance-why-peak-throughput-benchmarks-miss-real-problem) even as the total dataset grows beyond the available physical RAM.
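A toy LRU cache model makes the effect concrete. The page counts and cache size below are illustrative assumptions, not Postgres defaults: an index of 1,000 leaf pages and a buffer cache that holds only 64 of them.

```python
import random
from collections import OrderedDict

class LRUCache:
    """Minimal LRU page cache that counts hits and misses."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()
        self.hits = 0
        self.misses = 0

    def access(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)   # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            self.pages[page_id] = True
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)  # evict least recently used

    @property
    def hit_rate(self):
        return self.hits / (self.hits + self.misses)

random.seed(7)
N_PAGES, CACHE, N_WRITES = 1_000, 64, 50_000

seq = LRUCache(CACHE)
for i in range(N_WRITES):
    seq.access(i * N_PAGES // N_WRITES)    # writes creep along the right edge

rnd = LRUCache(CACHE)
for _ in range(N_WRITES):
    rnd.access(random.randrange(N_PAGES))  # writes land on arbitrary pages

print(f"sequential hit rate: {seq.hit_rate:.1%}")
print(f"random hit rate:     {rnd.hit_rate:.1%}")
```

Edge-targeted writes reuse the same few pages and hit the cache almost every time; random writes hit only about as often as the cache-to-index size ratio allows, and every miss is a disk read.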

## The Path to UUIDv7 Adoption

Adopting UUIDv7 does not require a complete overhaul of your existing application logic or data types. Since UUIDv7 (standardized in RFC 9562) keeps the familiar 128-bit UUID layout, most database schemas and drivers treat it as a standard UUID type without modification. You can begin generating UUIDv7 for new records immediately while retaining existing UUIDv4 records. The database handles the mix of legacy random IDs and new sequential IDs within the same index structure. As your table grows, the most frequently accessed part of your index will become sequential, stabilizing your ingestion performance over time.
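In practice, the migration for an existing table can be as small as swapping the column default, so only new rows receive UUIDv7 values. This sketch reuses the `sensor_readings` table from the example below and assumes the `pg_uuidv7` extension is installed.

```SQL
-- Existing rows keep their UUIDv4 values; new rows default to UUIDv7
ALTER TABLE sensor_readings
    ALTER COLUMN id SET DEFAULT uuid_generate_v7();
```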

## Implementation and Performance Comparison

With the `pg_uuidv7` extension installed, you can implement these identifiers and measure the performance gap they create during [high-volume ingestion](https://www.tigerdata.com/blog/benchmarking-postgresql-batch-ingest).

### Generating UUIDv7 in SQL

Using the extension, you can generate these identifiers directly within your insertion queries or as a column default.

```SQL
INSERT INTO sensor_readings (id, device_id, value)
VALUES (uuid_generate_v7(), 'dev_01', 42.5);
```
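The same function can serve as a column default, so application code never has to generate IDs itself. A sketch, assuming the extension is available in your database:

```SQL
CREATE EXTENSION IF NOT EXISTS pg_uuidv7;

CREATE TABLE sensor_readings (
    id        UUID PRIMARY KEY DEFAULT uuid_generate_v7(),
    device_id TEXT NOT NULL,
    value     DOUBLE PRECISION
);
```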

### Measuring Index Size and Bloat

The following script simulates a high-volume ingestion task by inserting one million rows into two separate tables and comparing the final index size alongside the I/O load required to maintain them.

```SQL
-- Track execution latency
\timing on

-- Setup tables for comparison
CREATE TABLE test_uuid_v4 (id UUID PRIMARY KEY, val TEXT);
CREATE TABLE test_uuid_v7 (id UUID PRIMARY KEY, val TEXT);

-- Insert 1,000,000 rows with random UUIDv4
INSERT INTO test_uuid_v4 (id, val)
SELECT gen_random_uuid(), 'data'
FROM generate_series(1, 1000000);

-- Insert 1,000,000 rows with sequential UUIDv7 
INSERT INTO test_uuid_v7 (id, val)
SELECT uuid_generate_v7(), 'data'
FROM generate_series(1, 1000000);

-- Analyze buffer usage for a single lookup to see cache pressure
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM test_uuid_v4 WHERE id = gen_random_uuid();
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM test_uuid_v7 WHERE id = uuid_generate_v7();

-- Compare index size and read pressure from the buffer cache
-- (block I/O counters live in pg_statio_user_indexes)
SELECT
    indexrelname AS index_name,
    pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
    idx_blks_read AS blocks_read_from_disk,
    idx_blks_hit AS blocks_found_in_cache
FROM pg_statio_user_indexes
WHERE indexrelname IN ('test_uuid_v4_pkey', 'test_uuid_v7_pkey');
```

In high-volume environments, this difference manifests as a measurable gap in storage overhead. In large-scale tests involving 100 million rows, a UUIDv4 index can occupy up to 35% more disk space than a UUIDv7 index. This extra footprint stems from index bloat: the empty gaps left behind when the database must split a full page to accommodate a random insert. The higher `blocks_read_from_disk` count in the UUIDv4 test reveals the true performance tax: because random IDs scatter data, the database is forced to bypass the cache and fetch blocks from slow disk storage. Because UUIDv7 follows a sequential path, it keeps leaf pages close to their configured fill factor, so the index stays compact and writes stay fast with minimal unnecessary disk interaction.
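The size gap follows directly from average page fill. A back-of-the-envelope check, using assumed fill levels rather than measured values: random B-tree inserts typically settle around two-thirds full leaf pages, while sequential inserts stay near the configured leaf fill factor.

```python
# Index size scales as (key bytes) / (average page fill), so the size ratio
# of two indexes over the same data is the inverse ratio of their fill levels.
sequential_fill = 0.90   # assumption: near a typical leaf fillfactor setting
random_fill = 0.67       # assumption: common steady state for random inserts

size_ratio = sequential_fill / random_fill
extra_space = size_ratio - 1.0
print(f"random-key index is ~{extra_space:.0%} larger")   # ~34% here
```

Under these assumptions the random-key index carries roughly a third more dead space, in line with the 35% figure observed in the large-scale tests above.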

## Next Steps

Audit the primary key strategies for your largest tables. [Measure the index bloat ratio](https://www.tigerdata.com/blog/indexing-your-way-into-a-performance-bottleneck) on tables that currently use random UUIDs. If your [write latency is climbing as your data grows](https://www.tigerdata.com/blog/postgres-optimization-treadmill), consider a move to UUIDv7 to reclaim your ingestion performance.

Tiger Cloud extends Postgres with native UUIDv7 support, automatic time-based partitioning, and compression for high-ingest tables. Start a [free Tiger Cloud trial](https://console.cloud.tigerdata.com/signup) to test it on your workload.