
Engineering

Oct 17, 2025

You, Too, Can Scale Postgres to 2.45 PB and 2.5 T Metrics per Day

[Figure: scaling PostgreSQL to petabyte scale, with many white squares (data) flowing into a single bigger square (a Tiger Cloud service)]

Posted by

Rob Kiefer

Contents

01  Insights Recap: Scaling Postgres for Query Monitoring
02  Just Give Me the TL;DR (a.k.a. the Big Numbers)
03  How We Scale Postgres and Stay Fast
04  Final Words

When we launched our Insights feature in late 2023, it was our most ambitious dogfooding effort yet: we scaled Postgres to give users in-depth performance analytics on their database queries. Now we're back with an update, and there's good news: we'll be sharing these metrics quarterly.

The TL;DR? A single Tiger Cloud database service is now ingesting over 2.5 trillion metrics per day and storing over two petabytes of data, upending the assumption that Postgres can't scale.

This massive operation runs entirely on Tiger Cloud using the same features available to all our users. There is no special treatment, no hidden infrastructure: you, too, could run a Postgres database at this scale with the offerings on Tiger Cloud. 

Insights Recap: Scaling Postgres for Query Monitoring

To understand the scale of the problem we're trying to solve, let's quickly recap the feature Tiger Data is powering here. Insights provides Tiger Cloud users with comprehensive query analytics, capturing execution times, memory usage, I/O statistics, and TimescaleDB feature utilization.

This means we capture every query running in our Cloud, gather relevant statistics, and store them in a fully queryable Tiger Cloud instance. Initially collecting about a dozen metrics per query, we've since tripled that number to enhance user visibility. The data volume expands along three dimensions: growing customer base, increasing per-customer query loads, and an expanding metrics collection.
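For illustration, a "fully queryable" store for per-query stats might look like the following minimal sketch. All table and column names here are hypothetical, not Insights' actual schema:

```sql
-- A minimal sketch of a per-query stats hypertable (illustrative names).
CREATE TABLE query_stats (
    ts             TIMESTAMPTZ NOT NULL,   -- when the query ran
    service_id     UUID        NOT NULL,   -- which Tiger Cloud service
    query_id       BIGINT      NOT NULL,   -- normalized query fingerprint
    total_time_ms  DOUBLE PRECISION,       -- execution time
    mem_bytes      BIGINT,                 -- memory usage
    io_read_bytes  BIGINT,                 -- I/O statistics
    plan_nodes     JSONB                   -- executor-node breakdown
);

-- Turn it into a TimescaleDB hypertable, partitioned on the time column.
SELECT create_hypertable('query_stats', 'ts');
```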

Yet we continue to track this data in Tiger Cloud, on a Postgres-based database, accomplishing Tiger Data's original goal of creating a faster, more scalable Postgres.

Just Give Me the TL;DR (a.k.a. the Big Numbers)

When we first talked (okay, bragged) about building Insights, the headline numbers were 350+ TB of data stored and 10 billion records ingested per day. Today, the headline numbers read: more than 625 trillion metrics recorded, almost 2.45 petabytes stored, and over 2.5 trillion metrics ingested daily.


Since launching the feature, we’ve refined how we measure Insights data, focusing on metrics rather than records—a record is a set of metrics for a query—with each query now capturing significantly more data points than in 2023.

That said, the numbers are impressive: we've moved into multi-petabyte territory, adding roughly a petabyte of data per year since launching two years ago. Today, we stand at nearly 2.5 petabytes, with over 2.25 petabytes efficiently stored in Tiger Data's tiered storage architecture, which is more easily accessible and queryable than ever.


Our daily ingest has skyrocketed from roughly 100 billion to over 2.5 trillion metrics per day, bringing the running total past 625 trillion metrics collected. Despite this massive growth in data volume, query counts, and metrics per query, we still use the same size bowl for our dogfooding effort: a vanilla Tiger Cloud instance.

How We Scale Postgres and Stay Fast

Much of our architecture remains the same as described in our original post. We still ingest two main types of objects: a detailed “raw” record and a set of UDDSketches that represent a distribution of values for a given metric (“sketch” record). 

A raw record contains the metrics for a single query invocation, along with some more detailed information, like a full breakdown of nodes used to execute the query. Conversely, the set of UDDSketches represents multiple query invocations. This allows us to store orders of magnitude more queries’ stats than if we stored only raw records. 
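To make the raw-vs.-sketch distinction concrete, here is a hedged sketch using the uddsketch aggregate from the timescaledb_toolkit extension, reusing the hypothetical query_stats table from above. One sketch row can summarize thousands of invocations of the same query:

```sql
-- The "sketch record" table: one UDDSketch per query per minute
-- (uddsketch() is from the timescaledb_toolkit extension; 200 buckets
-- at 0.1% max relative error are illustrative parameters).
CREATE TABLE query_sketches (
    bucket         TIMESTAMPTZ NOT NULL,
    query_id       BIGINT      NOT NULL,
    latency_sketch UddSketch
);
SELECT create_hypertable('query_sketches', 'bucket');

-- Collapse every invocation in a minute into a single sketch row.
INSERT INTO query_sketches
SELECT time_bucket('1 minute', ts),
       query_id,
       uddsketch(200, 0.001, total_time_ms)
FROM query_stats
GROUP BY 1, 2;
```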

Since launching the feature, we have generally sampled fewer raw records, now only collecting about 25% of queries in this form. The node breakdown of execution can be useful in understanding how custom nodes we’ve created for TimescaleDB are performing across the fleet.
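One deterministic way to keep roughly a quarter of queries in raw form (purely illustrative; the real collector makes this decision upstream, before ingest) could look like:

```sql
-- Keep raw records for ~25% of query fingerprints, chosen by hash so a
-- given query is consistently in or out of the sample.
-- (staged_query_stats is a hypothetical staging table.)
INSERT INTO query_stats
SELECT *
FROM staged_query_stats
WHERE abs(hashtext(query_id::text)) % 100 < 25;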

Adding new metrics to track has been straightforward: each one is just a new column on our existing hypertables. Because we've essentially tripled the number of metrics we collect, this does put more pressure on storage.
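In hypertable terms, that is a plain ALTER TABLE; the metric names below are made up for illustration:

```sql
-- New metrics are just new columns; existing chunks return NULL for them.
ALTER TABLE query_stats
    ADD COLUMN temp_bytes_written BIGINT,        -- hypothetical metric
    ADD COLUMN rows_from_columnstore BIGINT;     -- hypothetical metric
```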

For raw records, as previously mentioned, we've simply reduced the sampling rate while continuing to aggressively tier data. For the sketch records, we've begun tiering that table as well. This keeps the database's active dataset at around 12 TB (roughly 60 TB before compression by Timescale's row-columnar storage engine), with the rest (1.5+ PB) tiered.
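On a Tiger Cloud service, that combination of columnar compression plus tiering is a pair of policies. A minimal sketch, with placeholder intervals rather than our production settings:

```sql
-- Compress older chunks with the row-columnar storage engine.
ALTER TABLE query_stats SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'service_id',
    timescaledb.compress_orderby   = 'ts DESC'
);
SELECT add_compression_policy('query_stats', INTERVAL '3 days');

-- Move chunks older than two weeks to low-cost tiered object storage
-- (tiering is a Tiger Cloud feature; the interval is illustrative).
SELECT add_tiering_policy('query_stats', INTERVAL '2 weeks');
```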

To allow for aggressive tiering and quick responses to queries from our Insights page, we use continuous aggregates (our enhanced version of Postgres materialized views) heavily. UDDSketches “roll up” very nicely: you can combine a set of UDDSketches into a new UDDSketch representing the entire group. This allows us to go from the ingested UDDSketches into a hierarchical continuous aggregate tree with groupings at several levels (minutes, hours, days). 
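A hedged sketch of such a hierarchy over the hypothetical query_sketches table from earlier: each level is a continuous aggregate that rolls up the sketches of the level below (hierarchical continuous aggregates require TimescaleDB 2.9+):

```sql
-- Hourly sketches, rolled up from the minutely sketch records.
CREATE MATERIALIZED VIEW sketches_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', bucket) AS bucket,
       query_id,
       rollup(latency_sketch) AS latency_sketch
FROM query_sketches
GROUP BY 1, 2;

-- Daily sketches, rolled up from the hourly continuous aggregate.
CREATE MATERIALIZED VIEW sketches_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', bucket) AS bucket,
       query_id,
       rollup(latency_sketch) AS latency_sketch
FROM sketches_hourly
GROUP BY 1, 2;

-- Keep the hourly level fresh automatically (offsets are illustrative).
SELECT add_continuous_aggregate_policy('sketches_hourly',
    start_offset      => INTERVAL '3 hours',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '30 minutes');
```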

With a bit of planning, we’ve been able to have stats available at all the granularities we need to serve users without needing to go to the original hypertables. Inserts stay fast, queries stay fast, and we can tier without fear.
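So a typical Insights-style panel query can run entirely against the aggregates. For example, a sketch pulling p95 latency from the hypothetical daily view defined above:

```sql
-- p95 latency per day for one query, served from the daily rollup;
-- the underlying hypertables (and tiered chunks) are never touched.
SELECT bucket,
       approx_percentile(0.95, latency_sketch) AS p95_ms
FROM sketches_daily
WHERE query_id = 1234567                  -- hypothetical fingerprint
  AND bucket > now() - INTERVAL '30 days'
ORDER BY bucket;
```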

In the future, we may need to deploy read replicas to scale the solution, separating the write-heavy ingest and aggregation workload from the read-heavy workload driven by customer usage. But as it stands today, we don't need that: this trillions-of-metrics-a-day pipeline runs perfectly well without scaling out.

Final Words

Since its inception, Insights has grown in both scale and impact, proving that Postgres—when engineered for scale—can handle immense workloads. 

We've gone from tracking tens of billions of metrics daily to collecting trillions, all while storing petabytes of data on a single Tiger Cloud instance. The power of Tiger Data's tiered storage, hypertables, and continuous aggregates has allowed us not just to scale but to stay fast and efficient.

If you’ve been thinking about taking your Tiger Cloud database to the next level, rest assured, we’re showing it’s entirely possible—our Cloud is your Cloud. And remember, you will never walk alone. Top-notch support is available for free for all Tiger Cloud customers, and our expert team is ready to guide you every step of the way, all the way to petabyte scale. 

Start scaling—create a free Tiger Cloud account today.
