---
title: "Start on Postgres, Scale on Postgres: How TimescaleDB 2.25 Continues to Improve the Way Postgres Scales"
published: 2026-02-17T12:33:46.000-05:00
updated: 2026-03-16T10:45:09.000-04:00
excerpt: "Start on Postgres, scale on Postgres: TimescaleDB 2.25 delivers 289× faster queries, better chunk pruning, and lower-cost continuous aggregates at scale."
tags: Announcements & Releases, TimescaleDB, PostgreSQL
authors: Mike Freedman
---

> **TimescaleDB is now Tiger Data.**

Most developers start building with Postgres because it’s simple, reliable, and flexible. You get a clear relational model, transactional semantics you can trust, and an ecosystem that lets teams move quickly without committing to a complex architecture early. The challenge is keeping that simplicity as systems grow. Higher ingest, larger datasets, and increasingly real-time analytical workloads can push teams toward a second system long before they want one.

This pressure is most visible in time-series workloads that demand real-time performance. High write rates, append-heavy tables, and repeated queries over recent windows stress both storage and execution paths. Without reducing the work required per query, scale quickly becomes an architectural problem rather than a [performance optimization](https://timescale.ghost.io/blog/postgres-optimization-treadmill/) problem, shifting effort from incremental tuning to changes in system design.

TimescaleDB is designed to change that trajectory. “Start on Postgres, scale on Postgres” is a promise, but it is grounded in a specific architectural approach: performance at scale comes from reducing the work the database must do as data grows, then parallelizing what remains. TimescaleDB 2.25 continues this evolution by tightening the execution and maintenance paths that dominate cost at scale, so common workloads become cheaper and operationally steadier under sustained growth.

This release focuses on three outcomes: faster queries without constant tuning, efficient scaling to larger datasets and higher ingest, and real-time analytics that stays current and trustworthy without introducing a second system.

## Faster Postgres queries at scale, with less tuning

Compression, chunk pruning, and columnar execution already reduce query cost by limiting how much data needs to be read and processed. In 2.25, more queries can avoid work entirely, and the planner is more consistent about selecting those cheaper plans.

A clear example is aggregation on compressed data. In earlier releases, queries using functions like `MIN`, `MAX`, `FIRST`, or `LAST` benefited from compression and metadata, but they still required scanning compressed batches and performing aggregation during execution. The scan was cheaper than a row-oriented approach, but it was still work proportional to the data touched.

In 2.25, these aggregates can often be answered directly from sparse metadata maintained for compressed chunks. The planner can choose a custom execution path that reads summaries rather than scanning or decompressing data. This is implemented via the new `ColumnarIndexScan` plan node (see [PR #9088](https://github.com/timescale/timescaledb/pull/9088), [PR #9103](https://github.com/timescale/timescaledb/pull/9103), and [PR #9108](https://github.com/timescale/timescaledb/pull/9108)). On workloads where this applies, the 2.25 release notes report speedups of up to 289x for this class of queries. For teams running dashboards or monitoring queries over large compressed datasets, this can translate into dramatically faster response times with no query changes required.
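As a sketch of the pattern, assume a hypothetical `conditions` hypertable with compression enabled (the schema and settings below are illustrative, not taken from the release). A min/max aggregate over a time range is the kind of query that can now be served from per-chunk summaries:

```sql
-- Hypothetical schema; names and intervals are illustrative.
CREATE TABLE conditions (
  time        timestamptz NOT NULL,
  device_id   int         NOT NULL,
  temperature double precision
);
SELECT create_hypertable('conditions', 'time');

-- Enable compression, segmenting by device for efficient columnar batches.
ALTER TABLE conditions SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id',
  timescaledb.compress_orderby   = 'time'
);

-- In 2.25, aggregates like this can often be answered from sparse
-- per-chunk metadata (e.g., min/max summaries) instead of decompressing
-- and scanning the underlying batches.
SELECT device_id, min(temperature), max(temperature)
FROM conditions
WHERE time > now() - INTERVAL '7 days'
GROUP BY device_id;
```

Whether the metadata path is taken depends on the query shape and the planner; `EXPLAIN` on your own workload is the way to confirm which plan is chosen.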

The important shift here is in cost structure. Once an answer can be derived from metadata, performance is no longer tied to the number of rows stored inside a chunk. It is tied to the minimum work required to identify relevant chunks and read their summaries, which becomes more valuable as datasets grow.

A complementary improvement applies the same idea to another common pattern: time-filtered queries that do not need to materialize column values. For queries like `SELECT COUNT(*) FROM events WHERE time > ...`, the execution path previously could still require decompressing the time column to evaluate the predicate, even though the query never needs to read time values for every row. In 2.25, the time column can often be skipped entirely in these cases, reducing CPU and memory pressure while preserving the same result (see [PR #9094](https://github.com/timescale/timescaledb/pull/9094)). The release notes describe this pattern as up to 50x faster for the example query.
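A quick way to observe this on your own data is to inspect the plan. A minimal sketch, assuming a compressed hypertable named `events` (a hypothetical name; the exact plan output depends on your version and settings):

```sql
-- Illustrative: the predicate can often be resolved from compressed-batch
-- metadata in 2.25, so the time column need not be materialized per row.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM events
WHERE time > now() - INTERVAL '1 hour';
```

Comparing `ANALYZE` output before and after an upgrade is the most reliable way to see whether the cheaper path applies to a given query.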

As these fast paths expand, plan stability becomes just as important as peak speed. Even when an efficient path exists, teams feel it when the planner chooses it inconsistently or when small changes in query shape lead to surprising regressions. In 2.25, planner improvements around columnar scan paths and ordering help make compression-aware execution more predictable (see [PR #8986](https://github.com/timescale/timescaledb/pull/8986) and [PR #9133](https://github.com/timescale/timescaledb/pull/9133)). Fewer surprises mean less time spent tuning and diagnosing why a query slowed down as data evolved.

## Efficient scaling for high-ingest Postgres workloads

A hard part of scaling is not only achieving good performance at a given size, but preserving efficiency as data volume, ingest rate, and concurrency grow together over time. In practice, scaling pressure shows up in two ways. Some costs grow gradually, such as planning and execution work increasing with the number of partitions. Others appear more abruptly, when accumulated complexity makes execution brittle and small changes in data or query shape trigger different plans and sudden slowdowns.

TimescaleDB’s scaling model is designed to address both. It relies on clear boundaries: partitioning data into chunks, using metadata to prune irrelevant chunks, and compressing data to reduce the work required within each chunk. In 2.25, several refinements make these boundaries behave more efficiently and consistently under sustained growth.

One pressure point is that chunk counts rise over long retention windows, making pruning and constraint handling increasingly important. Earlier versions already used constraints and metadata to skip irrelevant chunks, but there were cases where constraint handling became more permissive than necessary, causing queries to consider more chunks than required as datasets aged. In 2.25, constraint handling improves for fully covered chunks, helping keep both planning and execution costs more tightly bounded as data volume increases (see [PR #9127](https://github.com/timescale/timescaledb/pull/9127)).
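Chunk pruning is easiest to see with an explicit chunk interval. A minimal sketch, assuming a hypothetical `metrics` hypertable (names and intervals are illustrative):

```sql
-- Illustrative setup: one-day chunks over a long retention window.
CREATE TABLE metrics (
  time  timestamptz NOT NULL,
  value double precision
);
SELECT create_hypertable('metrics', 'time',
                         chunk_time_interval => INTERVAL '1 day');

-- With a year of data, this hypertable holds roughly 365 chunks, but a
-- query constrained to the last day should only plan and scan the chunk(s)
-- whose time range overlaps the predicate:
SELECT avg(value)
FROM metrics
WHERE time > now() - INTERVAL '1 day';
```

The 2.25 constraint-handling improvement is about keeping that exclusion tight: fully covered chunks are handled more precisely, so planning and execution cost stays bounded as the chunk count grows.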

Planning behavior under high partition counts is another area where inefficiency and brittleness can emerge together. As hypertables accumulate thousands of chunks, planning time and plan quality can matter as much as execution speed, especially for joins and more complex query shapes. TimescaleDB 2.25 includes fixes for a planning performance regression on Postgres 16 and later affecting some join queries (see [PR #8706](https://github.com/timescale/timescaledb/pull/8706)). These changes reduce both how quickly planning cost grows and how likely it is to tip into unstable behavior as workloads evolve.

The result is more efficient scaling in practice. Costs still grow with data, but they grow more slowly and with fewer surprises, allowing Postgres to continue scaling in place rather than forcing architectural changes to manage accumulated overhead.

## Real-time analytics in Postgres, without a split architecture

As refresh frequency increases and datasets grow, keeping analytics fresh inside the primary database can create background pressure. That pressure grows unless refresh and maintenance paths stay efficient. TimescaleDB has long supported real-time analytics inside Postgres through continuous aggregates, compression, and retention policies. In 2.25, the focus is on lowering the operational footprint of staying current as systems run continuously.

One improvement is compressed continuous aggregate refresh. Earlier versions supported refreshing into compressed hypertables, but the refresh path could include intermediate steps that added extra I/O and CPU work. In 2.25, direct compression on continuous aggregate refresh is enabled via a configuration option, reducing unnecessary data movement when keeping aggregates up to date (see [PR #8777](https://github.com/timescale/timescaledb/pull/8777) and [PR #9038](https://github.com/timescale/timescaledb/pull/9038)). The semantics are unchanged, but the cost of maintaining freshness is lower, especially for frequent refresh schedules.

This is complemented by refinements to batching. Large refresh transactions can temporarily increase WAL volume and create uneven load. In 2.25, the default `buckets_per_batch` for continuous aggregate refresh policies is adjusted to keep transactions smaller (from 1 to 10 buckets), reducing WAL holding and making refresh behavior steadier under sustained ingest (see [PR #9031](https://github.com/timescale/timescaledb/pull/9031)).
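Putting the continuous-aggregate pieces together, a minimal sketch might look like the following. The `conditions` table and all intervals are hypothetical, and the `buckets_per_batch` parameter is passed explicitly only for illustration (in 2.25 its default is already 10; check your version's `add_continuous_aggregate_policy` signature):

```sql
-- Illustrative hourly rollup over a hypothetical "conditions" hypertable.
CREATE MATERIALIZED VIEW conditions_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket, device_id;

-- Refresh policy; batching keeps each refresh transaction small, which
-- reduces WAL holding under sustained ingest.
SELECT add_continuous_aggregate_policy('conditions_hourly',
  start_offset      => INTERVAL '1 day',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '15 minutes',
  buckets_per_batch => 10);
```

Smaller batches trade a few more transactions for steadier load, which is usually the right default for frequently refreshed aggregates.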

The release also includes incremental improvements that reduce background churn from lifecycle operations like retention and deletes on long-running datasets, along with correctness and robustness fixes for compressed and partitioned workloads. For example, support for retention policies on UUIDv7-partitioned hypertables expands the set of configurations where lifecycle management remains reliable over time (see [PR #9102](https://github.com/timescale/timescaledb/pull/9102)). These changes are small individually, but they matter for trust. Real-time analytics only works if results stay aligned with transactional truth as schemas and workloads evolve.
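Retention itself stays a one-line policy. A sketch, assuming the same hypothetical `conditions` hypertable and an illustrative 90-day window (per the release, the same mechanism now also covers UUIDv7-partitioned hypertables):

```sql
-- Drop chunks older than 90 days on a schedule; the interval is illustrative.
SELECT add_retention_policy('conditions', drop_after => INTERVAL '90 days');
```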

## Closing

TimescaleDB 2.25 continues to make Postgres a better place to run real-time analytics at scale: faster queries through less work, smoother behavior as data and ingest grow, and lower operational overhead for keeping analytics current and correct. 

All in service of a simple yet powerful idea: **start on Postgres, scale on Postgres.** [**Learn why vanilla Postgres hits performance ceilings at scale**](https://timescale.ghost.io/blog/postgres-optimization-treadmill/).

**_To learn more, check out the_** [**_full release notes_**](https://github.com/timescale/timescaledb/releases) **_or_** [**_try Tiger Cloud for free_**](https://console.cloud.timescale.com/signup) **_and experience TimescaleDB 2.25 on your largest hypertables._** [**_Learn how Plexigrid consolidated 4 databases into Postgres and got 350x faster queries._**](https://www.tigerdata.com/blog/from-4-databases-to-1-how-plexigrid-replaced-influxdb-got-350x-faster-queries-tiger-data)