Nov 26, 2025

Posted by Brandon Purcell
TimescaleDB 2.22 and 2.23 deliver advances across performance, storage efficiency, and developer experience—combining high-speed analytics with a streamlined, zero-config onboarding path for modern time-series and event-driven systems.
The latest releases of TimescaleDB (2.22 and 2.23) mark another leap forward in making PostgreSQL the most capable database for time-series, events, and analytics workloads, combining the raw performance of vectorized query execution with the full expressiveness of SQL. This release focuses on three pillars that define the next generation of modern PostgreSQL: Configurability, Adaptability, and Performance.
Developers gain greater control over how data is organized and queried with support for UUIDv7-based partitioning and compression, plus new configurable indexes that bring precision tuning to the columnstore. Compression is now more adaptive and dynamic, adjusting intelligently as data patterns evolve. Query performance reaches new levels of efficiency with multi-column SkipScan, delivering up to 90x faster DISTINCT and deduplication queries across billions of rows.
Together, these advances make TimescaleDB a faster and more flexible foundation for workloads that demand scale, control, and insight across time-series and event-driven systems.
Getting started with TimescaleDB has never been easier.
In 2.23, the database automatically chooses the optimal partition column, enables the columnstore, and creates a compression policy—all within a single CREATE TABLE statement. Previously, the same setup required explicit configuration plus a separate compression policy:
CREATE TABLE crypto_ticks (
    time TIMESTAMPTZ,
    symbol TEXT,
    price DOUBLE PRECISION,
    volume NUMERIC
)
WITH (
    tsdb.hypertable,
    tsdb.partition_column = 'time',
    tsdb.segmentby = 'symbol'
);
SELECT add_compression_policy('crypto_ticks', compress_after => '7 days');

In 2.23, this is all it takes:

CREATE TABLE crypto_ticks (
    time TIMESTAMPTZ,
    symbol TEXT,
    price DOUBLE PRECISION,
    volume NUMERIC
) WITH (tsdb.hypertable);

That’s it.
TimescaleDB automatically selects the partition column, enables the columnstore, and creates a default compression policy.
The result is a dramatically simplified developer experience—one that removes configuration friction without sacrificing performance or control.
PostgreSQL 18 introduces asynchronous I/O and internal parallelism improvements that deliver meaningful speedups in sequential scans and I/O-bound queries.
TimescaleDB 2.23 adds full compatibility with PostgreSQL 18 while continuing support for 15, 16, and 17.
The general availability of PostgreSQL 18 arrived on September 25th, with TimescaleDB 2.23 shipping on October 29th.
Availability for Tiger Cloud will follow once dependent components are upgraded—expect a formal announcement in #product-releases once ready.
Direct-to-Columnstore (aka Direct Compress) lets you ingest data directly into the columnstore at write time, bypassing the rowstore and post-compression jobs.
In 2.23, the feature now supports standard INSERT statements (not just COPY).
Enable it using:
SET timescaledb.enable_compressed_insert = on;
SET timescaledb.enable_compressed_insert_sort_batches = on;
SET timescaledb.enable_compressed_insert_client_sorted = on;

For high-throughput ingestion pipelines, this feature can reduce disk I/O by up to 80%, with COPY operations achieving 40× faster loads in internal tests.
The feature remains in Tech Preview while we add automatic batch organization, which will reduce the need to pre-sort your ingested data. For more details on getting started with Direct-to-Columnstore, see this blog post.
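As a minimal sketch (reusing the crypto_ticks hypertable from above, with rows pre-sorted by time as the client-sorted setting expects), a standard batched INSERT now lands directly in the columnstore:

-- With the GUCs above enabled, multi-row INSERTs are written straight into
-- compressed columnstore batches; sorting rows by the partition column helps
-- form well-ordered batches.
INSERT INTO crypto_ticks (time, symbol, price, volume) VALUES
    ('2025-11-01 00:00:00+00', 'BTC', 101250.10, 0.5),
    ('2025-11-01 00:00:01+00', 'BTC', 101251.25, 1.2),
    ('2025-11-01 00:00:02+00', 'ETH', 3310.40, 4.0);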
Continuous Aggregates (CAggs) are getting a major internal upgrade.
The invalidation trigger mechanism has been fully removed, replaced by tuple-routing logic inside Timescale’s core.
This eliminates redundant trigger overhead and improves DML throughput by 10–20% on average.
CAggs now support set-returning functions such as unnest() and json_array_elements().
This community-requested feature (originally filed in 2020) enables richer aggregations and simpler data transformation pipelines directly in SQL.
Example:
CREATE MATERIALIZED VIEW sensor_summary
WITH (timescaledb.continuous)
AS
SELECT
    device_id,
    unnest(tags) AS tag,
    avg(value) AS avg_value,
    time_bucket('1h', time) AS bucket
FROM sensor_data
GROUP BY 1, 2, 4;

TimescaleDB 2.23 improves the internal locking model for chunk merging, removing a long-standing source of deadlocks that previously blocked merges from running safely in the background. By relaxing these locks, merges can now proceed concurrently with regular ingest and query activity, reducing operational friction and making automated chunk maintenance far more reliable. This enhancement is especially impactful for workloads with large chunk counts or constant ingestion, where background merging must run without interrupting normal operations.
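As a hedged illustration (the chunk names below are hypothetical, and the merge_chunks procedure's exact signature may vary by version), merges can also be invoked manually while ingest continues:

-- Find candidate chunks on a hypertable (names are illustrative)
SELECT show_chunks('sensor_data', older_than => INTERVAL '7 days');

-- Merge two adjacent chunks; with the relaxed locking in 2.23 this can run
-- alongside concurrent inserts and queries on the same hypertable.
CALL merge_chunks('_timescaledb_internal._hyper_1_1_chunk',
                  '_timescaledb_internal._hyper_1_2_chunk');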
Hypertables can now be set to UNLOGGED, skipping WAL writes for high-speed ingestion or temporary staging data.
Example:
ALTER TABLE my_hypertable SET UNLOGGED;
ALTER TABLE my_hypertable SET LOGGED;

Unlogged hypertables offer higher throughput and lower latency but trade durability—data will be lost if PostgreSQL crashes.
This feature was a long-standing community request dating back to 2018 and is now fully supported for TimescaleDB hypertables.
UUIDv7 provides an identifier format that combines a timestamp with random bits, creating globally unique, time-sortable IDs. Unlike UUIDv4, which is entirely random, UUIDv7 embeds a millisecond-precision timestamp (with optional microsecond precision) directly within the UUID. This design makes it possible to order events chronologically without maintaining a separate timestamp column—simplifying schema design while improving index locality and write performance.
Why it matters:
Compatibility Advantage: While general UUIDv7 support was introduced upstream in PostgreSQL 18, TimescaleDB brings full UUIDv7 functionality, including compression, partitioning, and vectorized query support, all the way back to PostgreSQL 15. That means developers can immediately benefit from time-ordered UUIDs without waiting for the latest Postgres release.

With this release, TimescaleDB extends PostgreSQL’s native partitioning capabilities (previously limited to timestamp, integer, and date columns) to include UUIDv7 partitioning. Each chunk is organized based on the embedded timestamp inside the UUID, which improves data locality and reduces index fragmentation. This is critical for workloads like event streams, logs, telemetry, and high-throughput distributed systems.
Create Hypertable:
CREATE TABLE events (
    event_id UUID PRIMARY KEY,
    device_id INT,
    temperature FLOAT
)
WITH (
    tsdb.hypertable,
    tsdb.partition_column = 'event_id'
);
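For illustration, rows can then be inserted with UUIDv7 keys; generate_uuidv7() is assumed here as the server-side generator (client-generated UUIDv7 values work the same way):

-- Each UUIDv7 embeds the current timestamp, so every row is routed to the
-- chunk covering that time range.
INSERT INTO events (event_id, device_id, temperature)
VALUES (generate_uuidv7(), 17, 22.4),
       (generate_uuidv7(), 23, 19.8);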
With UUIDv7 partitioning, chunks are created based on the embedded timestamp. You can query by time without explicit timestamp columns:
SELECT COUNT(*)
FROM events
WHERE event_id >= to_uuidv7_boundary(now() - interval '1 hour');

Understanding boundary functions
When using UUIDv7 as a partitioning key, TimescaleDB leverages the embedded timestamp within each UUID to determine chunk boundaries and execute time-based range queries.
Boundary functions like to_uuidv7_boundary() convert a timestamp into a UUIDv7 value representing the lower bound for that specific point in time. This allows SQL queries to behave as if they were filtering on a native timestamp column, while maintaining the uniqueness and distribution benefits of UUIDv7.
How it works:
When you call to_uuidv7_boundary('2025-08-01'::timestamptz, true), TimescaleDB constructs a UUID where the timestamp bits represent August 1, 2025, and the random bits are filled with all zeros for the lower boundary.

SELECT *
FROM events
WHERE event_id >= to_uuidv7_boundary('2025-08-01'::timestamptz)
AND event_id < to_uuidv7_boundary('2025-08-02'::timestamptz)
ORDER BY event_id;

In the above example, even though event_id is a UUID column, the query effectively behaves like a time-based range filter. TimescaleDB uses the encoded timestamp to identify only the relevant chunks for August 1, 2025, skipping all others. This combines the advantages of temporal partitioning with globally unique identifiers.
Don’t use uuid_timestamp() in filters: It may be tempting to extract timestamps from UUIDv7 values using uuid_timestamp() and filter on those results, but this approach defeats the purpose of UUIDv7 partitioning.
SELECT *
FROM events
WHERE uuid_timestamp(event_id) >= '2025-08-01'
AND uuid_timestamp(event_id) < '2025-08-02'
ORDER BY event_id;While syntactically correct, this query forces TimescaleDB to scan and decode every UUID to evaluate the condition, because the timestamp must be extracted row by row. Unlike the boundary method, it cannot leverage chunk-level metadata or perform chunk exclusion. On large datasets, this approach will be orders of magnitude slower and eliminate the benefits of UUIDv7-based partitioning.
A full list of functions can be found in the TimescaleDB documentation.
Note: Support for UUIDv7 time bucketing and Continuous Aggregates is actively under development and will arrive in an upcoming release, bringing full compatibility for time-based rollups on UUIDv7-partitioned tables.
TimescaleDB 2.22 delivers up to 2× faster queries and 30% better storage efficiency for UUIDv7 columns through advanced compression and vectorized query execution. These improvements unlock full columnstore performance for UUID-based workloads, dramatically accelerating equality and range filters across billions of records.
Under the hood, TimescaleDB splits each UUIDv7 into its two components, the timestamp bits and the random bits, and compresses them separately. This structure allows the columnstore engine to use vectorized operations, processing many UUIDs at once with a single CPU instruction. The result is faster filtering, improved scan efficiency, and reduced I/O for both analytic and transactional queries.
This feature was introduced in 2.22 but disabled by default; in 2.23 it is enabled by default and can be managed inline or through a GUC.
To disable UUIDv7 compression explicitly (enabled by default in 2.23):
SET timescaledb.enable_uuid_compression=off;
Sparse indexes are lightweight metadata structures on compressed columnstore chunks that accelerate queries without the heavy storage and maintenance overhead of traditional B-tree indexes. In the rowstore, PostgreSQL relies on full B-tree indexes which are large, take significant space, and become increasingly expensive as data grows. The columnstore instead uses sparse indexes such as Bloom filters and Min/Max ranges, which are dramatically smaller and allow the engine to skip entire compressed batches during scans. TimescaleDB 2.22 enhances your control over these indexes: while TimescaleDB still creates them automatically based on heuristics, you can now explicitly define which columns should use Bloom or Min/Max indexes to finely tune query performance for your workload.
It is possible to create sparse indexes on multiple columns, or a single column can have multiple index types defined simultaneously. This is important if you require multiple indexes depending on your workload, e.g. filters on multiple columns with rare values like UUIDs.
The indexes can be set on CREATE TABLE or later through ALTER TABLE.
-- Set indexes on create
CREATE TABLE metrics (
    time timestamptz NOT NULL,
    device text,
    value float
)
WITH (
    tsdb.hypertable,
    tsdb.index = 'bloom(value), minmax(value)'
);

-- adjust indexes
ALTER TABLE metrics SET (
    timescaledb.compress_index = 'bloom(device), minmax(value)'
);
In internal tests with a 50-million-row dataset, user-defined sparse indexes achieved up to 8.7× faster range queries and 20× faster multi-region queries (e.g., filtering data by device clusters, tenant partitions, or geographic regions). Sparse indexes let you fine-tune performance for your access patterns rather than relying solely on defaults.
To inspect the columnstore configuration of a hypertable, including which sparse indexes are system-defined and which were added manually, use the following views:
SELECT * FROM timescaledb_information.hypertable_columnstore_settings;
SELECT * FROM timescaledb_information.chunk_columnstore_settings;
timescaledb_information.hypertable_columnstore_settings: Displays the default compression and index configuration applied at the hypertable level (the “template” used for future chunks). These definitions represent your explicit configuration choices.
timescaledb_information.chunk_columnstore_settings: Lists the actual settings applied to each compressed chunk. Each row shows the current segmentby, orderby, and index configuration for that chunk. The index column includes a JSON-like array describing the type (bloom or minmax), target column, and source.
For more details on usage and options refer to this comment.
Originally introduced in TimescaleDB 2.2 for rowstore tables, SkipScan was later optimized for the columnstore in 2.20, delivering up to 2,500× faster distinct queries. With 2.22, SkipScan now supports multi-column DISTINCT operations, providing millisecond-level deduplication across billions of rows.
SkipScan works by jumping directly from one distinct value to the next, skipping redundant data instead of performing full scans or decompressions. Multi-column support extends this to compound keys, such as (site_id, unit_type, unit_id), making it ideal for IoT, metrics, and analytics workloads where you need the latest reading per entity.
Unlike PostgreSQL’s built-in Loose Index Scan, which only accelerates certain equality predicates, TimescaleDB’s SkipScan operates directly on the columnstore’s internal metadata, enabling vectorized execution and compressed-batch skipping for even greater performance.
Example:
SELECT DISTINCT ON (site_id, unit_type, unit_id)
site_id, unit_type, unit_id, metric_value
FROM metrics
WHERE time BETWEEN '2024-01-01' AND '2024-01-31'
ORDER BY site_id, unit_type, unit_id, time DESC;

Results: 904 ms → 10 ms (90x faster). Perfect for IoT, metrics, and event workloads where latest readings per entity matter.
(Read more in our full SkipScan post: SkipScan in TimescaleDB)
Default segmentby and orderby settings are now applied when a chunk is being compressed, rather than at hypertable-level configuration time. This allows compression settings to adapt dynamically to the data available at the time of compression. If no orderby is specified, a default will be applied automatically based on internal heuristics.
Before this change, if a customer did not specify segmentby and orderby compression settings, they were derived from the first run of the columnstore policy and locked in for future chunks. As data changes over time, those settings might no longer be optimal. Now, the assessment runs every time the columnstore policy compresses a chunk, adapting to the current state of the data.
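If you prefer to pin these settings rather than rely on the adaptive defaults, you can still declare them explicitly; here is a minimal sketch using the crypto_ticks table from earlier and the long-standing compress_segmentby/compress_orderby options:

-- Explicitly pinned settings take precedence over defaults derived at compression time
ALTER TABLE crypto_ticks SET (
    timescaledb.compress_segmentby = 'symbol',
    timescaledb.compress_orderby = 'time DESC'
);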
TimescaleDB 2.22 featured WAL-based invalidation, but it has been retired in 2.23 because it did not meet our IOPS-reduction goals.
The refactored invalidation framework introduced in 2.23 lays the foundation for smarter, lightweight CAgg refresh tracking moving forward.
Together, these changes make CAggs more reliable under high ingest loads and simpler to maintain, paving the way for a future of truly “zero-config” Continuous Aggregates.
TimescaleDB will support PostgreSQL 15 until June 2026.
Starting then, upgrades from PostgreSQL 15 to future TimescaleDB releases will be blocked.
We’re communicating this early to give users ample time to plan migrations to PostgreSQL 16 or 17.
TimescaleDB continues to evolve PostgreSQL for the modern data landscape, advancing configurability, adaptability, and performance across the stack. Together, these releases introduce time-ordered UUIDv7 partitioning and compression, configurable sparse indexes, adaptive compression settings, and multi-column SkipScan. These features give developers more control and deliver dramatically faster analytics on massive datasets.
The onboarding experience has also taken a major step forward. Zero-config hypertables, automatic columnstore policies, and simpler continuous aggregate invalidation make it easier than ever to get started and stay performant at scale. Direct-to-Columnstore inserts, relaxed chunk-merge locking, and unlogged hypertables further improve ingest throughput and reduce operational friction for high-volume workloads.
Across storage, indexing, ingest, and analytics, these improvements reflect a single direction: enabling developers to shape PostgreSQL around their data rather than forcing their data to fit the database. For Tiger Cloud users on PostgreSQL 15 and above, these enhancements are already live. Self-hosted users can upgrade today to take full advantage of the new capabilities introduced in TimescaleDB 2.22 and 2.23.
Check out our release notes (2.22 & 2.23) for a full list of improvements and bug fixes. As always, we welcome your feedback in our Community Forum or on GitHub.