---
title: "TimescaleDB 2.26: 3.5x Faster time_bucket() Aggregations, 70x Faster Summary Queries, and Faster Multi-Column Lookups"
published: 2026-04-22T09:00:14.000-04:00
updated: 2026-04-22T09:00:37.000-04:00
excerpt: "TimescaleDB 2.26 delivers 3.5x faster time_bucket() aggregations, 70x faster summary queries, and 2x faster multi-column lookups. No query rewrites needed."
tags: Announcements & Releases, TimescaleDB
authors: Brandon Purcell
---

> **TimescaleDB is now Tiger Data.**

## Introduction

TimescaleDB is built on a simple idea: you should be able to start on Postgres and keep scaling on Postgres, without introducing a second system.

At scale, performance comes from two places: more hardware and more efficient execution. As datasets grow, faster queries increasingly depend on reducing the amount of work the database has to do by reading less data, using metadata where possible, and staying in efficient execution paths.

TimescaleDB 2.26 pushes this further by extending the vectorized columnstore engine into more query patterns. More `time_bucket()` aggregations stay in the vectorized path, summary queries can be answered directly from metadata with `ColumnarIndexScan`, and multi-column lookups can skip more data before decompression.

## TL;DR

-   **`time_bucket()` vectorization improvements (**[**PR #9117**](https://github.com/timescale/timescaledb/pull/9117)**):** Aggregations that group by `time_bucket()` stay in the columnar execution path end-to-end, reducing runtime from 350 ms to 85 ms in our benchmark (3.5x faster).
-   **`ColumnarIndexScan` summary queries (**[**PR #9267**](https://github.com/timescale/timescaledb/pull/9267)**):** COUNT, MIN, MAX, FIRST(partial), and LAST(partial) can now be answered directly from chunk-level sparse index metadata, avoiding decompression of individual rows and delivering up to 70x faster performance.
-   **Composite bloom filters for `SELECT` and `UPSERT` (**[**PR #9372**](https://github.com/timescale/timescaledb/pull/9372)**,** [**PR #9374**](https://github.com/timescale/timescaledb/pull/9374)**):** Multi-column lookups can skip more compressed batches before decompression, improving applicable SELECT and UPSERT workloads by 2x+. 

## `time_bucket()` Fully Vectorized in the Columnstore Pipeline

`time_bucket()` is a foundational element of time-series analytics in TimescaleDB, used in most aggregation queries.

Before 2.26, queries that used `time_bucket()` in a GROUP BY or aggregation expression could fall out of the vectorized columnstore execution path. Even when the underlying data was stored in columnar format, part of the query would be evaluated using row-based processing.

In TimescaleDB 2.26, the vectorized aggregation engine can evaluate `time_bucket()` directly on columnar data in these cases ([PR #9117](https://github.com/timescale/timescaledb/pull/9117)), allowing queries like the following to stay in the fast execution path:

```SQL
SELECT time_bucket(interval '1 day', time), AVG(value)
FROM ht_metrics_compressed
WHERE device IN (1, 3, 7)
AND time BETWEEN '2020-01-08' AND '2020-01-22'
GROUP BY 1
ORDER BY 1 DESC;
```

This query completed in **350 ms** in TimescaleDB 2.25.2. In 2.26.0, it completes in **85 ms** (**3.5x faster**). 
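To check whether a query like the one above stays in the vectorized path, you can inspect its plan. A minimal sketch; the `VectorAgg` custom scan node name is an assumption here, and exact node names can vary by TimescaleDB version:

```SQL
-- Look for a vectorized aggregation node in the plan output; in recent
-- TimescaleDB versions this appears as a custom scan such as
-- "Custom Scan (VectorAgg)" (node name assumed, not guaranteed).
EXPLAIN (COSTS OFF)
SELECT time_bucket(interval '1 day', time), AVG(value)
FROM ht_metrics_compressed
WHERE device IN (1, 3, 7)
  AND time BETWEEN '2020-01-08' AND '2020-01-22'
GROUP BY 1
ORDER BY 1 DESC;
```

If the aggregation instead appears under a plain row-based `Aggregate` node, the query has fallen out of the vectorized path.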

The next step is extending this to WHERE clause filters, bringing the same vectorized coverage to predicate evaluation.

## `ColumnarIndexScan`: Summary Queries Read From Metadata, Not Data

As datasets grow, improving query performance increasingly comes down to reducing the amount of work the database has to do. That means avoiding unnecessary scans, skipping decompression, and using metadata whenever possible.

![TimescaleDB Aggregate Processing comparison with ColumnarIndexScan](https://storage.ghost.io/c/6b/cb/6bcb39cf-9421-4bd1-9c9d-fa7b6755ba0e/content/images/2026/04/TimescaleDB-aggregate-processing-comparison.png)

TimescaleDB already maintains sparse index metadata for columnstore chunks, including min/max ranges. But before 2.26, summary queries like COUNT, MIN, MAX, FIRST, and LAST required decompressing data batches to compute results. TimescaleDB 2.26 introduces ColumnarIndexScan, a new, more efficient plan node that allows the database to answer these queries directly from chunk-level sparse index metadata.

This fundamentally changes how these queries scale: instead of depending on the number of rows stored, query cost now scales with the number of relevant chunks ([PR #9267](https://github.com/timescale/timescaledb/pull/9267)). Consider this query:

```SQL
SELECT FIRST(time, time), LAST(time, time)
FROM ht_metrics_compressed
GROUP BY device;
```

In our benchmark, this query improved from **940 ms** to **13 ms** (**up to 70x faster**).

This optimization is enabled by default for COUNT, MIN, MAX, FIRST(partial), and LAST(partial) on compressed hypertables. Support for SUM and AVG is in development.
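For instance, a dashboard-style summary over the whole hypertable can now be answered from chunk metadata. A sketch against the same benchmark table used above:

```SQL
-- COUNT, MIN, and MAX over a compressed hypertable can be served from
-- chunk-level sparse index metadata instead of decompressing rows.
SELECT COUNT(*), MIN(time), MAX(time)
FROM ht_metrics_compressed;
```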

## Composite Bloom Filters for `SELECT` and `UPSERT`

Another way to reduce query cost is to skip data before the database has to touch it.

Before 2.26, TimescaleDB could use bloom filters to prune columnstore batches for single-column predicates. But queries with compound conditions, such as `(sensor_id, location_id)`, still had to decompress batches to evaluate the combined filter.

With TimescaleDB 2.26, composite bloom filters extend this pruning to multi-column predicates. The query engine can now check whether a columnstore batch could contain a match before decompression. If not, that batch is skipped entirely.
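As an illustration (the `metrics` table and its columns are hypothetical), a lookup of this shape can now prune batches before any decompression happens:

```SQL
-- With a composite bloom filter covering (sensor_id, location_id),
-- batches that cannot contain this pair are skipped entirely.
SELECT *
FROM metrics
WHERE sensor_id = 42
  AND location_id = 7
  AND time > now() - interval '1 day';
```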

The result is reduced unnecessary work during execution and over 2x faster performance for applicable SELECT and UPSERT workloads. ([PR #9372](https://github.com/timescale/timescaledb/pull/9372), [PR #9374](https://github.com/timescale/timescaledb/pull/9374))

By default, when you do not define filters manually, TimescaleDB scans your existing rowstore indexes for composite patterns and replicates them as composite bloom filters. You can still define additional filters explicitly as needed.

For SELECT queries, the planner pushes down all applicable filters in selectivity order. For UPSERT queries, the engine picks the most restrictive applicable bloom filter to validate the unique constraint, as shown in this EXPLAIN output:

```SQL
EXPLAIN (ANALYZE, BUFFERS OFF, COSTS OFF, TIMING OFF, SUMMARY OFF)
INSERT INTO explain_test VALUES ('2024-01-01 00:05:30', 5, 'temp', 100)
ON CONFLICT (device_id, metric, ts) DO NOTHING;
--- QUERY PLAN ---
 Custom Scan (ModifyHypertable) (actual rows=0.00 loops=1)
   Batches checked by bloom: 2
   Batches pruned by bloom: 2
   ->  Insert on explain_test (actual rows=0.00 loops=1)
         Conflict Resolution: NOTHING
         Conflict Arbiter Indexes: idx_explain
         Tuples Inserted: 1
         Conflicting Tuples: 0
         ->  Result (actual rows=1.00 loops=1)
```

You can also configure composite bloom filters explicitly:

```SQL
ALTER TABLE address_book SET (
  timescaledb.compress,
  timescaledb.compress_index = 'bloom(postal_code, country)'
);
```

Or at table creation:

```SQL
CREATE TABLE t(
  ts int,
  x text,
  u uuid,
  d date)
WITH (
  tsdb.hypertable,
  tsdb.partition_column = 'ts',
  tsdb.compress, tsdb.orderby = 'ts',
  tsdb.sparse_index = 'bloom(x), bloom(u,d)'
);
```

There are a few constraints to keep in mind:

-   `segmentby` columns cannot have bloom filters because their value is constant per batch.
-   The `double precision` and `real` data types are not supported.
-   A composite filter's columns must have a combined width of at least 4 bytes.

The feature can also be toggled with the `enable_composite_bloom_indexes` GUC, which is on by default.
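For example, to turn the feature off for the current session (the `timescaledb.` GUC prefix is assumed here, matching the extension's usual naming convention):

```SQL
-- Disable composite bloom filter usage for this session only.
SET timescaledb.enable_composite_bloom_indexes = off;
```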

For compound predicates like `(sensor_id, location_id)`, the improvement is over 2x faster, with no manual configuration required when TimescaleDB can derive the filter automatically from existing index patterns. EXPLAIN output now also includes batch pruning statistics and false-positive rates, making it easier to see how much work is being skipped.

Support for UPDATE and DELETE is planned for 2.27.  

## Also in TimescaleDB 2.26

-   Faster text aggregates: MIN and MAX on text columns (C collation) now run in the vectorized columnstore engine, delivering up to 3x faster performance.
-   Improved background worker reliability: advisory locks have been replaced with graceful cancellation, reducing contention and preventing deadlocks in high-concurrency environments.
-   Replication and stability improvements: fixes to ensure reliable chunk creation in certain replication scenarios.
-   Developer experience improvements: you can now drop and recreate the TimescaleDB extension within the same session.  

## Conclusion: Upgrade to TimescaleDB 2.26 Today

TimescaleDB 2.26 continues our focus on [scaling Postgres](https://www.tigerdata.com/blog/start-on-postgres-scale-on-postgres) by making analytical queries faster and more efficient.

`time_bucket()` aggregations now stay in the vectorized path, `ColumnarIndexScan` allows summary queries to read directly from chunk metadata, and composite bloom filters help multi-column lookups skip more data before decompression. Together, these improvements extend the columnstore execution pipeline across a broader set of analytical workloads.

The result is up to **3.5x faster** `time_bucket()` aggregations, up to **70x faster** summary queries, and over **2x faster** multi-column lookups. These improvements take effect on upgrade, with no query changes required in most cases.

To learn more, check out the [full release notes](https://github.com/timescale/timescaledb/releases/tag/2.26.0) for a complete list of improvements **_or_** [**_try Tiger Cloud for free_**](https://console.cloud.timescale.com/signup) **_and experience TimescaleDB 2.26 on your largest hypertables_**. We welcome your feedback on [GitHub](https://github.com/timescale/timescaledb).