Making Postgres Faster: New Features for 7x Faster Queries and 500x Faster Updates

Announcements & Releases

By Hien Phan · 3 min read · Sep 16, 2024

Table of contents

  1. Hypercore: Making Postgres Powerful for Real-Time Analytics
  2. New Features and Optimizations
  3. Try Timescale's New Performance Boosts Today

[Image: The Timescale logo with a rocket in the background]


As we close out a brat summer, we’ve been pushing Postgres beyond its limits to help developers more easily manage time-series workloads and real-time analytics. This August, we introduced updates built on our core technology, hypercore—designed to help you ship code faster, optimize performance, and confidently scale.

Before diving into the specifics, let's go back to where it all started. Want to know what's coming up? Stay tuned for our daily release announcements.

Hypercore: Making Postgres Powerful for Real-Time Analytics  

Years ago, developers faced a critical challenge: traditional databases struggled with time-series data and real-time analytics. Enter hypercore, enabling Postgres to seamlessly handle all these complex data scenarios.

Hypercore’s hybrid storage approach—recent data in rows for fast ingest and lookup, older data in a columnar format for efficient querying—makes it ideal for applications like sensor data analysis, stock trades, or real-time user interactions. This architecture delivers up to 350x faster queries using 98% less storage than AWS RDS and lays the groundwork for the advanced performance and efficiency improvements we’ll explore next.
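As a minimal sketch of how this hybrid layout is typically configured in TimescaleDB (the `readings` table and its columns are hypothetical):

```sql
-- Hypothetical sensor table; recent chunks stay in row format for fast ingest.
CREATE TABLE readings (
  ts        TIMESTAMPTZ NOT NULL,
  device_id INTEGER     NOT NULL,
  value     DOUBLE PRECISION
);
SELECT create_hypertable('readings', 'ts');

-- Enable columnar compression and convert chunks older than 7 days.
ALTER TABLE readings SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id',
  timescaledb.compress_orderby   = 'ts DESC'
);
SELECT add_compression_policy('readings', INTERVAL '7 days');
```

With this policy in place, inserts land in uncompressed row-based chunks, while older chunks are rewritten into the columnar format automatically.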

New Features and Optimizations

Chunk-skipping: 7x faster queries, 87% less storage

Ever run a query that’s frustratingly slow because it scans data you don’t even need? With chunk-skipping indexes, TimescaleDB intelligently skips irrelevant chunks (our term for data partitions), so you query just the data you care about. The result: queries up to 7x faster with 87% less storage. For example, if you have a hypertable partitioned by start_time but need to filter on a secondary column like end_time, chunk-skipping indexes dynamically prune chunks that don’t contain relevant data, significantly speeding up queries that would otherwise scan every partition.
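Continuing the start_time/end_time example above, this is roughly how chunk skipping is enabled (the `sessions` hypertable is hypothetical, and the exact API may vary by TimescaleDB version):

```sql
-- Opt in to chunk skipping and track min/max metadata for a secondary column.
SET timescaledb.enable_chunk_skipping = on;
SELECT enable_chunk_skipping('sessions', 'end_time');

-- Per-chunk min/max ranges for end_time now let the planner prune
-- compressed chunks that cannot match this filter, without scanning them:
SELECT * FROM sessions
WHERE end_time > now() - INTERVAL '1 day';
```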

500x faster updates and deletes with compressed tuple filtering

Before compressed tuple filtering, updating or deleting compressed data was slow and inefficient, especially in environments with limited resources. Entire batches of up to 1,000 rows had to be decompressed and written to disk, even if only a small portion needed modification. 

With compressed tuple filtering, TimescaleDB uses min/max metadata in the decompression pipeline to skip irrelevant batches and target only the necessary data. This makes DML operations (inserts, updates, and deletes) up to 500x faster by avoiding the decompression and materialization of rows that don’t need to change.
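The optimization is transparent: existing DML statements simply get faster. As a hedged illustration (reusing the hypothetical `readings` table from earlier):

```sql
-- With compressed tuple filtering, this DELETE only decompresses batches
-- whose min/max metadata can overlap the predicates, instead of
-- decompressing every batch in the affected chunks.
DELETE FROM readings
WHERE device_id = 42
  AND ts BETWEEN '2024-08-01' AND '2024-08-02';
```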

360x faster upserts with index scans 

One of our customers, Ndustrial, found that upserts weren’t performing well on their high-cardinality dataset. Investigation revealed that a sequential scan was slowing down the lookup of the data batches the queries needed. When we replaced the sequential scan with an index scan (using a pre-existing index), upserts sped up by 360x. This change will keep Ndustrial running smoothly, even with massive data growth.
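For context, an upsert here is the standard Postgres `INSERT ... ON CONFLICT` pattern; the speedup applies to how TimescaleDB locates the compressed batches the conflict target touches. A sketch against the hypothetical `readings` table (a unique index on `(device_id, ts)` is assumed):

```sql
-- Insert a reading, or update it in place if one already exists
-- for this device and timestamp.
INSERT INTO readings (ts, device_id, value)
VALUES ('2024-09-16 12:00+00', 42, 21.5)
ON CONFLICT (device_id, ts)
DO UPDATE SET value = EXCLUDED.value;
```

Note that on a hypertable, the unique index must include the partitioning column (here, `ts`).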

400x faster queries with optimizations on tiered storage

Recent improvements to our tiered storage architecture address the performance challenges of querying large datasets stored in slower, cost-effective storage like S3. Chunk exclusion prunes irrelevant chunks outside the query’s time range or conditions, cutting down unnecessary scans and speeding up query execution. Row group exclusion further optimizes performance by skipping entire Parquet row groups that don’t match the query criteria, while column exclusion reduces I/O by ensuring only the relevant columns are read. These optimizations work together to deliver up to 400x faster queries when accessing tiered data in S3, allowing you to manage massive datasets at lower costs without sacrificing performance.
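For reference, tiering is driven by a policy on the hypertable; a hedged sketch under the assumption of a Timescale Cloud service with tiered storage enabled (names hypothetical):

```sql
-- Timescale Cloud: tier chunks older than 3 weeks to object storage (S3).
SELECT add_tiering_policy('readings', INTERVAL '3 weeks');

-- Allow queries in this session to transparently read tiered data;
-- chunk, row-group, and column exclusion keep the S3 reads minimal.
SET timescaledb.enable_tiered_reads = true;
SELECT count(*) FROM readings WHERE ts < now() - INTERVAL '1 month';
```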

Try Timescale’s New Performance Boosts Today

With these new features and the power of hypercore, you're ready to handle even more demanding workloads—from gigabytes to petabytes of data. In the coming days, we’ll dive deeper into how we built these optimizations and explore real-world use cases. Stay tuned for more technical deep dives (including an update on our own Insights product, which processes an insane amount of data on a single Postgres node). 

Why wait? Sign up today and experience faster queries and more efficient storage firsthand. Your performance gains are just a query away.

Plus, don’t forget to share your insights by participating in the 2024 State of PostgreSQL Survey. Your feedback helps shape the future of PostgreSQL and the tools you rely on.

