---
title: "How TimescaleDB Expands the PostgreSQL IIoT Performance Envelope"
published: 2026-04-10T14:26:34.000-04:00
updated: 2026-04-10T14:26:34.000-04:00
excerpt: "Benchmark data showing how TimescaleDB expands PostgreSQL ingest capacity, query speed, and storage efficiency for IIoT workloads at scale."
tags: PostgreSQL Performance, IoT
authors: Doug Pagnutti
---

> **TimescaleDB is now Tiger Data.**

Earlier in the week I looked at how [upgrading hardware](https://www.tigerdata.com/blog/how-hardware-affects-iiot-workloads) can expand the PostgreSQL performance envelope. Now I’m going to show another solution: TimescaleDB.

TimescaleDB is an open-source PostgreSQL extension that adds time-series capabilities to PostgreSQL. It was first released in 2017 by Tiger Data (originally Timescale) and is available for free as a self-hosted extension or as a managed service (Tiger Cloud). Under the hood, TimescaleDB automatically partitions tables (called [hypertables](https://docs2.tigerdata.com/docs/reference/timescaledb/hypertables)) into time-based chunks, adds native compression with [hypercore](https://docs2.tigerdata.com/docs/reference/timescaledb/hypercore), provides [continuous aggregates](https://docs2.tigerdata.com/docs/reference/timescaledb/continuous-aggregates) for pre-computed rollups, and includes lifecycle automation for managing data as it ages.

The typical IIoT workload is a continuous stream of data from a fixed number of tags reporting at fixed intervals. At the same time, dashboards are querying that data to present to operators, and analytics engines are running in the background to optimize production. To know the limits of an IIoT system, you need to define three things: the maximum number of tags you can ingest at a fixed rate, the maximum time allowed for a set of queries, and the maximum cost of storing data.

The neat thing about TimescaleDB is that it expands all three of these boundaries at the same time, and does so without introducing a new database. Here are the results I got comparing vanilla PostgreSQL to PostgreSQL with TimescaleDB.

## Raising the Maximum Ingest Rate

Every database has a maximum ingest rate: the fastest it can reliably accept new data. Exceed it, and your IIoT system falls behind and may never recover. In my case, I got to a point in my manufacturing facility where I wanted to add more tags (there’s always more data) but couldn’t because it put too much stress on the database. If it was really important, I’d drop some other tags (or reduce their frequency), but I might’ve been giving up some valuable insight, and the worst part is I’ll never really know.

Vanilla PostgreSQL is not necessarily bad at ingesting data (~200,000 tags/s on decent hardware), but that's the limit. Once the indices are too big to fit in memory (which happens very quickly for serious IIoT workloads), every insert has to do all of its work on disk. This is an architectural problem, and a reason why many companies invest in third-party databases like [InfluxDB](https://www.influxdata.com/).

The nice thing about TimescaleDB is that it fixes the architecture issue while still being PostgreSQL. Instead of one massive table, your data is automatically partitioned into time-based chunks. Each chunk is actually a standard PostgreSQL table, but small enough that indices still fit in memory. When new data arrives, it only touches the current chunk's index, not a giant index that spans the whole range of data.
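As a sketch of what this looks like in SQL (the `sensor_data` table and its columns are hypothetical, not taken from the benchmark), converting a plain table into a hypertable is one extra call:

```sql
-- A plain PostgreSQL table for tag readings (names are illustrative)
CREATE TABLE sensor_data (
    time    TIMESTAMPTZ NOT NULL,
    tag_id  INTEGER     NOT NULL,
    value   DOUBLE PRECISION
);

-- Convert it into a hypertable partitioned on the time column.
-- chunk_time_interval controls how much data lands in each chunk;
-- size chunks so that their indices comfortably fit in memory.
SELECT create_hypertable('sensor_data', 'time',
    chunk_time_interval => INTERVAL '1 day');

CREATE INDEX ON sensor_data (tag_id, time DESC);
```

After this, `INSERT` statements work exactly as before; TimescaleDB routes each row to the right chunk automatically.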

The two charts below illustrate this perfectly. With standard PostgreSQL, the max ingest rate starts to drop around 25 million rows (only a few minutes of data at 100,000 rows per second). With TimescaleDB, this drop never happens: the initial maximum ingest rate holds indefinitely.

![](https://timescale.ghost.io/blog/content/images/2026/04/data-src-image-5bb16420-8256-43ea-b3b3-1874fff6c6ce.png)

![](https://timescale.ghost.io/blog/content/images/2026/04/data-src-image-04a8d550-983c-416b-9847-719feb846f18.png)

## Queries That Don't Slow Down With Scale

The second dimension of the performance envelope is query speed, and TimescaleDB addresses this from two directions.

First, chunk exclusion. Because data is partitioned by time, the query planner can focus only on the chunks that contain the relevant time range. For example, a query for the last 24 hours doesn't touch data from last month or last year. As in the hardware post, I measured query speed with a query for hourly averages over the past 100 hours (excluding the first 3 runs and taking the median of the remaining 7). If you look at the comparison in the chart below, the same query with hypertables is roughly 50% faster.
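To see chunk exclusion at work, a hypothetical version of that dashboard query (against the same illustrative `sensor_data` schema) can be prefixed with `EXPLAIN`; the plan will list only the chunks that overlap the time range:

```sql
-- Hourly averages over the past 100 hours (illustrative schema).
-- Because the hypertable is partitioned by time, the planner only
-- scans the handful of chunks covering the last ~4 days of data.
EXPLAIN (COSTS OFF)
SELECT time_bucket('1 hour', time) AS bucket,
       tag_id,
       avg(value) AS avg_value
FROM sensor_data
WHERE time > now() - INTERVAL '100 hours'
GROUP BY bucket, tag_id;
```

Older chunks never appear in the plan at all, which is why the query time stays flat as the table grows.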

![](https://timescale.ghost.io/blog/content/images/2026/04/data-src-image-aca38e52-66e3-445e-95b9-8462b6a19391.png)

Second, continuous aggregates. These are incrementally-updated materialized views that pre-compute your most common rollups (hourly averages, daily min/max, shift summaries) in the background. When a dashboard requests that data, it reads from the pre-computed result instead of scanning raw rows. Make sure you look at the vertical scale in the chart below. If I put it on the same chart as above, it would just look like a line at 0.
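A minimal sketch of such a rollup, assuming the same hypothetical `sensor_data` hypertable with `tag_id` and `value` columns:

```sql
-- Continuous aggregate: hourly rollups, maintained incrementally.
CREATE MATERIALIZED VIEW sensor_data_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       tag_id,
       avg(value) AS avg_value,
       min(value) AS min_value,
       max(value) AS max_value
FROM sensor_data
GROUP BY bucket, tag_id;

-- Keep it refreshed in the background.
SELECT add_continuous_aggregate_policy('sensor_data_hourly',
    start_offset      => INTERVAL '3 hours',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '30 minutes');
```

Dashboards then query `sensor_data_hourly` directly instead of scanning the raw rows.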

![](https://timescale.ghost.io/blog/content/images/2026/04/data-src-image-c8dca965-870a-4ec9-ab3e-32c075452f8f.png)

Seriously, that's 0.4 ms for continuous aggregate queries vs. 250-1,500 ms for queries against raw data. All for no perceptible change in ingest or storage capacity.

## Storing More History, Spending Less

All of this performance improvement might be for naught if the cost of storing data is prohibitive. For example, ingesting 100,000 tags per second produces almost 375 TB of data in a year. At $0.08/GB-month, that first year of data would cost a little under $150,000 to store (and the cumulative cost grows quadratically with retention time, since every month you pay for all the data accumulated so far). There's a lot less need for more ingest capacity if you can't afford the data you're currently ingesting.

TimescaleDB's native compression changes this calculation too. It converts older data from row-oriented storage into a columnar format, which allows for dramatic compression because time-series data is highly repetitive (same tag IDs, similar values, sequential timestamps). Compression ratios of 80-95% are common in IIoT workloads. 
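Enabling compression is a matter of marking the hypertable compressible and adding a policy. A sketch, again with the hypothetical `sensor_data` names:

```sql
-- Enable native compression, segmenting by tag so that each tag's
-- values are stored together and compress well (illustrative names).
ALTER TABLE sensor_data SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'tag_id',
    timescaledb.compress_orderby   = 'time DESC'
);

-- Automatically compress chunks once they are older than 7 days.
SELECT add_compression_policy('sensor_data', INTERVAL '7 days');
```

Recent chunks stay row-oriented for fast ingest; only cold chunks are converted to the columnar format.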

> 📊 Here's a case study from [Flowco](https://www.tigerdata.com/blog/how-flogistix-by-flowco-reduced-infrastructure-management-costs-by-66-with-tiger-data), where they saw 84% compression in their production database.

What this means in practice is that a terabyte of raw IIoT data compresses down to about 50-100 GB. With a 90% compression ratio, the example above would cost only around $15,000 for that first year. Now you can make use of that improved ingest capacity without overwhelming the project's fiscal capacity.
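You don't have to take the ratio on faith: TimescaleDB exposes per-hypertable compression statistics. A sketch against the hypothetical `sensor_data` table:

```sql
-- Compare on-disk size before and after compression.
SELECT pg_size_pretty(before_compression_total_bytes) AS before,
       pg_size_pretty(after_compression_total_bytes)  AS after
FROM hypertable_compression_stats('sensor_data');
```

Running this on your own workload is the quickest way to estimate what your storage bill would actually become.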

The storage performance envelope goes from "we need to start deleting old data or buy bigger hardware" to "let's record more data and discover new insights."

## A Much Bigger Performance Envelope

Unlike adding hardware, which primarily improves query performance, adding the TimescaleDB extension improves all three dimensions at once: ingest capacity, queries, and storage costs. And because TimescaleDB is a PostgreSQL extension, not a separate database, you get these benefits without changing code or tooling.

If you're just starting an IIoT proof of concept, you can define your performance envelope using TimescaleDB and be confident that the database will keep working as the project scales up. If you've already deployed the system and are nearing its performance limits, TimescaleDB could be a godsend.

Try it yourself with a [free Tiger Cloud trial](https://www.tigerdata.com/cloud) or by installing [TimescaleDB](https://docs2.tigerdata.com/docs/get-started/choose-your-path/install-timescaledb) on your self-hosted system.