---
title: "Ignition and TimescaleDB: the Perfect Pairing"
published: 2026-04-17T12:41:35.000-04:00
updated: 2026-04-17T12:42:52.000-04:00
excerpt: "Ignition's SQL Historian stores tag data in Postgres, which isn't built for time-series workloads. TimescaleDB fixes that in the storage engine."
tags: Time Series Data, Integrations
authors: Doug Pagnutti
---

> **TimescaleDB is now Tiger Data.**

Ignition's SQL Historian is already built to store tag history in PostgreSQL. However, vanilla PostgreSQL is not built for time-series workloads, so the team at Inductive Automation had to create a workaround, one that makes direct queries difficult and still leaves performance on the table. TimescaleDB is a PostgreSQL extension that works under the hood to make PostgreSQL one of the fastest time-series databases. Put them together and you have a match made in Industry 4.0 heaven.

This post covers why TimescaleDB addresses the shortcomings of Ignition’s default historian and gives an overview of how to combine the two. If you want the full walkthrough (creating the database connection in Ignition, configuring the SQL Historian, and optimizing TimescaleDB), the integration guide is linked at the end.

## The classic manufacturing stack

The standard industrial data pattern has been around for decades, and Ignition is right in the middle of it:

**PLC / RTU → SCADA / Historian → Database**

PLCs and sensors generate continuous streams of tag data. Ignition's SQL Historian captures that data and writes it to a relational database, where it becomes available for trend analysis, dashboards, and compliance reporting.

The main historian table that Ignition creates (called sqlth\_1\_data) is essentially a time-series table. Every row has a timestamp (t\_stamp, stored as milliseconds since epoch) and a tag ID. A deployment running 10,000 tags at one-second intervals produces around 864 million rows per day. To avoid massive tables whose indexes take forever to update and query, Ignition defines a partition time and creates more tables with the same structure (i.e. sqlth\_2\_data, sqlth\_3\_data, and so on). So now you're stuck with a tradeoff: either you define short partition times to keep indexes small and ingest fast, and end up with a pile of tables that makes queries a challenge, or you define long partition times to keep queries simple, and watch indexes bloat and ingest slow down.
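For orientation, the data table looks roughly like this. This is a sketch based on a typical deployment; exact column types vary by Ignition version and database flavor, so check your own schema:

```SQL
-- Approximate shape of Ignition's historian data table (verify against your install)
CREATE TABLE sqlth_1_data (
    tagid         INTEGER NOT NULL,   -- references the tag definition table
    intvalue      BIGINT,             -- populated for integer tags
    floatvalue    DOUBLE PRECISION,   -- populated for float tags
    stringvalue   VARCHAR(255),       -- populated for string tags
    datevalue     TIMESTAMP,          -- populated for datetime tags
    dataintegrity INTEGER,            -- quality code for the sample
    t_stamp       BIGINT NOT NULL     -- milliseconds since epoch
);
```

Narrow rows, one value per sample, append-only: a textbook time-series table.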

The real problem is that vanilla PostgreSQL is not built for that.

## What actually goes wrong

Most relational databases were built around row-oriented storage because their original purpose was transactional work — the kind where you insert, update, or fetch a single record at a time. Storing all of a row's columns contiguously on disk means one read pulls the entire record into memory, which is ideal when you're booking a flight, updating an order, or looking up a user's profile. B-trees, page-level locking, MVCC, and write-ahead logging all map cleanly onto this layout, and for OLTP workloads it's the reason Postgres and its peers are fast. IIoT workloads are different. PLCs and other devices generate enormous volumes of narrow, append-only time-series data, and the queries on that data ("what was the tank level for the past day?") touch a small number of columns across a huge range of rows.
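As a concrete example, that "tank level for the past day" trend query only ever reads two columns of the historian table (the tag ID here is hypothetical; look yours up in the tag definition table):

```SQL
-- Last 24 hours of samples for one analog tag.
-- t_stamp is milliseconds since epoch, so convert now() accordingly.
SELECT t_stamp, floatvalue
FROM sqlth_1_data
WHERE tagid = 42   -- hypothetical tag ID
  AND t_stamp >= (EXTRACT(EPOCH FROM now() - INTERVAL '1 day') * 1000)::bigint
ORDER BY t_stamp;
```

A row store has to pull every column of every matching row off disk to answer this; a columnar layout reads only the two columns the query actually needs.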

The Ignition historian mitigates some of this by restructuring data into a column-like format, but it does so above the storage engine, which adds overhead and a much more complex set of schemas.

TimescaleDB, on the other hand, solves this natively inside PostgreSQL. It does this by automatically partitioning a table into chunks along a time dimension, then letting each chunk adopt the storage layout that fits its age and access pattern. Recent chunks stay in the familiar row-based heap so ingest stays fast and transactional semantics are preserved, but older chunks can be converted to columnar compression. Queries see a single logical table and the planner automatically prunes chunks by time and reads only the columns a query actually needs. You get the transactional guarantees and SQL surface of PostgreSQL for the fast data and scan efficiency of a time-series specific tool.
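To make the "single logical table" point concrete: once the historian table is a hypertable (the conversion is covered below), queries don't change at all. A downsampled trend can use TimescaleDB's `time_bucket`, and the planner prunes chunks by `t_stamp` automatically (tag ID again hypothetical):

```SQL
-- One-minute averages over the past day; only the relevant chunks are scanned.
SELECT time_bucket(60000, t_stamp) AS minute_bucket,  -- bucket width in ms, matching t_stamp
       avg(floatvalue)             AS avg_value
FROM sqlth_1_data
WHERE tagid = 42   -- hypothetical tag ID
  AND t_stamp >= (EXTRACT(EPOCH FROM now() - INTERVAL '1 day') * 1000)::bigint
GROUP BY minute_bucket
ORDER BY minute_bucket;
```

Note the integer form of `time_bucket` here, since `t_stamp` is a plain BIGINT rather than a timestamp column.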

And the best part: how easy all of this is to set up.

## Setting it up

The setup breaks down into three steps. Connect Ignition to a [Tiger Cloud service](https://console.cloud.tigerdata.com) (or a self-hosted TimescaleDB instance) using the built-in PostgreSQL driver, set up a SQL Historian pointing to that connection, and then optimize the TimescaleDB table.

The ideal time to do all of this is before you start writing data. Converting an existing historian table works (more on that below), but on a large table that conversion takes time and locks the table while it runs.

### First: Connect 

Adding a connection to a TimescaleDB database is exactly the same as adding a connection to a PostgreSQL database. TimescaleDB isn’t just like PostgreSQL, it **is** PostgreSQL.

If you already have a PostgreSQL database connected, just make sure you install the [TimescaleDB extension](https://www.tigerdata.com/docs/get-started/choose-your-path/install-timescaledb).
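You can check whether the extension is present (and enable it if not) with a couple of statements; on Tiger Cloud it ships preinstalled:

```SQL
-- Enable the extension in the current database (no-op if already enabled)
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- Confirm it's there and see which version is running
SELECT extname, extversion FROM pg_extension WHERE extname = 'timescaledb';
```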

### Second: Set up the historian and disable partitioning

Don’t forget to disable the partitioning, or it'll cause problems. TimescaleDB handles partitioning internally, so having two competing partition schemes on the same data is going to be a mess.
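If you want to confirm Ignition isn't still spawning partition tables, a quick catalog query lists every historian data table; with partitioning disabled you should see exactly one:

```SQL
-- List Ignition historian data tables; expect only sqlth_1_data
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
  AND table_name LIKE 'sqlth\_%\_data';   -- backslash escapes the literal underscores
```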

### Third: Convert the table to a [hypertable](https://www.tigerdata.com/learn/is-postgres-partitioning-really-that-hard-introducing-hypertables)

Once Ignition has created the historian table, run this:

```SQL
SELECT create_hypertable(
   'sqlth_1_data', 't_stamp',
   migrate_data => true,
   chunk_time_interval => 86400000
);
```

The chunk interval is in milliseconds because Ignition stores timestamps that way (`t_stamp` is milliseconds since epoch). At 10,000 tags logging once per minute, a day-sized chunk (86,400,000 ms) holds around 14 million rows, which is a reasonable starting point. If you're running at higher tag density, reduce the chunk size so each chunk stays in the 1-20 million row range. The goal is chunks small enough that a time-range query only has to touch a handful of them.
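After a few days of data you can sanity-check the sizing with TimescaleDB's information views (the `_integer` range columns apply because `t_stamp` is an integer time dimension):

```SQL
-- List chunks with their time ranges (epoch milliseconds)
SELECT chunk_name, range_start_integer, range_end_integer
FROM timescaledb_information.chunks
WHERE hypertable_name = 'sqlth_1_data'
ORDER BY range_start_integer;

-- Per-chunk on-disk sizes
SELECT chunk_name, pg_size_pretty(total_bytes) AS total_size
FROM chunks_detailed_size('sqlth_1_data');
```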

**Optional: add compression and a retention policy.**

To compress chunks after a week and drop data after a year, run:

```SQL
-- Policies on an integer time column need a function that maps "now"
-- into that column's units (here, epoch milliseconds).
CREATE OR REPLACE FUNCTION unix_now_ms() RETURNS BIGINT
LANGUAGE SQL STABLE AS $$
  SELECT (EXTRACT(EPOCH FROM now()) * 1000)::BIGINT
$$;
SELECT set_integer_now_func('sqlth_1_data', 'unix_now_ms');

ALTER TABLE sqlth_1_data SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'tagid',
  timescaledb.compress_orderby = 't_stamp DESC'
);

-- Thresholds are in milliseconds to match t_stamp
SELECT add_compression_policy('sqlth_1_data', compress_after => BIGINT '604800000');  -- 7 days
SELECT add_retention_policy('sqlth_1_data', drop_after => BIGINT '31536000000');      -- 1 year
```

Segmenting by `tagid` is efficient because most historian queries filter or group on tag. Ordering by `t_stamp DESC` keeps recent-data access fast within each segment. Set the retention interval to match your operational or regulatory requirements (or honestly, with compression it might be feasible to keep it all).
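Once the compression policy has run, you can see how much space it's actually saving:

```SQL
-- Before/after on-disk sizes for compressed chunks
SELECT pg_size_pretty(before_compression_total_bytes) AS before,
       pg_size_pretty(after_compression_total_bytes)  AS after
FROM hypertable_compression_stats('sqlth_1_data');
```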

## The full integration guide

This post covers the reasoning and the key steps for using TimescaleDB with Ignition. The step-by-step integration guide covers the full process in greater detail.

[Read the Ignition + Tiger Cloud integration guide →](https://www.tigerdata.com/docs/integrate/data-ingestion-streaming/ignition)

Got questions, or want to talk through your specific setup? Drop a comment below or find me in the [Tiger Data community](https://www.tigerdata.com/community).