
Mar 12, 2026
I was doing some validation tests for an essay about the performance envelope for an IIoT PostgreSQL database and realized that measuring a database table is not as straightforward as I assumed it would be.
The general idea was that I would insert IIoT data into a table and then measure the size and performance of the table as it grows. But how do you actually read the size of a table? What is performance? How can we quantify these values in a way that’s useful for us engineers?
Here’s what I did.
There are two key measurements that define a table's size: how many rows it has and how much disk space it occupies.
For small tables, this is straightforward:
SELECT COUNT(*) FROM <table_name>;

However, that query requires scanning every row in the table. For typical IIoT tables, like the ones I was testing, that might be billions of rows and take minutes to execute.
Instead there’s a much faster query:
SELECT reltuples::bigint AS row_count
FROM pg_class
WHERE relname = '<table_name>';

This is the row count that PostgreSQL's query planner uses. It's not guaranteed to match the actual row count exactly, because it's only refreshed when the table's statistics are updated, but it's close enough and returns almost instantly.
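If the estimate looks stale (for instance, right after a bulk load), you can refresh it yourself. ANALYZE only samples the table, so it's far cheaper than a full count:

```sql
-- Refresh planner statistics, including reltuples, for one table.
ANALYZE <table_name>;
```

Autovacuum normally does this for you in the background, but on a table you're actively bulk-loading it can lag behind.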
PostgreSQL stores table data across several components: the heap (the main table data), the indexes, and TOAST storage (where large values get stashed). All three contribute to the table size and overall storage requirements.

Here’s the query I used to get the three separate components.
SELECT pg_relation_size('<table_name>') AS heap_size,
pg_indexes_size('<table_name>') AS indexes_size,
pg_table_size('<table_name>')
- pg_relation_size('<table_name>') AS toast_size;

This returns the sizes in bytes, but you can also wrap each value in pg_size_pretty() to get more human-readable output.
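For example, here's the same query with pg_size_pretty() applied, plus a grand total from pg_total_relation_size() (the column aliases are my own):

```sql
SELECT pg_size_pretty(pg_relation_size('<table_name>')) AS heap_size,
       pg_size_pretty(pg_indexes_size('<table_name>')) AS indexes_size,
       pg_size_pretty(pg_table_size('<table_name>')
                      - pg_relation_size('<table_name>')) AS toast_size,
       pg_size_pretty(pg_total_relation_size('<table_name>')) AS total_size;
```

This returns values like "742 MB" instead of raw byte counts, which is easier to eyeball when you're checking growth by hand.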
Ingest capacity is critical to IIoT workflows, and it’s where a lot of systems run into serious trouble. How do you measure capacity? You can either get an approximate value from current ingest speeds, or push your database to the limit.
If your data stream is already connected, you can look at how long ingests are taking and figure out the capacity from that.
This uses the extension pg_stat_statements, which I consider essential for any serious database. It ships with PostgreSQL's contrib modules, but it isn't always active: you first need to add pg_stat_statements to shared_preload_libraries in postgresql.conf and restart the server. Then enable it with:

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

Once it's running, it exposes a view called pg_stat_statements that you can query for your INSERT performance:
SELECT
query,
calls,
rows,
total_exec_time / 1000 AS total_time_sec,
mean_exec_time AS avg_ms_per_call,
rows / NULLIF(calls, 0) AS avg_rows_per_call,
rows / NULLIF(total_exec_time / 1000, 0) AS rows_per_sec
FROM pg_stat_statements
WHERE query ILIKE '%INSERT%<table_name>%'
ORDER BY total_exec_time DESC;

This gives you a picture of real ingest performance based on what your application is actually doing. You'll see how many rows each call inserts (you are batching, right?), the average time per call, and a rough rows-per-second figure. Dividing the mean time per batch by the interval between batches gives a rough estimate of how much of your ingest capacity you're using.
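That last calculation is easy to get backwards, so here's a minimal Python sketch of it (the batch figures are hypothetical, not from my tests):

```python
def ingest_utilization(mean_batch_ms: float, batch_interval_ms: float) -> float:
    """Fraction of ingest capacity in use: the time spent inserting one
    batch divided by the interval between batches."""
    return mean_batch_ms / batch_interval_ms

# Hypothetical figures: each batch takes 40 ms and batches arrive once per second.
util = ingest_utilization(mean_batch_ms=40.0, batch_interval_ms=1000.0)
print(f"{util:.0%} of ingest capacity used")  # prints: 4% of ingest capacity used
```

At 4% you have plenty of headroom; as that ratio creeps toward 1.0, inserts start queuing faster than the database can absorb them.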
You can reset the stats whenever you want for a fresh baseline:
SELECT pg_stat_statements_reset();

By measuring this as your table grows, you'll get a good sense of how your ingest capacity is evolving, and you'll be able to react well before it becomes an issue.
If you don’t mind really pushing your table to its limits (and maybe breaking it), you can try to ingest as much as possible and see if the database keeps up. I wrote a full walkthrough for this, including the SQL for generating realistic IIoT data and a scripted test loop, in How to Break Your PostgreSQL IIoT Database and Learn Something in the Process.
Query speed is the most obvious metric for a database, as it affects everyone using the data. However, I found it to be one of the most difficult to generalize. Every application has specific queries that matter and its own definition of "fast enough". It's also something that tends to degrade gradually and only becomes an issue well into the life of the table.
If you already have dashboards running or an analysis workflow in place, you can again use pg_stat_statements. Here's how to pull the 20 slowest queries:
SELECT
query,
calls,
rows,
mean_exec_time AS avg_ms,
total_exec_time / 1000 AS total_time_sec,
stddev_exec_time AS stddev_ms,
rows / NULLIF(calls, 0) AS avg_rows_returned
FROM pg_stat_statements
WHERE query ILIKE '%SELECT%<table_name>%'
ORDER BY total_exec_time DESC
LIMIT 20;

IIoT queries tend to fall into two categories: wide (what is the state of all devices at a specific time?) and deep (what is the history of a particular device?). By running at least one example of each type, you'll get a sense of how quickly these queries return.
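For a single query you care about, you can also time it directly with EXPLAIN ANALYZE, which actually executes the query and reports planning and execution times per plan node. A sketch, using a deep-style lookup as the example:

```sql
-- Runs the query for real and reports actual timings and buffer usage.
EXPLAIN (ANALYZE, BUFFERS)
SELECT time, value
FROM <table_name>
WHERE tag_id = <specific tag_id>
ORDER BY time DESC
LIMIT 100;
```

Beyond the raw timing, the plan tells you why a query is slow, for example whether it's using an index or falling back to a sequential scan.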
Generic Wide Query
SELECT DISTINCT ON (tag_id)
tag_id,
time,
value
FROM <table_name>
ORDER BY tag_id, time DESC
LIMIT 100;

This returns the most recent value for each of up to 100 tags.
Generic Deep Query
SELECT
tag_id,
DATE_TRUNC('hour', time) AS hour,
AVG(value) AS hourly_average
FROM <table_name>
WHERE tag_id = <specific tag_id>
GROUP BY tag_id, DATE_TRUNC('hour', time)
ORDER BY hour DESC
LIMIT 100;

This returns the 100 most recent hourly averages for one specific tag.
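One note from my testing: both query shapes benefit from a composite index on (tag_id, time DESC); without one, they tend to degrade into sequential scans as the table grows. A sketch (the index name is my own):

```sql
-- Supports both the wide (latest value per tag) and deep (history of one tag) patterns.
CREATE INDEX IF NOT EXISTS idx_tag_time ON <table_name> (tag_id, time DESC);
```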
The real value comes from combining these measurements as the table grows. For my essay, I re-ran the size, ingest, and query-speed measurements at regular checkpoints as the table filled.
If I were instead running a real production system, I would rely more on pg_stat_statements to track query and ingest rates. Checking daily while the system is new, and weekly after that, will ensure you know exactly how your table is evolving.
