How to Measure Your IIoT PostgreSQL Table

By Doug Pagnutti · 5 min read · Mar 12, 2026

Tags: IoT, PostgreSQL

Table of contents

01 Table Size
02 Ingest Capacity
03 Query Speed
04 Putting It All Together


I was doing some validation tests for an essay about the performance envelope for an IIoT PostgreSQL database and realized that measuring a database table is not as straightforward as I assumed it would be. 

The general idea was that I would insert IIoT data into a table and then measure the size and performance of the table as it grows. But how do you actually read the size of a table? What is performance? How can we quantify these values in a way that’s useful for us engineers?

Here’s what I did.

Table Size

There are two key measurements that define a table's size: how many rows it has and how much disk space it occupies.

Row Count

For small tables, this is straightforward:

SELECT COUNT(*) FROM <table_name>

However, that query requires scanning every row in the table. For typical IIoT tables, like the ones I was testing, that can mean billions of rows and a query that takes minutes to execute.

Instead there’s a much faster query:

SELECT reltuples::bigint AS row_count
FROM pg_class
WHERE relname = '<table_name>';

This is the row count that PostgreSQL uses for the query planner. It’s not guaranteed to match the row count exactly, because it’s not continuously updated, but it’s close enough and returns almost instantly.

⚠️
For hypertables created with TimescaleDB, reltuples is unreliable. Use approximate_row_count() instead.
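Both estimates come from the planner statistics, so running ANALYZE first refreshes them. A minimal sketch, assuming a hypertable named sensor_data (a placeholder name, not from a real schema):

```sql
-- Refresh planner statistics so the estimate is current
ANALYZE sensor_data;

-- TimescaleDB's fast estimated count for hypertables
SELECT approximate_row_count('sensor_data');
```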

Size on Disk

PostgreSQL stores table data across several components: the heap (the main table data), the indices, and TOAST storage (where large values get stashed). All three contribute to the table size and overall storage requirements as shown in the following image. 

[Figure: heap, index, and TOAST storage components of a PostgreSQL table]

Here’s the query I used to get the three separate components.

SELECT pg_relation_size('<table_name>') AS heap_size,
       pg_indexes_size('<table_name>') AS indexes_size,
       pg_table_size('<table_name>')
         - pg_relation_size('<table_name>') AS toast_size;

This returns the sizes in bytes; wrap each value in pg_size_pretty() for more human-readable output.
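As a sketch, here is the same query with each size passed through pg_size_pretty() (the table name is still a placeholder):

```sql
SELECT pg_size_pretty(pg_relation_size('<table_name>')) AS heap_size,
       pg_size_pretty(pg_indexes_size('<table_name>')) AS indexes_size,
       pg_size_pretty(pg_table_size('<table_name>')
         - pg_relation_size('<table_name>')) AS toast_size;
```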

Ingest Capacity

Ingest capacity is critical to IIoT workflows, and it’s where a lot of systems run into serious trouble. How do you measure capacity? You can either get an approximate value from current ingest speeds, or push your database to the limit.

If You Already Have a Data Source Connected

If your data stream is already connected, you can look at how long ingests are taking and figure out the capacity from that.

This uses the pg_stat_statements extension, which is essential for any serious database. It ships with PostgreSQL, but the server must have it listed in shared_preload_libraries (which requires a restart) before you can enable it with the following query:

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

Once it's running, it exposes a view called pg_stat_statements that you can query for your INSERT performance:

SELECT 
    query,
    calls,
    rows,
    total_exec_time / 1000 AS total_time_sec,
    mean_exec_time AS avg_ms_per_call,
    rows / NULLIF(calls, 0) AS avg_rows_per_call,
    rows / NULLIF(total_exec_time / 1000, 0) AS rows_per_sec
FROM pg_stat_statements
WHERE query ILIKE '%INSERT%<table_name>%'
ORDER BY total_exec_time DESC;

This gives you a picture of real ingest performance based on what your application is actually doing. You'll see how many rows each call inserts (obviously you're batching), the average time per call, and a rough rows-per-second figure. Dividing the time a batch takes by the interval between batches gives a rough estimate of how much of your ingest capacity you're using. For example, if each batch takes 200 ms and you insert one batch per second, you're at roughly 20% of capacity.

You can reset the stats whenever you want for a fresh baseline:

SELECT pg_stat_statements_reset();

By measuring this as your table grows, you’ll get a good sense of how your ingest capacity is evolving and you’ll be able to deal with it well before it becomes an issue.

The actual ingest capacity

If you don’t mind really pushing your table to its limits (and maybe breaking it), you can try to ingest as much as possible and see if the database keeps up. I wrote a full walkthrough for this, including the SQL for generating realistic IIoT data and a scripted test loop, in How to Break Your PostgreSQL IIoT Database and Learn Something in the Process. 

Query Speed

Query speed is the most obvious metric for a database, as it affects everyone using the data. However, I found it to be one of the most difficult to generalize. Every application has specific queries that matter, and its own definition of 'fast enough'. It's also something that tends to degrade slowly, only becoming an issue well into the life of the table.

For queries you’re already running

If you already have dashboards running, or your analysis workflow in place, you can again use pg_stat_statements. Here's how to pull information for the 20 slowest queries:

SELECT
    query,
    calls,
    rows,
    mean_exec_time AS avg_ms,
    total_exec_time / 1000 AS total_time_sec,
    stddev_exec_time AS stddev_ms,
    rows / NULLIF(calls, 0) AS avg_rows_returned
FROM pg_stat_statements
WHERE query ILIKE '%SELECT%<table_name>%'
ORDER BY total_exec_time DESC
LIMIT 20;

For more general queries

IIoT queries tend to fall into two categories: wide (what is the state of all devices at a specific time?) and deep (what is the history of a particular device?). By running at least one example from each type, you’ll get a sense of how quickly these types of queries will return.

Generic Wide Query

SELECT DISTINCT ON (tag_id) 
  tag_id, 
  time, 
  value
FROM <table_name>
ORDER BY tag_id, time DESC
LIMIT 100

This returns the most recent value for each of up to 100 tags.

Generic Deep Query

SELECT
    tag_id,
    DATE_TRUNC('hour', time) AS hour,
    AVG(value) AS hourly_average
FROM <table_name>
WHERE tag_id = <specific tag_id>
GROUP BY tag_id, DATE_TRUNC('hour', time)
ORDER BY hour DESC
LIMIT 100

This returns the past 100 hourly averages from one specific tag.

💡
It's important to run these queries multiple times to get a robust measurement. PostgreSQL caches recently read pages in shared buffers (among other optimizations), so a query is likely to run faster after a few executions.
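To time a single run precisely, and to see how much of the work was served from cache, you can wrap a query in EXPLAIN (ANALYZE, BUFFERS). A sketch using the wide query from above (table name is a placeholder):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT ON (tag_id)
  tag_id,
  time,
  value
FROM <table_name>
ORDER BY tag_id, time DESC
LIMIT 100;
```

The plan output reports actual execution time plus shared-buffer hits versus disk reads, which tells you whether a fast result came from the cache or from storage.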

Putting It All Together

The real value comes from combining these measurements as the table grows. Here's the general approach I followed for my essay:

  1. Create a simple IIoT table schema and common index.
  2. Measure table size (rows + disk space), query times, and ingest time for a couple standard batches.
  3. Insert many batches as fast as possible so the table grows quickly.
  4. Repeat steps 2 and 3 until some predefined limit (usually disk space or query time) is reached.
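Step 3 can be sketched with generate_series(). This is a minimal example under assumed names: a table sensor_data(time, tag_id, value) with 1,000 tags, inserting one reading per tag for each of the last 60 seconds (60,000 rows per batch):

```sql
-- Insert one synthetic batch of IIoT-style readings
INSERT INTO sensor_data (time, tag_id, value)
SELECT now() - (s * interval '1 second'),
       tag_id,
       random() * 100
FROM generate_series(0, 59) AS s,
     generate_series(1, 1000) AS tag_id;
```

Looping this from a script (psql in a shell loop works fine) grows the table quickly between measurement rounds.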

If I were instead measuring a real production system, I would rely more on pg_stat_statements to track query and ingest rates: a daily check while the system is new, then a weekly one, will ensure you know exactly how your table is evolving.

Related posts

How TimescaleDB Expands the PostgreSQL IIoT Performance Envelope (Apr 10, 2026)
Benchmark data showing how TimescaleDB expands PostgreSQL ingest capacity, query speed, and storage efficiency for IIoT workloads at scale.

Upgrading the IIoT Performance Envelope: How Hardware Affects IIoT Workloads (Apr 06, 2026)
Hardware upgrades help IIoT query speeds but barely move ingest capacity. The bottleneck is I/O, not compute. Here's the data to prove it.
