---
title: "Upgrading the IIoT Performance Envelope: How Hardware Affects IIoT Workloads"
published: 2026-04-06T08:16:09.000-04:00
updated: 2026-04-06T08:16:09.000-04:00
excerpt: "Hardware upgrades help IIoT query speeds but barely move ingest capacity. The bottleneck is I/O, not compute. Here's the data to prove it."
tags: PostgreSQL Performance, IoT
authors: Doug Pagnutti
---

> **TimescaleDB is now Tiger Data.**

In a [previous post](https://www.tigerdata.com/blog/the-iiot-postgresql-performance-envelope), I laid out the concept of the IIoT performance envelope: the three constraints (storage, ingest rate, and query speed) that define whether or not your database is suitable for your workload. 

I mentioned in that post that one way to expand the performance envelope is with more hardware. For example, if your database is struggling under 10,000 tags/s, does doubling the RAM or adding more CPU cores buy you proportionally more capacity? Can you just buy your way out of trouble? The answer is … sometimes. Ingest capacity is mostly bound by I/O, so more RAM and CPUs won’t have a huge effect. Queries, on the other hand, can benefit greatly from added compute, and it’s usually worth the expense.

I used Docker Compose and some custom Python code to run a series of tests and see how the performance envelope changes with different hardware configurations. The general approach I used can be found [here](https://www.tigerdata.com/blog/measure-your-iiot-postgresql-table). More specifically, I tested ingest capacity by inserting a batch of data, calculating the theoretical ingest capacity per second, and then inserting 70% of that theoretical maximum in the next batch. For query speeds, I measured the time it takes to return the past 100 hourly averages for a single tag on a year’s worth of data.
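My exact test harness isn’t reproduced here, but the ingest-throttling step can be sketched in a few lines of Python. This is illustrative only: `insert_batch` is a hypothetical callback standing in for the actual bulk insert (in my tests, a batched INSERT against the test table), and the 70% factor matches the approach described above.

```python
import time


def run_ingest_test(insert_batch, initial_rows=100_000, fraction=0.7, batches=10):
    """Estimate sustained ingest capacity.

    insert_batch(n) must insert n rows; it is supplied by the caller.
    Each round times a batch, derives the theoretical rows/s from it,
    and sizes the next batch at `fraction` of that theoretical maximum,
    so the test converges on a sustainable rate instead of flooding.
    """
    rows = initial_rows
    history = []
    for _ in range(batches):
        start = time.perf_counter()
        insert_batch(rows)
        elapsed = time.perf_counter() - start
        theoretical = rows / elapsed                 # rows/s if inserting back-to-back
        history.append(theoretical)
        rows = max(1, int(theoretical * fraction))   # next batch: 70% of theoretical max
    return history
```

Watching `history` over many batches is what reveals the steady state: the early readings reflect RAM-resident indices, and the later ones reflect disk-bound reality.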

## Ingest Capacity

Ingest capacity tends to be a make-or-break factor for IIoT workloads. Data arrives continuously and either the system can handle it or it can’t.

The first few batches made RAM seem very important. The system with 2G of RAM immediately dropped from 250,000 tags/s to 175,000 tags/s, whereas the system with 8G of RAM seemed to keep running at 250,000 tags/s. A few minutes later, though, the high-RAM system dropped down to 233,000 tags/s and oscillated around there for the rest of the test.

What’s happening is that the indices are initially kept entirely in RAM. This allows for much faster updates as each batch is inserted. However, whether it’s 2G of RAM or 32G of RAM, eventually the index doesn’t fit anymore and it must be read from disk. At that point, disk access is the limiting factor, and while having lots of spare RAM helps (remember, there are other things going on in the database), the effect is less dramatic.

Increasing the CPU count didn’t help much either. Updating indices is a serial process, so there isn’t much opportunity to parallelize the work across multiple processors. And as with RAM, most of the time is spent reading from and writing to disk, so I/O is really the bottleneck.

|  | 1 vCPU | 2 vCPU | 4 vCPU |
| --- | --- | --- | --- |
| 2G of RAM | 176 ktags/s | 198 ktags/s | 181 ktags/s |
| 4G of RAM | 223 ktags/s | 228 ktags/s | 232 ktags/s |
| 8G of RAM | 232 ktags/s | 232 ktags/s | 233 ktags/s |

Fig. 1: Maximum ingest capacity (after reaching a steady state)

> ⚠️ In the tests with 2G of RAM, going from 2 to 4 vCPUs actually decreased performance. This is likely an example of [thrashing](https://en.wikipedia.org/wiki/Thrashing_\(computer_science\)) and a reason why it's generally recommended to increase RAM along with CPUs.


## Query Speed

Hardware doesn’t have a huge impact on ingest capacity, but it’s the opposite for query speeds.

RAM helps query speed because PostgreSQL loads data into memory before processing it. More RAM means more of the data stays in memory, so it’s faster to access the next time. This is one reason why it’s important to run a query multiple times to get a good estimate of its execution time under real-world conditions. Beyond the data itself, PostgreSQL also builds a hash table for aggregations (a common strategy for GROUP BY queries). If there’s enough RAM to keep all of this in memory, it won’t have to spill to disk and use up more I/O resources.
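To make the hash-aggregation point concrete, here’s a toy Python analogue of what a HashAggregate does for an hourly-average query: one hash-table entry per hour, holding a running sum and count. The function name and data shape are my own for illustration, not PostgreSQL internals; the key idea is that the whole table of partial states wants to live in memory, and if it outgrows the memory budget (work_mem in PostgreSQL), it spills to disk and costs extra I/O.

```python
from collections import defaultdict
from datetime import datetime


def hourly_averages(readings):
    """Average (timestamp, value) readings per hour, the way a hash
    aggregate would for GROUP BY date_trunc('hour', ts)."""
    acc = defaultdict(lambda: [0.0, 0])   # hour -> [running sum, count]
    for ts, value in readings:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        acc[hour][0] += value
        acc[hour][1] += 1
    # Finalize: turn each (sum, count) state into an average.
    return {hour: s / n for hour, (s, n) in sorted(acc.items())}
```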

CPU count also directly speeds up aggregate queries because PostgreSQL can parallelize both the scan and the aggregation itself. With multiple CPUs, PostgreSQL will split the table into chunks and assign each chunk to a worker process running on a separate core. Each worker then computes a partial aggregate and they’re combined at the end. A GROUP BY aggregation over a billion-row table that takes 120 seconds on a single core can complete in 15-20 seconds across 8 workers. The speedup is genuine here because aggregate queries are actually CPU-bound once data is cached; workers spend their time hashing, sorting, and accumulating rather than waiting on I/O.
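The partial-aggregate scheme can be sketched in plain Python. This is a conceptual model, not PostgreSQL code: each "worker" computes a (sum, count) pair over its chunk of the data, and a finalize step merges those states, mirroring the Partial Aggregate and Finalize Aggregate nodes in a parallel plan. For simplicity the chunks are processed sequentially here; real parallelism would hand each chunk to a separate process.

```python
def partial_avg(chunk):
    """Per-worker partial aggregate state: (sum, count)."""
    return sum(chunk), len(chunk)


def combine(partials):
    """Finalize step: merge the workers' partial states into an average."""
    total, count = 0.0, 0
    for s, n in partials:
        total += s
        count += n
    return total / count


def parallel_mean(values, workers=4):
    # Split the "table" into roughly equal chunks, one per worker,
    # mirroring how parallel workers each scan a share of the heap.
    chunks = [values[i::workers] for i in range(workers)]
    return combine(partial_avg(c) for c in chunks if c)
```

The reason this scales is exactly what the paragraph above describes: once the data is cached, each worker is doing arithmetic, not waiting on the disk.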

If you have enough RAM and CPUs, you can perform complicated aggregate queries without ever having to read from disk. 

|  | 1 vCPU | 2 vCPU | 4 vCPU |
| --- | --- | --- | --- |
| 2G of RAM | 5715ms | 3152ms | 3049ms |
| 4G of RAM | 4166ms | 1729ms | 1225ms |
| 8G of RAM | 3967ms | 1640ms | 1151ms |

Fig. 2: Average query speed for the past 100 hourly averages on a year’s worth of data.

## Cost

Database storage, and by extension storage cost, doesn’t change based on how much RAM or how many CPUs are used. However, it’s worth discussing how much the additional RAM and CPU might affect the total cost of an IIoT project.

Here’s a sample of compute costs for [Amazon RDS](https://aws.amazon.com/rds/pricing/) (as of the time of publishing). It gives a rough idea of how much it costs to upgrade your hardware.

| Amazon Instance Name | RAM (GB) | vCPU | Annual Cost (USD) |
| --- | --- | --- | --- |
| db.t3.micro | 1 | 2 | $158 |
| db.t3.small | 2 | 2 | $315 |
| db.t3.medium | 4 | 2 | $631 |
| db.t3.large | 8 | 2 | $1270 |
| db.t3.xlarge | 16 | 4 | $2540 |
| db.t3.2xlarge | 32 | 8 | $5072 |

Compared to the cost of storage, compute is inexpensive. For example, an IIoT system that’s generating 100,000 rows per second will produce about 375 TB of data in a year. At $0.08/GB-month, that’s around $360,000 for a year’s worth of storage (which is why [compression](https://www.tigerdata.com/compression) is essential).
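Here’s the arithmetic behind that estimate. The ~119 bytes per stored row is an assumed average width (data plus index overhead); actual row width depends on your schema and whether compression is in play.

```python
ROWS_PER_SEC = 100_000
SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000
BYTES_PER_ROW = 119                  # assumed: data + index overhead per row
PRICE_PER_GB_MONTH = 0.08            # USD, sample cloud block-storage rate

rows_per_year = ROWS_PER_SEC * SECONDS_PER_YEAR          # ~3.15 trillion rows
tb_per_year = rows_per_year * BYTES_PER_ROW / 1e12       # ~375 TB
annual_storage_cost = tb_per_year * 1000 * PRICE_PER_GB_MONTH * 12

print(f"{tb_per_year:.0f} TB/year, ~${annual_storage_cost:,.0f}/year")
# → 375 TB/year, ~$360,267/year
```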

## Conclusion

Hardware upgrades can expand the IIoT performance envelope, but the gains are uneven. For ingest, throwing RAM and CPU at the problem yields diminishing returns quickly. Once indices spill to disk, I/O becomes the bottleneck and no amount of extra cores or memory changes that fundamentally. For queries, it’s different: both RAM and CPU deliver real, compounding speedups.

So what part of the performance envelope are you hitting? If it's ingest capacity, then increasing hardware isn’t likely to save you. Instead you’ll need to look at other tools that are more specifically built for time-series workloads. I’m very biased, but I think [TimescaleDB](https://www.tigerdata.com/) is the perfect next step. If your queries are too slow (despite your best optimizations), then scaling up the compute is likely a legitimate, cost-effective strategy.

I had this exact experience when I was working in manufacturing. Buying a new server for our plant bought me an extra few years of snappy dashboards and analytics. However, I continued to struggle to add tags, eventually having to remove one to make room for another.