TimescaleDB for Manufacturing IoT: Optimizing for High-Volume Production Data

IoT · TimescaleDB

By NanoHertz Solutions - Jake Hertz

6 min read

Jan 28, 2026

Table of contents

01 Example: Machine Vibration Telemetry
02 Real Operational Questions
03 Step 1: Convert the Table to a Hypertable
04 Step 2: Add a Composite Index for Faster Filters
05 Step 3: Tune Chunk Intervals for High-Frequency Data
06 Step 4: Continuous Aggregates for Feature Extraction
07 Step 5: Compress Historical Vibration Data
08 Overall Optimization Results
09 Conclusion

In industrial environments with hundreds of machines streaming sensor signals every second, optimizing your database for analytics is critical. In a follow-up to our initial tutorial on building a data pipeline, we’ll take you step-by-step from slow, unoptimized queries to highly performant analytical queries using TimescaleDB’s advanced features.

We’ll use the example of high-frequency machine vibration telemetry to show you how to tune TimescaleDB to improve query performance, measured using EXPLAIN ANALYZE. We will be using the same service we created in the previous tutorial, which has a compute size of 8 CPU / 32 GiB memory and is hosted in AWS US East (Ohio). All the performance metrics you will see in this tutorial are specific to this service instance and data set, so your results may differ.

Example: Machine Vibration Telemetry

Imagine you’re collecting vibration data from 50 rotating machines at 10 samples per second, 24/7 for 6 months. At that rate, the raw table quickly grows to roughly 7.8 billion rows—a perfect example of a workload that rewards TimescaleDB optimizations.

Our initial table schema looks like this:

CREATE TABLE vibration_readings (
  time TIMESTAMPTZ NOT NULL,
  machine_id TEXT NOT NULL,
  vibration_rms DOUBLE PRECISION NOT NULL
);
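If you’re following along on a fresh self-managed Postgres instance rather than the Tiger Cloud service from the previous tutorial, make sure the TimescaleDB extension is enabled before continuing:

-- TimescaleDB comes preinstalled on Tiger Cloud; self-managed Postgres
-- needs this once per database.
CREATE EXTENSION IF NOT EXISTS timescaledb;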

For this tutorial, we will generate randomized data:

INSERT INTO vibration_readings (time, machine_id, vibration_rms)
SELECT
  t.ts,
  m.machine_id,
  2.5                                           -- baseline vibration level
  + 0.5 * sin(extract(epoch from t.ts) / 3600)  -- slow hourly-scale drift
  + 0.3 * random()                              -- sensor noise
  + (m.machine_id * 0.005)                      -- small per-machine offset
FROM generate_series(
       now() - INTERVAL '180 days',
       now(),
       INTERVAL '100 milliseconds'              -- 10 samples per second
     ) AS t(ts)
CROSS JOIN generate_series(1,50) AS m(machine_id);
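Before moving on, you can optionally sanity-check the ingest. Note that even this simple aggregate has to scan the entire table at this point, which is precisely the problem we’re about to fix:

-- Optional sanity check: confirm row volume and time span.
SELECT
  count(*)  AS total_rows,   -- expect roughly 7.8 billion
  min(time) AS oldest,
  max(time) AS newest
FROM vibration_readings;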

At this stage, queries against this table return correct results, but they are slow and won’t scale as the data grows.

Real Operational Questions

Before we optimize, it’s important to define a few operational questions that your maintenance team may need to answer. These questions will drive how we tune TimescaleDB features.

Here are a few key questions for our machine vibration telemetry example:

Question 1 - Trending: Is vibration increasing over time for any machine?

We want to monitor trend changes that could indicate bearing wear or other mechanical degradation.

Baseline query:

SELECT
  machine_id,
  time_bucket('1 hour', time) AS hour,
  AVG(vibration_rms) AS avg_vibration
FROM vibration_readings
WHERE time > now() - INTERVAL '7 days'
GROUP BY machine_id, hour;

Observation: On the unoptimized table, this query will do a sequential scan of billions of rows, which is slow and inefficient.

Question 2 - Monitoring: How long did each machine operate above the alarm threshold this week?

Maintenance teams often need to know how much time each machine spent above a critical vibration threshold, which drives alerts and prioritization.

Baseline query:

SELECT
  machine_id,
  COUNT(*) * 0.1 AS seconds_above_threshold
FROM vibration_readings
WHERE time > now() - INTERVAL '7 days'
  AND vibration_rms > 5.0
GROUP BY machine_id;

Observation: This query also scans large amounts of historical data, which will be incredibly slow without indexing or chunking.

Question 3 - Comparing: Which machine has the most unstable vibration signal?

Detecting high variance in a machine’s signal helps identify mechanical imbalance or failing sensors.

Baseline query:

SELECT
  machine_id,
  STDDEV(vibration_rms) AS vibration_stddev
FROM vibration_readings
WHERE time > now() - INTERVAL '7 days'
GROUP BY machine_id;

Observation: Without optimization, aggregation over billions of rows will be slow, making real-time anomaly detection impractical.

By defining these three queries first, we now have specific performance goals. Every optimization we introduce—hypertables, indexes, chunk interval tuning, continuous aggregates, and compression—can be measured against these queries using EXPLAIN ANALYZE.

Step 1: Convert the Table to a Hypertable

TimescaleDB’s hypertables partition your data by time and (optionally) by another key like machine_id. This allows the database to prune irrelevant chunks and execute queries in parallel across partitions.

Convert the table to a hypertable:

SELECT create_hypertable(
  'vibration_readings',
  'time',
  'machine_id',
  8,
  migrate_data => true  -- required here because the table already holds rows;
                        -- slow for large tables, so in production create the
                        -- hypertable before bulk ingest
);

Running the trend query from Question 1 now completes in about half the time it took on the original table.

Instead of scanning one huge table, TimescaleDB is scanning only the relevant chunks—reducing overall execution time.
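If you want to see the partitioning behind this, the chunk catalog is queryable. A quick inspection (not part of the original walkthrough):

-- List a few chunks and their time ranges
SELECT chunk_name, range_start, range_end
FROM timescaledb_information.chunks
WHERE hypertable_name = 'vibration_readings'
ORDER BY range_start
LIMIT 5;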

Step 2: Add a Composite Index for Faster Filters

After converting the table to a hypertable, TimescaleDB can prune irrelevant chunks—but inside each chunk, PostgreSQL is still forced to scan rows sequentially. With high-frequency telemetry, each chunk can contain millions of rows, so we need to improve how each chunk is accessed.

Our queries consistently filter and group by time and machine_id, which makes a composite index a natural fit:

CREATE INDEX ON vibration_readings (machine_id, time DESC);

Consider the baseline query from Question 2. If we prefix it with EXPLAIN ANALYZE, we can see exactly how PostgreSQL finds the data.

EXPLAIN ANALYZE
SELECT
  machine_id,
  COUNT(*) * 0.1 AS seconds_above_threshold
FROM vibration_readings
WHERE
  time > now() - INTERVAL '7 days'
  AND vibration_rms > 5.0
GROUP BY machine_id;

The query plan now shows an index scan with an execution time of 265 milliseconds.

This index allows the database to quickly locate relevant rows for both time and machine_id filters without scanning the whole dataset—delivering faster queries and enabling predictable performance as data volume grows.

Step 3: Tune Chunk Intervals for High-Frequency Data

With 10 samples per second, the default 7-day chunk interval can produce overly large chunks. A smaller, one-day chunk interval often works better by reducing scan times:

SELECT set_chunk_time_interval(
  'vibration_readings',
  INTERVAL '1 day'
);

Note that set_chunk_time_interval only affects chunks created after the call—existing chunks keep their old interval—so in practice you set this before bulk ingest or re-ingest the data. With daily chunks in place, the hourly average query from Question 1 runs in 17,185 milliseconds, significantly shorter than the query time on the hypertable after Step 1 alone.

By pruning more chunks during planning, TimescaleDB reduces both planning time and scan effort—an easy win for high-frequency data.
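To confirm the configured interval, you can check the dimension metadata (a quick check, not part of the original tutorial):

SELECT column_name, time_interval
FROM timescaledb_information.dimensions
WHERE hypertable_name = 'vibration_readings';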

Step 4: Continuous Aggregates for Feature Extraction

Calculating hourly or daily summaries from raw data can still be expensive. Continuous aggregates let TimescaleDB incrementally materialize that summary ahead of time:

CREATE MATERIALIZED VIEW vibration_hourly
WITH (timescaledb.continuous) AS
SELECT
  machine_id,
  time_bucket('1 hour', time) AS hour,
  AVG(vibration_rms) AS avg_vibration,
  STDDEV(vibration_rms) AS vibration_stddev
FROM vibration_readings
GROUP BY machine_id, hour;
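A continuous aggregate materializes once when it is created; to keep it current as new telemetry streams in, you would typically add a refresh policy. The offsets below are illustrative choices, not values from the original tutorial:

SELECT add_continuous_aggregate_policy(
  'vibration_hourly',
  start_offset      => INTERVAL '3 days',  -- re-materialize recent buckets
  end_offset        => INTERVAL '1 hour',  -- leave the current bucket to raw data
  schedule_interval => INTERVAL '1 hour'   -- run the refresh job hourly
);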

Run the query from Question 3 on this view:

EXPLAIN ANALYZE
SELECT
  machine_id,
  STDDEV(avg_vibration) AS vibration_instability
FROM vibration_hourly
WHERE hour > now() - INTERVAL '7 days'
GROUP BY machine_id;

Compared to the baseline query, our new query using continuous aggregates runs in just 1.4 milliseconds.

This gives you pre-aggregated features at lightning speed—essential for dashboards and ML workflows.
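The hourly trend from Question 1 can be served the same way. A sketch of the equivalent query against the view (not shown in the original walkthrough):

-- Question 1, answered from the pre-aggregated view instead of raw data
SELECT
  machine_id,
  hour,
  avg_vibration
FROM vibration_hourly
WHERE hour > now() - INTERVAL '7 days'
ORDER BY machine_id, hour;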

Step 5: Compress Historical Vibration Data

While raw recent data is useful for anomaly detection, older data can often be stored in compressed form without giving up analytical access.

Compressing vibration data segmented by machine_id, while leaving the most recent 3 days uncompressed, keeps query performance high for both recent and historical data:

ALTER TABLE vibration_readings
SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'machine_id'
);

SELECT add_compression_policy(
  'vibration_readings',
  INTERVAL '3 days'
);
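Once the policy has compressed some chunks, you can measure the space savings with TimescaleDB’s built-in stats function:

-- How much space did compression save?
SELECT
  pg_size_pretty(before_compression_total_bytes) AS uncompressed_size,
  pg_size_pretty(after_compression_total_bytes)  AS compressed_size
FROM hypertable_compression_stats('vibration_readings');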

Running a query on historical vibration data still works:

EXPLAIN ANALYZE
SELECT
  machine_id,
  AVG(vibration_rms)
FROM vibration_readings
WHERE time BETWEEN now() - INTERVAL '90 days'
             AND now() - INTERVAL '30 days'
GROUP BY machine_id;

Because we used compression, our query runs in just 13 milliseconds.

Compression is transparent to queries: nothing changes from the query’s perspective, but under the hood far less data is read from disk while analytical access is preserved.

Overall Optimization Results

Here’s how database query performance improved as we applied each optimization:

[Chart: query execution times before and after each optimization step]

Every optimization step, from indexing and chunk tuning to continuous aggregation and compression, delivers a significant performance win.

Conclusion

In this tutorial, we started by defining three real operational questions that a maintenance team may need to answer:

  1. Is vibration increasing over time for any machine?
  2. How long did each machine operate above the alarm threshold this week?
  3. Which machine has the most unstable vibration signal?

These questions set the stage for every optimization we applied. By converting the raw table into a hypertable, we enabled TimescaleDB to prune irrelevant chunks and scan only the data that mattered. Adding a composite index allowed queries to target individual machines and recent time ranges without scanning all rows. Tuning chunk intervals reduced planning and execution overhead for high-frequency data. Continuous aggregates pre-computed summaries for rapid access to hourly and daily metrics, and compression reduced storage for historical data while keeping it accessible for analytics.

Combined, these optimizations delivered a dramatic performance improvement across all queries: what initially took many seconds now runs in just a few milliseconds. Hourly averages over a week for Question 1 dropped from over 67 seconds to 1.8 milliseconds. Alarm threshold duration queries for Question 2 dropped from over 5 seconds to 13 milliseconds. Historical aggregates over months for Question 3 dropped from over 15 seconds to just 1.4 milliseconds. While your exact execution times may differ, these optimizations will deliver an undeniable benefit to your analytical query performance.

TimescaleDB is built to handle high-volume, high-frequency machine telemetry efficiently, providing both immediate insights for operational alerts and long-term, accessible history for predictive maintenance modeling.

Related posts

TimescaleDB for Manufacturing IoT: Building a Data Pipeline
IoT · TimescaleDB · Dec 11, 2025
Build a manufacturing IoT pipeline with TimescaleDB: create hypertables, ingest sensor data, enable hypercore compression, and run fast analytical queries.

IoT Energy Data at Scale: Engineering Solutions Beyond Legacy Historians
IoT · May 06, 2025
Let’s explore the intersection of IoT and energy and how Timescale provides powerful solutions for energy innovators.
