---
title: "How Relational Complexity Crushes Real-Time Dashboards"
published: 2026-05-08T11:33:00.000-04:00
updated: 2026-05-08T11:33:00.000-04:00
excerpt: "Joins on billion-row Postgres tables crush real-time dashboards. Flatten your schema to cut shared buffer hits 70–90% and restore dashboard speed."
tags: PostgreSQL, real-time analytics
authors: NanoHertz Solutions - Jake Hertz
---

> **TimescaleDB is now Tiger Data.**

When your shared buffer hits cross 100,000 for a simple dashboard refresh, you have hit the wall of relational scaling. A simple query that used to return in 100ms now takes 1.5 seconds. You’ve [added indexes](https://www.tigerdata.com/learn/postgresql-performance-tuning-optimizing-database-indexes) and tuned the buffer pool, but the performance gains are fleeting. Your largest tables [crossed 500M rows](https://www.tigerdata.com/blog/indexing-your-way-into-a-performance-bottleneck), and the joins that once felt elegant are now a weight dragging down your entire system.

## What You Will Learn

High-frequency data and deep relational hierarchies eventually collide. By following this guide, you will be able to:

-   Identify the metadata tax in your current query plans using buffer metrics.
-   Implement a flattened schema to bypass the Join Explosion.
-   Automate data synchronization between relational and flattened tables.
-   Compare the performance gap between relational and denormalized architectures.

## Why It Matters

Relational schemas work during small-scale pilots but struggle when dashboards must stitch ten tables together for every refresh, hundreds of times per second, against billions of rows. Complex joins consume excessive CPU cycles, leading to the query speed degradation typical of the [optimization treadmill](https://www.tigerdata.com/learn/postgres-performance-best-practices).

Each join forces the database to navigate multiple B-tree indexes and load disparate pages into memory. By flattening data, you reduce the database's computational burden and enable much higher query concurrency.

## The Join Explosion Problem

A [normalized schema](https://www.tigerdata.com/learn/how-to-use-postgresql-for-data-normalization) fragments telemetry into separate tables for readings, hardware IDs, and site data. While efficient for storage, this creates a “join explosion” at scale. On tables with a billion rows, the query planner has to navigate multiple B-tree indexes and perform nested-loop or hash joins just to assemble a single dashboard view. This forces the database to jump between disparate disk locations, incurring a heavy computational tax every time a user refreshes their screen.

The solution is to move the join cost from read-time to write-time. Instead of stitching tables together during every query, you flatten the data by pre-joining the metadata to the raw reading during ingestion.

## Moving From Relational to Flattened

Moving from relational to flattened means transitioning from a passive storage model to an active processing model. Instead of a lean ingestion phase followed by expensive reads, you perform the join logic exactly once when the data enters the system. Storing the result in a [single, wide table](https://www.tigerdata.com/learn/designing-your-database-schema-wide-vs-narrow-postgres-tables) allows the database to perform a single index scan rather than a multi-way join, reducing I/O load and restoring dashboard responsiveness.

The following sections illustrate how to move to a flattened data model, with example SQL commands for a smart building management system. In this scenario, we will move the computational cost of joining sensor data (e.g. temperature, air quality) with building and regional metadata from query-time to write-time.

### Step 1: Define the Flattened Structure

Start by creating a table that includes the metadata as native columns. This removes the need for downstream joins.

This example creates the flattened table and a composite index so the database can locate time-series readings and their metadata without expensive lookups across separate tables. With this structure, a query for, say, North region telemetry resolves through a single index scan instead of a multi-table traversal.

```SQL
CREATE TABLE readings_flattened (
   ts TIMESTAMPTZ NOT NULL,
   sensor_name TEXT,
   building_name TEXT,
   region TEXT,
   value DOUBLE PRECISION
);
CREATE INDEX idx_flattened_ts_region ON readings_flattened (ts DESC, region);
```
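If you are running TimescaleDB, you can optionally convert the flattened table into a hypertable so it is partitioned by time automatically. This is a sketch using the default chunking, not a requirement of the flattening approach:

```SQL
-- Optional: partition the flattened table by time (TimescaleDB only)
SELECT create_hypertable('readings_flattened', 'ts');
```

Time partitioning keeps recent chunks (the ones your dashboard reads) small and hot in the buffer cache.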

### Step 2: Backfill Existing Data in Batches

Moving a billion rows in a single transaction will generate enormous WAL volume and hold long-running locks. Use a batch approach to migrate data from your relational tables.

This example query migrates historical records from the fragmented tables into the new wide format **one window at a time**, starting with the past thirty days. By pre-joining the data during this migration, you eliminate the need for the database to stitch these specific records together ever again.

```SQL
INSERT INTO readings_flattened (ts, sensor_name, building_name, region, value)
SELECT
   r.ts,
   s.sensor_name,
   l.building_name,
   l.region,
   r.value
FROM readings r
JOIN sensors s ON r.sensor_id = s.id
JOIN locations l ON s.location_id = l.id
WHERE r.ts > now() - interval '30 days';
```
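To continue the backfill, advance the window backward in time. This sketch migrates the next thirty-day slice; the boundaries are illustrative, and you would repeat the pattern with progressively older ranges until the full history is covered:

```SQL
-- Second batch: days 30-60, bounded on both ends to avoid re-copying rows
INSERT INTO readings_flattened (ts, sensor_name, building_name, region, value)
SELECT r.ts, s.sensor_name, l.building_name, l.region, r.value
FROM readings r
JOIN sensors s ON r.sensor_id = s.id
JOIN locations l ON s.location_id = l.id
WHERE r.ts > now() - interval '60 days'
  AND r.ts <= now() - interval '30 days';
```

Running each window in its own transaction keeps lock durations and WAL volume bounded.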

### Step 3: Automate Maintenance via Triggers

To keep the flattened table up to date without changing your application code, [use a database trigger](https://www.tigerdata.com/blog/speed-up-triggers-by-7x-with-transition-tables). This ensures that every new reading is pre-joined as it enters the system.

This example creates a trigger function and attaches it to the readings table so that every incoming sensor reading is flattened in real time. As new sensor data arrives, it is immediately linked to its building and regional context before being stored.

```SQL
CREATE OR REPLACE FUNCTION flatten_reading_trigger()
RETURNS TRIGGER AS $$
BEGIN
   INSERT INTO readings_flattened (ts, sensor_name, building_name, region, value)
   SELECT
       NEW.ts, s.sensor_name, l.building_name, l.region, NEW.value
   FROM sensors s
   JOIN locations l ON s.location_id = l.id
   WHERE s.id = NEW.sensor_id;
   RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_flatten_reading
AFTER INSERT ON readings
FOR EACH ROW EXECUTE FUNCTION flatten_reading_trigger();
```
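A quick smoke test confirms the trigger is wired up. This sketch assumes a sensor with id 42 already exists in the sensors and locations tables; substitute a real id from your data:

```SQL
-- Insert through the normal relational path (hypothetical sensor id)
INSERT INTO readings (ts, sensor_id, value)
VALUES (now(), 42, 21.5);

-- The trigger should have written the pre-joined row
SELECT ts, sensor_name, building_name, region, value
FROM readings_flattened
ORDER BY ts DESC
LIMIT 1;
```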

## Measuring the Performance Gap

To see the mechanical benefits of your new flattened table, you need to move beyond measuring execution time, which can fluctuate based on concurrent load. Instead, use the [BUFFERS metric in your query plan](https://www.tigerdata.com/learn/postgresql-performance-tuning-how-to-size-your-database) to observe the physical I/O the database performs. This provides a stable, repeatable measure of the work required to retrieve your data.

### The Relational Approach (Join at Read)

To identify the join tax in your system, establish a performance baseline using your existing relational schema. This example query uses the EXPLAIN (ANALYZE, BUFFERS) command to measure the physical memory and I/O work required by a standard multi-way join.

In our example scenario, this query tracks how many data blocks your database engine must touch to assemble the North region dashboard.

```SQL
EXPLAIN (ANALYZE, BUFFERS)
SELECT r.ts, s.sensor_name, l.building_name, r.value
FROM readings r
JOIN sensors s ON r.sensor_id = s.id
JOIN locations l ON s.location_id = l.id
WHERE r.ts > now() - interval '1 hour' AND l.region = 'North';
```

Focus on the shared hit count. Each hit represents one 8 KB block the database had to locate and read from shared memory. If you see tens of thousands of hits for one hour of data, block-level I/O is your bottleneck.
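Beyond per-query plans, you can watch cumulative buffer activity per table through Postgres's built-in statistics views. This sketch reads heap block counters for the three relational tables in our example:

```SQL
-- Cumulative heap-block reads (disk) and hits (cache) per table
SELECT relname, heap_blks_read, heap_blks_hit
FROM pg_statio_user_tables
WHERE relname IN ('readings', 'sensors', 'locations');
```

Sampling these counters before and after a dashboard refresh gives you the same signal as EXPLAIN, but aggregated across all concurrent queries.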

### The Flattened Approach (Pre-Joined)

Validate the architectural shift by running a comparative diagnostic against the new flattened structure. This example query proves the efficiency of data co-location by measuring shared buffer hits. Unlike the relational model, which hunts across multiple indexes, this approach allows the engine to find all necessary data within a single storage layer, significantly reducing the number of blocks touched.

In our example scenario, this query will show the efficiency of your database engine in assembling the North region dashboard with a flattened structure.

```SQL
EXPLAIN (ANALYZE, BUFFERS)
SELECT ts, sensor_name, building_name, value
FROM readings_flattened
WHERE ts > now() - interval '1 hour' AND region = 'North';
```

Look for a 70–90% drop in shared hits, the foundation of any [real-time analytics](https://www.tigerdata.com/blog/real-time-analytics-for-time-series-continuous-aggregates) workload. This reduction in blocks touched is what allows your dashboard to scale to hundreds of concurrent users without CPU saturation.

### Trade-offs: Consistency vs. Speed

Flattening introduces redundancy. If a building is renamed, the flattened table will still contain the old name for historical rows. However, for [telemetry and IIoT](https://www.tigerdata.com/learn/a-beginners-guide-to-iiot-and-industry-4-0), metadata is usually static. The massive gain in query concurrency and dashboard speed almost always outweighs the cost of a rare historical update.
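When metadata does change, the fix is a one-time bulk update of historical rows. A sketch, assuming a hypothetical building renamed from 'HQ West' to 'HQ West Tower':

```SQL
-- One-off repair after a rename; runs rarely, so the cost is acceptable
UPDATE readings_flattened
SET building_name = 'HQ West Tower'
WHERE building_name = 'HQ West';
```

Because such updates are infrequent, they can be scheduled off-peak rather than paid for on every dashboard read.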

## Next Step

Identify the slowest query in your dashboard. Test a flattened version of that dataset using a temporary table to see the I/O savings immediately:

```SQL
CREATE TEMPORARY TABLE test_flattened AS
SELECT r.ts, s.sensor_name, l.building_name, l.region, r.value
FROM readings r
JOIN sensors s ON r.sensor_id = s.id
JOIN locations l ON s.location_id = l.id
LIMIT 1000000;
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM test_flattened WHERE region = 'North';
```

Compare the shared hit count of this test against your production query. If the savings are significant, it is time to move toward a [denormalized architecture](https://www.tigerdata.com/learn/real-time-analytics-in-postgres). For workloads exceeding billions of rows, see how [Tiger Data does this flattening for you](https://www.tigerdata.com/docs/about/latest/whitepaper) with hybrid row-columnar storage. Start a [Tiger Cloud free trial](https://console.cloud.timescale.com/signup) today to see these savings for yourself.