
Published at Mar 6, 2024


Best Practices for (Time-)Series Metadata Tables


Written by Chris Engelbert

Collecting time-related data, like IoT or observability metrics, events, logs, or similar datasets, means handling massive amounts of data split into many distinct series.

One of the many lessons I learned at my previous startup is that the attributes describing each data series are essential to understand and must be quickly accessible for many use cases.

In the last two installments of the best practices series [Narrow, Medium, or Wide Table Layout and Single vs. Multiple Table(s)], we looked at how to design your hypertables. Now, we want to start looking into how we can make things available fast, starting with separate tables for metadata and series lookups.

We’ll cover the following best practices:

  • Querying meta information

  • Referencing the hypertable data

  • Further specifying the series in the metadata table

Why These Metadata Tables Matter

The interesting thing about these metadata tables is that we can easily and quickly resolve all available data series for a specific user and time range. We'd also be able to present overlapping series, which could still be distinct, like two temperature sensors in the same device (yes, that may make sense with external, cable-connected sensors in different rooms—been there, done that 😉).

What's a Series Metadata Table?

First and foremost, it is a term I came up with while at my startup. Our use case was as simple as it was complicated. We collected IoT data from a large set of devices, but devices could have their sensors exchanged (they were pluggable) at any given point in time.

A series metadata table is often used in time-series databases, and it's a special kind of table that stores metadata about the series of data. This metadata could include information such as series names, labels, the start and end times of the series, units of measurement, or any other descriptive details that provide context to the series.

For developers dealing with time-series data, the series metadata table helps improve query efficiency and data organization. By storing this metadata separately from the actual time-series data, you can keep your database structure more compact and make it easier to manage and search through your data. It also provides a way to annotate your data with additional context, which can be useful when analyzing or visualizing the data.

Metadata table vs. series metadata table

A metadata table, in general, stores information (metadata) about other data in the database. This could be information about the data's structure, such as the names and types of columns in a table, or it could be more descriptive information about the data itself, such as the source of the data, when it was last updated, etc.

So, the main difference between a metadata table and a series metadata table is what they store information about: a generic metadata table can store information about any kind of data, while a series metadata table specifically stores information about time-series data. The purpose of both tables is to provide additional context and organization to the data stored in the database, which can make it easier to manage and analyze the data.

In a time-series database, like TimescaleDB, you might store the actual time-series data in a Timescale hypertable (essentially a partitioned table that is divided into smaller partitions called "chunks") and then use a series metadata table to store additional information about that data.
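To make this concrete, here is a minimal sketch of such a series metadata table. The column names are chosen to match the example output shown later in this post; everything else (key strategy, nullability) is an assumption:

CREATE TABLE series_metadata (
    series_id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    device_id    bigint NOT NULL,
    sensor_id    bigint NOT NULL,
    ts_start     timestamptz NOT NULL,  -- first row of the series
    ts_end       timestamptz,           -- NULL = series still open
    ts_last_seen timestamptz            -- timestamp of the latest row seen
);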

Example: Implementing a Series Metadata Table

After the first two blog posts of this series, we already know what the table layouts can look like and how we can implement a system for GDPR or similar regulations. That's "easy" by now. The more complicated part of the design is telling the user which data series are available in any given building at any given time.

To clarify a bit: a single building may have multiple devices with temperature sensors, let's say four of them. Let's also assume that at least one of the devices had its temperature sensor exchanged due to damage during the given time range.

Our set of available data series could look like the following example:

postgres=# SELECT * FROM series_metadata;
 series_id | device_id | sensor_id |        ts_start        |         ts_end         |      ts_last_seen
-----------+-----------+-----------+------------------------+------------------------+------------------------
         1 |         1 |         1 | 2023-01-01 00:00:00+00 |                        | 2023-02-01 00:00:00+00
         2 |         2 |         2 | 2022-12-13 22:00:00+00 | 2023-02-01 20:00:00+00 | 2023-02-01 20:00:00+00
         3 |         3 |         3 | 2023-01-15 17:00:00+00 | 2023-01-17 09:00:00+00 | 2023-01-17 09:00:00+00
         4 |         3 |         4 | 2023-01-17 09:10:00+00 |                        | 2023-02-01 00:00:00+00
(4 rows)

We can see that we have three devices (device_id) and four sensors (sensor_id). Next, two columns (ts_start and ts_end) define the available time range for each data series. A NULL value in ts_end represents an open series: an ongoing series where new data is expected.

Given that, we can see that the third device had its sensor exchanged only two days after its first series started. The last column (ts_last_seen) holds the timestamp of the last row available for that specific data series. We'll come back to this later.
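As a small aside, this NULL convention makes listing all open, still-active series trivial (a sketch against the table above):

-- All currently open (ongoing) data series:
SELECT series_id, device_id, sensor_id, ts_start
FROM series_metadata
WHERE ts_end IS NULL;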

Querying Meta Information

In our example, we want to show all available data series for a customer and building. For that, we need two more tables.

The initial request

For simplicity, we use one buildings table (with a customer_id field) and an assignment table for the devices in a building.

postgres=# \d buildings
                                  Table "public.buildings"
   Column    |  Type  | Collation | Nullable |                    Default
-------------+--------+-----------+----------+------------------------------------------------
 building_id | bigint |           | not null | nextval('buildings_building_id_seq'::regclass)
 customer_id | bigint |           | not null |
Indexes:
    "buildings_pk" PRIMARY KEY, btree (building_id)

postgres=# \d assignment_buildings_devices
                             Table "public.assignment_buildings_devices"
              Column              |  Type  | Collation | Nullable |                                        Default
----------------------------------+--------+-----------+----------+---------------------------------------------------------------------------------------
 assignment_buildings_devices_id  | bigint |           | not null | nextval('assignment_buildings_devices_assignment_buildings_devices_i_seq'::regclass)
 building_id                      | bigint |           |          |
 device_id                        | bigint |           |          |
Indexes:
    "assignment_buildings_devices_pk" PRIMARY KEY, btree (assignment_buildings_devices_id)
    "assignment_buildings_devices_building_id_device_id_uindex" UNIQUE, btree (building_id, device_id)
    "assignment_buildings_devices_device_id_uindex" UNIQUE, btree (device_id)

I omit the actual data in those two tables but give a quick overview as a combined table.

Customer | Building | Device(s)
---------+----------+-----------
       1 |        1 | 1
       2 |        2 | 2, 3

With those three tables, we can find all available series with a simple query, retrieving only the data series of customer 2.

postgres=# SELECT sm.*
postgres-# FROM series_metadata sm
postgres-# RIGHT JOIN assignment_buildings_devices abd ON sm.device_id = abd.device_id
postgres-# RIGHT JOIN buildings b ON abd.building_id = b.building_id
postgres-# WHERE b.customer_id = 2;
 series_id | device_id | sensor_id |        ts_start        |         ts_end         |      ts_last_seen
-----------+-----------+-----------+------------------------+------------------------+------------------------
         2 |         2 |         2 | 2022-12-13 22:00:00+00 | 2023-02-01 20:00:00+00 | 2023-02-01 20:00:00+00
         3 |         3 |         3 | 2023-01-15 17:00:00+00 | 2023-01-17 09:00:00+00 | 2023-01-17 09:00:00+00
         4 |         3 |         4 | 2023-01-17 09:10:00+00 |                        | 2023-02-01 00:00:00+00
(3 rows)

That's already awesome. The alternative, crawling through all the rows stored in our hypertables to find those values on the fly, doesn't sound nearly as appealing.
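For scale, here is roughly what that crawl would look like (a sketch against the temperatures hypertable introduced further below): a full aggregation over the raw data instead of a handful of metadata rows.

-- The expensive on-the-fly alternative: derive series ranges from raw data.
SELECT series_id,
       min(recorded_at) AS ts_start,
       max(recorded_at) AS ts_end,
       max(recorded_at) AS ts_last_seen
FROM temperatures
GROUP BY series_id;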

Anyhow, that was just part of the initial request. 

Finding data series in a certain time range

The second part is to find data series in a certain time range. And here is where this approach really shines. We can quickly filter our series with a small helper function (which I highly recommend) and the magic of PostgreSQL's tstzrange data type.

CREATE OR REPLACE FUNCTION series_in_range(
        series series_metadata,
        rangeStart timestamptz,
        rangeEnd timestamptz
)
    RETURNS bool
    LANGUAGE SQL
    PARALLEL SAFE
    STABLE
AS $$
    SELECT NOT isempty(
        tstzrange(rangeStart, rangeEnd) *
        tstzrange(series.ts_start, series.ts_end, '[]')
    );
$$;

This function checks whether a given series candidate overlaps a requested time range: it intersects the two ranges with the * operator and tests that the result is not empty. As a note, the function is defined as LANGUAGE SQL to give the query planner the option to inline it and fully optimize it with the query we build.
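If range algebra is new to you, here is the overlap test in isolation for series 3 from our example against a window starting at 2023-01-18:

SELECT isempty(
    tstzrange('2023-01-18', now()) *
    tstzrange('2023-01-15 17:00:00+00', '2023-01-17 09:00:00+00', '[]')
);
-- => true: the intersection is empty, so series 3 is filtered out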

With all that in place, we can now extend the query above with a potential time range (let's say everything after 2023-01-18), and it returns only the matching rows. It also automatically considers rows with a NULL ts_end to be open-ended.

postgres=# SELECT sm.*
postgres-# FROM series_metadata sm
postgres-# RIGHT JOIN assignment_buildings_devices abd ON sm.device_id = abd.device_id
postgres-# RIGHT JOIN buildings b ON abd.building_id = b.building_id
postgres-# WHERE b.customer_id = 2
postgres-# AND series_in_range(sm, '2023-01-18', now());
 series_id | device_id | sensor_id |        ts_start        |         ts_end         |      ts_last_seen
-----------+-----------+-----------+------------------------+------------------------+------------------------
         2 |         2 |         2 | 2022-12-13 22:00:00+00 | 2023-02-01 20:00:00+00 | 2023-02-01 20:00:00+00
         4 |         3 |         4 | 2023-01-17 09:10:00+00 |                        | 2023-02-01 00:00:00+00
(2 rows)

To simplify querying later on, I add one more tidbit to the query above and resolve the open start and end times to actual timestamps that can be passed back to the database later, making follow-up queries easier. I drop all of that into a function:

CREATE OR REPLACE FUNCTION series_in_range_by_customer(
        customer_id int8,
        rangeStart timestamptz,
        rangeEnd timestamptz
)
    RETURNS SETOF series_metadata
    LANGUAGE SQL
    PARALLEL SAFE
    STABLE
AS $$
    SELECT sm.series_id, sm.device_id, sm.sensor_id,
           greatest(sm.ts_start, rangeStart) AS ts_start,
           least(sm.ts_end, rangeEnd) AS ts_end,
           sm.ts_last_seen
    FROM series_metadata sm
    RIGHT JOIN assignment_buildings_devices abd ON sm.device_id = abd.device_id
    RIGHT JOIN buildings b ON abd.building_id = b.building_id
    WHERE b.customer_id = series_in_range_by_customer.customer_id
      AND series_in_range(sm, rangeStart, rangeEnd);
$$;

Again, make sure this function is defined as LANGUAGE SQL so the query planner can fully optimize it in place.

The result changed slightly: each data series's start and end times are now clamped to our requested time window.

postgres=# SELECT * FROM series_in_range_by_customer(2, '2023-01-18', now());
 series_id | device_id | sensor_id |        ts_start        |            ts_end             |      ts_last_seen
-----------+-----------+-----------+------------------------+-------------------------------+------------------------
         2 |         2 |         2 | 2023-01-18 00:00:00+00 | 2023-02-01 20:00:00+00        | 2023-02-01 20:00:00+00
         4 |         3 |         4 | 2023-01-18 00:00:00+00 | 2023-03-31 08:12:44.033446+00 | 2023-02-01 00:00:00+00
(2 rows)

Referencing the Hypertable Data

Now that we know what the external definition looks like, we need to understand how we reference the actual hypertable values. Without going into the actual layout of the hypertable (if you're interested in this, please find more information in the Narrow, Medium, or Wide Table Layout article), let's go with the medium table layout style.

Our hypertable may look like the following:

postgres=# \d temperatures
                                     Table "public.temperatures"
   Column    |           Type           | Collation | Nullable |                     Default
-------------+--------------------------+-----------+----------+-------------------------------------------------
 metric_id   | bigint                   |           | not null | nextval('temperatures_metric_id_seq'::regclass)
 series_id   | bigint                   |           | not null |
 recorded_at | timestamp with time zone |           | not null |
 value       | double precision         |           | not null |
Indexes:
    "temperatures_pk" PRIMARY KEY, btree (metric_id)
    "temperatures_series_id_recorded_at_uindex" UNIQUE, btree (series_id DESC, recorded_at DESC)
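For completeness, turning a plain table like this into a hypertable is a single call. The following is a sketch keeping TimescaleDB's default chunking settings; note that TimescaleDB requires unique indexes, including the primary key, to contain the partitioning column, so the exact index layout above may need adjusting first:

-- Partition the temperatures table by its time column:
SELECT create_hypertable('temperatures', 'recorded_at');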


The hypertable stores data according to the series_id and the value's timestamp. The order of the index temperatures_series_id_recorded_at_uindex is interesting, though, because it is deliberately chosen to be DESC ordering on both columns.

In most cases, you want the latest series and the latest data, so building the index in the order of your most common query patterns is a real speed-up!
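As an illustration, a query like the following sketch, fetching the newest readings for one series, can read the index in order and stop after the LIMIT without any extra sort step:

-- Latest 100 readings of a single series, served straight from the index:
SELECT recorded_at, value
FROM temperatures
WHERE series_id = 4
ORDER BY recorded_at DESC
LIMIT 100;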

Further Specifying the Series in the Metadata Table

The other really interesting element about these types of external series definitions is that we can extend series_metadata with additional columns, helping us further specify the series. The possibilities are endless. One example of such additional meta information is the ts_last_seen column, which is updated every time we write a new row to our data hypertable.
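How you keep that column current is up to you. One option is a row trigger on the hypertable, sketched below purely as an assumption, although at high ingest rates you would likely batch these updates in the application instead:

-- One possible way to maintain ts_last_seen (a sketch, not a requirement):
CREATE FUNCTION touch_series_last_seen() RETURNS trigger
    LANGUAGE plpgsql AS $$
BEGIN
    UPDATE series_metadata
    SET ts_last_seen = NEW.recorded_at
    WHERE series_id = NEW.series_id
      AND (ts_last_seen IS NULL OR ts_last_seen < NEW.recorded_at);
    RETURN NEW;
END;
$$;

CREATE TRIGGER temperatures_touch_last_seen
    AFTER INSERT ON temperatures
    FOR EACH ROW EXECUTE FUNCTION touch_series_last_seen();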

At my startup, clevabit, we used it to quickly find all series that could be closed after a given time window of inactivity (for us, it was after seven days). To achieve that, we had a small cron job running every hour or so, selecting all series with ts_last_seen smaller than now() - interval '7 days' and ts_end being NULL, and updating those rows to set ts_end to ts_last_seen.
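Boiled down to SQL, that job ran something like this sketch:

-- Close every series that has been silent for seven days:
UPDATE series_metadata
SET ts_end = ts_last_seen
WHERE ts_end IS NULL
  AND ts_last_seen < now() - interval '7 days';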

Alternative approach: incorporating series metrics 

An alternative could be a field providing information about the metric the series stores, such as "cpu_usage" or "mem_free." If you choose the multi-table approach, it could also provide information about which hypertable the series is stored in. As mentioned earlier, the possibilities are extensive.
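Sketched as DDL, such an extension could look like the following (both columns and their names are hypothetical):

ALTER TABLE series_metadata
    ADD COLUMN metric text,           -- e.g., 'cpu_usage' or 'mem_free'
    ADD COLUMN hypertable_name text;  -- which hypertable stores the series data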

There are quite a few more reasons why you'd want to use this setup. Let's look at them as questions they can answer or optimizations they can bring.

  • Does a device have a specific sensor in time range [x, y]? (See the query sketch after this list.)

  • Does a device have a specific metric in time range [x, y]?

  • Does a building have a specific metric in time range [x, y]?

  • To provide information about similar parallel but distinct data series in time range [x, y].

  • To optimize queries by knowing the exact available time ranges for each series instead of just using the provided search window.

  • To require less space for metadata since the alternative to the above approach would be to store all necessary metadata with every row in the hypertable, massively duplicating information.

  • To have a quick lookup of available data series time ranges.

  • To find previous assignments of devices and sensors (in case you have a use case where, for example, sensor demo units are sent to different customers for testing).
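Taking the first question in the list as an example, the answer becomes a cheap EXISTS probe on the metadata table, reusing the helper function from above (a sketch):

-- Does device 3 have sensor 4 in the time range [2023-01-18, now())?
SELECT EXISTS (
    SELECT 1
    FROM series_metadata sm
    WHERE sm.device_id = 3
      AND sm.sensor_id = 4
      AND series_in_range(sm, '2023-01-18', now())
);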

So, Should I Do It?

Finally, the question you've been waiting for: should you do it?

My answer is: yes, do it.

There are very few use cases where you wouldn't be interested in the disk space savings or the possibility of finding distinct data series quickly. That's also why it didn't make sense to provide a pro-and-con table for ease of comparison, as I did in the previous episodes.

Learn more

  • Best Practices for Time-Series Data Modeling: Single or Multiple Partitioned Table(s) a.k.a. Hypertables

  • Designing Your Database Schema: Narrow, Medium or Wide Table Layout

Next Steps for Metadata Table Optimization

For more table optimizations impacting your performance, I recommend you check out PostgreSQL Performance Tuning: Designing and Implementing Your Database Schema.

On that quick ending note, I hope you found this installment interesting. If you're eager to try it out now, create a free Timescale account or install TimescaleDB on-prem.
