Hypertable operations
Create, alter, and drop a hypertable, and speed up data ingest with direct compress
Hypertables are designed for real-time analytics. They are PostgreSQL tables that automatically partition your data by
time. Typically, you partition hypertables on columns that hold time values.
Best practice is to use a timestamptz column. However, you can also partition on
date, integer, timestamp, and UUIDv7 columns.
Prerequisites for this tutorial
To follow the steps on this page:
- Create a target Tiger Cloud service with the Real-time analytics capability. You need your connection details. This procedure also works for self-hosted TimescaleDB.
Create a hypertable
Create a hypertable for your time-series data using CREATE TABLE.
For efficient queries, set tsdb.segmentby to the column you will use most often to filter your
data:
CREATE TABLE conditions (
   time        TIMESTAMPTZ NOT NULL,
   location    TEXT NOT NULL,
   device      TEXT NOT NULL,
   temperature DOUBLE PRECISION NULL,
   humidity    DOUBLE PRECISION NULL
) WITH (
   tsdb.hypertable,
   tsdb.segmentby = 'device',
   tsdb.orderby = 'time DESC'
);

When you create a hypertable using CREATE TABLE … WITH …, the default partitioning
column is automatically the first column with a timestamp data type. Also, TimescaleDB creates a
columnstore policy that automatically converts your data to the columnstore after an interval equal to the chunk_interval, set through the after parameter of the policy. This columnar format enables fast scanning and
aggregation, optimizing performance for analytical workloads while also saving significant storage space. In the
columnstore conversion, hypertable chunks are compressed by up to 98%, and organized for efficient, large-scale queries.
You can customize this policy later using alter_job. However, to change after or
created_before, the compression settings, or the hypertable the policy is acting on, you must
remove the columnstore policy and add a new one.
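For example, a minimal sketch of replacing the policy on the conditions hypertable; the after value shown is illustrative:

```sql
-- Remove the existing columnstore policy, then add a new one that
-- converts chunks one day after their time values.
CALL remove_columnstore_policy('conditions');
CALL add_columnstore_policy('conditions', after => INTERVAL '1 day');
```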
You can also manually convert chunks in a hypertable to the columnstore.
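A minimal sketch, assuming the conditions hypertable above; the chunk name is illustrative, so list real chunk names with show_chunks first:

```sql
-- List chunks whose data is older than one day.
SELECT show_chunks('conditions', older_than => INTERVAL '1 day');

-- Convert one of the returned chunks to the columnstore.
CALL convert_to_columnstore('_timescaledb_internal._hyper_1_1_chunk');
```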
To convert an existing table with data in it, call create_hypertable on that table with
migrate_data set to true. However, if you have a lot of data, this may take a long time.
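A minimal sketch, assuming an existing plain PostgreSQL table named conditions with a time column:

```sql
-- Convert a populated table into a hypertable, migrating existing
-- rows into chunks. This can take a long time on large tables.
SELECT create_hypertable('conditions', by_range('time'), migrate_data => true);
```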
Speed up data ingestion
When you set timescaledb.enable_direct_compress_copy, your data is compressed in memory during ingestion with COPY statements.
Writing the compressed batches directly to the columnstore significantly lowers the I/O footprint.
The columnstore policy you set also becomes less important, because COPY already produces compressed chunks.
Note that this feature is a tech preview and not production-ready. Using it can lead to degraded query performance and/or storage ratio if the ingested batches are not correctly ordered or are of too high cardinality.
To enable in-memory data compression during ingestion:
SET timescaledb.enable_direct_compress_copy = on;

Important facts
- High-cardinality use cases do not produce good batches and lead to degraded query performance.
- The columnstore is optimized to store 1000 records per batch per segmentby value, which is the optimal format for ingestion.
- WAL records are written for the compressed batches rather than the individual tuples.
- Currently only COPY is supported; INSERT support will follow.
- Best results are achieved for batch ingestion with 1000 records or more; the upper boundary is 10,000 records.
- Continuous aggregates are not supported at the moment.
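Putting this together, a minimal sketch of a direct-compress bulk load; the CSV path is illustrative:

```sql
-- Compress batches in memory while bulk-loading time-ordered data.
SET timescaledb.enable_direct_compress_copy = on;
COPY conditions FROM '/tmp/conditions.csv' WITH (FORMAT csv, HEADER true);
```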
Alter a hypertable
You can alter a hypertable, for example to add a column, by using the PostgreSQL
ALTER TABLE command. Some operations are not supported for hypertables with columnstore enabled. See Altering hypertables with columnstore enabled.
Add a column to a hypertable
You add a column to a hypertable using the ALTER TABLE command. In this
example, the hypertable is named conditions and the new column is named
humidity:
ALTER TABLE conditions ADD COLUMN humidity DOUBLE PRECISION NULL;

Adding a column is fast regardless of the default value. PostgreSQL 11+ stores the default in the catalog without rewriting existing rows.
Rename a hypertable
You can change the name of a hypertable using the ALTER TABLE command. In this
example, the hypertable is called conditions, and is being changed to the new
name, weather:
ALTER TABLE conditions RENAME TO weather;

Change a column data type
You can change the data type of a column in a hypertable using the ALTER TABLE
command. In this example, the temperature column data type is changed from DOUBLE PRECISION
to NUMERIC:
ALTER TABLE conditions ALTER COLUMN temperature TYPE NUMERIC;

The following restrictions apply:
- For time dimension columns, you can only change to TIMESTAMPTZ, TIMESTAMP, DATE, INTEGER (smallint, integer, or bigint), or UUID.
- You cannot change the type of columns with custom partitioning functions.
- You cannot change column types when the hypertable has columnstore chunks. Convert those chunks back to rowstore first. See Altering hypertables with columnstore enabled.
- For columns with statistics enabled, you can only change to integer or timestamp types.
To change to other types, first disable statistics using
disable_column_stats.
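A minimal sketch of that restriction, assuming statistics were enabled on the temperature column:

```sql
-- Disable column statistics first, then change the column to a
-- non-integer, non-timestamp type.
SELECT disable_column_stats('conditions', 'temperature');
ALTER TABLE conditions ALTER COLUMN temperature TYPE NUMERIC;
```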
Drop a column
You can drop a column from a hypertable using the ALTER TABLE command. In this
example, the humidity column is dropped from the conditions hypertable:
ALTER TABLE conditions DROP COLUMN humidity;

You cannot drop partitioning columns.
Delete data
Prefer drop_chunks() to remove whole time
ranges instead of running large DELETE statements. Dropping chunks is faster and avoids
table bloat:
SELECT drop_chunks('conditions', older_than => INTERVAL '30 days');

For automated deletion, use a data retention policy.
Vacuum and bloat
Heavy INSERT, UPDATE, and DELETE operations cause table bloat over time. PostgreSQL
autovacuum runs by default, but you may need to tune its settings for high write-rate
hypertables. Run VACUUM ANALYZE manually when you need to reclaim space or refresh
query planner statistics:
VACUUM ANALYZE conditions;

Drop a hypertable
Drop a hypertable using a standard PostgreSQL DROP TABLE
command:
DROP TABLE conditions;

All chunks belonging to the hypertable are deleted. For very large hypertables
that may time out, drop chunks first with drop_chunks(), then drop the table.
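A minimal sketch of that two-step drop for a very large hypertable:

```sql
-- Remove all chunks whose data is entirely in the past, then drop
-- the now-small table.
SELECT drop_chunks('conditions', older_than => now());
DROP TABLE conditions;
```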
Learn more
- Create and configure a hypertable: CREATE TABLE options and configuration.
- Alter and update table schemas: detailed guide for ALTER operations with columnstore.
- Improve hypertable performance: optimize chunk intervals and enable chunk skipping.
- CREATE TABLE reference: full API reference.
- drop_chunks() reference: remove chunks by age or time range.
- Data retention policies: automate data deletion.
- show_chunks() reference: list chunks for a hypertable.