You insert data into a hypertable using the following standard SQL commands:
- INSERT: single rows or small batches
- COPY: bulk data loading
To improve performance, insert time series data directly into the columnstore using direct compress.
To insert a single row into a hypertable, use the syntax INSERT INTO ... VALUES:
INSERT INTO conditions(time, location, temperature, humidity)
  VALUES (NOW(), 'office', 70.0, 50.0);
A more efficient method than inserting row-by-row is to insert multiple rows into a hypertable using a single
INSERT call. This works even for thousands of rows at a time. TimescaleDB batches the rows by chunk, then writes to
each chunk in a single transaction.
You use the same syntax, separating rows with a comma:
INSERT INTO conditions
  VALUES
    (NOW(), 'office', 70.0, 50.0),
    (NOW(), 'basement', 66.5, 60.0),
    (NOW(), 'garage', 77.0, 65.2);
If you INSERT unsorted data, call convert_to_columnstore('<chunk_name>', recompress => true)
on the chunk to reorder and optimize your data.
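For example, a minimal sketch, assuming the hypertable is named conditions; the chunk name passed to convert_to_columnstore below is illustrative, use the names returned by show_chunks on your own hypertable:

-- List the chunks that back the conditions hypertable
SELECT show_chunks('conditions');

-- Reorder and recompress one chunk; the chunk name is illustrative
CALL convert_to_columnstore('_timescaledb_internal._hyper_1_1_chunk', recompress => true);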
You can return some or all of the inserted data by adding a RETURNING clause to the INSERT command. For example,
to return all the inserted data, run:
INSERT INTO conditions
  VALUES (NOW(), 'office', 70.1, 50.1)
  RETURNING *;
This returns:
             time              | location | temperature | humidity
-------------------------------+----------+-------------+----------
 2017-07-28 11:42:42.846621+00 | office   |        70.1 |     50.1
(1 row)
The COPY command is the most efficient way to load large amounts of data into a hypertable. For
bulk data loading, COPY can be 2-3x faster than INSERT, or even more, especially when combined with
direct compress.
COPY supports loading from:
CSV files:
COPY conditions(time, location, temperature, humidity)
  FROM '/path/to/data.csv'
  WITH (FORMAT CSV, HEADER);

Standard input:

To load data from your application or script using standard input:

COPY conditions(time, location, temperature, humidity)
  FROM STDIN
  WITH (FORMAT CSV);

To signal the end of input, add \. on a new line.

Program output:

To load data generated by a program or script:

COPY conditions(time, location, temperature, humidity)
  FROM PROGRAM 'generate_data.sh'
  WITH (FORMAT CSV);
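For example, a minimal psql session sketch that streams two rows over standard input and then terminates with \.; the timestamps and values are illustrative:

COPY conditions(time, location, temperature, humidity) FROM STDIN WITH (FORMAT CSV);
2025-01-01 00:00:00+00,office,70.0,50.0
2025-01-01 00:01:00+00,basement,66.5,60.0
\.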
If you COPY unsorted data, call convert_to_columnstore('<chunk_name>', recompress => true)
on the chunk to reorder and optimize your data.
The columnar format in the columnstore enables fast scanning and aggregation, optimizing performance for analytical workloads while also saving significant storage space. In the columnstore conversion, hypertable chunks are compressed by up to 98%, and organized for efficient, large-scale queries.
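As a hedged sketch of how a hypertable might be prepared for the columnstore, assuming a recent TimescaleDB release with the columnstore options shown below and the conditions hypertable used throughout this page; check the options available in your version:

-- Assumed API: enable the columnstore on the conditions hypertable,
-- segmenting by location and ordering by time within each segment
ALTER TABLE conditions SET (
  timescaledb.enable_columnstore = true,
  timescaledb.segmentby = 'location',
  timescaledb.orderby = 'time DESC'
);

-- Assumed API: convert chunks older than one day on a schedule
CALL add_columnstore_policy('conditions', after => INTERVAL '1d');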
To improve performance, compress data during INSERT and COPY operations so that it is written
directly to chunks in the columnstore rather than waiting for the columnstore policy to run. Direct compress writes data in the
compressed format in memory, significantly reducing I/O and improving ingestion performance.
When you enable direct compress, ensure that your data is already sorted by the table's compression order_by columns.
Incorrectly sorted data results in poor compression and query performance.
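For example, a minimal sketch of presorting a batch before loading, assuming a hypothetical staging_conditions table that holds the unsorted batch and a compression order of time:

-- staging_conditions is a hypothetical staging table holding the unsorted batch
INSERT INTO conditions(time, location, temperature, humidity)
SELECT time, location, temperature, humidity
FROM staging_conditions
ORDER BY time;  -- match the table's compression order_by columns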
Enable direct compress on INSERT

Set the following GUC parameters:

SET timescaledb.enable_direct_compress_insert = true;
SET timescaledb.enable_direct_compress_insert_client_sorted = true;

Enable direct compress on COPY

Set the following GUC parameters:

SET timescaledb.enable_direct_compress_copy = true;
SET timescaledb.enable_direct_compress_copy_client_sorted = true;

Keep the following in mind when using direct compress (a complete session sketch follows this list):

- Optimal batch size: best results with batches of 1,000 to 10,000 records
- Cardinality: high cardinality datasets do not compress well and may degrade query performance
- Batch format: the columnstore is optimized for 1,000 records per batch per segment
- WAL efficiency: compressed batches are written to WAL rather than individual tuples
- Continuous aggregates: not supported with direct compress
- Unique constraints: tables with unique constraints cannot use direct compress
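Putting it together, a minimal sketch of a direct compress COPY session, assuming the conditions hypertable and an illustrative path to a pre-sorted CSV file:

-- Assumes the CSV rows are already sorted by the table's compression order_by columns
SET timescaledb.enable_direct_compress_copy = true;
SET timescaledb.enable_direct_compress_copy_client_sorted = true;

COPY conditions(time, location, temperature, humidity)
  FROM '/path/to/sorted_data.csv'
  WITH (FORMAT CSV, HEADER);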