---
title: Insert data | Tiger Data Docs
description: Insert single and multiple rows and bulk load data into TimescaleDB with SQL
---

Inserts into a hypertable are normal PostgreSQL writes. Use the usual commands, and lean on batching when you can:

- `INSERT`: single rows or small batches
- `COPY`: bulk data loading

To improve performance, insert time series data directly to the columnstore using [direct compress](#improve-performance-with-direct-compress).

## Insert a single row

To insert a single row into a hypertable, use the syntax `INSERT INTO ... VALUES`:

```
INSERT INTO conditions(time, location, temperature, humidity)
  VALUES (NOW(), 'office', 70.0, 50.0);
```

## Insert multiple rows

A single `INSERT` call with multiple rows is more efficient than inserting row-by-row, and works even for thousands of rows at a time. TimescaleDB batches the rows by chunk, then writes each batch to its chunk in a single transaction.

Use the same syntax as for a single row, separating rows with commas:

```
INSERT INTO conditions
  VALUES
    (NOW(), 'office', 70.0, 50.0),
    (NOW(), 'basement', 66.5, 60.0),
    (NOW(), 'garage', 77.0, 65.2);
```
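
You don't have to write multi-row inserts by hand. As a sketch, you can generate rows server-side with PostgreSQL's `generate_series`; the one-hour range, one-minute step, and sample values here are illustrative:

```
INSERT INTO conditions(time, location, temperature, humidity)
  SELECT ts, 'office', 70.0 + random() * 5, 50.0 + random() * 10
  FROM generate_series(
    NOW() - INTERVAL '1 hour',
    NOW(),
    INTERVAL '1 minute'
  ) AS ts;
```

This inserts 61 rows in one statement, which TimescaleDB batches by chunk as described above.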

If you `INSERT` unsorted data, call [`convert_to_columnstore('<chunk_name>', recompress => true)`](/docs/reference/timescaledb/hypercore/convert_to_columnstore/index.md) on the chunk to reorder and optimize your data.
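
To find which chunks to convert, use `show_chunks`, then pass a chunk name to `convert_to_columnstore`. A sketch, assuming the chunk name returned is `_timescaledb_internal._hyper_1_1_chunk`:

```
SELECT show_chunks('conditions', older_than => INTERVAL '1 day');

CALL convert_to_columnstore(
  '_timescaledb_internal._hyper_1_1_chunk',
  recompress => true
);
```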

## Insert and return data

You can return some or all of the inserted data by adding a `RETURNING` clause to the `INSERT` command. For example, to return all the inserted data, run:

```
INSERT INTO conditions
  VALUES (NOW(), 'office', 70.1, 50.1)
  RETURNING *;
```

This returns:

```
time                          | location | temperature | humidity
------------------------------+----------+-------------+----------
2017-07-28 11:42:42.846621+00 | office   |        70.1 |     50.1
(1 row)
```
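
To return only some of the inserted columns, list them in the `RETURNING` clause instead of using `*`:

```
INSERT INTO conditions
  VALUES (NOW(), 'office', 70.1, 50.1)
  RETURNING time, temperature;
```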


## Bulk insert with COPY

The `COPY` command is the most efficient way to load large amounts of data into a hypertable. For bulk loading, `COPY` is often 2-3x faster than `INSERT`, and sometimes more, especially when combined with [direct compress](#improve-performance-with-direct-compress).

`COPY` supports loading from:

- **CSV files**

  To load data from a CSV file on the database server:

  ```
  COPY conditions(time, location, temperature, humidity)
    FROM '/path/to/data.csv'
    WITH (FORMAT CSV, HEADER);
  ```

- **Standard input**

  To load data from your application or script using standard input:

  ```
  COPY conditions(time, location, temperature, humidity)
    FROM STDIN
    WITH (FORMAT CSV);
  ```

  To signal the end of input, add `\.` on a new line.

- **Program output**

  To load data generated by a program or script:

  ```
  COPY conditions(time, location, temperature, humidity)
    FROM PROGRAM 'generate_data.sh'
    WITH (FORMAT CSV);
  ```

If you `COPY` unsorted data, call [`convert_to_columnstore('<chunk_name>', recompress => true)`](/docs/reference/timescaledb/hypercore/convert_to_columnstore/index.md) on the chunk to reorder and optimize your data.
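
`COPY ... FROM '/path/to/file'` reads files on the database server. When the CSV file is on your local machine instead, the psql `\copy` meta-command streams it over the connection. A sketch, assuming a local file named `data.csv`:

```
\copy conditions(time, location, temperature, humidity) FROM 'data.csv' WITH (FORMAT CSV, HEADER)
```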

## Improve performance with direct compress

Direct compress is a tech preview feature, available since [2.23.0](https://github.com/timescale/timescaledb/releases/tag/2.23.0).

The columnar format in the columnstore enables fast scanning and aggregation, optimizing performance for analytical workloads while also saving significant storage space. In the columnstore conversion, hypertable chunks are compressed by up to 98%, and organized for efficient, large-scale queries.

To improve ingest performance, compress data during `INSERT` and `COPY` operations so that it is written directly to chunks in the columnstore, rather than waiting for a columnstore policy to compress it later. Direct compress builds the compressed format in memory, significantly reducing I/O and improving ingestion performance.

When you enable direct compress, ensure that your data is already sorted by the table’s compression `order_by` columns. Incorrectly sorted data results in poor compression and query performance.

- **Enable direct compress on `INSERT`**

  Set the following [GUC parameter](/docs/reference/timescaledb/configuration/gucs/index.md):

  ```
  SET timescaledb.enable_direct_compress_insert = true;
  ```

- **Enable direct compress on `COPY`**

  Set the following [GUC parameter](/docs/reference/timescaledb/configuration/gucs/index.md):

  ```
  SET timescaledb.enable_direct_compress_copy = true;
  ```

  - **Optimal batch size**: best results with batches of 1,000 to 10,000 records
  - **Cardinality**: high cardinality datasets do not compress well and may degrade query performance
  - **Batch format**: the columnstore is optimized for 1,000 records per batch per segment
  - **WAL efficiency**: compressed batches are written to WAL rather than individual tuples
  - **Unique constraints**: tables with unique constraints cannot use direct compress
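
Putting it together, a sketch of a bulk load with direct compress enabled, assuming the file is already sorted by the table's compression `order_by` columns:

```
SET timescaledb.enable_direct_compress_copy = true;

COPY conditions(time, location, temperature, humidity)
  FROM '/path/to/data.csv'
  WITH (FORMAT CSV, HEADER);
```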
