---
title: Tiger Cloud essentials | Tiger Data Docs
description: Technical walkthrough covering hypertables, hypercore, continuous aggregates, and tiered storage in a single workflow
---

Tiger Cloud offers managed database services that provide a stable and reliable environment for your applications.

Each Tiger Cloud service is a single optimized PostgreSQL instance extended with innovations such as TimescaleDB in the database engine, running on a cloud infrastructure that delivers speed without sacrifice: a radically faster PostgreSQL for transactional, analytical, and agentic workloads at scale.

Tiger Cloud scales PostgreSQL to ingest and query vast amounts of live data. Tiger Cloud provides a range of features and optimizations that supercharge your queries while keeping the costs down. For example:

- The hypercore row-columnar engine in TimescaleDB makes queries up to 350x faster, ingests 44% faster, and reduces storage by 90%.
- Tiered storage in Tiger Cloud seamlessly moves your data from high-performance storage for frequently accessed data to low-cost bottomless storage for rarely accessed data.

The following figure shows how TimescaleDB optimizes your data for superfast real-time analytics:

![How TimescaleDB optimizes data across rowstore, columnstore, and object storage](/docs/_astro/mutation.J_fE-aE4_91rU4.webp)

This page shows you how to rapidly implement the features in Tiger Cloud that enable you to ingest and query data faster while keeping the costs low.

## Prerequisites

To follow the steps on this page:

- Create a target [Tiger Cloud service](/docs/get-started/quickstart/create-service/index.md) with the Real-time analytics capability.

  You need [your connection details](/docs/integrate/find-connection-details/index.md). This procedure also works for [self-hosted TimescaleDB](/docs/get-started/choose-your-path/install-timescaledb/index.md).

## Optimize time-series data in hypertables with hypercore

Time-series data represents the way a system, process, or behavior changes over time. Hypertables are PostgreSQL tables that help you improve insert and query performance by automatically partitioning your data by time. Each hypertable is made up of child tables called chunks. Each chunk is assigned a range of time, and only contains data from that range. When you run a query, TimescaleDB identifies the correct chunk and runs the query on it, instead of going through the entire table. You can also tune hypertables to increase performance even more.
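
For example, once a hypertable exists (such as the `crypto_ticks` table created later on this page), you can inspect the chunks that back it. A minimal sketch using the `show_chunks` function:

```
-- List the chunks of a hypertable that only contain data older than one week
SELECT show_chunks('crypto_ticks', older_than => INTERVAL '1 week');
```

Because each chunk covers a known time range, queries that filter on the time column touch only the matching chunks.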

![Hypertable structure](/docs/_astro/hypertable-structure.CyTvF21Z_CcUht.webp)

[Hypercore](/docs/learn/columnar-storage/understand-hypercore/index.md) is the hybrid row-columnar storage engine in TimescaleDB used by hypertables. Traditional databases force a trade-off between fast inserts (row-based storage) and efficient analytics (columnar storage). Hypercore eliminates this trade-off, allowing real-time analytics without sacrificing transactional capabilities.

Hypercore dynamically stores data in the most efficient format for its lifecycle:

![Move from rowstore to columstore in hypercore](/docs/_astro/hypercore_intro.DutS1jP2.svg)

- **Row-based storage for recent data**: the most recent chunk (and possibly more) is always stored in the rowstore, ensuring fast inserts, updates, and low-latency single record queries. Additionally, row-based storage is used as a writethrough for inserts and updates to columnar storage.
- **Columnar storage for analytical performance**: chunks are automatically compressed into the columnstore, optimizing storage efficiency and accelerating analytical queries.

Unlike traditional columnar databases, hypercore allows data to be inserted or modified at any stage, making it a flexible solution for both high-ingest transactional workloads and real-time analytics, within a single database.

Hypertables exist alongside regular PostgreSQL tables: you use regular tables for relational data, and you interact with both kinds of table in the same way.

This section shows you how to create regular tables and hypertables, and import relational and time-series data from external files.

1. **Import some time-series data into hypertables**

   1. Unzip [crypto\_sample.zip](https://assets.timescale.com/docs/downloads/candlestick/crypto_sample.zip) to a `<local folder>`.

      This test dataset contains:

      - Second-by-second data for the most-traded crypto-assets. This time-series data is best suited for optimization in a [hypertable](/docs/learn/hypertables/understand-hypertables/index.md).
      - A list of asset symbols and company names. This is best suited for a regular relational table.

      To import up to 100 GB of data directly from your current PostgreSQL-based database, [migrate with downtime](/docs/migrate/migrate-with-downtime/index.md) using native PostgreSQL tooling. To seamlessly import 100 GB-10 TB+ of data, use the [live migration](/docs/migrate/live-migration/index.md) tooling supplied by Tiger Data. To add data from non-PostgreSQL data sources, see [Import and ingest data](/docs/migrate/index.md).

   2. Upload data into a hypertable:

      To learn more about creating hypertables, how they work, and how to optimize them for performance by tuning chunk intervals and enabling chunk skipping, see [the hypertables documentation](/docs/learn/hypertables/understand-hypertables/index.md).

      - [Tiger Console](#tab-panel-642)
      - [psql](#tab-panel-643)

      Tiger Console data upload creates hypertables and relational tables from the data you are uploading:

      1. In [Tiger Console](https://console.cloud.tigerdata.com/dashboard/services), select the service to add data to, then click `Actions` > `Import data` > `Upload .CSV`.

      2. Click to browse, or drag and drop `<local folder>/tutorial_sample_tick.csv` to upload.

      3. Leave the default settings for the delimiter, skipping the header, and creating a new table.

      4. In `Table`, provide `crypto_ticks` as the new table name.

      5. Enable `hypertable partition` for the `time` column and click `Process CSV file`.

         The upload wizard creates a hypertable containing the data from the CSV file.

      6. When the data is uploaded, close `Upload .CSV`.

         If you want to have a quick look at your data, press `Run`.

      7. Repeat the process with `<local folder>/tutorial_sample_assets.csv`, naming the new table `crypto_assets`.

         There is no time-series data in this table, so you don’t see the `hypertable partition` option.

      1) In the terminal, navigate to `<local folder>` and connect to your service.

         ```
         psql -d "postgres://<username>:<password>@<host>:<port>/<database-name>"
         ```

         You use your [connection details](/docs/integrate/find-connection-details/index.md) to fill in this PostgreSQL connection string.

      2) Create tables for the data to import:

         - For the time-series data:

           In your SQL client, create a [hypertable](/docs/learn/hypertables/understand-hypertables/index.md) for your time-series data with `CREATE TABLE`. For [efficient queries](/docs/build/performance-optimization/secondary-indexes/index.md), set `tsdb.segmentby` to the column you will most often use to filter your data. For example:

           ```
           CREATE TABLE crypto_ticks (
             "time" TIMESTAMPTZ,
             symbol TEXT,
             price DOUBLE PRECISION,
             day_volume NUMERIC
           ) WITH (
              tsdb.hypertable,
              tsdb.segmentby = 'symbol'
           );
           ```

           When you create a hypertable using [CREATE TABLE … WITH …](/docs/reference/timescaledb/hypertables/create_table/index.md), the default partitioning column is automatically the first column with a timestamp data type. Also, TimescaleDB creates a [columnstore policy](/docs/reference/timescaledb/hypercore/add_columnstore_policy/index.md) that automatically converts your data to the columnstore, after an interval equal to the value of the [chunk\_interval](/docs/reference/timescaledb/hypertables/set_chunk_time_interval/index.md), defined through `after` in the policy. This columnar format enables fast scanning and aggregation, optimizing performance for analytical workloads while also saving significant storage space. In the columnstore conversion, hypertable chunks are compressed by up to 98%, and organized for efficient, large-scale queries.

           You can customize this policy later using [alter\_job](/docs/reference/timescaledb/jobs-automation/alter_job/index.md). However, to change `after` or `created_before`, the compression settings, or the hypertable the policy is acting on, you must [remove the columnstore policy](/docs/reference/timescaledb/hypercore/remove_columnstore_policy/index.md) and [add a new one](/docs/reference/timescaledb/hypercore/add_columnstore_policy/index.md).

           You can also manually [convert chunks](/docs/reference/timescaledb/hypercore/convert_to_columnstore/index.md) in a hypertable to the columnstore.

         - For the relational data:

           In your SQL client, create a regular PostgreSQL table:

           ```
           CREATE TABLE crypto_assets (
            symbol TEXT NOT NULL,
            name TEXT NOT NULL
           );
           ```

      3) Speed up data ingestion:

         When you set `timescaledb.enable_direct_compress_copy`, your data is compressed in memory during ingestion with `COPY` statements. Because the compressed batches are written straight to the columnstore, the I/O footprint is significantly lower. The [columnstore policy](/docs/reference/timescaledb/hypercore/add_columnstore_policy/index.md) you set also matters less, because `COPY` already produces compressed chunks.

         Note

         This feature is a **tech preview** and is not production-ready. It can degrade query performance and/or storage efficiency if the ingested batches are not correctly ordered or have too high cardinality.

         To enable in-memory data compression during ingestion:

         ```
         SET timescaledb.enable_direct_compress_copy=on;
         ```

         **Important facts**

         - High-cardinality use cases do not produce good batches and lead to degraded query performance.
         - The columnstore is optimized to store 1000 records per batch, which is the optimal format for ingestion per `segmentby` segment.
         - WAL records are written for the compressed batches rather than for individual tuples.
         - Currently only `COPY` is supported; `INSERT` support will follow.
         - Best results are achieved with batch ingestion of 1000 records or more; the upper boundary is 10,000 records.
         - Continuous aggregates are **not** supported at the moment.

      4) Upload the dataset to your service:

         ```
         \COPY crypto_ticks from './tutorial_sample_tick.csv' DELIMITER ',' CSV HEADER;
         ```

         ```
         \COPY crypto_assets from './tutorial_sample_assets.csv' DELIMITER ',' CSV HEADER;
         ```

2. **Have a quick look at your data**

   You query hypertables in exactly the same way as you would a relational PostgreSQL table. Use one of the following SQL editors to run a query and see the data you uploaded:

   - **Data view**: write queries, visualize data, and share your results in [Tiger Console](https://console.cloud.tigerdata.com/dashboard/services?popsql) for all your Tiger Cloud services.
   - **SQL editor**: write, fix, and organize SQL faster and more accurately in [Tiger Console](https://console.cloud.tigerdata.com/dashboard/services) for a Tiger Cloud service.
   - **psql**: easily run queries on your Tiger Cloud services or self-hosted TimescaleDB deployment from the terminal.

   ```
   SELECT * FROM crypto_ticks
   ORDER BY time DESC
   LIMIT 10;
   ```
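
   Because hypertables and regular tables are both PostgreSQL tables, you can also join them. A minimal sketch, assuming the `crypto_ticks` and `crypto_assets` tables from the previous step:

   ```
   -- Join time-series ticks with relational asset metadata
   SELECT a.name, t.symbol, t.price, t."time"
   FROM crypto_ticks t
   JOIN crypto_assets a ON a.symbol = t.symbol
   ORDER BY t."time" DESC
   LIMIT 10;
   ```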

## Write fast and efficient analytical queries

Aggregation is a way of combining data to get insights from it. Average, sum, and count are all examples of simple aggregates. However, with large amounts of data, aggregation slows things down, quickly. Continuous aggregates are a kind of hypertable that is refreshed automatically in the background as new data is added or old data is modified: changes to your dataset are tracked, and the hypertable behind the continuous aggregate is updated accordingly.

![Reduced data calls with continuous aggregates](/docs/_astro/continuous-aggregate.dGxXBEuT_18QL0n.webp)

You create continuous aggregates on uncompressed data in high-performance storage. They continue to work on [data in the columnstore](#write-fast-and-efficient-analytical-queries) and [rarely accessed data in tiered storage](#slash-storage-charges). You can even create [continuous aggregates on top of your continuous aggregates](/docs/learn/continuous-aggregates/hierarchical-continuous-aggregates/index.md).

You use time buckets to create a continuous aggregate. Time buckets aggregate data in hypertables by time interval. For example, a 5-minute, 1-hour, or 3-day bucket. The data grouped in a time bucket uses a single timestamp. Continuous aggregates minimize the number of records that you need to look up to perform your query.
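
To get a feel for time buckets before creating a continuous aggregate, you can call `time_bucket` directly in a regular query against the hypertable. A sketch, assuming the `crypto_ticks` table from earlier on this page:

```
-- Average price per asset in 1-hour buckets
SELECT
  time_bucket('1 hour', "time") AS bucket,
  symbol,
  avg(price) AS avg_price
FROM crypto_ticks
GROUP BY bucket, symbol
ORDER BY bucket DESC;
```

A continuous aggregate precomputes and maintains the result of exactly this kind of query.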

This section shows you how to run fast analytical queries using time buckets and continuous aggregates in Tiger Console. You can also do this using psql.

- [Data view](#tab-panel-644)
- [Continuous aggregate wizard](#tab-panel-645)

1. **In Tiger Console, toggle Data view at the top right**

2. **Connect to your service**

   In the connection drop-down in the top right, select your service.

3. **Create a continuous aggregate**

   For a continuous aggregate, data grouped using a time bucket is stored in a PostgreSQL `MATERIALIZED VIEW` backed by a hypertable. `timescaledb.continuous` ensures that this data is always up to date. In Data view, use the following code to create a continuous aggregate on the real-time data in the `crypto_ticks` table:

   ```
   CREATE MATERIALIZED VIEW assets_candlestick_daily
   WITH (timescaledb.continuous) AS
   SELECT
     time_bucket('1 day', "time") AS day,
     symbol,
     max(price) AS high,
     first(price, time) AS open,
     last(price, time) AS close,
     min(price) AS low
   FROM crypto_ticks srt
   GROUP BY day, symbol;
   ```

   This continuous aggregate creates the [candlestick chart](https://www.investopedia.com/terms/c/candlestick.asp) data you use to visualize the price change of an asset.

4. **Create a policy to refresh the view every hour**

   ```
   SELECT add_continuous_aggregate_policy('assets_candlestick_daily',
   start_offset => INTERVAL '3 weeks',
   end_offset => INTERVAL '24 hours',
   schedule_interval => INTERVAL '3 hours');
   ```

5. **Have a quick look at your data**

   You query continuous aggregates exactly the same way as your other tables. To query the `assets_candlestick_daily` continuous aggregate for all assets:

   ```
   SELECT * FROM assets_candlestick_daily
   ORDER BY day DESC, symbol
   LIMIT 10;
   ```

1) **In Tiger Console, select the service you uploaded data to**

2) **Click `Explorer` > `Continuous Aggregates` > `Create a Continuous Aggregate` next to the `crypto_ticks` hypertable**

3) **Create a view called `assets_candlestick_daily` on the `time` column with an interval of `1 day`, then click `Next step`**

   ![Continuous aggregate wizard](/docs/_astro/continuous-aggregate-wizard-tiger-console.uDYTFl9A_2cOj4P.webp)

4) **Update the view SQL with the following functions, then click `Run`**

   ```
   CREATE MATERIALIZED VIEW assets_candlestick_daily
   WITH (timescaledb.continuous) AS
   SELECT
     time_bucket('1 day', "time") AS bucket,
     symbol,
     max(price) AS high,
     first(price, time) AS open,
     last(price, time) AS close,
     min(price) AS low
   FROM "public"."crypto_ticks" srt
   GROUP BY bucket, symbol;
   ```

5) **When the view is created, click `Next step`**

6) **Define a refresh policy with the following values:**

   - `How far back do you want to materialize?`: `3 weeks`
   - `What recent data to exclude?`: `24 hours`
   - `How often do you want the job to run?`: `3 hours`

7) **Click `Next step`, then click `Run`**

   Tiger Cloud creates the continuous aggregate and displays the aggregate ID in Tiger Console. Click `DONE` to close the wizard.

To see the difference in query time and data returned between a regular query and a continuous aggregate, run the query part of the continuous aggregate (`SELECT ... GROUP BY day, symbol;`) and compare the results.
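
One way to make the comparison concrete is to prefix both queries with `EXPLAIN ANALYZE` and compare the reported execution times; this sketch assumes the hypertable and continuous aggregate created earlier on this page:

```
-- Aggregate the raw hypertable data
EXPLAIN ANALYZE
SELECT time_bucket('1 day', "time") AS day, symbol, max(price) AS high
FROM crypto_ticks
GROUP BY day, symbol;

-- Read the precomputed rows from the continuous aggregate
EXPLAIN ANALYZE
SELECT * FROM assets_candlestick_daily;
```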

## Slash storage charges

In the previous sections, you used continuous aggregates to make fast analytical queries, and hypercore to reduce storage costs on frequently accessed data. To reduce storage costs even more, you create tiering policies to move rarely accessed data to the object store. The object store is low-cost bottomless data storage built on Amazon S3 or Azure Blob storage. However, no matter the tier, you can [query your data when you need it](/docs/build/data-management/storage/query-tiered-data/index.md). Tiger Cloud seamlessly accesses the correct storage tier and generates the response.

![Data tiering from high-performance storage to low-cost object storage](/docs/_astro/tiered-storage.DM_cwWvu_Z17pQBe.webp)

To set up data tiering:

1. **Enable data tiering**

   1. In [Tiger Console](https://console.cloud.tigerdata.com/dashboard/services), select the service to modify.

   2. In `Explorer`, click `Data tiering` > `Enable tiered storage`.

      ![Enable tiered storage](/docs/_astro/enable-tiered-storage-tiger-console.DqWz7kdU_3fUN.webp)

      When tiered storage is enabled, you see the amount of data in the tiered object storage.

2. **Set the time interval when data is tiered**

   In Tiger Console, toggle Data view, then set up a tiering policy on a hypertable with the following query:

   ```
   SELECT add_tiering_policy('assets_candlestick_daily', INTERVAL '3 weeks');
   ```

3. **Query tiered data**

   You can enable reads from tiered data per query, per session, or for all future sessions. To run a single query on tiered data:

   1. Enable reads on tiered data:

      ```
      SET timescaledb.enable_tiered_reads = true;
      ```

   2. Query the data:

      ```
      SELECT * FROM crypto_ticks LIMIT 10;
      ```

   3. Disable reads on tiered data:

      ```
      SET timescaledb.enable_tiered_reads = false;
      ```

   For more information, see [Querying tiered data](/docs/build/data-management/storage/query-tiered-data/index.md).
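
   To enable tiered reads for all future sessions rather than per session, you can set the parameter at the database level. A sketch, assuming your database is named `tsdb`:

   ```
   ALTER DATABASE tsdb SET timescaledb.enable_tiered_reads = true;
   ```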

## Reduce the risk of downtime and data loss

By default, all Tiger Cloud services have rapid recovery enabled. However, if your app has very low tolerance for downtime, Tiger Cloud offers high-availability replicas. HA replicas are exact, up-to-date copies of your database hosted in multiple AWS availability zones (AZs) within the same region as your primary node. HA replicas automatically take over operations if the primary node becomes unavailable. The primary node streams its write-ahead log (WAL) to the replicas to minimize the chances of data loss during failover.

1. **In Tiger Console, select the service to enable replication for.**
2. **Click `Operations`, then select `High availability`.**
3. **Choose your replication strategy, then click `Change configuration`.**

   ![High availability replica configuration in Tiger Console](/docs/_astro/tiger-console-ha-replicas.BsuEJHbW_Znnyey.webp)
4. **In `Change high availability configuration`, click `Change config`.**

For more information, see [High-availability replica](/docs/deploy/tiger-cloud/tiger-cloud-aws/high-availability/high-availability/index.md).

## Tiger Cloud support

Tiger Data runs a global support organization with Customer Satisfaction (CSAT) scores above 99%. Support covers all time zones and is fully staffed during weekends.

All paid pricing plans have free Developer Support through email with a target response time of 1 business day; we are often faster. If you need 24x7 responsiveness, talk to us about [Production Support](https://www.tigerdata.com/support).

You can open, view, reply to, and close support tickets from the `Support` tab in Tiger Console:

![Manage support in Tiger Cloud](/docs/_astro/tiger-cloud-manage-support.oAio5bba_Z1Ot3bO.webp)

What next? See [Build](/docs/build/index.md) for more advanced tutorials, interact with the data in your Tiger Cloud service using [your favorite programming language](/docs/integrate/code/start-coding-with-tigerdata/index.md), integrate your Tiger Cloud service with a range of [third-party tools](/docs/integrate/index.md), or dive into [the API reference](/docs/reference/index.md).
