
Tiger Cloud essentials

Technical walkthrough covering hypertables, hypercore, continuous aggregates, and tiered storage in a single workflow

Tiger Cloud offers managed database services that provide a stable and reliable environment for your applications.

Each Tiger Cloud service is a single optimized PostgreSQL instance extended with innovations such as TimescaleDB in the database engine, running on cloud infrastructure that delivers speed without sacrifice: a radically faster PostgreSQL for transactional, analytical, and agentic workloads at scale.

Tiger Cloud scales PostgreSQL to ingest and query vast amounts of live data. Tiger Cloud provides a range of features and optimizations that supercharge your queries while keeping the costs down. For example:

  • The hypercore row-columnar engine in TimescaleDB makes queries up to 350x faster, ingests data 44% faster, and reduces storage by 90%.
  • Tiered storage in Tiger Cloud seamlessly moves your data from high-performance storage for frequently accessed data to low-cost bottomless storage for rarely accessed data.

The following figure shows how TimescaleDB optimizes your data for superfast real-time analytics:

How TimescaleDB optimizes data across rowstore, columnstore, and object storage

This page shows you how to rapidly implement the features in Tiger Cloud that enable you to ingest and query data faster while keeping the costs low.


Optimize time-series data in hypertables with hypercore


Time-series data represents the way a system, process, or behavior changes over time. Hypertables are PostgreSQL tables that help you improve insert and query performance by automatically partitioning your data by time. Each hypertable is made up of child tables called chunks. Each chunk is assigned a range of time, and only contains data from that range. When you run a query, TimescaleDB identifies the correct chunk and runs the query on it, instead of going through the entire table. You can also tune hypertables to increase performance even more.
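As a sketch of the idea (the table and column names here are illustrative, not part of the walkthrough below), you create a hypertable from an ordinary PostgreSQL table definition:

```sql
-- Illustrative only: a plain PostgreSQL table for time-series readings.
CREATE TABLE conditions (
  time        TIMESTAMPTZ NOT NULL,
  device_id   TEXT,
  temperature DOUBLE PRECISION
);

-- Convert it into a hypertable partitioned by the time column.
-- TimescaleDB then creates chunks and routes inserts to them automatically.
SELECT create_hypertable('conditions', 'time');
```

From this point on, you insert into and query `conditions` exactly as you would any PostgreSQL table; the chunking is transparent.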

Hypertable structure

Hypercore is the hybrid row-columnar storage engine in TimescaleDB used by hypertables. Traditional databases force a trade-off between fast inserts (row-based storage) and efficient analytics (columnar storage). Hypercore eliminates this trade-off, allowing real-time analytics without sacrificing transactional capabilities.

Hypercore dynamically stores data in the most efficient format for its lifecycle:

Move from rowstore to columnstore in hypercore
  • Row-based storage for recent data: the most recent chunk (and possibly more) is always stored in the rowstore, ensuring fast inserts, updates, and low-latency single-record queries. Additionally, row-based storage acts as a write-through for inserts and updates to columnar storage.
  • Columnar storage for analytical performance: chunks are automatically compressed into the columnstore, optimizing storage efficiency and accelerating analytical queries.

Unlike traditional columnar databases, hypercore allows data to be inserted or modified at any stage, making it a flexible solution for both high-ingest transactional workloads and real-time analytics, within a single database.
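In SQL, the move from rowstore to columnstore is driven by a policy on the hypertable. A minimal sketch using the compression API (the hypertable name and column are illustrative; recent TimescaleDB versions also offer an equivalent columnstore-specific API):

```sql
-- Enable columnar compression on the hypertable, segmenting
-- compressed data by device for faster per-device scans.
ALTER TABLE conditions SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id'
);

-- Automatically compress chunks once they are older than 7 days;
-- newer chunks stay in the rowstore for fast inserts and updates.
SELECT add_compression_policy('conditions', INTERVAL '7 days');
```

The interval is a tuning choice: it should be long enough that data in a chunk has mostly stopped changing before the chunk moves to the columnstore.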

Hypertables exist alongside regular PostgreSQL tables. You use regular PostgreSQL tables for relational data, and interact with hypertables and regular PostgreSQL tables in the same way.

This section shows you how to create regular tables and hypertables, and import relational and time-series data from external files.

  1. Import some time-series data into hypertables
    1. Unzip crypto_sample.zip to a <local folder>.

      This test dataset contains:

      • Second-by-second data for the most-traded crypto-assets. This time-series data is best suited for optimization in a hypertable.
      • A list of asset symbols and company names. This is best suited for a regular relational table.

      To import up to 100 GB of data directly from your current PostgreSQL-based database, migrate with downtime using native PostgreSQL tooling. To seamlessly import 100 GB to 10 TB+ of data, use the live migration tooling supplied by Tiger Data. To add data from non-PostgreSQL data sources, see Import and ingest data.

    2. Upload data into a hypertable:

      To more fully understand how to create a hypertable, how hypertables work, and how to optimize them for performance by tuning chunk intervals and enabling chunk skipping, see the hypertables documentation.

      Tiger Console data upload creates hypertables and relational tables from the data you are uploading:

      1. In Tiger Console, select the service to add data to, then click Actions > Import data > Upload .CSV.

      2. Click to browse, or drag and drop <local folder>/tutorial_sample_tick.csv to upload.

      3. Leave the default settings for the delimiter, skipping the header, and creating a new table.

      4. In Table, provide crypto_ticks as the new table name.

      5. Enable hypertable partition for the time column and click Process CSV file.

        The upload wizard creates a hypertable containing the data from the CSV file.

      6. When the data is uploaded, close Upload .CSV.

        If you want to have a quick look at your data, press Run.

      7. Repeat the process with <local folder>/tutorial_sample_assets.csv and rename to crypto_assets.

        There is no time-series data in this table, so you don’t see the hypertable partition option.

  2. Have a quick look at your data

    You query hypertables in exactly the same way as you would a relational PostgreSQL table. Use one of the following SQL editors to run a query and see the data you uploaded:

    • Data view: write queries, visualize data, and share your results in Tiger Console for all your Tiger Cloud services.
    • SQL editor: write, fix, and organize SQL faster and more accurately in Tiger Console for a Tiger Cloud service.
    • psql: easily run queries on your Tiger Cloud services or self-hosted TimescaleDB deployment from the terminal.
    SELECT * FROM crypto_ticks
    ORDER BY time DESC
    LIMIT 10;
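Because hypertables behave like regular PostgreSQL tables, you can also join them with relational tables. For example, assuming the crypto_assets table has symbol and name columns (the exact column names depend on your upload):

```sql
-- Join time-series ticks with relational asset metadata.
SELECT t.time, t.symbol, a.name, t.price
FROM crypto_ticks t
JOIN crypto_assets a ON t.symbol = a.symbol
ORDER BY t.time DESC
LIMIT 10;
```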

Write fast and efficient analytical queries


Aggregation is a way of combining data to get insights from it. Average, sum, and count are all examples of simple aggregates. However, as data volumes grow, computing aggregates on the fly quickly becomes slow. Continuous aggregates are a kind of hypertable that is refreshed automatically in the background as new data is added, or old data is modified. Changes to your dataset are tracked, and the hypertable behind the continuous aggregate is automatically updated in the background.

Reduced data calls with continuous aggregates

You create continuous aggregates on uncompressed data in high-performance storage. They continue to work on data in the columnstore and rarely accessed data in tiered storage. You can even create continuous aggregates on top of your continuous aggregates.

You use time buckets to create a continuous aggregate. Time buckets aggregate data in hypertables by time interval. For example, a 5-minute, 1-hour, or 3-day bucket. The data grouped in a time bucket uses a single timestamp. Continuous aggregates minimize the number of records that you need to look up to perform your query.
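For example, a plain time_bucket query run directly against the hypertable (without a continuous aggregate) might average prices per hour like this:

```sql
-- Group raw ticks into 1-hour buckets and average the price per asset.
SELECT time_bucket('1 hour', time) AS bucket,
       symbol,
       avg(price) AS avg_price
FROM crypto_ticks
GROUP BY bucket, symbol
ORDER BY bucket DESC;
```

A continuous aggregate materializes the result of a query like this so that it does not have to be recomputed from the raw ticks on every read.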

This section shows you how to run fast analytical queries using time buckets and continuous aggregates in Tiger Console. You can also do this using psql.

  1. In Tiger Console, toggle Data view at the top right
  2. Connect to your service

    In the connection drop-down in the top right, select your service.

  3. Create a continuous aggregate

    For a continuous aggregate, data grouped using a time bucket is stored in a PostgreSQL MATERIALIZED VIEW in a hypertable. timescaledb.continuous ensures that this data is always up to date. In Data view, use the following code to create a continuous aggregate on the real-time data in the crypto_ticks table:

    CREATE MATERIALIZED VIEW assets_candlestick_daily
    WITH (timescaledb.continuous) AS
    SELECT
        time_bucket('1 day', "time") AS day,
        symbol,
        max(price) AS high,
        first(price, time) AS open,
        last(price, time) AS close,
        min(price) AS low
    FROM crypto_ticks srt
    GROUP BY day, symbol;

    This continuous aggregate creates the candlestick chart data you use to visualize the price change of an asset.

  4. Create a policy to refresh the view every hour
    SELECT add_continuous_aggregate_policy('assets_candlestick_daily',
        start_offset => INTERVAL '3 weeks',
        end_offset => INTERVAL '24 hours',
        schedule_interval => INTERVAL '3 hours');
  5. Have a quick look at your data

    You query continuous aggregates exactly the same way as your other tables. To query the assets_candlestick_daily continuous aggregate for all assets:

    SELECT * FROM assets_candlestick_daily
    ORDER BY day DESC, symbol
    LIMIT 10;

To compare query time and the amount of data returned between a regular query and a continuous aggregate, run the query part of the continuous aggregate (SELECT ... GROUP BY day, symbol;) and compare the results.
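One simple way to make this comparison is with EXPLAIN ANALYZE (or \timing in psql), which reports the execution time of each variant:

```sql
-- Raw aggregation over the hypertable:
EXPLAIN ANALYZE
SELECT time_bucket('1 day', "time") AS day, symbol, min(price) AS low
FROM crypto_ticks
GROUP BY day, symbol;

-- A similar read served from the continuous aggregate:
EXPLAIN ANALYZE
SELECT * FROM assets_candlestick_daily;
```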

In the previous sections, you used continuous aggregates to make fast analytical queries, and hypercore to reduce storage costs on frequently accessed data. To reduce storage costs even more, you create tiering policies that move rarely accessed data to the object store. The object store is low-cost, bottomless data storage built on Amazon S3 or Azure Blob storage. No matter the tier, you can query your data whenever you need it: Tiger Cloud seamlessly accesses the correct storage tier and generates the response.

Data tiering from high-performance storage to low-cost object storage

To set up data tiering:

  1. Enable data tiering
    1. In Tiger Console, select the service to modify.

    2. In Explorer, click Data tiering > Enable tiered storage.

      Enable tiered storage

      When tiered storage is enabled, you see the amount of data in the tiered object storage.

  2. Set the time interval when data is tiered

    In Tiger Console, toggle Data view, then set up a tiering policy on a hypertable with the following query:

    SELECT add_tiering_policy('assets_candlestick_daily', INTERVAL '3 weeks');
  3. Query tiered data

    You enable reads from tiered data per query, per session, or for all future sessions. To run a single query on tiered data:

    1. Enable reads on tiered data:
      SET timescaledb.enable_tiered_reads = true;
    2. Query the data:
      SELECT * FROM crypto_ticks srt LIMIT 10;
    3. Disable reads on tiered data:
      SET timescaledb.enable_tiered_reads = false;

    For more information, see Querying tiered data.
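To enable tiered reads beyond a single statement, you can set the same setting at the session or database level. A sketch (the database name tsdb is illustrative; use your service's database name):

```sql
-- For the current session only:
SET timescaledb.enable_tiered_reads = true;

-- For all future sessions connecting to this database:
ALTER DATABASE tsdb SET timescaledb.enable_tiered_reads = true;
```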

By default, all Tiger Cloud services have rapid recovery enabled. However, if your app has very low tolerance for downtime, Tiger Cloud offers high-availability replicas. HA replicas are exact, up-to-date copies of your database hosted in multiple AWS availability zones (AZs) within the same region as your primary node. HA replicas automatically take over operations if the original primary data node becomes unavailable. The primary node streams its write-ahead log (WAL) to the replicas to minimize the chances of data loss during failover.

  1. In Tiger Console, select the service to enable replication for.
  2. Click Operations, then select High availability.
  3. Choose your replication strategy, then click Change configuration.
    High availability replica configuration in Tiger Console
  4. In Change high availability configuration, click Change config.

For more information, see High-availability replica.

Tiger Data runs a global support organization with Customer Satisfaction (CSAT) scores above 99%. Support covers all time zones and is fully staffed during weekend hours.

All paid pricing plans have free Developer Support through email with a target response time of 1 business day; we are often faster. If you need 24x7 responsiveness, talk to us about Production Support.

You can open, view, reply to, and close support tickets from the Support tab in Tiger Console:

Manage support in Tiger Cloud

What next? See Build for more advanced tutorials, interact with the data in your Tiger Cloud service using your favorite programming language, integrate your Tiger Cloud service with a range of third-party tools, build with Tiger Data products, or dive into the API reference.