Tiger Lake enables you to build real-time applications alongside efficient data pipeline management within a single system. Tiger Lake unifies the Tiger Cloud operational architecture with data lake architectures.
Tiger Lake is a native integration that synchronizes hypertables and relational tables
running in Tiger Cloud services to Iceberg tables running in Amazon S3 Tables in your AWS account.
To follow the steps on this page:
- Create a target Tiger Cloud service with the Real-time analytics capability.
- You need your connection details.
Note
This feature is currently not supported for Tiger Cloud on Microsoft Azure.
To connect a Tiger Cloud service to your data lake:
Records are imported in time order, from oldest to newest. As a prerequisite to sync to Iceberg, your hypertable or relational table must have a primary key, which can be a composite primary key.
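For example, the following sketch creates a hypertable with a composite primary key that includes the time column, which satisfies this prerequisite. The table and column names are illustrative:

```sql
-- Illustrative example: a table with a composite primary key that
-- includes the time column, so it can be synced to Iceberg.
CREATE TABLE sensor_readings (
    device_id   INTEGER          NOT NULL,
    ts          TIMESTAMPTZ      NOT NULL,
    temperature DOUBLE PRECISION,
    PRIMARY KEY (device_id, ts)
);

-- Convert the table into a hypertable partitioned on the time column.
SELECT create_hypertable('sensor_readings', by_range('ts'));
```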
When you start syncing, all data in the table is streamed to Iceberg in the following processes:
- Table snapshot: stream data from a snapshot of the source table to the destination Iceberg table at approximately 300,000 records per second. For larger tables, import speeds are approximately 1 billion records, or 100 GB of data, per hour. However, these numbers vary with table width and the complexity of the schema.
- Table changes: stream changes made to the source table (CDC) after the snapshot is taken to a branch of the destination Iceberg table. This happens at approximately 30,000 events per second. Ingest bursts that exceed this rate can be absorbed for a period of time and worked off gradually; how long this takes depends on the duration of the burst and the number of extra events to handle.
Once the snapshot is fully imported, the snapshot and CDC Iceberg table branches are merged. Merging takes from a couple of seconds up to ten minutes for larger tables of 5 TB or more. During this time, new events are held in the WAL. Once the merge is complete, the events held in the WAL are replicated to Iceberg through CDC. This means the Iceberg table is eventually consistent after you start the sync.
To stream data from a Postgres relational table or a hypertable in your Tiger Cloud service to your data lake, run the following statement:
```sql
ALTER TABLE <table_name> SET (
  tigerlake.iceberg_sync        = true | false,
  tigerlake.iceberg_partitionby = '<partition_specification>',
  tigerlake.iceberg_namespace   = '<namespace>',
  tigerlake.iceberg_table       = '<table>'
);
```
- `tigerlake.iceberg_sync`: boolean, set to `true` to start streaming, or `false` to stop the stream. A stream cannot resume after being stopped.
- `tigerlake.iceberg_partitionby`: optional property to define a partition specification in Iceberg. By default the Iceberg table is partitioned as `day(<time-column of $HYPERTABLE>)`. This default behavior is only applicable to hypertables. For more information, see partitioning.
- `tigerlake.iceberg_namespace`: optional property to set a namespace. The default is `timescaledb`.
- `tigerlake.iceberg_table`: optional property to specify a different table name. If no name is specified, the Postgres table name is used.
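For example, the following sketch starts a sync with all four properties set; the table, column, namespace, and Iceberg table names are illustrative:

```sql
-- Start syncing an illustrative hypertable to Iceberg with an hourly
-- partition on its time column, a custom namespace, and a custom table name.
ALTER TABLE sensor_readings SET (
  tigerlake.iceberg_sync        = true,
  tigerlake.iceberg_partitionby = 'hour(ts)',
  tigerlake.iceberg_namespace   = 'analytics',
  tigerlake.iceberg_table       = 'sensor_readings_lake'
);
```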
By default, the partition interval for an Iceberg table synced from a hypertable is `day(<time-column>)`. Syncing a Postgres table that is not a hypertable does not enable any partitioning in Iceberg. You can set the partitioning using `tigerlake.iceberg_partitionby`. The following partition intervals and specifications are supported:
| Interval | Description | Source types |
|---|---|---|
| hour | Extract a timestamp hour, as hours from epoch. Epoch is 1970-01-01. | timestamp, timestamptz |
| day | Extract a date or timestamp day, as days from epoch. | date, timestamp, timestamptz |
| month | Extract a date or timestamp month, as months from epoch. | date, timestamp, timestamptz |
| year | Extract a date or timestamp year, as years from epoch. | date, timestamp, timestamptz |
| truncate[W] | Value truncated to width W, see options. | |
These partition intervals define the behavior using the Iceberg partition specification.
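For example, a coarser monthly partition on a hypothetical ts column could be specified like this when you start the sync, following the same pattern as the samples below:

```sql
-- Partition the destination Iceberg table by month on the time column.
ALTER TABLE sensor_readings SET (
  tigerlake.iceberg_sync        = true,
  tigerlake.iceberg_partitionby = 'month(ts)'
);
```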
The following samples show you how to tune data sync from a hypertable or a Postgres relational table to your data lake:
Sync a hypertable with the default one-day partitioning interval on the ts_column column

To start syncing data from a hypertable to your data lake using the default one-day chunk interval as the partitioning scheme for the Iceberg table, run the following statement:

```sql
ALTER TABLE my_hypertable SET (tigerlake.iceberg_sync = true);
```

This is equivalent to day(ts_column).

Specify a custom partitioning scheme for a hypertable
You use the tigerlake.iceberg_partitionby property to specify a different partitioning scheme for the Iceberg table at sync start. For example, to enforce an hourly partition scheme from the chunks on ts_column on a hypertable, run the following statement:

```sql
ALTER TABLE my_hypertable SET (
  tigerlake.iceberg_sync        = true,
  tigerlake.iceberg_partitionby = 'hour(ts_column)'
);
```

Set the partition to sync relational tables
Postgres relational tables do not forward a partitioning scheme to Iceberg, so you must specify the partitioning scheme using tigerlake.iceberg_partitionby when you start the sync. For example, to sync a standard Postgres table to an Iceberg table with daily partitioning, run the following statement:

```sql
ALTER TABLE my_postgres_table SET (
  tigerlake.iceberg_sync        = true,
  tigerlake.iceberg_partitionby = 'day(timestamp_col)'
);
```

Stop sync to an Iceberg table for a hypertable or a Postgres relational table
```sql
ALTER TABLE my_hypertable SET (tigerlake.iceberg_sync = false);
```

Update or add the partitioning scheme of an Iceberg table
To change the partitioning scheme of an Iceberg table, you specify the desired partitioning scheme using the tigerlake.iceberg_partitionby property. For example, if the samples table has an hourly (hour(ts)) partition on the ts timestamp column, to change to daily partitioning, run the following statement:

```sql
ALTER TABLE samples SET (tigerlake.iceberg_partitionby = 'day(ts)');
```

This statement also works for Iceberg tables without a partitioning scheme. When you change the partition, you do not have to pause the sync to Iceberg. Apache Iceberg handles the partitioning operation according to its internal implementation.
Specify a different namespace
By default, tables are created in the timescaledb namespace. To specify a different namespace when you start the sync, use the tigerlake.iceberg_namespace property. For example:
```sql
ALTER TABLE my_hypertable SET (
  tigerlake.iceberg_sync      = true,
  tigerlake.iceberg_namespace = 'my_namespace'
);
```
Specify a different Iceberg table name
By default, the table name in Iceberg is the same as that of the source table in Tiger Cloud.
Some services do not allow mixed case, or have other constraints on table names.
To define a different table name for the Iceberg table at sync start, use the tigerlake.iceberg_table property. For example:
```sql
ALTER TABLE Mixed_CASE_TableNAME SET (
  tigerlake.iceberg_sync  = true,
  tigerlake.iceberg_table = 'my_table_name'
);
```
- The service must run Postgres 17.6 or later.
- Only the Amazon S3 Tables Iceberg REST catalog is supported.
- In order to collect deletes made to data in the columnstore, certain columnstore optimizations are disabled for hypertables. This includes Direct Compress.
- The TRUNCATE statement is not supported, and does not truncate data in the corresponding Iceberg table.
- Data in a hypertable that has been moved to the low-cost object storage tier is not synced.
- Writing to the same S3 table bucket from multiple services is not supported; bucket-to-service mapping is one-to-one.
- Iceberg snapshots are pruned automatically if their number exceeds 2500.
- Long-running continuous aggregate refresh transactions on a hypertable, plus 30 minutes, can cause issues by holding the replication slot for too long. Consider batching refreshes in these cases.