---
title: Understand capabilities | Tiger Data Docs
description: Learn how TimescaleDB and Tiger Cloud capabilities work together to power time-series and analytics workloads
---

TimescaleDB and Tiger Cloud extend PostgreSQL with powerful capabilities designed specifically for time-series data, real-time analytics, and event-driven workloads. These capabilities work together to provide a complete solution for ingesting, storing, querying, and analyzing massive datasets efficiently.

## Capabilities overview

TimescaleDB and Tiger Cloud capabilities fall into the following categories:

### Data storage and organization

- **[Hypertables](/docs/learn/hypertables/understand-hypertables/index.md)**: automatically partition time-series data into chunks for efficient data management at scale.
- **[Hypercore](/docs/learn/columnar-storage/understand-hypercore/index.md)**: convert data to compressed columnar storage, typically reducing storage footprint by 90-95%.

### Data processing and aggregation

- **[Continuous aggregates](/docs/learn/continuous-aggregates/index.md)**: maintain pre-computed aggregations that update incrementally as new data arrives.
- **[Hyperfunctions](/docs/build/data-management/hyperfunctions/index.md)**: analyze data with specialized SQL functions including statistical aggregation and percentiles.

### Data lifecycle management

- **[Data retention](/docs/learn/data-lifecycle/data-retention/about-data-retention/index.md)**: automatically drop old data based on time intervals, keeping storage costs under control.
- **[Data tiering](/docs/learn/data-lifecycle/storage/about-storage-tiers/index.md)** (Tiger Cloud-exclusive capability): move older data to low-cost object storage while keeping it queryable with standard SQL.

### Schema optimization and automation

- **[Schema optimization](/docs/learn/data-model/understand-database-schemas/index.md)**: use PostgreSQL features like indexes, constraints, triggers, tablespaces, and foreign data wrappers.
- **[Jobs](/docs/build/data-management/create-and-manage-jobs/index.md)**: automate recurring tasks like continuous aggregate refreshes, data retention, and custom maintenance.

## Typical workflow

Here’s how TimescaleDB capabilities work together in a typical time-series application:

1. **Data ingestion**

   Start by [creating a hypertable](/docs/learn/hypertables/understand-hypertables/index.md) for your time-series data. The hypertable automatically partitions data into time-based chunks, enabling efficient inserts and queries. For high-volume ingestion, [optimize your schema](/docs/learn/data-model/understand-database-schemas/index.md) with appropriate indexes and constraints. Use bulk insert methods like `COPY` or multi-row `INSERT` statements for best performance. For migrating existing data or importing from external sources, see [Migrate](/docs/migrate/index.md).
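
   As a minimal sketch, a hypertable is created from a regular PostgreSQL table; the table and column names here are illustrative:

   ```sql
   -- Create a regular table, then convert it to a hypertable
   -- partitioned on the time column.
   CREATE TABLE metrics (
      time        TIMESTAMPTZ NOT NULL,
      device_id   TEXT        NOT NULL,
      temperature DOUBLE PRECISION
   );

   SELECT create_hypertable('metrics', 'time');

   -- Bulk-load with multi-row INSERT (or COPY for larger batches).
   INSERT INTO metrics (time, device_id, temperature) VALUES
      (now(), 'device-1', 21.5),
      (now(), 'device-2', 22.1);
   ```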

2. **Data optimization**

   [Hypercore](/docs/build/columnar-storage/setup-hypercore/index.md) is enabled automatically when you create a hypertable, providing columnar storage with advanced compression. This typically reduces storage by 90-95% while preserving full SQL query capability, and can speed up analytical queries by 100x to 1000x.
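
   For example, you can configure which column to segment compressed data by and schedule compression of older chunks (names and intervals below are illustrative):

   ```sql
   -- Enable columnar compression on the hypertable, segmenting by
   -- device so per-device queries stay fast.
   ALTER TABLE metrics SET (
      timescaledb.compress,
      timescaledb.compress_segmentby = 'device_id'
   );

   -- Compress chunks automatically once they are older than 7 days.
   SELECT add_compression_policy('metrics', INTERVAL '7 days');
   ```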

3. **Real-time analytics**

   Create [continuous aggregates](/docs/build/continuous-aggregates/create-a-continuous-aggregate/index.md) to automatically maintain pre-computed summaries. Use [hyperfunctions](/docs/build/data-management/hyperfunctions/index.md) in your aggregates to calculate statistics, percentiles, time-weighted averages, and other specialized metrics. Query continuous aggregates instead of raw data for instant results on dashboards and reports. [Real-time aggregates](/docs/learn/continuous-aggregates/real-time-aggregates/index.md) ensure you see the latest data without waiting for batch processing.
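
   A sketch of a continuous aggregate with a refresh policy (table, view, and column names are illustrative):

   ```sql
   -- Hourly per-device averages, maintained incrementally.
   CREATE MATERIALIZED VIEW metrics_hourly
   WITH (timescaledb.continuous) AS
   SELECT time_bucket('1 hour', time) AS bucket,
          device_id,
          avg(temperature) AS avg_temp,
          max(temperature) AS max_temp
   FROM metrics
   GROUP BY bucket, device_id;

   -- Refresh recent buckets on a schedule.
   SELECT add_continuous_aggregate_policy('metrics_hourly',
      start_offset      => INTERVAL '3 hours',
      end_offset        => INTERVAL '1 hour',
      schedule_interval => INTERVAL '1 hour');
   ```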

4. **Data lifecycle**

   [Configure retention policies](/docs/learn/data-lifecycle/data-retention/about-data-retention/index.md) to automatically drop old data when it’s no longer needed. Retention works seamlessly with hypercore, removing entire chunks efficiently without impacting performance. Retention policies can preserve aggregated data in continuous aggregates even after dropping raw data, enabling long-term trend analysis without storing every data point. On Tiger Cloud, use [tiered storage](/docs/learn/data-lifecycle/storage/about-storage-tiers/index.md) to move older data to low-cost object storage while keeping it queryable.
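
   For instance, assuming a hypertable named `metrics`, retention and (on Tiger Cloud) tiering are each a single policy call; the intervals are illustrative:

   ```sql
   -- Drop raw chunks older than 90 days; continuous aggregates
   -- defined on this hypertable keep their summarized history.
   SELECT add_retention_policy('metrics', INTERVAL '90 days');

   -- On Tiger Cloud only: tier chunks older than 30 days to
   -- low-cost object storage while keeping them queryable.
   SELECT add_tiering_policy('metrics', INTERVAL '30 days');
   ```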

5. **Automation**

   [Schedule jobs](/docs/build/data-management/create-and-manage-jobs/index.md) to automate hypercore compression, continuous aggregate refreshes, retention, and custom maintenance tasks. Jobs run reliably in the background and provide execution history for monitoring and troubleshooting.
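
   A custom maintenance task is a procedure with a `(job_id, config)` signature, registered with `add_job`; the procedure name and schedule below are illustrative:

   ```sql
   -- A custom action scheduled as a background job.
   CREATE OR REPLACE PROCEDURE my_maintenance(job_id INT, config JSONB)
   LANGUAGE plpgsql AS $$
   BEGIN
      RAISE NOTICE 'job % running', job_id;
      -- custom maintenance work goes here
   END
   $$;

   -- Run the procedure every hour in the background.
   SELECT add_job('my_maintenance', '1 hour');
   ```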

## Capabilities by use case

### IoT and sensor data

For IoT workloads with millions of devices generating continuous metrics:

- **[Hypertables](/docs/learn/hypertables/understand-hypertables/index.md)**: partition data by time and optionally by device ID for optimal performance.
- **[Hypercore](/docs/learn/columnar-storage/understand-hypercore/index.md)**: compress sensor readings with 95%+ storage reduction.
- **[Continuous aggregates](/docs/learn/continuous-aggregates/index.md)**: pre-compute device statistics and fleet-wide metrics.
- **[Hyperfunctions](/docs/build/data-management/hyperfunctions/index.md)**: downsample with LTTB for visualization, use time-weighted averages for irregular samples.
- **[Data retention](/docs/learn/data-lifecycle/data-retention/about-data-retention/index.md)**: drop raw sensor data automatically after a retention period.
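
For example, a time-weighted average copes with irregularly spaced sensor readings. This sketch assumes the `timescaledb_toolkit` extension is installed and uses illustrative table and column names:

```sql
-- Time-weighted average temperature per device over the last day;
-- linear weighting accounts for uneven sample spacing.
SELECT device_id,
       average(time_weight('Linear', time, temperature)) AS avg_temp
FROM metrics
WHERE time > now() - INTERVAL '1 day'
GROUP BY device_id;
```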

### Financial analytics

For financial data with high-frequency trading, market data, and portfolio analytics:

- **[Hypertables](/docs/learn/hypertables/understand-hypertables/index.md)**: store tick data, OHLCV bars, and trade executions.
- **[Hypercore](/docs/learn/columnar-storage/understand-hypercore/index.md)**: compress historical market data for cost-effective long-term backtesting.
- **[Continuous aggregates](/docs/learn/continuous-aggregates/index.md)**: maintain pre-computed OHLCV bars, technical indicators, and portfolio valuations.
- **[Hyperfunctions](/docs/build/data-management/hyperfunctions/index.md)**: calculate candlestick aggregates, percentiles, and statistical measures.
- **[Schema optimization](/docs/learn/data-model/understand-database-schemas/index.md)**: use indexes for symbol lookups, constraints for data integrity.
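
For example, one-minute OHLCV bars can be built from raw ticks with the toolkit's candlestick aggregate. This sketch assumes the `timescaledb_toolkit` extension and an illustrative `ticks` table:

```sql
-- One-minute OHLCV bars per symbol from raw tick data.
SELECT time_bucket('1 minute', time) AS bucket,
       symbol,
       open(candlestick_agg(time, price, volume))   AS open,
       high(candlestick_agg(time, price, volume))   AS high,
       low(candlestick_agg(time, price, volume))    AS low,
       close(candlestick_agg(time, price, volume))  AS close,
       volume(candlestick_agg(time, price, volume)) AS volume
FROM ticks
GROUP BY bucket, symbol;
```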

### Observability and monitoring

For system metrics, logs, and distributed tracing:

- **[Hypertables](/docs/learn/hypertables/understand-hypertables/index.md)**: ingest metrics, logs, and traces with automatic partitioning.
- **[Hypercore](/docs/learn/columnar-storage/understand-hypercore/index.md)**: compress historical metrics and traces for cost-effective long-term retention.
- **[Continuous aggregates](/docs/learn/continuous-aggregates/index.md)**: maintain service health metrics, error rates, and latency percentiles.
- **[Hyperfunctions](/docs/build/data-management/hyperfunctions/index.md)**: calculate uptime/downtime via heartbeat aggregation, detect anomalies, and analyze distributions.
- **[Data retention](/docs/learn/data-lifecycle/data-retention/about-data-retention/index.md)**: drop raw data after the debugging period while keeping aggregated metrics.
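
For example, latency percentiles can be computed with the toolkit's approximate percentile aggregate. This sketch assumes the `timescaledb_toolkit` extension and an illustrative `requests` table:

```sql
-- p95 / p99 request latency per service over 5-minute windows.
SELECT time_bucket('5 minutes', time) AS bucket,
       service,
       approx_percentile(0.95, percentile_agg(latency_ms)) AS p95,
       approx_percentile(0.99, percentile_agg(latency_ms)) AS p99
FROM requests
GROUP BY bucket, service;
```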
