
By Tiger Data Team
Updated on Apr 10, 2026
Originally published on Jan 10, 2025
Online analytical processing (OLAP) is how organizations turn large volumes of raw data into decisions. Whether you're calculating quarterly revenue trends, aggregating sensor readings from thousands of Internet of Things (IoT) devices, or generating real-time operational dashboards, OLAP is the architectural pattern that makes those queries fast.
This guide covers what OLAP is, how it compares to online transaction processing (OLTP), the major OLAP system types, and how we bring real-time OLAP capabilities to PostgreSQL, without forcing you to abandon the database you already know.
OLAP is a category of database technology designed to execute complex analytical queries over large datasets quickly. While a transactional database is built for lots of small reads and writes, an OLAP system is designed for aggregations, deeper analysis, and working with historical data.
Let's consider a classic example: an e-commerce platform that records every order as it happens. OLAP is what lets you ask "What were total sales by region and product category for each quarter over the last three years?" and get the answer in milliseconds.
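In SQL terms, that question is a multidimensional aggregation. A sketch of what it might look like, assuming a hypothetical orders table with order_time, region, category, and amount columns:

```sql
-- Hypothetical orders table: (order_time, region, category, amount)
SELECT
    date_trunc('quarter', order_time) AS quarter,
    region,
    category,
    SUM(amount) AS total_sales
FROM orders
WHERE order_time >= now() - INTERVAL '3 years'
GROUP BY quarter, region, category
ORDER BY quarter, region, category;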
Traditional OLAP systems organize data into a structure called a data cube, a multidimensional array that groups data by dimensions such as time, geography, and product. Instead of scanning rows in a flat table, a data cube lets the query engine navigate pre-organized slices of data.
A typical data cube might have:
Time dimension: Years, quarters, months, days
Product dimension: Categories, SKUs, brands
Location dimension: Regions, countries, cities
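PostgreSQL can express this kind of multidimensional slicing directly with GROUP BY CUBE, which computes subtotals for every combination of the listed dimensions. A minimal sketch, again assuming a hypothetical sales table:

```sql
-- Hypothetical sales table: (sale_date, region, category, amount)
-- CUBE produces one aggregate row per combination of dimensions,
-- including subtotals and grand totals (rows where a dimension is NULL)
SELECT
    date_trunc('quarter', sale_date) AS quarter,
    region,
    category,
    SUM(amount) AS total_sales
FROM sales
GROUP BY CUBE (date_trunc('quarter', sale_date), region, category);
```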

Modern OLAP systems have largely moved away from physically storing pre-calculated cubes in favor of columnar storage engines and vectorized query execution, which can compute aggregations on the fly quickly enough that maintaining pre-calculated cube state is rarely worth the cost.
OLAP is the right choice when you need to analyze large datasets across multiple dimensions:
Financial analytics: revenue roll-ups, cost center analysis, profit and loss (P&L) reporting across time periods
IoT and operational monitoring: aggregating sensor readings from thousands of devices to detect trends or anomalies
Retail and e-commerce: product performance analysis, customer segmentation, inventory forecasting
Operational reporting: business intelligence dashboards that need to reflect near-real-time data
OLTP is designed for high-throughput, low-latency operations on individual records: inserting a new order, updating a customer's address, or checking inventory availability. OLTP databases are row-oriented and optimized for fast reads and writes on small subsets of data.
OLAP is designed for analytical queries that read large amounts of data, aggregate it across dimensions, and return summarized results. OLAP databases are typically column-oriented, trading write performance for efficient read performance across wide scans.
| Characteristic | OLTP | OLAP |
| --- | --- | --- |
| Primary purpose | Record transactions | Analyze data |
| Query type | Simple reads/writes | Complex aggregations |
| Data volume per query | Small (individual rows) | Large (millions of rows) |
| Optimization | Write throughput, low latency | Read throughput, scan efficiency |
| Storage model | Row-oriented | Column-oriented |
| Schema style | Normalized | Denormalized / star or snowflake |
| Typical users | Applications, end users | Analysts, business intelligence (BI) tools |
| Update frequency | Continuous | Batch or near-real-time |
| Example query | "Insert this order" | "Total revenue by region last quarter" |
Use OLTP when you need low-latency responses to high-frequency transactions, such as application backends, payment processing, or any user-facing data operation where individual row speed matters.
Use OLAP when you're running analytics across historical data, building dashboards, generating reports, or running queries that scan millions of rows.
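The contrast is easy to see in the queries themselves. A sketch, assuming a hypothetical orders table:

```sql
-- OLTP: touch a single row and return immediately
INSERT INTO orders (order_id, customer_id, region, amount, order_time)
VALUES (1001, 42, 'EMEA', 99.50, now());

-- OLAP: scan and aggregate many rows for last quarter's revenue
SELECT region, SUM(amount) AS revenue
FROM orders
WHERE order_time >= date_trunc('quarter', now()) - INTERVAL '3 months'
  AND order_time <  date_trunc('quarter', now())
GROUP BY region;
```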
Many modern systems need both: OLTP for ingestion and OLAP for analytics on the same data. That's where a hybrid approach becomes particularly valuable.
There are three main OLAP architectures, each making different trade-offs between storage, performance, and flexibility.
MOLAP (multidimensional OLAP) stores data in pre-calculated, multidimensional cube structures. Because the data is already aggregated and organized before queries run, MOLAP delivers very fast query responses. The trade-off is that cubes must be built in advance, which means MOLAP systems aren't well-suited to real-time analytics and can consume significant storage for large datasets.
MOLAP works best for stable reporting workloads where dimensions and aggregations are known ahead of time.
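You can see the MOLAP trade-off in miniature with an ordinary PostgreSQL materialized view: queries against the pre-computed result are fast, but the data is frozen at build time. A sketch, assuming a hypothetical sales table:

```sql
-- Pre-compute the aggregate ahead of query time (the MOLAP idea)
CREATE MATERIALIZED VIEW sales_by_region_quarter AS
SELECT
    date_trunc('quarter', sale_date) AS quarter,
    region,
    SUM(amount) AS total_sales
FROM sales
GROUP BY 1, 2;

-- Fast to query, but stale until rebuilt; the refresh
-- re-computes the entire view rather than updating incrementally
REFRESH MATERIALIZED VIEW sales_by_region_quarter;
```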
ROLAP (relational OLAP) uses a relational database as its storage engine and applies an analytical layer on top to simulate multidimensional analysis. This keeps data in familiar SQL-accessible tables while still enabling OLAP-style queries. ROLAP is more flexible than MOLAP (no cube pre-computation required), scales to larger datasets, and integrates naturally with existing relational infrastructure.
The trade-off is that, without specialized optimizations, relational databases can be slow on the complex aggregations OLAP requires. This is the core problem that purpose-built extensions solve for PostgreSQL.
HOLAP (hybrid OLAP) combines MOLAP and ROLAP; detailed data lives in relational storage, while higher-level aggregations are pre-computed and stored in multidimensional structures. This gives you MOLAP's fast response for common aggregate queries and ROLAP's flexibility for ad-hoc analysis.
Modern analytics platforms have largely converged on HOLAP-like architectures: columnar storage for fast scans combined with materialized aggregates for frequently run queries. With Hypercore and continuous aggregates, PostgreSQL can implement this pattern natively.
The line between OLAP and OLTP isn’t as clear as it used to be. A few shifts have changed how teams think about analytics architecture.
Columnar storage is now mainstream. Where OLAP once required separate specialized systems, columnar storage engines are now available as extensions on general-purpose databases. You no longer need to move data out of PostgreSQL to get column-scan performance.
Real-time analytics has replaced batch extract, transform, load (ETL) for many use cases. Traditional OLAP relied on nightly or hourly data warehouse loads. Modern applications, including IoT platforms, financial systems, and operational dashboards, need analytics to reflect data as it arrives, not hours later.
The data warehouse and the operational database are converging. The separation between "write here, analyze there" architectures is expensive to maintain and introduces latency. Hybrid architectures that handle both transactional writes and analytical reads on the same data store are increasingly common.
Cloud scale has changed the cost calculus. Dedicated OLAP systems like Snowflake, BigQuery, and Redshift are powerful, but they add infrastructure complexity and synchronization overhead. For many teams, a single PostgreSQL-compatible system that handles both workloads is simpler and cheaper to operate.
PostgreSQL is an excellent general-purpose database, but it faces real constraints on OLAP workloads out of the box: row-oriented storage means column-heavy scans read unnecessary data, the query planner isn't optimized for the aggregation patterns OLAP demands, and there's no native mechanism for pre-computing and maintaining aggregates as new data arrives.
We address all three through hypertables, continuous aggregates, and Hypercore.
Hypertables are our time-partitioned table structure optimized for time-series data. Under the hood, hypertables automatically partition data into chunks by time (and optionally by an additional dimension like sensor_id), which means queries with time filters only scan the relevant chunks rather than the full table.
For OLAP workloads where time is a primary dimension, which covers most IoT, financial, and operational use cases, hypertables dramatically reduce the data volume each query touches.
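Here's a minimal sketch of that setup using the TimescaleDB API (the table, column names, and partition count are illustrative):

```sql
-- Partition by time, with an optional second dimension on device_id
CREATE TABLE device_metrics (
    time      TIMESTAMPTZ NOT NULL,
    device_id TEXT NOT NULL,
    cpu       DOUBLE PRECISION
);
SELECT create_hypertable('device_metrics', 'time');
SELECT add_dimension('device_metrics', 'device_id', number_partitions => 4);

-- A time-filtered query scans only the chunks covering the last day
SELECT device_id, AVG(cpu) AS avg_cpu
FROM device_metrics
WHERE time > now() - INTERVAL '1 day'
GROUP BY device_id;
```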
Continuous aggregates are our mechanism for materialized OLAP-style aggregations that stay automatically up to date. You define the aggregation once as a view; it is maintained incrementally as new data arrives.
Here's a practical example for an IoT monitoring use case. Starting with a hypertable of sensor readings, you can define hourly averages per sensor for a real-time dashboard:
```sql
-- Create the hypertable
CREATE TABLE sensor_readings (
    time        TIMESTAMPTZ NOT NULL,
    sensor_id   TEXT NOT NULL,
    temperature DOUBLE PRECISION,
    pressure    DOUBLE PRECISION
);

SELECT create_hypertable('sensor_readings', 'time');

-- Define a continuous aggregate for hourly averages
CREATE MATERIALIZED VIEW sensor_hourly_avg
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 hour', time) AS bucket,
    sensor_id,
    AVG(temperature) AS avg_temp,
    AVG(pressure) AS avg_pressure,
    COUNT(*) AS reading_count
FROM sensor_readings
GROUP BY bucket, sensor_id;

-- Set a refresh policy to keep it updated automatically
SELECT add_continuous_aggregate_policy('sensor_hourly_avg',
    start_offset => INTERVAL '3 hours',
    end_offset => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour'
);
```

Dashboards now query sensor_hourly_avg instead of scanning raw readings. The aggregate is maintained automatically as new sensor data arrives, delivering real-time OLAP-style performance from a single PostgreSQL connection, with no ETL pipeline and no separate analytics warehouse.
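A dashboard query then reads the pre-aggregated view like any other table (the sensor ID and 24-hour window below are illustrative):

```sql
-- Read hourly rollups instead of raw readings
SELECT bucket, avg_temp, avg_pressure
FROM sensor_hourly_avg
WHERE sensor_id = 'sensor-42'
  AND bucket > now() - INTERVAL '24 hours'
ORDER BY bucket;
```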
Hypercore is our hybrid row-columnar storage engine that automatically manages how data is stored depending on how it's used. New data is written to the rowstore, which is optimized for fast inserts and updates. As data ages and becomes more suited to analytical workloads, Hypercore automatically converts it to the columnstore, where it is compressed and organized for efficient large-scale queries.
For time-series data, converting data to the columnstore compresses it by more than 90%, helping you save on storage costs and keeping queries fast. This improvement comes not just from the column layout but because there's significantly less data to read in the first place. The result is OLAP-style read performance on PostgreSQL, without a separate system or ETL pipeline.
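In TimescaleDB terms, the rowstore-to-columnstore conversion is driven by policy. A minimal sketch using the compression API, continuing the sensor_readings example above (the seven-day threshold is an assumption, not a recommendation):

```sql
-- Enable columnar compression, segmenting by sensor
-- so per-sensor scans stay fast after conversion
ALTER TABLE sensor_readings SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'sensor_id'
);

-- Automatically compress chunks once they are older than 7 days
SELECT add_compression_policy('sensor_readings', INTERVAL '7 days');
```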
Ingest sensor readings at high frequency and use continuous aggregates to maintain real-time rollups by device, location, and time bucket. Dashboards query aggregates rather than raw data, so latency stays low regardless of how much history you accumulate.
Store transaction data in a hypertable and use continuous aggregates for running totals, rolling averages, and period-over-period comparisons. ACID compliance and standard SQL make PostgreSQL a natural fit for financial workloads that can't sacrifice correctness for speed.
Track application metrics, user events, or key performance indicators (KPIs) in real time. Hypercore keeps storage costs low while continuous aggregates keep reporting queries fast as data volumes grow.
Our approach isn't for every workload. Dedicated OLAP systems like ClickHouse can outperform PostgreSQL on certain query patterns, especially double-grouping aggregations over very wide tables. But for real-time analytics where data needs to be both written and queried continuously, our PostgreSQL-native approach has meaningful advantages:
No separate system to operate and keep in sync with your transactional data
Full SQL, ACID compliance, and PostgreSQL ecosystem compatibility
Real-time analytics without batch ETL delays
A single operational and analytical datastore reduces architectural complexity and cost
If you're running PostgreSQL and need OLAP capabilities, we offer a direct upgrade path with no migration required.
Managed cloud: Tiger Cloud is a fully managed service; provision a database in minutes with no infrastructure to configure.
Self-hosted: TimescaleDB, the open-source extension Tiger Data is built on, is available for self-hosted PostgreSQL deployments.
Key resources to explore:
Continuous aggregates documentation: the core OLAP primitive for real-time analytics
Hypertables: time-partitioned tables for fast time-range queries
Hypercore: our hybrid row-columnar storage engine for OLAP-style performance on PostgreSQL
PostgreSQL as a Real-Time Analytics Database: a deeper dive into real-time analytics architecture
How to Choose an OLAP Database: an evaluation framework for analytical database decisions
Try Tiger Cloud free and start running OLAP-style analytics on your PostgreSQL data today.
What is the difference between OLAP and OLTP?
OLAP (online analytical processing) is designed for complex analytical queries over large datasets: aggregations, trend analysis, and multi-dimensional reporting. OLTP (online transaction processing) is designed for high-throughput, low-latency operations on individual records. OLAP optimizes for read performance across large scans; OLTP optimizes for fast writes and point reads on small subsets of data. Most modern applications need both.
What is an OLAP database?
An OLAP database is a data store optimized for analytical workloads. Key characteristics include columnar or multidimensional storage, optimizations for aggregation queries, and support for complex multi-dimensional analysis. Examples include ClickHouse, Apache Druid, and Google BigQuery. Tiger Data extends PostgreSQL to support OLAP workloads through Hypercore (its hybrid row-columnar storage engine), continuous aggregates, and time-partitioned hypertables.
Can PostgreSQL handle OLAP workloads?
Standard PostgreSQL can handle OLAP workloads, but its row-oriented storage and general-purpose query planner are not optimized for analytical queries over large datasets. With Tiger Data, PostgreSQL can support real-time OLAP workloads competitively with dedicated OLAP systems, while maintaining full SQL compatibility and ACID compliance.
What are the three types of OLAP?
The three main OLAP types are MOLAP (multidimensional OLAP), which stores pre-calculated data cubes for fast query responses; ROLAP (relational OLAP), which uses relational databases with an analytical layer on top; and HOLAP (hybrid OLAP), which combines pre-computed aggregates for common queries with relational storage for detailed data. Tiger Data's continuous aggregates implement a HOLAP-style architecture natively on PostgreSQL.
What is a continuous aggregate in Tiger Data?
A continuous aggregate is a materialized view in Tiger Data that automatically stays up to date as new data arrives. You define an aggregation query once (for example, hourly averages of sensor readings grouped by device) and Tiger Data incrementally refreshes the result as new rows are inserted. This gives you OLAP-style pre-computed aggregates without batch ETL jobs or manual refresh logic.