---
title: "PostgreSQL vs. Cassandra: The Decision Framework for Time-Series and Write-Heavy Workloads"
description: "Evaluating Cassandra vs. PostgreSQL for time-series or write-heavy workloads? A practical framework covering benchmarks, architecture trade-offs, and when each database actually wins."
section: "Postgres basics"
---

> **TimescaleDB is now Tiger Data.**

**Disclosure:** [<u>Tiger Data</u>](https://www.tigerdata.com/) builds on PostgreSQL. With that stated up front, this PostgreSQL vs. Cassandra comparison is based on eight years of benchmark data and production experience running time-series workloads on Postgres against Cassandra. The analysis below includes the cases where Cassandra wins. Read it with that framing in mind.

If you are evaluating [<u>Cassandra</u>](https://cassandra.apache.org/_/index.html) against [<u>PostgreSQL</u>](https://www.postgresql.org/) for a write-heavy, distributed, or time-series workload, you are making a high-stakes infrastructure decision. Database architecture locks you in for three to five years. Getting it wrong means either rebuilding your data layer mid-growth or absorbing 10x higher infrastructure costs than a better-matched system would require.

This guide is written for three readers: the architect evaluating both databases for a new system, the team currently on Cassandra hitting the query wall, and the engineer who has absorbed "Cassandra for write scale" as received wisdom and wants to stress-test it before committing. For most IoT, telemetry, and time-series workloads, that heuristic no longer holds up the way it once did.

## How PostgreSQL and Cassandra Actually Work

Before comparing them, it helps to understand what each database actually does when a write comes in.

### PostgreSQL write path

PostgreSQL uses a B-tree storage model. Writes go through a Write-Ahead Log (WAL) for durability, then are applied to B-tree indexes on disk. Multi-Version Concurrency Control (MVCC) handles concurrent reads and writes by keeping multiple row versions, enabling full [<u>ACID compliance</u>](https://www.tigerdata.com/learn/understanding-acid-compliance): atomicity, consistency, isolation, and durability. The result is a database that handles diverse, complex workloads extremely well, but one that was designed around the assumption that writes are reasonably distributed across the key space. High-cardinality [<u>time-series data</u>](https://www.tigerdata.com/learn/what-is-temporal-data), where millions of small writes land in monotonically increasing timestamp order, creates write amplification on B-tree indexes without proper partitioning.

### Cassandra write path

Apache Cassandra uses a Log-Structured Merge-Tree (LSM-tree) model. Writes go to an in-memory memtable and an on-disk commit log simultaneously, then are flushed periodically to immutable SSTables. This design is optimized for sequential I/O and append-heavy workloads: each write is fast because there is no random-access B-tree update. Cassandra distributes data across nodes using consistent hashing on a ring topology, with no primary node. Every node can accept writes, and replication happens automatically. That is the architectural basis for Cassandra's horizontal scaling claim, and it is real.

Cassandra offers tunable consistency: you can configure reads and writes to require ONE node, a QUORUM, or ALL nodes to respond. Eventual consistency is the default. Strong consistency is available at the cost of availability and latency.
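In `cqlsh`, for example, the consistency level is set per session; the table and values below are illustrative, not from the benchmark:

```sql
-- cqlsh sketch: tuning consistency per session (table name illustrative)
CONSISTENCY QUORUM;  -- require a majority of replicas to acknowledge
INSERT INTO sensor_readings (device_id, ts, value)
VALUES ('dev-42', toTimestamp(now()), 21.5);

CONSISTENCY ONE;     -- fastest and weakest: a single replica suffices
```

A common rule of thumb: with a replication factor of 3, reading and writing at QUORUM gives read-your-writes behavior, because the read and write quorums always overlap.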

### The implication

Cassandra's LSM-tree model suits append-only workloads with predictable, wide-row access patterns. PostgreSQL's B-tree model was not designed for millions of small time-ordered writes per second. Time-series extensions change this equation significantly. The table below covers the baseline comparison before extensions enter the picture.

| **Property** | **PostgreSQL** | **Apache Cassandra** |
| --- | --- | --- |
| Storage engine | B-tree | LSM-tree (SSTables) |
| Write model | WAL + B-tree update | Memtable + SSTable flush |
| Consistency | ACID (serializable isolation available) | Tunable (eventual by default) |
| Query language | SQL (standards-based) | CQL (SQL-like, not SQL) |
| Horizontal scaling | Read replicas, Citus, logical replication | Native ring topology, automatic sharding |
| ACID compliance | Yes | No (lightweight transactions via Paxos only) |
| JOIN operations | Native | Not supported |
| Ad hoc analytics | Native SQL | Requires secondary tooling (Spark, Presto) |

## Performance Reality Check

Most published "Cassandra vs. PostgreSQL" performance comparisons are either synthetic, outdated, or run by vendors with obvious bias. This section sticks to what the data actually shows, and to what it means for your workload.

### The 2018 Tiger Data benchmark

In 2018, Tiger Data's engineering team (then operating as “Timescale”) ran a structured benchmark comparing a 5-node TimescaleDB cluster against 5-node, 10-node, and 30-node Cassandra clusters. All tests ran on Azure D8s v3 instances. The workload was time-series: sensor data writes and five time-series query types (simple rollups, multi-rollups, lastpoint queries, groupby-orderby-limit, and multi-metric aggregations).

Results: the 5-node Tiger Data cluster [<u>achieved 5.4x higher write throughput</u>](https://www.tigerdata.com/blog/cassandra-vs-timescaledb) than a 5-node Cassandra cluster at parity hardware. On query performance, Tiger Data outperformed Cassandra by up to 5,800x across all five query types, including against a 30-node Cassandra cluster. Infrastructure cost for the 30-node Cassandra configuration: approximately $33,251/month on Azure D8s v3. Equivalent 5-node Tiger Data configuration: approximately $3,325/month. Roughly 10% of the infrastructure cost.

You can read the full methodology in [<u>our original benchmark post</u>](https://www.tigerdata.com/blog/cassandra-vs-timescaledb).

**Frame this correctly.** The test used time-series query patterns against time-series data. This is not a general Postgres benchmark; it is the relevant comparison for teams evaluating Cassandra for IoT and telemetry. The architectural fundamentals behind these results (hypertable partitioning and columnar compression of time-ordered data) are the same in 2026. Absolute numbers on current hardware would differ, but the directional advantage reflects real architectural differences.

### Where Cassandra wins on raw write throughput

At true petabyte scale across 10 or more geographic regions, Cassandra's linear horizontal scaling model can exceed what a vertically scaled Postgres cluster achieves. This is a real Cassandra strength. The next two sections cover when that strength actually applies to your workload, and when it does not.

## When Cassandra Wins

The scenarios below are specific rather than generic. Cassandra's architectural advantages apply to a narrower range of production workloads than its reputation suggests.

**True petabyte-scale multi-region writes.** If your use case requires global write availability across five or more data centers simultaneously, with automatic failover and no single point of failure, Cassandra's masterless ring topology is purpose-built for this. PostgreSQL replication at this scale (even with Patroni or Citus) requires considerably more operational investment. Netflix, Apple, and similar organizations running hundreds of petabytes across global infrastructure have legitimate Cassandra deployments.

**Write-only event log workloads with no SQL requirement.** Event sourcing systems, audit logs, and write-once-read-seldom workloads where the data model is fixed in advance and never changes are a reasonable Cassandra fit. If your team has no SQL analytics requirement, no need for joins, and a stable query pattern known at schema design time, Cassandra's append model is efficient.

**Teams already expert in Cassandra operations.** A team that has run Cassandra in production for five years, built tooling around it, and has no analytical or SQL requirement should not switch for switch's sake. Operational expertise is real infrastructure capital. The migration cost is real.

**Extremely high cardinality, append-only writes at sustained scale above 10 million writes per second per cluster.** At the top end of the write throughput envelope, Cassandra's LSM-tree and horizontal scaling can exceed even well-tuned Postgres plus Tiger Data configurations. This is uncommon in practice, but it is a real ceiling.

If your workload fits one of these descriptions, Cassandra may be the right choice. The rest of this guide is for teams whose use case does not cleanly fit these categories, which is most teams evaluating Cassandra for time-series, IoT, and telemetry.

## When PostgreSQL Wins

The scenarios below are specific workloads that drive most Cassandra evaluations.

**Time-series data: IoT sensor ingestion, telemetry, metrics monitoring.** When write patterns are time-ordered and append-heavy, but query patterns require aggregations, window functions, rollups, or multi-dimensional filtering, PostgreSQL with time-series extensions matches Cassandra's write performance while dramatically outperforming it on reads. This is what the 2018 benchmark demonstrated: same hardware, same data volumes, Cassandra faster on raw sequential appends, Tiger Data faster on every time-series query pattern by a factor that ranges from hundreds to thousands.

**Any workload where you need to query the data.** Cassandra's data model forces query-first schema design. Every new query pattern potentially requires a new table, a new data load, and a schema redesign. For teams building dashboards, alerting systems, or operational reports on top of their time-series data, this is a structural constraint that compounds over time. SQL solves this: standard, flexible, tooling-compatible SQL that works with every BI tool, every ORM, every analytics library your team already uses. The broader case for why SQL is winning the NoSQL argument applies directly here.

**Multi-model workloads: time-series plus relational context.** Production systems routinely need to join event data with reference data: device metadata, user tables, configuration records. In Cassandra, this requires application-layer joins or data duplication across multiple tables. In PostgreSQL, it is a standard JOIN with referential integrity enforced at the database level.
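As a sketch, assuming a hypothetical `devices` metadata table alongside the telemetry, the relational version is one statement:

```sql
-- Sketch: joining telemetry with reference data (the devices table is hypothetical)
CREATE TABLE devices (
  device_id text PRIMARY KEY,
  region    text,
  model     text
);

SELECT d.region, avg(r.value) AS avg_value
FROM sensor_readings r
JOIN devices d USING (device_id)   -- a foreign key can enforce integrity here
WHERE r.ts > now() - INTERVAL '1 day'
GROUP BY d.region;
```

In Cassandra, the same result requires either duplicating region data into every reading or assembling the join in application code.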

**Teams with existing Postgres expertise.** Operational costs matter. A team fluent in psql, pg_dump, logical replication, and the broader PostgreSQL tooling ecosystem often considers Cassandra only because it believes Postgres cannot handle the write volume. Before committing to a new data model, operational paradigm, and query language, evaluate whether TimescaleDB’s [<u>hypertables</u>](https://www.tigerdata.com/docs/use-timescale/latest/hypertables) actually close the gap first.

**Real examples from production.** [<u>WaterBridge</u>](https://www.tigerdata.com/blog/how-waterbridge-uses-timescaledb-for-real-time-data-consistency), an oil and gas infrastructure company, ingests between 5,000 and 10,000 data points per second using Tiger Data for real-time monitoring and alerting across field operations. [<u>Octave Energy</u>](https://www.tigerdata.com/blog/high-compression-ratio-and-speedy-queries-on-historical-data-while-revolutionizing-the-battery-market) runs energy analytics on continuous telemetry data from commercial and industrial facilities. Both are workloads that Cassandra evaluations commonly target, and both run efficiently on Tiger Data without the operational overhead of a distributed NoSQL cluster. See also the broader context for IoT workloads in [<u>Moving Past Legacy Systems: Data Historian vs. Time-Series Database</u>](https://www.tigerdata.com/learn/moving-past-legacy-systems-data-historian-vs-time-series-database).

## The Third Path: PostgreSQL + Tiger Data

The "Cassandra vs. Postgres" framing assumes you are comparing both databases in their default configurations. Tiger Data changes the equation by adding time-series primitives directly to PostgreSQL.

### Hypertables

A [<u>hypertable</u>](https://www.tigerdata.com/docs/use-timescale/latest/hypertables) is a PostgreSQL table that TimescaleDB automatically partitions by time into smaller chunks. To your application, it looks and behaves exactly like a standard Postgres table. Internally, writes are routed to the current time chunk, a small, bounded table that eliminates the B-tree contention problem that limits vanilla Postgres write throughput on monotonically increasing time-series data. As data ages, older chunks become candidates for compression and tiering. Chunk exclusion ensures queries skip irrelevant time ranges automatically, keeping latency stable as total data volume grows. This is the architectural change that makes Postgres competitive with Cassandra on time-series write performance.
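As a minimal sketch (table name and interval are illustrative), converting a plain Postgres table into a hypertable is a single call:

```sql
-- Sketch: a plain Postgres table becomes a hypertable in one call
CREATE TABLE metrics (
  ts        timestamptz NOT NULL,
  device_id text,
  value     double precision
);

SELECT create_hypertable('metrics', 'ts',
  chunk_time_interval => INTERVAL '1 day');  -- one chunk per day of data
```

The application keeps inserting into `metrics` as if nothing changed; chunk routing and chunk exclusion happen transparently.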

### Hypercore column store

Tiger Data's [<u>Hypercore</u>](https://www.tigerdata.com/docs/build/how-to/basic-compression) storage keeps recent data in row format for fast writes and converts older chunks to columnar format for analytical reads. Time-series data, which is highly repetitive and compressible in columnar format, typically compresses 90 to 95 percent with Hypercore. For IoT workloads generating continuous sensor data, this is the cost lever: petabytes of data stored at a fraction of the raw footprint, fully queryable via standard SQL.
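A hedged sketch of what enabling that conversion looks like, continuing the illustrative `metrics` table (thresholds and column choices are assumptions, not recommendations):

```sql
-- Sketch: compress chunks older than seven days (thresholds illustrative)
ALTER TABLE metrics SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id',  -- group compressed rows by device
  timescaledb.compress_orderby   = 'ts DESC'     -- keep time order within groups
);

SELECT add_compression_policy('metrics', INTERVAL '7 days');
```

Recent chunks stay in row format for fast writes; the policy converts older chunks to columnar format in the background.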

### Continuous aggregates

[<u>Continuous aggregates</u>](https://www.tigerdata.com/docs/use-timescale/latest/continuous-aggregates) are incrementally refreshed materialized views that update only the data that has changed since the last refresh. Cassandra has no native equivalent. Query-time aggregations over large Cassandra datasets require full partition scans with fan-out reads across the cluster. Continuous aggregates make that problem effectively irrelevant for common patterns: hourly rollups, daily summaries, moving averages, and real-time dashboards all run against pre-computed results that stay current without batch jobs.
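An hourly rollup over the illustrative `metrics` table can be sketched like this (offsets and schedule are assumptions):

```sql
-- Sketch: hourly rollup as a continuous aggregate
CREATE MATERIALIZED VIEW metrics_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', ts) AS bucket,
       device_id,
       avg(value) AS avg_value
FROM metrics
GROUP BY bucket, device_id;

-- Keep it fresh incrementally, without batch jobs
SELECT add_continuous_aggregate_policy('metrics_hourly',
  start_offset      => INTERVAL '3 hours',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour');
```

Dashboards then query `metrics_hourly` directly and touch only pre-computed rows.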

### What Tiger Data does not solve

Tiger Data does not replace Cassandra for true petabyte-scale multi-region active-active write distribution without a primary node. If that is your genuine requirement, Cassandra or a purpose-built distributed system is the appropriate answer. Tiger Cloud handles automated vertical scaling, high availability, and failover for managed deployments, but it does not replicate Cassandra's leaderless ring topology across five or more independent write regions.

If you want to test whether Tiger Data fits your workload before committing, [<u>start a Tiger Cloud free trial</u>](https://www.tigerdata.com/cloud).

## The Cassandra Ecosystem in 2026: What the IBM Acquisition Means

IBM announced the acquisition of DataStax in February 2025. The acquisition closed in May 2025. DataStax is now an IBM subsidiary. ([<u>IBM press release, May 2025</u>](https://newsroom.ibm.com/2025-05-06-IBM-Completes-Acquisition-of-DataStax))

Here is what changed and what did not.

**Apache Cassandra is unaffected.** The Apache Cassandra project is governed by the Apache Software Foundation, not by DataStax or IBM. The open-source project continues under its existing governance model. IBM does not own Cassandra.

**The commercial ecosystem changed.** DataStax was the primary commercial sponsor and distribution partner for the Cassandra ecosystem, including the managed cloud product AstraDB, enterprise support contracts, and commercial tooling. All of these are now under IBM ownership, with IBM's enterprise pricing model and product roadmap priorities. AstraDB's direction has been reoriented toward IBM's watsonx AI integration stack.

**If your evaluation includes AstraDB or a DataStax enterprise support contract,** factor in vendor concentration risk under IBM's ownership. Enterprise pricing posture and roadmap alignment with watsonx/AI are now IBM's to determine. Evaluate whether the current AstraDB roadmap matches your use case, not the pre-acquisition roadmap.

**If you are evaluating self-managed open-source Cassandra,** the Apache project is unaffected. Commercial support, enterprise tooling, and ecosystem investment have shifted, but self-managed deployments are not directly impacted.

IBM has significant engineering resources, and the acquisition may strengthen DataStax's enterprise capabilities. The relevant question for a 5-year infrastructure decision is whether the current commercial roadmap, not the pre-acquisition one, matches where you are going.

## Migrating from Cassandra to PostgreSQL

Teams that built on Cassandra for write scale and have since hit the query wall, or face growing operational overhead, are a real population. The migration question is high-stakes: schema translation from a query-first NoSQL model to a relational one is a deliberate engineering investment, not a lift-and-shift operation.

### The data model translation challenge

Cassandra's query-first schema design (wide rows, partition keys, and clustering columns) does not map directly to relational tables. Migration requires rethinking data access patterns, not just moving data.

Consider a Cassandra table designed for a specific query:

```sql
-- Cassandra: query-first wide-row design
CREATE TABLE sensor_readings (
  device_id text,
  ts timestamp,
  value double,
  PRIMARY KEY (device_id, ts)
) WITH CLUSTERING ORDER BY (ts DESC);
```

The Tiger Data equivalent is a hypertable with a time dimension and a device_id index. The difference: the relational model supports every query pattern you could not run before, without a new table per query.

```sql
-- Tiger Data: hypertable handles all query patterns
CREATE TABLE sensor_readings (
  device_id text,
  ts timestamptz NOT NULL,
  value double precision
);

SELECT create_hypertable('sensor_readings', 'ts');
CREATE INDEX ON sensor_readings (device_id, ts DESC);
```

### ETL approach

The migration path has three phases:

1. **Export.** Export Cassandra data to Parquet or CSV using spark-cassandra-connector or Cassandra's COPY command. For large datasets, Spark is more practical.
2. **Transform.** Redesign the schema to a relational model with hypertables. This is where engineering time concentrates, not in data movement. Plan for schema redesign time proportional to the number of distinct Cassandra tables and query patterns.
3. **Load and validate.** Load into Tiger Cloud via COPY or a bulk loader. Run Tiger Data and Cassandra in parallel for a defined validation period, redirect reads progressively, and decommission Cassandra once confidence is established.
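For smaller datasets, the export and load steps can be as simple as CSV through each database's COPY facility; a sketch with illustrative paths and columns:

```sql
-- Sketch, small datasets only (use Spark for large ones)
-- In cqlsh:
COPY sensor_readings (device_id, ts, value) TO 'readings.csv' WITH HEADER = true;

-- In psql, after creating the target hypertable:
\copy sensor_readings (device_id, ts, value) FROM 'readings.csv' WITH (FORMAT csv, HEADER true)
```

Spark-based pipelines follow the same shape: read from Cassandra, reshape to the relational schema, bulk-load into Postgres.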

**What stays the same:** time-ordered, append-only write patterns are fully compatible with Tiger Data hypertables. The application write path changes minimally: CQL INSERT becomes SQL INSERT, partition key becomes indexed column. Read paths require the most rework, but gain full SQL flexibility in return.

For large or complex migrations, the Tiger Cloud team can assist with migration planning.

## Decision Framework: Which Database Should You Use?

Engineers often scroll to this section first. Here is a structured framework.

### Five questions that map your workload to a recommendation

**Question 1: What are your write patterns?**

- Millions of small time-ordered writes per second (IoT sensor data, metrics, logs) -> Postgres + Tiger Data
- Random distributed writes across hundreds of millions of rows globally -> Cassandra
- Mixed (time-series writes plus transactional operations) -> Postgres + Tiger Data

**Question 2: Do you need to query your data with SQL or ad hoc analytics?**

- Yes: dashboards, rollups, multi-dimensional filters, joins with reference tables -> Postgres + Tiger Data (full SQL, continuous aggregates, native joins)
- No: write-once, point lookups only, fixed query patterns -> Cassandra may be appropriate

**Question 3: How many geographic regions do you need for write availability?**

- 1 to 3 regions, or active-passive replication is acceptable -> Postgres + Tiger Data
- 5 or more regions, active-active, no single point of failure required -> Cassandra 

**Question 4: What is your team's existing expertise?**

- Postgres/SQL ecosystem: psql, JDBC, ORMs, standard tooling -> Postgres + Tiger Data; minimal learning curve
- Cassandra operations, CQL, ring management -> factor switching costs into the decision

**Question 5: What is your data volume and time horizon?**

- TB to low PB range, time-series with retention and compression requirements -> Postgres + Tiger Data (Hypercore, automated retention, tiering)
- PB+ scale with no analytical query requirement -> Cassandra, or purpose-built distributed stores

### Scenario summary

| **Scenario** | **Recommended Database** | **Primary Reason** |
| --- | --- | --- |
| IoT sensor ingestion with analytics | Postgres + Tiger Data | Time-series query performance, full SQL, continuous aggregates |
| Multi-region global write availability (5+ regions) | Cassandra | Leaderless ring topology; no Postgres equivalent at this scale |
| Metrics monitoring and dashboards | Postgres + Tiger Data | Continuous aggregates; native SQL for alerting and reporting |
| Event sourcing (write-only, no SQL needed) | Cassandra | Fixed schema, no analytics requirement |
| Analytics on time-series data | Postgres + Tiger Data | SQL, joins, window functions; 2018 benchmark shows orders-of-magnitude query advantage |
| Hybrid transactional + time-series | Postgres + Tiger Data | ACID compliance; single system for both workloads |
| Migrating from Cassandra (hit query wall) | Postgres + Tiger Data | SQL flexibility, operational simplicity, lower TCO at typical scale tiers |

If your use case maps to Postgres + Tiger Data, [<u>start a Tiger Cloud free trial</u>](https://www.tigerdata.com/cloud).

## FAQ: PostgreSQL vs. Cassandra

### Is Cassandra faster than PostgreSQL?

On raw per-node write throughput, Cassandra's LSM-tree model has an edge over default PostgreSQL. Tiger Data's hypertables close that gap: a 5-node Tiger Data cluster achieved 5.4x higher write throughput than a 5-node Cassandra cluster at identical hardware. On analytical reads, PostgreSQL wins decisively, up to 5,800x faster in structured time-series query testing. The relevant metric is not raw append speed alone, but write plus query performance for your actual workload.

### When should I use Cassandra instead of PostgreSQL?

Cassandra is the better architectural choice in three specific scenarios: multi-region active-active deployments requiring write availability across five or more geographies simultaneously with no single point of failure; write-only event log use cases where no SQL analytics are needed and the data model is fixed at design time; and teams with deep Cassandra operational expertise and no relational analytics requirements. Time-series, IoT, telemetry, and operational analytics workloads generally run better on PostgreSQL with time-series extensions, with lower operational overhead and better query performance.

### Can PostgreSQL replace Cassandra?

Yes, for the workloads that drive most Cassandra evaluations: IoT sensor ingestion, metrics monitoring, telemetry, and application event logging. PostgreSQL with Tiger Data handles these with full SQL, continuous aggregates, and 90 to 95 percent storage compression. The exception is true petabyte-scale active-active deployments requiring write availability across five or more independent regions — at that topology, Cassandra's native horizontal scaling model requires less operational investment than PostgreSQL replication.

### What are the limitations of Cassandra compared to PostgreSQL?

Cassandra's primary limitations in a time-series context: no native SQL (CQL requires query-first schema design that does not generalize), no native JOIN operations, no ad hoc analytics without secondary tooling such as Spark or Presto, tombstone accumulation from deletes and updates that requires manual compaction management, and complex multi-data center configuration that demands specialized operational expertise. Additionally, the IBM acquisition of DataStax in May 2025 has changed the commercial ecosystem dynamics for teams relying on managed Cassandra services or DataStax enterprise support.

### Is Cassandra still relevant in 2026?

Yes, for its target use cases. Cassandra remains the architecture of choice for global, always-on, multi-region distributed storage at scale. Organizations running hundreds of petabytes across global infrastructure have legitimate Cassandra deployments. At the more common terabyte to low-petabyte range, the operational complexity, query limitations, CQL data modeling constraints, and compaction management overhead tend to outweigh the benefits relative to a well-configured PostgreSQL deployment. The IBM acquisition of DataStax in May 2025 is a relevant factor for teams evaluating commercial Cassandra support or managed services.

### How does Cassandra handle time-series data?

Cassandra stores time-series data using wide-row partitions keyed by a device or entity identifier, with a clustering column on timestamp. This works efficiently for point lookups: "give me all readings for device X in the last hour." It performs poorly for cross-partition analytical queries: "give me the average temperature across all devices in region Y, by hour, for the last 30 days." Those queries require fan-out reads across all partitions, with results assembled at the application layer or via Spark. PostgreSQL with hypertables and [<u>continuous aggregates</u>](https://www.tigerdata.com/docs/use-timescale/latest/continuous-aggregates) handles both patterns efficiently without secondary infrastructure.
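The cross-partition query from that example is ordinary SQL on a hypertable; the region filter assumes a hypothetical `devices` metadata table:

```sql
-- Sketch: hourly average across all devices in one region, last 30 days
SELECT time_bucket('1 hour', r.ts) AS hour,
       avg(r.value)                AS avg_temp
FROM sensor_readings r
JOIN devices d USING (device_id)   -- hypothetical metadata table
WHERE d.region = 'region-y'
  AND r.ts > now() - INTERVAL '30 days'
GROUP BY hour
ORDER BY hour;
```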

### What happened to DataStax and Cassandra after the IBM acquisition?

IBM announced the acquisition of DataStax in February 2025, and it closed in May 2025. DataStax is now an IBM subsidiary. Apache Cassandra itself remains an Apache Software Foundation open-source project and is unaffected by the acquisition. DataStax's managed cloud product, AstraDB, and commercial support contracts are now under IBM ownership, with the product roadmap oriented toward IBM's watsonx AI platform. Teams evaluating commercial Cassandra support, AstraDB, or DataStax enterprise tooling should factor IBM's enterprise pricing posture and current roadmap direction into the evaluation.

### Can I use PostgreSQL for high write throughput like Cassandra?

With Tiger Data's hypertables and Hypercore column store, yes, for time-series write patterns specifically. Tiger Data automatically partitions inserts into time-based chunks, which keeps the active write surface small and avoids the B-tree contention that limits vanilla PostgreSQL write throughput on monotonically increasing data. In the 2018 Tiger Data benchmark, a 5-node cluster achieved 5.4x higher write throughput than a 5-node Cassandra cluster at parity hardware on Azure D8s v3 instances, and outperformed a 30-node Cassandra cluster on all five measured time-series query types. See the [<u>full benchmark methodology</u>](https://www.tigerdata.com/blog/cassandra-vs-timescaledb) for the complete test setup and results.

### How do I migrate from Cassandra to PostgreSQL?

Migration involves three phases: (1) schema translation, redesigning query-first Cassandra tables into a relational model with Tiger Data hypertables, which is the most time-intensive phase; (2) data export and ETL, exporting via spark-cassandra-connector or COPY to Parquet or CSV, transforming to the relational schema, then loading into Tiger Cloud via COPY; and (3) validation, running parallel workloads, validating query parity, and decommissioning Cassandra once confidence is established. The schema redesign phase concentrates the engineering work, not the data movement. For large or complex migrations, reach out to the Tiger Cloud team for migration planning support.

### What is the difference between CQL and SQL?

CQL, Cassandra Query Language, resembles SQL syntactically but operates on fundamentally different principles. CQL requires query-first schema design: tables are defined based on the specific queries that will run against them, not based on the data model. Joins are not supported. Subqueries are not supported. Ad hoc analytics require creating new tables or using secondary tools. Changing a query pattern often means creating a new Cassandra table and reloading data into it. PostgreSQL's SQL supports standard relational operations, joins, window functions, subqueries, and ad hoc analytics with no schema redesign. For engineering teams already fluent in SQL, the CQL cognitive overhead and schema rigidity are significant adoption costs that compound over time as query requirements evolve.
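A concrete instance of that gap: a per-device moving average is one window expression in SQL and has no CQL equivalent (table name illustrative):

```sql
-- Sketch: 10-reading moving average per device, plain SQL window function
SELECT device_id, ts, value,
       avg(value) OVER (
         PARTITION BY device_id
         ORDER BY ts
         ROWS BETWEEN 9 PRECEDING AND CURRENT ROW
       ) AS moving_avg
FROM sensor_readings;
```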

### Is Cassandra ACID compliant?

Cassandra is not fully ACID compliant. It provides eventual consistency by default, with tunable consistency levels (ONE, QUORUM, ALL). Lightweight transactions using Paxos are available for compare-and-set operations, but carry significant performance overhead and are not suited for general transactional use. PostgreSQL is fully ACID compliant with serializable isolation, multi-statement transactions, foreign key constraints, and referential integrity at the database level. Workloads with transactional guarantees belong on PostgreSQL. Workloads where eventual consistency is acceptable on writes are a reasonable fit for Cassandra's consistency model.

### Does PostgreSQL scale horizontally like Cassandra?

Not natively in the same way. PostgreSQL's horizontal scaling options, including Citus for distributed query execution, logical replication, and read replicas, require more operational configuration than Cassandra's automatic ring-based horizontal scaling. For most time-series workloads, this is not a practical constraint: vertical scaling combined with Tiger Data's partitioning, compression, and data tiering handles terabyte-scale workloads on a single primary efficiently. At multi-region active-active write distribution across five or more independent regions, Cassandra's horizontal model is a genuine architectural advantage. Tiger Cloud handles automated vertical scaling and failover for managed deployments without requiring manual ring management.