
By Nicole Bahr

5 min read

Apr 02, 2026

Dev Q&A

Table of contents

01 The Energy Demand Spike Penalty
02 Amazon RDS Wasn’t Built for Time-Series Data
03 One Database for Transactional and Analytical Queries
04 AWS and Tiger Cloud at the Heart of the Cactos Tech Stack
05 Results
06 Looking Ahead

How Cactos Migrated from Amazon RDS and Cut Costs by 55%


Cactos, a manufacturer and operator of battery energy storage systems (BESS), migrated its time-series workloads from Amazon RDS to Tiger Data, shrinking storage by 92% and cutting costs by $5,000 per month. 

In this piece, Cactos Co-Founder and Software Engineer Juuso Mäyränen shares how his team evaluated the move and managed the migration.

The Energy Demand Spike Penalty

For agriculture businesses running high-intensity lighting, heating, and climate control 24/7, electricity is one of the largest costs. Beyond the energy cost itself, there is the risk of demand spikes. Across much of Europe, Distribution System Operators (DSOs) apply a power-based pricing mechanism: if a facility exceeds its agreed demand cap, the DSO can increase the base tariff for that site and hold it elevated for months. A single peak event can raise charges to 4x the normal rate for an entire quarter.
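Detecting cap exceedances like this is a natural time-series query. As an illustrative sketch (the table and the 500 kW cap are hypothetical, not Cactos's schema), a TimescaleDB query can bucket raw meter readings into the 15-minute windows DSOs bill on and flag any window over the agreed cap:

```sql
-- Hypothetical schema: meter_readings(ts timestamptz, site_id int, power_kw numeric)
-- Find 15-minute windows where average site draw exceeded an agreed 500 kW cap.
SELECT
    time_bucket('15 minutes', ts) AS window_start,
    site_id,
    avg(power_kw)                 AS avg_draw_kw
FROM meter_readings
WHERE ts >= now() - INTERVAL '90 days'
GROUP BY window_start, site_id
HAVING avg(power_kw) > 500      -- illustrative demand cap
ORDER BY window_start;
```

`time_bucket` is TimescaleDB's time-grouping function; the same query pattern works against hot or tiered data.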

Electrified logistics face similar challenges. EV truck fleets need to charge large battery packs during the brief loading and unloading windows when vehicles are stopped. The peak draw is enormous and short, so fleet operators need a way to buffer against excessive energy draws while delivering large energy capacity on-demand. 

Cactos solves this problem. By building and operating battery energy storage systems, Cactos acts as an intelligent buffer between the grid and customers in sectors such as agriculture, EV charging, and grid-scale battery parks. Each BESS unit charges during low-cost, off-peak periods and discharges to absorb demand spikes, lowering energy costs for end customers.

Cactos also enables energy trading for battery surplus. A central optimizer continuously calculates the best action for each BESS unit, trading unused capacity in ancillary energy markets at 15-minute intervals to generate additional revenue for the customer. Managing those market commitments adds a second layer of real-time pressure: when Cactos bids capacity into a market window and wins the trade, the fleet must deliver on time. The penalty for a missed commitment is both lost revenue and the obligation to purchase the committed energy elsewhere.

Amazon RDS Wasn’t Built for Time-Series Data

Before Tiger Data, Cactos ran on vanilla Amazon RDS for PostgreSQL. While RDS handled existing workloads reasonably well, the team found data access slowing for some use cases while costs skyrocketed. The database contained 15 TB of historical data, primarily fleet telemetry, all in hot storage with no compression and no tiering. The bill for this arrangement grew with the fleet. By the time the team decided to move, the RDS spend had reached $9,000 per month and was growing rapidly.

It was getting prohibitively expensive to run our time-series data on Amazon RDS. - Juuso Mäyränen, Co-Founder & Software Engineer, Cactos

One Database for Transactional and Analytical Queries

Juuso describes himself as “a pretty strong Postgres proponent.” When RDS became untenable, his instinct was to stay in the Postgres ecosystem rather than migrate to a purpose-built time-series database. He'd used InfluxDB and Prometheus in the past, but felt that PostgreSQL was better suited for critical workloads where precision and transactional correctness are baseline requirements, such as energy markets. 

Cactos also considered ClickHouse, but didn’t want to introduce the split architecture required to combine analytics and time-series data with an OLAP database, or to rework the code base for an additional database engine. Cactos needed to house its relational configuration data alongside its time-series telemetry, and running two systems would have introduced complexity the team didn't want to carry. Tiger Data's position as a PostgreSQL extension kept the architecture clean.

It makes things much simpler to manage everything from a single database. - Juuso Mäyränen

Tiger Cloud, a fully hosted TimescaleDB solution running on AWS, allowed operational and historical data to be stored and queried from one SQL interface. Native compression addressed the storage cost problem. Tiered storage to S3 handled older data automatically. And the per-table live-sync migration approach let the team move 15 TB across 35 partitioned tables with minimal downtime, completing the migration in roughly one month with no significant architectural changes required.
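The compression and tiering setup described above maps onto a handful of TimescaleDB and Tiger Cloud policy calls. A minimal sketch, assuming a simplified telemetry table (the names are illustrative, not Cactos's actual schema):

```sql
-- Hypothetical telemetry table; names are illustrative.
CREATE TABLE telemetry (
    ts        TIMESTAMPTZ      NOT NULL,
    device_id INT              NOT NULL,
    metric    TEXT             NOT NULL,
    value     DOUBLE PRECISION
);

-- Turn it into a hypertable, partitioned by time.
SELECT create_hypertable('telemetry', 'ts');

-- Enable native columnar compression, segmented by device for per-unit scans.
ALTER TABLE telemetry SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id'
);
SELECT add_compression_policy('telemetry', INTERVAL '7 days');

-- On Tiger Cloud, automatically tier chunks past the hot window to S3.
SELECT add_tiering_policy('telemetry', INTERVAL '2 months');
```

The compression and tiering intervals here are placeholders; Cactos's two-month hot window corresponds to the tiering policy, while the compression threshold would be tuned per workload.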

AWS and Tiger Cloud at the Heart of the Cactos Tech Stack

Cactos data starts with sensors attached to each battery as part of the larger battery energy storage system (BESS). Each BESS unit transmits telemetry several times per minute over HTTPS, routed through Cloudflare. When messages arrive, they’re passed to a queue worker via AWS SQS, where they're picked up by backend services running on AWS EKS. The Kubernetes-managed code then processes incoming data, runs the optimizer logic, and writes time-series readings to Tiger Cloud.

Tiger Cloud runs two services: a production instance with a high-availability replica, and a dedicated development and test environment. VPC peering handles secure connectivity. Data beyond the two-month hot retention window tiers automatically to S3, which currently holds around 13 TB of the data while remaining fully queryable through Tiger Cloud’s SQL interface.

On the read side, there's no caching layer between Tiger Cloud and the applications that consume it. The same store that receives every device write also acts as the single source of truth for customer-facing dashboards, the optimizer's real-time control queries, and TSO and DSO regulatory reporting.


Cactos data flow: BESS units transmit telemetry via HTTPS through Cloudflare to SQS and EKS processing on AWS, writing to Tiger Data as the operational source of truth. Customer dashboards, the fleet optimizer, and regulatory reporting all read directly from Tiger Data. Historical data beyond the two-month hot window tiers automatically to S3.

Results

92% Compression - 15 TB → 1 TB

Tiger Cloud’s native compression on hypertables and automatic tiering beyond two months meant the team no longer kept everything in hot storage as they had done with RDS. The full history is now queryable at a fraction of the cost.

Database Costs Cut by More Than Half

Monthly database spending dropped from $9,000 to $4,000, with significantly more capabilities included, and the growth rate of that spend also slowed considerably. The current setup includes high-availability replication, continuous aggregates, tiered S3 storage, and a dedicated dev and test service. The previous RDS setup had no compression or data tiering.

Tiered Storage Faster Than Hot Storage

Juuso highlights one interesting outcome that was difficult to quantify before moving production workloads over to the new database. When browsing historical device data that had already moved to S3 tiered storage, retrieval was faster than expected. The previous setup kept all 15 TB in hot storage on RDS and still returned data slowly. Tiger Data's columnar compression, combined with TimescaleDB's chunk exclusion narrowing which data partitions get scanned, means that historical queries now run faster on compressed tiered data than they did on uncompressed hot data.
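Chunk exclusion works because a hypertable is partitioned into time-bounded chunks, so any query with a time predicate only touches the chunks that overlap its range. As an illustrative sketch (assuming a hypothetical hypertable `telemetry(ts, device_id, value)`, not Cactos's schema):

```sql
-- A time-bounded query like this touches only the chunks covering March,
-- whether those chunks sit in hot storage or in the S3 tier.
SELECT time_bucket('1 hour', ts) AS hour,
       avg(value)                AS avg_value
FROM telemetry
WHERE device_id = 42
  AND ts >= '2026-03-01' AND ts < '2026-04-01'
GROUP BY hour
ORDER BY hour;
```

Running `EXPLAIN` on such a query shows the planner pruning all chunks outside the time range before any data is read.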

Currently it's faster to get data from Tiger Data tiered storage than it was from hot storage on RDS. - Juuso Mäyränen

A Migration That Just Worked

The 15TB migration across 35 partitioned tables was technically demanding for the Cactos team, but using Tiger Data’s per-table live-sync RDS connector approach let the team migrate without taking production offline. After the initial period of working through setup and configuration, the system worked great.

It's been smooth sailing, and everything's been working really well after the migration completed. - Juuso Mäyränen

Looking Ahead

Cactos now aims to double its fleet size every year, with new product lines for charging stations and grid-scale solar and wind storage in development. Running on Tiger Data, optimized for time-series and analytics, Cactos is ready to expand its business and adopt new use cases at scale.
