
5 min read
Apr 02, 2026
Table of contents
01 The Energy Demand Spike Penalty
02 Amazon RDS Wasn’t Built for Time-Series Data
03 One Database for Transactional and Analytical Queries
04 AWS and Tiger Cloud at the Heart of the Cactos Tech Stack
05 Results
06 Looking Ahead

Cactos, a manufacturer and operator of battery energy storage systems (BESS), migrated its time-series workloads from Amazon RDS to Tiger Data, shrinking storage by 92% and cutting costs by $5,000 per month.
In this piece, Cactos Co-Founder and Software Engineer Juuso Mäyränen shares how his team evaluated the move and managed the migration.
The Energy Demand Spike Penalty

For agriculture businesses running high-intensity lighting, heating, and climate control 24/7, electricity is one of the largest costs. Beyond the energy cost itself, there is the risk of energy demand spikes. Across much of Europe, Distribution System Operators (DSOs) apply a power-based pricing mechanism: if a facility exceeds its agreed demand cap, the DSO can increase the base tariff for that site and hold it elevated for months. A single peak event can raise charges to 4x the normal rate for an entire quarter.
Electrified logistics face similar challenges. EV truck fleets need to charge large battery packs during the brief loading and unloading windows when vehicles are stopped. The peak draw is enormous and short, so fleet operators need a way to buffer against excessive energy draws while delivering large energy capacity on-demand.
Cactos solves this problem. By building and operating battery energy storage systems, Cactos acts as an intelligent buffer between the grid and customers in sectors such as agriculture, EV charging, and grid-scale battery parks. Each BESS unit charges during low-cost, off-peak periods and discharges to absorb demand spikes, lowering energy costs for end customers.
Cactos also enables energy trading with surplus battery capacity. A central optimizer continuously calculates the best action for each BESS unit, trading unused capacity in ancillary energy markets at 15-minute intervals to generate additional revenue for the customer. Managing those market commitments adds a second layer of real-time pressure. When Cactos bids capacity into a market window and wins the trade, the fleet must deliver on time. The penalty for a missed commitment is both lost revenue and the obligation to purchase the committed energy elsewhere.
Amazon RDS Wasn’t Built for Time-Series Data

Before Tiger Data, Cactos ran on vanilla Amazon RDS for PostgreSQL. While RDS handled existing workloads reasonably well, data access was slowing for some use cases while costs skyrocketed. The database contained 15TB of historical data, primarily fleet telemetry, all of it in hot storage with no compression and no tiering. The bill grew with the fleet. By the time the team decided to move, RDS spend had reached $9,000 per month and was climbing rapidly.
It was getting prohibitively expensive to run our time-series data on Amazon RDS. - Juuso Mäyränen, Co-Founder & Software Engineer, Cactos
One Database for Transactional and Analytical Queries

Juuso describes himself as “a pretty strong Postgres proponent.” When RDS became untenable, his instinct was to stay in the Postgres ecosystem rather than migrate to a purpose-built time-series database. He had used InfluxDB and Prometheus in the past but felt that PostgreSQL was better suited for critical workloads, such as energy markets, where precision and transactional correctness are baseline requirements.
Cactos also considered ClickHouse, but didn’t want to introduce the split architecture required to combine analytics and time-series data with a separate OLAP database, or to rework the code base for an additional database engine. Cactos needed to house its relational configuration data alongside its time-series telemetry, and running two systems introduced complexity the team didn't want to carry. Tiger Data's TimescaleDB, delivered as a PostgreSQL extension, kept the architecture clean.
It makes things much simpler to manage everything from a single database. - Juuso Mäyränen
Tiger Cloud, a fully hosted TimescaleDB solution running on AWS, allowed operational and historical data to be stored and queried from one SQL interface. Native compression addressed the storage cost problem. Tiered storage to S3 handled older data automatically. And the per-table live-sync migration approach let the team move 15TB across 35 partitioned tables with minimal downtime, completing the migration in roughly one month with no significant architectural changes.
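For readers unfamiliar with the workflow, here is a minimal sketch of what hypertables and compression look like in practice. The telemetry table, its columns, and the seven-day compression window are illustrative assumptions, not Cactos's actual schema or policies.

```sql
-- Illustrative sketch only: table, columns, and intervals are assumptions,
-- not Cactos's actual schema or policies.
CREATE TABLE telemetry (
  time            timestamptz NOT NULL,
  unit_id         text        NOT NULL,
  state_of_charge double precision,
  power_kw        double precision,
  temperature_c   double precision
);

-- Turn the plain PostgreSQL table into a hypertable partitioned by time.
SELECT create_hypertable('telemetry', 'time');

-- Enable native columnar compression, segmented by unit.
ALTER TABLE telemetry SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'unit_id'
);

-- Compress chunks automatically once they are a week old.
SELECT add_compression_policy('telemetry', INTERVAL '7 days');
```

Segmenting by unit_id keeps each device's readings contiguous inside compressed chunks, which is what makes per-device historical scans cheap.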
AWS and Tiger Cloud at the Heart of the Cactos Tech Stack

Cactos data starts with sensors attached to each battery in a BESS unit. Each unit transmits telemetry several times per minute over HTTPS, routed through Cloudflare. Incoming messages are queued in AWS SQS and picked up by backend services running on AWS EKS. The Kubernetes-managed code processes the incoming data, runs the optimizer logic, and writes time-series readings to Tiger Cloud.
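The write path itself is plain SQL. A hypothetical shape of an incoming reading, continuing the illustrative table above (all identifiers and values are made up):

```sql
-- Hypothetical telemetry write; column names and values are illustrative.
INSERT INTO telemetry (time, unit_id, state_of_charge, power_kw, temperature_c)
VALUES
  (now(), 'bess-0042', 0.87, 112.5, 31.2),
  (now(), 'bess-0043', 0.64, -58.0, 29.8);
```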
Tiger Cloud runs two services: a production instance with a high-availability replica and a dedicated development and test environment. VPC peering handles secure connectivity. Data beyond the two-month hot retention window is tiered automatically to S3, which currently holds around 13TB of the data, while remaining fully queryable through Tiger Cloud’s SQL interface.
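A sketch of what that tiering setup looks like, using Tiger Cloud's tiered-storage API against the illustrative telemetry table from above:

```sql
-- Move chunks older than two months to the low-cost S3 tier,
-- matching the hot retention window described above.
SELECT add_tiering_policy('telemetry', INTERVAL '2 months');

-- Tiered data remains queryable through the same SQL interface.
SET timescaledb.enable_tiered_reads = true;

SELECT time, unit_id, power_kw
FROM telemetry
WHERE time < now() - INTERVAL '6 months'
ORDER BY time DESC
LIMIT 100;
```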
On the read side, there's no caching layer between Tiger Cloud and the applications that consume it. The same store that receives every device write also acts as the single source of truth for customer-facing dashboards, the optimizer's real-time control queries, and regulatory reporting to Transmission System Operators (TSOs) and DSOs.

Cactos data flow: BESS units transmit telemetry via HTTPS through Cloudflare to SQS and EKS processing on AWS, writing to Tiger Data as the operational source of truth. Customer dashboards, the fleet optimizer, and regulatory reporting all read directly from Tiger Data. Historical data beyond the two-month hot window tiers automatically to S3.
Results

Tiger Cloud’s native compression on hypertables and automatic tiering beyond two months meant the team no longer kept everything in hot storage as they had with RDS. The full history is now queryable at a fraction of the cost.
Monthly database spending dropped from $9,000 to $4,000, with significantly more capabilities included, and the growth rate of that spend slowed considerably. The current setup includes high-availability replication, continuous aggregates, tiered S3 storage, and a dedicated dev and test service; the previous RDS setup had no compression or data tiering.
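Continuous aggregates are one of those added capabilities. A minimal sketch, still assuming the illustrative telemetry table: the 15-minute bucket mirrors the market interval the optimizer trades on, though Cactos's actual aggregates aren't described here.

```sql
-- Illustrative continuous aggregate: 15-minute rollups per unit.
CREATE MATERIALIZED VIEW telemetry_15m
WITH (timescaledb.continuous) AS
SELECT
  time_bucket('15 minutes', time) AS bucket,
  unit_id,
  avg(power_kw) AS avg_power_kw,
  max(power_kw) AS peak_power_kw
FROM telemetry
GROUP BY bucket, unit_id;

-- Refresh the rollups on a schedule, keeping the last day up to date.
SELECT add_continuous_aggregate_policy('telemetry_15m',
  start_offset      => INTERVAL '1 day',
  end_offset        => INTERVAL '15 minutes',
  schedule_interval => INTERVAL '15 minutes');
```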
Juuso highlights one outcome that was difficult to quantify before moving production workloads over: when browsing historical device data that had already moved to S3 tiered storage, retrieval was faster than expected. The previous setup kept all 15TB in hot storage on RDS and still returned data slowly. Tiger Data's columnar compression, combined with TimescaleDB's chunk exclusion (which narrows a query to only the data partitions it needs to scan), means historical queries run faster on compressed tiered data than they did on uncompressed hot data.
Currently it's faster to get data from Tiger Data tiered storage than it was from hot storage on RDS. - Juuso Mäyränen
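Chunk exclusion shows up in an ordinary query plan. A sketch against the same illustrative table: the time-bounded predicate lets the planner skip every chunk outside the window, so only the relevant compressed partitions are decompressed and scanned.

```sql
-- Only chunks overlapping the one-week window are scanned;
-- all others are excluded at plan time.
EXPLAIN (ANALYZE)
SELECT time, power_kw
FROM telemetry
WHERE unit_id = 'bess-0042'
  AND time >= '2025-01-01'
  AND time <  '2025-01-08';
```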
The 15TB migration across 35 partitioned tables was technically demanding for the Cactos team, but Tiger Data’s per-table live-sync RDS connector let the team migrate without taking production offline. After an initial period of working through setup and configuration, the system has worked great.
It's been smooth sailing, and everything's been working really well after the migration completed. - Juuso Mäyränen
Looking Ahead

Cactos now aims to double its fleet size every year, with new product lines for charging stations and grid-scale solar and wind storage in development. Running on Tiger Data, with a database optimized for time series and analytics, Cactos is ready to expand its business and adopt new use cases at scale.
