---
title: "What Is an EV Charging Management System, and What Database Sits Behind It?"
description: "EV charging management system databases must handle four OCPP data types: meter values, CDRs, status events, and config. Here's how to design one that does. "
section: "Postgres for IoT"
---

> **TimescaleDB is now Tiger Data.**

For EV charging networks, a management system's database layer needs to handle four distinct OCPP data types: meter values, charging sessions, status notifications, and device configuration. Tiger Data's PostgreSQL-compatible time-series engine handles all four in a single system without splitting your architecture.

This is a developer infrastructure guide. It is not a review of CSMS SaaS platforms, and it is not a buyer's guide for charging network operators who want a turnkey system. If you are looking to operate a charging network rather than build the software behind one, managed CSMS platforms (ChargeLab, Driivz, AMPECO) already include the database layer. This article is not for you.

The audience here is engineers and technical leads at CPOs (Charge Point Operators), charging software vendors, fleet electrification teams, and utilities building EV infrastructure backends. The question this piece answers is: what should your CPO backend's database layer look like, and why?

Tiger Data is a vendor in this space. The analysis that follows is informed by that perspective, and this article will include cases where Tiger Data is not the right fit.

For the broader picture of Tiger Data's energy telemetry capabilities (grid monitoring, renewable energy, and utility metering), see [<u>Tiger Data's energy telemetry platform</u>](https://www.tigerdata.com/energy-telemetry). For general IoT database selection criteria, the [<u>best databases for IoT in 2026</u>](https://www.tigerdata.com/learn/how-to-choose-an-iot-database) guide covers the broader landscape.

## How OCPP generates your data problem

OCPP (Open Charge Point Protocol) is the dominant open standard governing communication between EV chargers and charging station management systems (CSMS). OCPP 2.0.1 is the current production standard, widely deployed across public charging networks. OCPP 2.1, released January 2025, adds ISO 15118 Plug-and-Charge support and V2X (vehicle-to-grid and vehicle-to-home) bidirectional charging profiles.

When a charger connects to your CSMS backend, it does not just send a single registration message and go quiet. It begins streaming a continuous flow of messages, and not all messages are equal in volume, frequency, or the storage shape they require.

The data your backend receives falls into four distinct streams:

- **MeterValues**: high-frequency, time-ordered telemetry readings (voltage, current, power, energy)
- **Charging sessions and CDRs**: transactional records generated at session start and stop
- **StatusNotification**: event-driven charger state transitions (Available, Charging, Faulted, etc.)
- **Device Model / configuration**: relational data describing charger components, firmware, and profiles

The data volume problem is almost entirely driven by meter values. Consider a modest public network:

| **Data type** | **OCPP message** | **Frequency** | **Volume at 10,000 chargers** |
| --- | --- | --- | --- |
| Meter values | MeterValues | Every 30 seconds | ~20,000 rows/second |
| Session records | StartTransaction / StopTransaction (TransactionEvent in OCPP 2.0.1) | Per session | ~5-50 sessions/charger/day |
| Status events | StatusNotification | Per state change | Hundreds/charger/day |
| Device config | GetConfiguration / ChangeConfiguration | Infrequent | Negligible |

At 10,000 chargers polling every 30 seconds, your backend receives roughly 333 MeterValues messages per second, and each message typically carries many sampled values (multiple measurands, often per phase), so the sustained ingest is approximately 20,000 rows per second of meter value data, 24 hours a day. Over a year, that is roughly 630 billion rows before compression. This is not a transactional workload; it is a [<u>time-series</u>](https://www.tigerdata.com/learn/time-series-database-what-it-is-how-it-works-and-when-you-need-one) ingest workload. The database layer that handles billing records is not the same database layer that handles meter readings efficiently, unless you design for both from the start.

## The four data types in a CPO backend, and what each needs

Each OCPP data type has a different storage shape, query pattern, and performance requirement. A database that handles one well may struggle with another.

### Meter values: the time-series workload

MeterValues carry per-connector readings: voltage (V), current (A), active power (W), energy delivered (Wh), and state-of-charge (%) at configurable intervals. In production, most networks poll every 15 seconds to 2 minutes.

Several characteristics make this a pure time-series workload:

- Ingest is append-only: meter readings are never updated after they are written
- Query patterns are almost always time-range based: last hour, last session, last month
- Compression ratios are high because adjacent readings from the same sensor are numerically similar
- Long retention is required for billing audits, grid compliance reporting, and regulatory requirements in most jurisdictions

What this needs from a database: automatic time-based partitioning, chunk exclusion (so queries only scan relevant time windows, not full table scans), columnstore compression for cold data, and tiered storage for multi-year history without deleting anything.

### Charging sessions and CDRs: the relational layer

StartTransaction and StopTransaction messages (consolidated into TransactionEvent in OCPP 2.0.1) generate charge detail records (CDRs): who charged, at which connector, for how long, how much energy was delivered, and at what rate. These are billing records.

CDRs are low-volume relative to meter values (a few rows per session per charger per day), but they carry strict integrity requirements. They feed invoicing, settlement with roaming partners, and regulatory reporting. They cannot be lost or corrupted.

The critical query challenge: computing per-session kWh totals requires joining CDR records with meter value aggregates. If you put CDRs in PostgreSQL and meter values in a separate time-series database, you cannot write a single SQL query that spans both. You need application-layer joins, two connection pools, and careful orchestration of what is a conceptually simple question: "how much energy did this session deliver?"

In TimescaleDB, CDRs live in a standard PostgreSQL relational table. Meter values live in a hypertable. A continuous aggregate pre-computes per-session energy totals. All three are queryable in a single SQL statement.
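
As a sketch of what that single statement looks like, using the `charging_sessions` table and `meter_values` hypertable defined in the schema section below (the fallback subquery is illustrative, covering sessions with missing boundary readings):

```sql
-- Per-session kWh in one statement: boundary readings when present,
-- a time-range aggregate over raw meter values as fallback.
SELECT
    s.session_id,
    s.charger_id,
    COALESCE(
        (s.meter_stop_wh - s.meter_start_wh) / 1000.0,
        (SELECT (MAX(mv.value) - MIN(mv.value)) / 1000.0
         FROM meter_values mv
         WHERE mv.charger_id   = s.charger_id
           AND mv.connector_id = s.connector_id
           AND mv.measurand    = 'Energy.Active.Import.Register'
           AND mv.time BETWEEN s.started_at AND s.stopped_at)
    ) AS energy_kwh
FROM charging_sessions s
WHERE s.stopped_at >= now() - INTERVAL '7 days';
```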

### Status notifications: EV charging monitoring and fault analytics

StatusNotification messages report charger state: Available, Preparing, Charging, SuspendedEVSE, SuspendedEV, Finishing, Reserved, Unavailable, Faulted.

These are events, discrete state transitions, not continuous sensor readings. The ingestion rate is low relative to meter values, but the cardinality is high over time (every charger, every connector, every transition, across a multi-year operational history).

EV charging monitoring is built on this stream. The primary use cases: uptime analytics (what percentage of time is each charger Available?), fault detection (how often does a charger reach Faulted status?), and SLA reporting against contracted uptime commitments. The same data feeds the EV charging analytics dashboards your operations team uses to spot underperforming sites and plan maintenance.

Storage pattern: a hypertable, with time as the primary partition dimension and charger_id plus connector_id as the segmentation dimensions for compression. A continuous aggregate over status transitions gives uptime percentage by hour, day, or week without rescanning the full event history.

### Device model and configuration: plain relational tables

OCPP 2.0.1 introduces the Device Model: a structured component/variable hierarchy representing charger firmware versions, connector specifications, charging profiles, and local authorization lists. This is configuration data: low change rate, relational structure, no time-series characteristics.

Standard PostgreSQL tables handle this completely. The point is that Tiger Data handles this in the same system, so you are not running a separate relational database for configuration alongside a time-series store for telemetry.
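
For illustration, a minimal sketch of a Device Model table (the table and column names here are illustrative, not mandated by OCPP):

```sql
-- OCPP 2.0.1 Device Model: component/variable pairs per charger.
CREATE TABLE device_variables (
    charger_id TEXT NOT NULL,
    component  TEXT NOT NULL,            -- e.g. 'ChargingStation', 'Connector'
    variable   TEXT NOT NULL,            -- e.g. 'Model', 'FirmwareVersion'
    value      TEXT,
    updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    PRIMARY KEY (charger_id, component, variable)
);
```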

## Why plain PostgreSQL isn't enough, and why a pure TSDB isn't either

This section makes the case for Tiger Data's architecture without pretending the alternatives do not work at all. They do, in specific contexts. Here is an honest read of the tradeoffs.

**The problem with plain PostgreSQL for meter values.** Standard PostgreSQL tables use B-tree indexes. A table ingesting 20,000 rows/second with a time-indexed B-tree will show query degradation within days to weeks at scale. Declarative partitioning exists in PostgreSQL, but you must create and manage every time-range partition yourself, and there is no columnstore compression for time-ordered data. Teams that start with plain Postgres for telemetry typically end up hand-rolling time-range partition management, which is exactly what TimescaleDB (the open-source project Tiger Data is built on) automates with hypertables. The [<u>data historian vs. time-series database</u>](https://www.tigerdata.com/learn/moving-past-legacy-systems-data-historian-vs-time-series-database) guide covers this architectural argument in depth.
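
For contrast, a sketch of the manual partition management plain PostgreSQL requires for this workload; `create_hypertable()` in the schema section below replaces all of it:

```sql
-- Hand-rolled time partitioning on plain PostgreSQL (simplified columns).
CREATE TABLE meter_values_plain (
    time       TIMESTAMPTZ NOT NULL,
    charger_id TEXT NOT NULL,
    value      DOUBLE PRECISION
) PARTITION BY RANGE (time);

-- One statement per period, forever: a scheduled job must keep
-- creating future partitions and detaching expired ones.
CREATE TABLE meter_values_2026_01 PARTITION OF meter_values_plain
    FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');
```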

**The problem with InfluxDB for CDRs.** InfluxDB stores data in measurements. InfluxDB 3 (released 2025, the Rust rewrite with SQL support) does support SQL JOIN semantics between measurements within InfluxDB itself. The problem is cross-system joins: if your billing CDRs live in PostgreSQL and your meter values live in InfluxDB, those two databases cannot be queried in a single SQL statement. You run two databases, two connection pools, and your application must orchestrate what should be a straightforward JOIN between billing records and telemetry. Beyond the join problem, the InfluxDB ecosystem carries significant version fragmentation: 1.x, 2.x, 3.0, Cloud Serverless, and Cloud Dedicated are meaningfully different products. Teams evaluating InfluxDB for long-term EV charging infrastructure should factor that version landscape into maintainability planning.

**The problem with MongoDB for telemetry at scale.** MongoDB's document model is well-suited to session records and configuration data. Its Time Series Collections (introduced in 5.0) improve write throughput for time-ordered data, but still lack hypertable chunk exclusion and columnstore compression. MongoDB's published EV charging content demonstrates document storage for sessions but does not address 20,000 rows/second sustained ingestion or multi-year retention with compression. MongoDB does not support SQL JOIN semantics between a time-series collection and a relational document collection. For a small private fleet where meter-value volume is low, MongoDB is an acceptable fit. For a public CPO network with thousands of chargers and continuous meter reporting, the document model creates real friction at query time and at ingest scale.

**The split-stack problem.** A common architecture is PostgreSQL for CDRs plus InfluxDB (or another TSDB) for meter values. This works but adds: two query languages (SQL plus Flux/InfluxQL or another DSL), two connection pools, no atomic transactions across billing and telemetry, and rising operational overhead as the network scales. For a small team, this complexity is often the wrong tradeoff. You spend engineering time on data plumbing instead of product features.

**AWS Timestream LiveAnalytics** was deprecated and closed to new customers in June 2025. Teams that evaluated Timestream in previous years should consider migration paths to Timestream for InfluxDB (the surviving product) or a PostgreSQL-compatible alternative.

Tiger Data's single-system approach is the alternative: hypertables for meter values and status events, standard relational tables for CDRs and device config, continuous aggregates for dashboards and energy totals, tiered storage for multi-year history.

## EV charging data management with Tiger Data

Tiger Data extends PostgreSQL with time-series primitives as an open-source extension (TimescaleDB) and as a fully managed cloud service (Tiger Cloud). The result is one database that handles all four OCPP data types in a single SQL interface.

**Hypertables for meter values and status events.** When you create a [<u>hypertable</u>](https://www.tigerdata.com/docs/learn/hypertables/understand-hypertables) on `meter_values`, Tiger Data automatically partitions data into time-based chunks (7-day intervals by default). Queries that specify a time range only scan the relevant chunks, not the full table. This chunk exclusion is what keeps time-range query latency stable as data volume grows to billions of rows.
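
You can watch chunk exclusion happen with `EXPLAIN`. A sketch, assuming the `meter_values` hypertable from the schema section below:

```sql
-- With constant time bounds, the planner prunes every chunk that does not
-- overlap the one-day window; only those chunks appear in the plan.
EXPLAIN (COSTS OFF)
SELECT charger_id, avg(value) AS avg_power_w
FROM meter_values
WHERE measurand = 'Power.Active.Import'
  AND time >= '2026-06-01' AND time < '2026-06-02'
GROUP BY charger_id;
```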

**Continuous aggregates for dashboards.** [<u>Continuous aggregates</u>](https://www.tigerdata.com/docs/learn/continuous-aggregates) pre-compute common queries (hourly energy by charger, session kWh totals, uptime percentage by connector) in the background, updating incrementally as new data arrives. Dashboard reads hit the aggregate, not the raw hypertable. On Tiger Cloud, once you attach a refresh policy, the job runs automatically in the background, with no external scheduler required. 
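
Attaching that policy is one call. A sketch, assuming the `session_energy` aggregate defined in the schema section below (the offsets are illustrative and should be tuned to your dashboard latency needs):

```sql
-- Refresh the aggregate every 30 minutes, materializing buckets between
-- 3 days and 1 hour behind real time.
SELECT add_continuous_aggregate_policy('session_energy',
    start_offset      => INTERVAL '3 days',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '30 minutes');
```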

**Columnstore compression.** Once a compression policy is attached, older chunks in a hypertable are automatically compressed into columnar storage. On time-ordered telemetry data like meter readings, Tiger Data achieves 90%+ compression ratios in practice, reducing storage costs significantly without requiring you to delete historical data.
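
The policy that drives that compression is also a single call (a sketch; the 7-day threshold is an assumption to tune against how far back your hot queries reach):

```sql
-- Compress chunks once their data is older than 7 days.
SELECT add_compression_policy('meter_values', INTERVAL '7 days');
```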

**Tiered storage for long-term OCPP history.** Tiger Cloud automatically moves data older than a configurable threshold to object storage (S3) while keeping it fully queryable through the standard SQL interface. Multi-year CDR retention for billing audits and multi-year meter value history for analytics are both supported without manual partition management.
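
On Tiger Cloud this is policy-driven as well. A sketch, assuming the tiering API in current Tiger Cloud (verify the call against the docs for your plan):

```sql
-- Move chunks older than one year to object storage; they stay queryable.
SELECT add_tiering_policy('meter_values', INTERVAL '1 year');
```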

**Tiger Cloud as the managed option.** Tiger Cloud handles infrastructure, automatic chunk management, high-availability replication, connection pooling, and tiered storage without requiring an infrastructure team to operate. For CPO backends that need 99.9%+ uptime, Tiger Cloud's HA replica configuration is the path of least operational resistance.

## OCPP database schema: a practical starting point

The following schema covers the four core OCPP data types. It is designed for OCPP 2.0.1 and extends naturally to 2.1. 

```sql
-- Meter values: high-frequency time-series per connector
CREATE TABLE meter_values (
    time         TIMESTAMPTZ NOT NULL,
    charger_id   TEXT        NOT NULL,
    connector_id INTEGER     NOT NULL,
    measurand    TEXT        NOT NULL,  -- e.g. 'Energy.Active.Import.Register', 'Power.Active.Import'
    value        DOUBLE PRECISION,
    unit         TEXT,                  -- e.g. 'Wh', 'W', 'V', 'A'
    phase        TEXT                   -- e.g. 'L1', 'L2', 'L3' or NULL for single-phase
);

SELECT create_hypertable('meter_values', by_range('time'));

ALTER TABLE meter_values SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'charger_id, connector_id'
);

-- Charging sessions: relational CDR records
CREATE TABLE charging_sessions (
    session_id        UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    charger_id        TEXT        NOT NULL,
    connector_id      INTEGER     NOT NULL,
    id_tag            TEXT,               -- RFID token or app user ID
    started_at        TIMESTAMPTZ NOT NULL,
    stopped_at        TIMESTAMPTZ,
    stop_reason       TEXT,               -- e.g. 'EVDisconnected', 'Remote', 'Local'
    meter_start_wh    DOUBLE PRECISION,   -- meter reading at session start
    meter_stop_wh     DOUBLE PRECISION    -- meter reading at session stop
);

-- Status events: charger availability and fault history
CREATE TABLE status_events (
    time         TIMESTAMPTZ NOT NULL,
    charger_id   TEXT        NOT NULL,
    connector_id INTEGER     NOT NULL,
    status       TEXT        NOT NULL,    -- e.g. 'Available', 'Charging', 'Faulted'
    error_code   TEXT,
    info         TEXT
);

SELECT create_hypertable('status_events', by_range('time'));

-- Continuous aggregate: daily energy delivered (kWh) per charger/connector.
-- OCPP's Energy.Active.Import.Register is a monotonically increasing meter,
-- so daily kWh = (max - min) of the register reading inside each day bucket.
-- The WHERE clause already restricts to that measurand, so no FILTER is needed.
CREATE MATERIALIZED VIEW session_energy
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 day', time) AS day,
    charger_id,
    connector_id,
    (MAX(value) - MIN(value)) / 1000.0 AS energy_delivered_kwh
FROM meter_values
WHERE measurand = 'Energy.Active.Import.Register'
GROUP BY day, charger_id, connector_id;

-- Continuous aggregate: hourly availability per charger.
-- Note: this counts events, so the ratio is an event-frequency proxy for
-- uptime; time-weighted uptime needs state-duration logic on top of it.
CREATE MATERIALIZED VIEW charger_uptime_hourly
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 hour', time) AS bucket,
    charger_id,
    connector_id,
    COUNT(*) FILTER (WHERE status = 'Available') AS available_count,
    COUNT(*) AS total_count
FROM status_events
GROUP BY bucket, charger_id, connector_id;
```

Three things to understand about this schema:

- `create_hypertable` partitions `meter_values` and `status_events` by time automatically. Queries with a time-range predicate only scan the relevant partitions, not the entire table. This is what keeps queries fast as the table grows past tens of billions of rows.
- `compress_segmentby = 'charger_id, connector_id'` groups readings from the same connector together before compression. Because readings from the same connector are numerically correlated over time, grouping them together before compressing achieves substantially higher compression ratios than random row ordering would.
- The `session_energy` continuous aggregate pre-computes daily kWh per charger/connector from the cumulative `Energy.Active.Import.Register` readings, so dashboard queries do not re-scan the full `meter_values` hypertable. The aggregate refreshes incrementally as new readings arrive.
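
A usage sketch against the `charger_uptime_hourly` aggregate (as noted in the DDL comments, the ratio is event-count based, a proxy for true time-weighted uptime):

```sql
-- Availability per charger over the last 24 hours, served from the
-- continuous aggregate instead of the raw status_events hypertable.
SELECT bucket, charger_id,
       ROUND(100.0 * available_count / NULLIF(total_count, 0), 1) AS pct_available
FROM charger_uptime_hourly
WHERE bucket >= now() - INTERVAL '24 hours'
ORDER BY bucket DESC, charger_id;
```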

Device configuration tables (OCPP Device Model) are standard PostgreSQL tables, not shown here but living in the same database. Production CPO backends typically add `charger_metadata` (location, firmware version, max power output), user and RFID tables, and rate/tariff tables as relational tables in the same database.

For newer deployments, Tiger Data's Hypercore engine also supports an alternative DDL syntax using `CREATE TABLE WITH (tsdb.hypertable, tsdb.segmentby = 'charger_id, connector_id')` that combines table creation and compression configuration in a single statement. The schema above uses the `ALTER TABLE SET` approach, which is compatible with all TimescaleDB versions.
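
Under that newer syntax, the `meter_values` DDL collapses to roughly the following (a sketch; confirm the option names against your version's docs):

```sql
-- Hypercore-style DDL: table creation and columnstore configuration
-- in a single statement on newer Tiger Data versions.
CREATE TABLE meter_values (
    time         TIMESTAMPTZ NOT NULL,
    charger_id   TEXT        NOT NULL,
    connector_id INTEGER     NOT NULL,
    measurand    TEXT        NOT NULL,
    value        DOUBLE PRECISION,
    unit         TEXT,
    phase        TEXT
) WITH (
    tsdb.hypertable,
    tsdb.segmentby = 'charger_id, connector_id'
);
```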

## Database approach comparison

| **Approach** | **Meter values** | **CDR/session joins** | **Compression** | **Operational complexity** | **Best fit** |
| --- | --- | --- | --- | --- | --- |
| TimescaleDB (hypertable + relational) | Native (hypertable, chunk exclusion) | Full SQL JOIN across telemetry and sessions | 90%+ on time-ordered data | Low (single system) | CPO backends at any scale |
| Plain PostgreSQL | Degrades at scale without manual partitioning | Full SQL JOIN | Standard table storage | Low | Small fleets, low charger count |
| InfluxDB | Native TSDB | JOINs within InfluxDB only; no cross-system JOIN with PostgreSQL CDRs | High | Medium (version fragmentation: 1.x/2.x/3.0) | Pure telemetry; billing CDRs also in InfluxDB |
| MongoDB | Time Series Collections (limited) | Document model; no native JOIN across collections | Bucketing-based (no columnstore equivalent) | Medium | Session document storage; low meter-value volume |
| Split stack (PostgreSQL + InfluxDB) | High (InfluxDB handles meter values) | Cross-database JOIN not possible in SQL | High (InfluxDB side) | High (two systems, two query languages) | Teams with existing investment in both |

AWS Timestream LiveAnalytics, previously a common choice in this space, is absent from the table because, as noted above, it was deprecated and closed to new customers in June 2025; teams that built on it should evaluate migration to Timestream for InfluxDB or a PostgreSQL-compatible option.

## EV charging at scale: evidence from the energy vertical

Real-world evidence for Tiger Data's suitability for EV charging infrastructure comes from the BESS and energy storage vertical, where the workload pattern is architecturally identical to what a CPO backend generates.

**Cactos** is a manufacturer and operator of battery energy storage systems that migrated from Amazon RDS to Tiger Cloud, reducing storage by 92% (15TB compressed to 1TB) and cutting database costs by 55% ($9,000 to $4,000 per month). Their use case maps directly to EV charging: Cactos acts as an intelligent buffer between the grid and customers in EV charging, agriculture, and grid-scale battery parks. As Juuso Mäyränen, Co-Founder and Software Engineer at Cactos, describes the EV fleet charging problem:

"EV truck fleets need to charge large battery packs during the brief loading and unloading windows when vehicles are stopped. The peak draw is enormous and short, so fleet operators need a way to buffer against excessive energy draws while delivering large energy capacity on-demand."

The telemetry workload Cactos manages (continuous BESS sensor readings at sub-minute intervals, optimizer control queries, market settlement reporting) is structurally the same problem as a CPO backend at medium scale. Cactos has charging stations in development as a new product line. Read the full story: [<u>Cactos cut database costs 55% migrating to Tiger Cloud</u>](https://www.tigerdata.com/blog/how-cactos-migrated-from-amazon-rds-and-cut-costs-by-55).

**Plexigrid** replaced a four-database architecture (including InfluxDB) with TimescaleDB for DSO grid monitoring, achieving 350x faster queries. The consolidation argument is directly relevant to CPO teams evaluating a split stack. [<u>Full story: Plexigrid replaced a 4-database architecture with Tiger Data</u>](https://www.tigerdata.com/blog/from-4-databases-to-1-how-plexigrid-replaced-influxdb-got-350x-faster-queries-tiger-data).

**Octave** migrated from AWS Timestream to Tiger Data for second-life EV battery management, achieving 25x compression on cell-level telemetry. The battery management workload (voltage, temperature, and state-of-health readings at high frequency from many parallel battery packs) shares the same storage architecture as charger meter values. [<u>Full story: Octave migrated from AWS Timestream for EV battery telemetry</u>](https://www.tigerdata.com/blog/high-compression-ratio-and-speedy-queries-on-historical-data-while-revolutionizing-the-battery-market).

Though none of these customers are operating public CPO networks, their workloads are architecturally similar to CPO backend telemetry: the same data shapes, similar ingest rates, the same need to join billing records with sensor history. 

## OCPP 2.1 and V2X: what the schema needs to handle next

OCPP 2.1, released January 2025, adds two capabilities that change the data model for EV charging backends. The first is ISO 15118 Plug-and-Charge, which moves authentication from RFID cards to encrypted vehicle certificates negotiated at the connector; this affects the `id_tag` field in `charging_sessions`, which will carry certificate identifiers rather than RFID tokens. The second, and more architecturally significant, is V2X (vehicle-to-grid and vehicle-to-home) bidirectional charging.

V2X means a charger can discharge energy from the vehicle battery back to the grid or home. A session is no longer unidirectional. A single connected session may include periods of charging (vehicle consuming grid power) and discharging (vehicle returning power to the grid), sometimes alternating based on grid demand signals. The data implications are concrete:

**The `meter_values` table already handles bidirectional energy.** OCPP defines `Energy.Active.Import.Register` (energy flowing into the vehicle, charging) and `Energy.Active.Export.Register` (energy flowing out of the vehicle, discharging). Both are standard OCPP 2.0.1 measurands (the official OCPP term for the type of value being measured); V2X sessions simply generate readings in both directions rather than one.

**The `charging_sessions` table needs extension.** For V2X sessions, `meter_stop_wh` minus `meter_start_wh` is no longer sufficient. You need to track imported and exported energy separately: a `session_type` column ('charge', 'discharge', 'bidirectional') helps query routing, and a `meter_start_export_wh` / `meter_stop_export_wh` pair tracks the discharge side.

**Continuous aggregates need bidirectional logic.** The `session_energy` aggregate above computes net energy delivered. For V2X sessions, you may want separate aggregates for total energy imported per session and total energy exported per session, or a net calculation that correctly handles positive and negative energy flows.

The structural change to the TimescaleDB schema is minimal: the `meter_values` hypertable does not change at all, the `charging_sessions` table adds a few columns, and the continuous aggregates add new definitions. There is no schema migration required for the time-series storage layer itself.
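
A minimal sketch of that migration (column names follow the extension described above; `daily_energy_flows` is an illustrative name, not an OCPP term):

```sql
-- Track the discharge side of V2X sessions alongside the existing import fields.
ALTER TABLE charging_sessions
    ADD COLUMN session_type          TEXT DEFAULT 'charge',  -- 'charge' | 'discharge' | 'bidirectional'
    ADD COLUMN meter_start_export_wh DOUBLE PRECISION,
    ADD COLUMN meter_stop_export_wh  DOUBLE PRECISION;

-- Daily energy per direction: one row per connector per measurand,
-- covering both Import (charging) and Export (discharging) registers.
CREATE MATERIALIZED VIEW daily_energy_flows
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 day', time) AS day,
    charger_id,
    connector_id,
    measurand,
    (MAX(value) - MIN(value)) / 1000.0 AS kwh
FROM meter_values
WHERE measurand IN ('Energy.Active.Import.Register', 'Energy.Active.Export.Register')
GROUP BY day, charger_id, connector_id, measurand;
```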

**Practical timing note:** V2X deployments are early-stage as of mid-2026. Most public CPO networks are not yet operating OCPP 2.1 V2X sessions in production, so a typical CPO backend built today does not need V2X schema support immediately. But the extension is forward-compatible with the DDL above, and it is worth addressing at the database-architecture stage: building it into your schema now costs one small migration rather than a structural rework when V2X production deployments arrive.

## Decision framework: choosing the right architecture

### Choose Tiger Data if:

- You are building or scaling a public CPO backend with thousands of chargers
- You need to JOIN billing records (CDRs) with meter value aggregates in a single query
- You want multi-year OCPP history with automatic compression and tiered storage
- Your team already uses PostgreSQL and does not want to add a second query language
- You need continuous aggregates for real-time dashboard queries without rescanning raw telemetry
- You are deploying on Tiger Cloud and want a fully managed service without infrastructure overhead

The [<u>best managed time-series databases in 2026</u>](https://www.tigerdata.com/learn/best-managed-time-series-databases-in-2026) guide compares managed options across the broader landscape if you are still evaluating.

### Consider plain PostgreSQL if:

- Your fleet is small (fewer than 500 chargers) and meter-value polling is infrequent (5-minute intervals or longer)
- You do not need multi-year history, data retention is short, and volumes stay manageable
- You want the simplest starting point and plan to add the TimescaleDB extension later as scale requires

Plain PostgreSQL is not wrong for a small fleet. The TimescaleDB extension adds a migration path rather than a rewrite; you can add hypertables to an existing PostgreSQL schema.

### Consider MongoDB if:

- Your primary data model is session documents and configuration records, with low or no continuous meter-value ingestion
- You do not need complex time-range queries over telemetry history
- Your team has existing MongoDB expertise and the query patterns fit document retrieval

MongoDB's approach is reasonable for a small private fleet or a charging network where you do not need operational telemetry analytics: just session records and charger config.

### Consider InfluxDB if:

- Your workload is purely telemetry and your CDR/billing data also lives in InfluxDB (InfluxDB 3 supports SQL JOINs within its own data model)
- You are comfortable with the InfluxDB version landscape (1.x, 2.x, 3.0, Cloud Serverless, Cloud Dedicated are meaningfully different products)
- You do not need to JOIN InfluxDB telemetry with data in a separate PostgreSQL database

### When a managed CSMS SaaS is the right answer:

If your goal is operating a charging network rather than building the backend software, a managed CSMS platform already includes the database layer. ChargeLab, Driivz, AMPECO, and others handle OCPP communication, user authentication, billing, and network management as a service. A custom database architecture is for teams building CPO software, not for operators who want a turnkey system. If this article describes more complexity than you want to own, a managed CSMS is the right call.

## FAQ: EV charging database architecture

### What database do most EV charging companies use?

There is no single industry standard, but PostgreSQL, TimescaleDB (the open-source project Tiger Data is built on), MongoDB, and InfluxDB are the options most commonly cited. The choice typically depends on whether the team prioritizes relational data integrity (PostgreSQL), time-series telemetry performance (TimescaleDB/InfluxDB), or document flexibility (MongoDB). Most CPO backends in production use PostgreSQL as the relational layer, with a growing number adding time-series extensions as charger fleets scale.

### What is OCPP and why does it matter for database design?

OCPP (Open Charge Point Protocol) is the open standard that governs communication between EV chargers and management systems. It defines the message types your backend receives: meter values, session records, status notifications, and device configuration. Because OCPP generates both high-frequency time-series data (meter readings) and structured relational data (billing records), the database architecture needs to handle both data shapes efficiently. That is the core design challenge.

### How much data does an EV charging network generate?

At 10,000 chargers polling every 30 seconds, a CPO backend receives approximately 20,000 rows per second of meter value data, sustained. Over a year, that is roughly 630 billion rows before compression. TimescaleDB's columnstore compression on time-ordered meter data achieves 90%+ compression ratios in practice, cutting the storage footprint to a tenth or less of its uncompressed size. A 500-charger network at the same polling interval generates about 1,000 rows per second.

### Can I use InfluxDB for EV charging data?

InfluxDB handles meter value ingestion well; it is a purpose-built time-series database with high write throughput. InfluxDB 3 adds SQL JOIN support between measurements within InfluxDB itself. The limitation is cross-system joins: if your CDR and billing data live in a separate PostgreSQL database, you cannot write a single SQL statement that spans both systems. Your application must orchestrate the join in code. The version fragmentation in the InfluxDB ecosystem (1.x, 2.x, 3.0, Cloud Serverless, Cloud Dedicated are meaningfully different products) is a practical consideration for long-term maintainability.

### What is a hypertable and why does it matter for charging data?

A hypertable is TimescaleDB's automatically time-partitioned table. When you create a hypertable on meter_values, the database automatically divides data into time-based chunks (7 days each by default). Queries that specify a time range only scan the relevant chunks, not the entire table. This chunk exclusion is what keeps time-range query latency stable as the table grows past tens of billions of rows. Old chunks are automatically compressed and, on Tiger Cloud, can be moved to tiered object storage while remaining queryable.

### How do I calculate kWh delivered per charging session?

Two approaches: (1) use the `meter_start_wh` and `meter_stop_wh` fields in the `charging_sessions` table directly (accurate when the charger reports meter values at session boundaries); or (2) use a continuous aggregate over the `meter_values` hypertable to compute the cumulative energy delta over the session time range. The second approach is more robust for long sessions or chargers that report erratic boundary readings. The schema DDL above includes both approaches via the `charging_sessions` table and the `session_energy` continuous aggregate.

### How long should I retain charging session data?

CDR and billing records (`charging_sessions`) should typically be retained for 7-10 years to satisfy utility billing audits, tax requirements, and regulatory compliance in most jurisdictions. Raw meter values can often be downsampled after 90 days, keeping per-minute aggregates instead of per-30-second raw readings. This reduces storage significantly without losing analytical value. Tiger Data's continuous aggregates make this straightforward: create a continuous aggregate at 1-minute resolution, set a retention policy on the raw hypertable, and the aggregate survives while the raw data ages out.
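
A sketch of that pattern against the schema above (`meter_values_1m` is an illustrative name; the 90-day threshold is an assumption to adjust for your compliance requirements):

```sql
-- Downsample raw readings to 1-minute resolution...
CREATE MATERIALIZED VIEW meter_values_1m
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 minute', time) AS minute,
    charger_id,
    connector_id,
    measurand,
    AVG(value) AS avg_value
FROM meter_values
GROUP BY minute, charger_id, connector_id, measurand;

-- ...then age out the raw rows. The aggregate is materialized separately,
-- so it survives after the raw data is dropped.
SELECT add_retention_policy('meter_values', INTERVAL '90 days');
```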

### What is the difference between a CSMS and a CPO backend database?

A charging station management system (CSMS) is the full software platform that handles OCPP communication, user authentication, billing, and network management. The CPO backend database is the storage layer inside or behind a CSMS. Managed CSMS platforms (ChargeLab, Driivz, AMPECO) include a database layer you do not need to design. A custom CPO backend database is relevant for teams building CSMS software or building a proprietary charging platform on top of OCPP, not for operators who want a turnkey system.

### Can Tiger Data handle real-time EV charging dashboards?

Yes. Continuous aggregates pre-compute common dashboard queries (hourly energy by charger, uptime percentage, session count by location) so dashboard reads do not rescan raw telemetry. On Tiger Cloud, continuous aggregates refresh automatically. Response times for dashboard queries over aggregated data are typically sub-100ms.

### What is V2X and how does it affect the charging data schema?

V2X (vehicle-to-grid / vehicle-to-home) support in OCPP 2.1 means chargers can discharge energy from the vehicle battery back to the grid. In the data schema, this appears as readings in both the `Energy.Active.Import.Register` (charging) and `Energy.Active.Export.Register` (discharging) measurands within the `meter_values` hypertable. The core schema does not change structurally; V2X requires handling bidirectional energy values in continuous aggregate calculations and adding a `session_type` field to `charging_sessions`. V2X is early-stage in production CPO deployments as of mid-2026; most teams do not need V2X schema support today, but the extension is forward-compatible with the DDL above.

### How does Tiger Data compare to MongoDB for EV charging station data?

MongoDB's approach treats session records and charger configurations as documents, which works for session storage and configuration management. Where it differs from Tiger Data is high-frequency meter value ingestion and query-time aggregation at scale: MongoDB Time Series Collections lack hypertable chunk exclusion and columnstore compression, and MongoDB does not support SQL JOIN semantics between a time-series collection and a relational document collection. For a small private fleet where meter-value volume is low, MongoDB is a workable choice. For a public CPO backend with thousands of chargers and continuous meter reporting, the time-series architecture handles the sustained ingest and multi-year retention requirements more efficiently.

### What should I look for in an EV charging database?

Four capabilities matter at CPO scale: (1) time-range query performance on meter values without full-table scans, (2) SQL JOIN support between billing records and telemetry, (3) automatic or low-overhead compression for long-term meter value retention, (4) continuous aggregation for dashboard and analytics queries without rescanning raw data. Beyond these, evaluate operational complexity: a split stack (PostgreSQL plus a separate TSDB) can meet all four requirements but at the cost of two systems, two query languages, and no atomic transactions across billing and telemetry.

## Get started with Tiger Data for EV charging

If you are building a CPO backend and want the time-series architecture described in this guide without managing infrastructure, [<u>Tiger Data's energy telemetry platform</u>](https://www.tigerdata.com/energy-telemetry) is the starting point. Tiger Cloud provides the managed service, automatic chunk management, tiered storage, and high-availability replication, deployable in minutes from an existing PostgreSQL schema.

For background on the architectural tradeoffs between time-series databases and traditional historians, see [<u>data historian vs. time-series database</u>](https://www.tigerdata.com/learn/moving-past-legacy-systems-data-historian-vs-time-series-database). For IoT database selection more broadly, the [<u>best databases for IoT in 2026</u>](https://www.tigerdata.com/learn/how-to-choose-an-iot-database) guide covers the full landscape. 