---
title: Analyze energy consumption | Tiger Data Docs
description: Get insights from energy consumption data with Grafana and Tiger Cloud
---

Energy providers understand that customers tend to lose patience when there is not enough power for them to complete day-to-day activities. Task one is keeping the lights on. If you are transitioning to renewable energy, it helps to know when you need to produce energy so you can choose a suitable energy source.

Real-time analytics refers to the process of collecting, analyzing, and interpreting data instantly as it is generated. This approach enables you to track and monitor activity, make decisions based on real-time insights from data stored in a Tiger Cloud service, and keep those lights on.

[Grafana](https://grafana.com/docs/) is a popular data visualization tool that enables you to create customizable dashboards and effectively monitor your systems and applications.

![Grafana dashboard showing energy consumption data from a continuous aggregate](/docs/_astro/use-case-rta-grafana-timescale-energy-cagg.BnE5uNhv_Eg8x0.webp)

This page shows you how to integrate Grafana with a Tiger Cloud service and gain insights from visualizations of data optimized for size and speed in the columnstore.

## Prerequisites for this tutorial

To follow the procedure on this page you need to:

- Create a [target Tiger Cloud service](/docs/get-started/quickstart/create-service/index.md).

  This procedure also works for [self-hosted TimescaleDB](/docs/get-started/choose-your-path/install-timescaledb/index.md).

- Install and run [self-managed Grafana](https://grafana.com/get/?tab=self-managed), or sign up for [Grafana Cloud](https://grafana.com/get/).

## Optimize time-series data in hypertables

Hypertables are PostgreSQL tables in TimescaleDB that automatically partition your time-series data by time. Time-series data represents the way a system, process, or behavior changes over time. Hypertables enable TimescaleDB to work efficiently with time-series data. Each hypertable is made up of child tables called chunks. Each chunk is assigned a range of time, and only contains data from that range. When you run a query, TimescaleDB identifies the correct chunk and runs the query on it, instead of going through the entire table.
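For example, you can inspect a hypertable's chunks and their time ranges through the `timescaledb_information.chunks` view. A minimal sketch, assuming the `metrics` hypertable created later on this page:

```
SELECT chunk_name, range_start, range_end
FROM timescaledb_information.chunks
WHERE hypertable_name = 'metrics'
ORDER BY range_start;
```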

[Hypercore](/docs/learn/columnar-storage/understand-hypercore/index.md) is the hybrid row-columnar storage engine in TimescaleDB used by hypertables. Traditional databases force a trade-off between fast inserts (row-based storage) and efficient analytics (columnar storage). Hypercore eliminates this trade-off, allowing real-time analytics without sacrificing transactional capabilities.

Hypercore dynamically stores data in the most efficient format for its lifecycle:

![Move from rowstore to columnstore in hypercore](/docs/_astro/hypercore_intro.DutS1jP2.svg)

- **Row-based storage for recent data**: the most recent chunk (and possibly more) is always stored in the rowstore, ensuring fast inserts, updates, and low-latency single record queries. Additionally, row-based storage is used as a write-through for inserts and updates to columnar storage.
- **Columnar storage for analytical performance**: chunks are automatically compressed into the columnstore, optimizing storage efficiency and accelerating analytical queries.

Unlike traditional columnar databases, hypercore allows data to be inserted or modified at any stage, making it a flexible solution for both high-ingest transactional workloads and real-time analytics, within a single database.

Because TimescaleDB is 100% PostgreSQL, you can use all the standard PostgreSQL tables, indexes, stored procedures, and other objects alongside your hypertables. This makes creating and working with hypertables similar to standard PostgreSQL.
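As an illustrative sketch of this compatibility (the index name is an assumption, and the `metrics` hypertable is created later on this page), you can add a regular PostgreSQL index to speed up filtered queries, and TimescaleDB maintains it on every chunk:

```
-- A standard PostgreSQL index; TimescaleDB creates it on each chunk automatically
CREATE INDEX metrics_type_id_created_idx ON metrics (type_id, created DESC);
```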

1. **Import time-series data into a hypertable**

   1. Unzip [metrics.csv.gz](https://assets.timescale.com/docs/downloads/metrics.csv.gz) to a `<local folder>`.

      This test dataset contains energy consumption data.

To import up to 100GB of data directly from your current PostgreSQL-based database, [migrate with downtime](/docs/migrate/migrate-with-downtime/index.md). To seamlessly import 100GB-10TB+ of data, use the [live migration](/docs/migrate/live-migration/index.md) tooling supplied by Tiger Data. To add data from non-PostgreSQL data sources, see [Import and ingest data](/docs/migrate/import-terminal/index.md).

   2. In Terminal, navigate to `<local folder>` and update the following string with [your connection details](/docs/integrate/find-connection-details/index.md) to connect to your service.

      ```
      psql -d "postgres://<username>:<password>@<host>:<port>/<database-name>?sslmode=require"
      ```

   3. Create an optimized hypertable for your time-series data.

Create a [hypertable](/docs/learn/hypertables/understand-hypertables/index.md) with [hypercore](/docs/learn/columnar-storage/understand-hypercore/index.md) enabled by default for your time-series data using [CREATE TABLE](/docs/reference/timescaledb/index.md). For [efficient queries](/docs/build/performance-optimization/secondary-indexes/index.md) on data in the columnstore, set `segmentby` to the column you will most often use to filter your data.

      In your SQL client, run the following command:

      ```
      CREATE TABLE "metrics"(
          created timestamp with time zone default now() not null,
          type_id integer                                not null,
          value   double precision                       not null
      ) WITH (
          tsdb.hypertable,
          tsdb.segmentby = 'type_id',
          tsdb.orderby = 'created DESC'
      );
      ```

      When you create a hypertable using [CREATE TABLE … WITH …](/docs/reference/timescaledb/hypertables/create_table/index.md), the default partitioning column is automatically the first column with a timestamp data type. Also, TimescaleDB creates a [columnstore policy](/docs/reference/timescaledb/hypercore/add_columnstore_policy/index.md) that automatically converts your data to the columnstore, after an interval equal to the value of the [chunk\_interval](/docs/reference/timescaledb/hypertables/set_chunk_time_interval/index.md), defined through `after` in the policy. This columnar format enables fast scanning and aggregation, optimizing performance for analytical workloads while also saving significant storage space. In the columnstore conversion, hypertable chunks are compressed by up to 98%, and organized for efficient, large-scale queries.

      You can customize this policy later using [alter\_job](/docs/reference/timescaledb/jobs-automation/alter_job/index.md). However, to change `after` or `created_before`, the compression settings, or the hypertable the policy is acting on, you must [remove the columnstore policy](/docs/reference/timescaledb/hypercore/remove_columnstore_policy/index.md) and [add a new one](/docs/reference/timescaledb/hypercore/add_columnstore_policy/index.md).

      You can also manually [convert chunks](/docs/reference/timescaledb/hypercore/convert_to_columnstore/index.md) in a hypertable to the columnstore.
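      If you prefer not to wait for the policy, the following sketch converts every chunk older than an assumed seven-day cutoff by looping over `show_chunks()`; because `convert_to_columnstore()` is a procedure, it is invoked with `CALL`:

      ```
      DO $$
      DECLARE
          chunk REGCLASS;
      BEGIN
          -- show_chunks() returns the chunks of the hypertable older than the cutoff
          FOR chunk IN SELECT show_chunks('metrics', older_than => INTERVAL '7 days')
          LOOP
              CALL convert_to_columnstore(chunk);
          END LOOP;
      END
      $$;
      ```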

   4. Upload the dataset to your service:

      ```
      \COPY metrics FROM metrics.csv CSV;
      ```

2. **Have a quick look at your data**

   You query hypertables in exactly the same way as you would a relational PostgreSQL table. Use one of the following SQL editors to run a query and see the data you uploaded:

   - **Data view**: write queries, visualize data, and share your results in [Tiger Console](https://console.cloud.tigerdata.com/dashboard/services?popsql) for all your Tiger Cloud services.
   - **SQL editor**: write, fix, and organize SQL faster and more accurately in [Tiger Console](https://console.cloud.tigerdata.com/dashboard/services) for a Tiger Cloud service.
   - **psql**: easily run queries on your Tiger Cloud services or self-hosted TimescaleDB deployment from Terminal.

   ```
   SELECT time_bucket('1 day', created, 'Europe/Berlin') AS "time",
          round((last(value, created) - first(value, created)) * 100.) / 100. AS value
   FROM metrics
   WHERE type_id = 5
   GROUP BY 1;
   ```

   On this dataset, this query takes about 3.6 seconds against data in the rowstore. You see something like:

   | time                   | value |
   | ---------------------- | ----- |
   | 2023-05-29 22:00:00+00 | 23.1  |
   | 2023-05-28 22:00:00+00 | 19.5  |
   | 2023-05-30 22:00:00+00 | 25    |
   | 2023-05-31 22:00:00+00 | 8.1   |

## Write fast analytical queries

Aggregation is a way of combining data to get insights from it. Average, sum, and count are all examples of simple aggregates. However, with large amounts of data, aggregation quickly slows things down. Continuous aggregates are a kind of hypertable that is refreshed automatically in the background: changes to your dataset are tracked, and the hypertable behind the continuous aggregate is updated as new data is added or old data is modified.

By default, querying continuous aggregates provides you with real-time data. Pre-aggregated data from the materialized view is combined with recent data that hasn’t been aggregated yet. This gives you up-to-date results on every query.

You create continuous aggregates on uncompressed data in high-performance storage. They continue to work on [data in the columnstore](/docs/learn/tiger-cloud/tiger-cloud-essentials/index.md) and [rarely accessed data in tiered storage](/docs/build/data-management/storage/query-tiered-data/index.md). You can even create [continuous aggregates on top of your continuous aggregates](/docs/learn/continuous-aggregates/hierarchical-continuous-aggregates/index.md).
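For example, once the `kwh_day_by_day` view created below exists, you can switch off real-time aggregation and query only materialized data with the `timescaledb.materialized_only` option; this trades freshness for faster, fully pre-computed reads:

```
-- Return only materialized data; set back to false to re-enable real-time results
ALTER MATERIALIZED VIEW kwh_day_by_day SET (timescaledb.materialized_only = true);
```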

1. **Monitor energy consumption on a day-to-day basis**

   1. Create a continuous aggregate called `kwh_day_by_day` for energy consumption:

      ```
      CREATE MATERIALIZED VIEW kwh_day_by_day(time, value)
      WITH (timescaledb.continuous) AS
      SELECT time_bucket('1 day', created, 'Europe/Berlin') AS "time",
             round((last(value, created) - first(value, created)) * 100.) / 100. AS value
      FROM metrics
      WHERE type_id = 5
      GROUP BY 1;
      ```

   2. Add a refresh policy to keep `kwh_day_by_day` up-to-date:

      ```
      SELECT add_continuous_aggregate_policy('kwh_day_by_day',
         start_offset => NULL,
         end_offset => INTERVAL '1 hour',
         schedule_interval => INTERVAL '1 hour');
      ```
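      If you want the aggregate brought up to date immediately rather than waiting for the next scheduled run, you can refresh it manually; passing `NULL` bounds refreshes the entire time range:

      ```
      CALL refresh_continuous_aggregate('kwh_day_by_day', NULL, NULL);
      ```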

2. **Monitor energy consumption on an hourly basis**

   1. Create a continuous aggregate called `kwh_hour_by_hour` for energy consumption:

      ```
      CREATE MATERIALIZED VIEW kwh_hour_by_hour(time, value)
      WITH (timescaledb.continuous) AS
      SELECT time_bucket('01:00:00', metrics.created, 'Europe/Berlin') AS "time",
             round((last(value, created) - first(value, created)) * 100.) / 100. AS value
      FROM metrics
      WHERE type_id = 5
      GROUP BY 1;
      ```

   2. Add a refresh policy to keep the continuous aggregate up-to-date:

      ```
      SELECT add_continuous_aggregate_policy('kwh_hour_by_hour',
         start_offset => NULL,
         end_offset => INTERVAL '1 hour',
         schedule_interval => INTERVAL '1 hour');
      ```

3. **Analyze your data**

   Now that you have created continuous aggregates, you can use them to perform analytics on your data. For example, to see how average energy consumption changes during weekdays over the last year, run the following query:

   ```
   WITH per_day AS (
       SELECT
           time,
           value
       FROM kwh_day_by_day
       WHERE "time" AT TIME ZONE 'Europe/Berlin' > date_trunc('month', time) - interval '1 year'
       ORDER BY 1
   ), daily AS (
       SELECT
           to_char(time, 'Dy') AS day,
           value
       FROM per_day
   ), percentile AS (
       SELECT
           day,
           approx_percentile(0.50, percentile_agg(value)) AS value
       FROM daily
       GROUP BY 1
       ORDER BY 1
   )
   SELECT
       d.day,
       d.ordinal,
       pd.value
   FROM unnest(array['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']) WITH ORDINALITY AS d(day, ordinal)
   LEFT JOIN percentile pd ON lower(pd.day) = lower(d.day);
   ```

   You see something like:

   | day | ordinal | value              |
   | --- | ------- | ------------------ |
   | Mon | 2       | 23.08078714975423  |
   | Sun | 1       | 19.511430831944395 |
   | Tue | 3       | 25.003118897837307 |
   | Wed | 4       | 8.09300571759772   |

## Connect Grafana to Tiger Cloud

To visualize the results of your queries, enable Grafana to read the data in your service:

1. **Log in to Grafana**

   In your browser, log in to either:

   - Self-hosted Grafana: at `http://localhost:3000/`. The default credentials are `admin`, `admin`.
   - Grafana Cloud: use the URL and credentials you set when you created your account.

2. **Add your service as a data source**

   1. Open `Connections` > `Data sources`, then click `Add new data source`.

   2. Select `PostgreSQL` from the list.

   3. Configure the connection:

      - Configure `Host URL`, `Database name`, `Username`, and `Password` using your [connection details](/docs/integrate/find-connection-details/index.md). `Host URL` is in the format `<host>:<port>`.
      - `TLS/SSL Mode`: select `require`.
      - `PostgreSQL options`: enable `TimescaleDB`.
      - Leave the default setting for all other fields.

   4. Click `Save & test`.

      Grafana checks that your details are set correctly.

## Visualize energy consumption

A Grafana dashboard represents a view into the performance of a system, and each dashboard consists of one or more panels, which represent information about a specific metric related to that system.

To visually monitor the volume of energy consumption over time:

1. **Create the dashboard**

   1. On the `Dashboards` page, click `New` and select `New dashboard`.

   2. Click `Add visualization`, then select the data source that connects to your Tiger Cloud service and the `Bar chart` visualization type in the top right.

      ![Configuring a Grafana dashboard with a TimescaleDB data source](/docs/_astro/use-case-rta-grafana-timescale-configure-dashboard.PNpiH0JO_204Gcy.webp)

   3. In the `Queries` section, select `Code` on the right, then run the following query based on your continuous aggregate:

      ```
      WITH per_hour AS (
          SELECT
              time,
              value
          FROM kwh_hour_by_hour
          WHERE "time" AT TIME ZONE 'Europe/Berlin' > date_trunc('month', time) - interval '1 year'
          ORDER BY 1
      ), hourly AS (
          SELECT
              extract(HOUR FROM time) * interval '1 hour' AS hour,
              value
          FROM per_hour
      )
      SELECT
          hour,
          approx_percentile(0.50, percentile_agg(value)) AS median,
          max(value) AS maximum
      FROM hourly
      GROUP BY 1
      ORDER BY 1;
      ```

      This query computes the median and maximum energy consumption for each hour of the day in a specific time zone, ordered by time. Because you use a continuous aggregate, this data is always correct in real time.

      ![Grafana dashboard showing energy consumption data from a continuous aggregate](/docs/_astro/use-case-rta-grafana-timescale-energy-cagg.BnE5uNhv_Eg8x0.webp)

      You see that energy consumption is highest in the evening and at breakfast time. You also know that the wind drops off in the evening. This data shows that you need to supply a supplementary power source for peak times, or plan to store energy during the day for use at peak times.

2. **Click `Save dashboard`**

You have integrated Grafana with a Tiger Cloud service and gained insights from visualizations of your data.
