---
title: Integrate Supabase with Tiger Cloud | Tiger Data Docs
description: Run real-time analytical queries across databases using foreign data wrappers
---

[Supabase](https://supabase.com/) is an open source Firebase alternative. This page shows how to use a foreign data wrapper (FDW) to insert data from Supabase into a Tiger Cloud service, then run real-time analytical queries against the aggregated data from Supabase.

## Prerequisites

To follow the procedure on this page you need to:

- Create a [target Tiger Cloud service](/docs/get-started/quickstart/create-service/index.md).

  This procedure also works for [self-hosted TimescaleDB](/docs/get-started/choose-your-path/install-timescaledb/index.md).

- Create a [Supabase project](https://supabase.com/dashboard/new).

## Set up your service

To set up a Tiger Cloud service optimized for analytics to receive data from Supabase:

1. **Optimize time-series data in hypertables**

   Time-series data represents how a system, process, or behavior changes over time. [Hypertables](/docs/learn/hypertables/understand-hypertables/index.md) are PostgreSQL tables that help you improve insert and query performance by automatically partitioning your data by time.

   [Connect to your Tiger Cloud service](/docs/build/data-management/run-queries-from-tiger-console/index.md) and create a table to receive the data sent from Supabase:

   ```
   CREATE TABLE signs (
       time timestamptz NOT NULL DEFAULT now(),
       origin_time timestamptz NOT NULL,
       name TEXT
   ) WITH (
     tsdb.hypertable
   );
   ```

   When you create a hypertable using [CREATE TABLE … WITH …](/docs/reference/timescaledb/hypertables/create_table/index.md), the first column with a timestamp data type is automatically used as the partitioning column. TimescaleDB also creates a [columnstore policy](/docs/reference/timescaledb/hypercore/add_columnstore_policy/index.md) that automatically converts your data to the columnstore after an interval equal to the [chunk\_interval](/docs/reference/timescaledb/hypertables/set_chunk_time_interval/index.md), defined through `after` in the policy. This columnar format enables fast scanning and aggregation, optimizing performance for analytical workloads while also saving significant storage space. During conversion to the columnstore, hypertable chunks are compressed by up to 98% and organized for efficient, large-scale queries.

   You can customize this policy later using [alter\_job](/docs/reference/timescaledb/jobs-automation/alter_job/index.md). However, to change `after` or `created_before`, the compression settings, or the hypertable the policy is acting on, you must [remove the columnstore policy](/docs/reference/timescaledb/hypercore/remove_columnstore_policy/index.md) and [add a new one](/docs/reference/timescaledb/hypercore/add_columnstore_policy/index.md).
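   For example, a minimal sketch of rescheduling the policy. The `<job_id>` placeholder stands for the job ID that the first query returns on your own service:

   ```sql
   -- Find the job that runs the columnstore policy on the signs hypertable.
   SELECT job_id, proc_name, schedule_interval
   FROM timescaledb_information.jobs
   WHERE hypertable_name = 'signs';

   -- Run the policy every 12 hours instead of the current schedule.
   SELECT alter_job(<job_id>, schedule_interval => INTERVAL '12 hours');
   ```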

   You can also manually [convert chunks](/docs/reference/timescaledb/hypercore/convert_to_columnstore/index.md) in a hypertable to the columnstore.
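   For example, a sketch of converting older chunks by hand. The chunk name below is illustrative; use the names that `show_chunks` returns on your service:

   ```sql
   -- List the chunks of the signs hypertable older than one day.
   SELECT show_chunks('signs', older_than => INTERVAL '1 day');

   -- Convert one of the returned chunks to the columnstore.
   CALL convert_to_columnstore('_timescaledb_internal._hyper_1_1_chunk');
   ```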

2. **Optimize cooling data for analytics**

   Hypercore is the hybrid row-columnar storage engine in TimescaleDB, designed for real-time analytics on time-series data. Its advantage is the ability to seamlessly switch between row-oriented and column-oriented storage, delivering the best of both worlds and solving the key challenges in real-time analytics. The following statement enables the columnstore for the `signs` hypertable and segments the columnar data by `name`:

   ```
   ALTER TABLE signs SET (
     timescaledb.enable_columnstore = true,
     timescaledb.segmentby = 'name');
   ```

3. **Create optimized analytical queries**

   Continuous aggregates are designed to make queries on very large datasets run faster. Continuous aggregates in Tiger Cloud use PostgreSQL [materialized views](https://www.postgresql.org/docs/current/rules-materializedviews.html) to continuously, and incrementally refresh a query in the background, so that when you run the query, only the data that has changed needs to be computed, not the entire dataset.

   1. Create a continuous aggregate that summarizes the data arriving from Supabase:

      ```
      CREATE MATERIALIZED VIEW IF NOT EXISTS signs_per_minute
      WITH (timescaledb.continuous)
      AS
      SELECT time_bucket('1 minute', time) as ts,
       name,
       count(*) as total
      FROM signs
      GROUP BY 1, 2
      WITH NO DATA;
      ```

   2. Set up delay statistics comparing `origin_time` to `time`:

      ```
      CREATE MATERIALIZED VIEW IF NOT EXISTS _signs_per_minute_delay
      WITH (timescaledb.continuous)
      AS
      SELECT time_bucket('1 minute', time) as ts,
        stats_agg(extract(epoch from origin_time - time)::float8) as delay_agg,
        candlestick_agg(time, extract(epoch from origin_time - time)::float8, 1) as delay_candlestick
      FROM signs GROUP BY 1
      WITH NO DATA;
      ```

   3. Set up a view that exposes the delay statistics to Supabase:

      ```
      CREATE VIEW signs_per_minute_delay
      AS
        SELECT ts,
        average(delay_agg) as avg_delay,
        stddev(delay_agg) as stddev_delay,
        open(delay_candlestick) as open,
        high(delay_candlestick) as high,
        low(delay_candlestick) as low,
        close(delay_candlestick) as close
      FROM _signs_per_minute_delay;
      ```

4. **Add refresh policies for your analytical queries**

   You use `start_offset` and `end_offset` to define the time range that the continuous aggregate covers. Assuming that data is inserted without delay, set `start_offset` to `5 minutes` and `end_offset` to `1 minute`, so each refresh covers the last five minutes. Set `schedule_interval` to `INTERVAL '1 minute'` so the continuous aggregate on your Tiger Cloud service refreshes every minute, keeping the aggregates of the data inserted from Supabase up to date.

   ```
   SELECT add_continuous_aggregate_policy('signs_per_minute',
    start_offset => INTERVAL '5 minutes',
    end_offset => INTERVAL '1 minute',
    schedule_interval => INTERVAL '1 minute');
   ```

   Do the same thing for data inserted with a delay:

   ```
   SELECT add_continuous_aggregate_policy('_signs_per_minute_delay',
    start_offset => INTERVAL '5 minutes',
    end_offset => INTERVAL '1 minute',
    schedule_interval => INTERVAL '1 minute');
   ```
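   To verify that both policies are registered, you can query the jobs informational view:

   ```sql
   -- Lists one row per refresh policy, including how often each runs.
   SELECT job_id, proc_name, schedule_interval
   FROM timescaledb_information.jobs
   WHERE proc_name = 'policy_refresh_continuous_aggregate';
   ```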

## Set up a Supabase database

To set up a Supabase database that injects data into your Tiger Cloud service:

1. **Connect a foreign server in Supabase to your Tiger Cloud service**

   1. Connect to your Supabase project using the Supabase dashboard or psql.

   2. Enable the `postgres_fdw` extension.

      ```
      CREATE EXTENSION postgres_fdw;
      ```

   3. Create a foreign server that points to your Tiger Cloud service.

      Update the following command with your [connection details](/docs/integrate/find-connection-details/index.md), then run it in the Supabase database:

      ```
      CREATE SERVER timescale
      FOREIGN DATA WRAPPER postgres_fdw
      OPTIONS (
          host '<value of host>',
          port '<value of port>',
          dbname '<value of dbname>',
          sslmode 'require',
          extensions 'timescaledb'
      );
      ```

2. **Create the user mapping for the foreign server**

   Update the following command with your [connection details](/docs/integrate/find-connection-details/index.md), then run it in the Supabase database:

   ```
   CREATE USER MAPPING FOR CURRENT_USER
   SERVER timescale
   OPTIONS (
      user '<value of user>',
      password '<value of password>'
   );
   ```

3. **Create a foreign table that points to the `signs` table in your Tiger Cloud service**

   This table includes the following columns:

   - `time`: defaults to `now()`. Tiger Cloud uses this column to partition the hypertable and optimize data in the columnstore.
   - `origin_time`: stores the original timestamp of the data.

   Comparing the two columns shows the delay between the time the data is created in Supabase (`origin_time`) and the time it is inserted into your Tiger Cloud service (`time`).

   ```
   CREATE FOREIGN TABLE signs (
     time timestamptz NOT NULL DEFAULT now(),
     origin_time timestamptz NOT NULL,
     name TEXT)
   SERVER timescale OPTIONS (
     schema_name 'public',
     table_name 'signs'
   );
   ```

4. **Create a foreign table in Supabase**

   1. Create a foreign table that matches the `signs_per_minute` view in your Tiger Cloud service. It represents a top-level view of the data.

      ```
      CREATE FOREIGN TABLE signs_per_minute (
       ts timestamptz,
       name text,
       total int
      )
      SERVER timescale OPTIONS (schema_name 'public', table_name 'signs_per_minute');
      ```

   2. Create a foreign table that matches the `signs_per_minute_delay` view in your Tiger Cloud service.

      ```
      CREATE FOREIGN TABLE signs_per_minute_delay (
         ts timestamptz,
         avg_delay float8,
         stddev_delay float8,
         open float8,
         high float8,
         low float8,
         close float8
      ) SERVER timescale OPTIONS (schema_name 'public', table_name 'signs_per_minute_delay');
      ```
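   With the foreign tables in place, queries you run in Supabase are executed against the views in your Tiger Cloud service through the foreign data wrapper. For example:

   ```sql
   -- Run in the Supabase database: fetches the latest aggregated rows
   -- from the Tiger Cloud service.
   SELECT ts, name, total
   FROM signs_per_minute
   ORDER BY ts DESC
   LIMIT 10;
   ```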

## Test the integration

To inject data into your Tiger Cloud service from a Supabase database using a foreign table:

1. **Insert data into your Supabase database**

   Connect to Supabase and run the following query:

   ```
   INSERT INTO signs (origin_time, name) VALUES (now(), 'test');
   ```

2. **Check the data in your Tiger Cloud service**

   [Connect to your Tiger Cloud service](/docs/build/data-management/run-queries-from-tiger-console/index.md) and run the following query:

   ```
   SELECT * FROM signs;
   ```

   You see something like:

   | time                          | origin\_time                  | name |
   | ----------------------------- | ----------------------------- | ---- |
   | 2025-02-27 16:30:04.682391+00 | 2025-02-27 16:30:04.682391+00 | test |
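   The continuous aggregates were created `WITH NO DATA`, so they stay empty until the first policy run. To see aggregated results immediately, you can refresh one manually:

   ```sql
   -- Materialize the continuous aggregate over its whole time range,
   -- then inspect the most recent buckets.
   CALL refresh_continuous_aggregate('signs_per_minute', NULL, NULL);

   SELECT * FROM signs_per_minute ORDER BY ts DESC LIMIT 5;
   ```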

You have successfully integrated Supabase with your Tiger Cloud service.
