---
title: Migrate with downtime | Tiger Data Docs
description: Migrate a hypertable or an entire database to Tiger Cloud with native PostgreSQL commands using pg_dump and pg_restore
---

You use downtime migration to move less than 100GB of data from a self-hosted database to a Tiger Cloud service.

Downtime migration uses the native PostgreSQL [`pg_dump`](https://www.postgresql.org/docs/current/app-pgdump.html) and [`pg_restore`](https://www.postgresql.org/docs/current/app-pgrestore.html) commands. If you are migrating from self-hosted TimescaleDB, this method works for hypertables compressed into the columnstore without having to convert the data back to the rowstore before you begin.

**Tips**

If you want to migrate more than 400GB of data, create a [Tiger Console support request](https://console.cloud.tigerdata.com/dashboard/support), or send us an email at <support@tigerdata.com> saying how much data you want to migrate. We pre-provision your Tiger Cloud service for you.

However, a downtime migration of a large amount of data takes a correspondingly long time. For more than 100GB of data, best practice is to follow [live migration](/docs/migrate/live-migration/index.md).

This page shows you how to move your data from a self-hosted database to a Tiger Cloud service using shell commands.

## Prerequisites

Best practice is to use an [Ubuntu EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance) hosted in the same region as your Tiger Cloud service as your migration machine: the machine you run the commands on to move your data from your source database to your target Tiger Cloud service.

Before you move your data:

- Create a target [Tiger Cloud service](/docs/get-started/quickstart/create-service/index.md).

  Each Tiger Cloud service has a single PostgreSQL instance that supports the [most popular extensions](/docs/deploy/tiger-cloud/tiger-cloud-aws/tiger-cloud-extensions/index.md). Tiger Cloud services do not support tablespaces, and there is no superuser associated with a service. Best practice is to create a Tiger Cloud service with at least 8 CPUs for a smoother experience. A higher-spec instance can significantly reduce the overall migration window.

- To ensure that maintenance does not run while migration is in progress, best practice is to [adjust the maintenance window](/docs/deploy/tiger-cloud/tiger-cloud-aws/upgrades#define-your-maintenance-window/index.md).

- Install the PostgreSQL client tools on your migration machine.

  This includes `psql`, `pg_dump`, and `pg_dumpall`.

- Install the GNU implementation of `sed`.

  To check which implementation you have, run `sed --version` on your migration machine: GNU sed identifies itself as GNU software; BSD sed returns `sed: illegal option -- -`.
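On macOS, the default `sed` is the BSD implementation. The following is a minimal sketch of selecting a GNU `sed` binary in a script; `gsed` is the name Homebrew installs on macOS and is an assumption here:

```shell
# Pick a GNU sed binary; fall back to gsed (the Homebrew name on
# macOS, an assumption) when the default sed is the BSD implementation.
if sed --version 2>/dev/null | grep -q "GNU sed"; then
  SED=sed
elif command -v gsed >/dev/null 2>&1; then
  SED=gsed
else
  echo "GNU sed not found; install it before continuing" >&2
fi
echo "Using: $SED"
```

You can then invoke the `sed` clean-up scripts later in this page through `$SED` instead of calling `sed` directly.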

## Migrate to Tiger Cloud

To move your data from a self-hosted database to a Tiger Cloud service:

- [From TimescaleDB](#tab-panel-664)
- [From PostgreSQL](#tab-panel-665)
- [From AWS RDS/Aurora](#tab-panel-666)
- [From MST](#tab-panel-667)

This section shows you how to move your data from self-hosted TimescaleDB to a Tiger Cloud service using `pg_dump` and `psql` from Terminal.

## Prepare to migrate

1. **Take the applications that connect to the source database offline**

   The duration of the migration is proportional to the amount of data stored in your database. By disconnecting your app from your database you avoid any possible data loss.

2. **Set your connection strings**

   These variables hold the connection information for the source database and target Tiger Cloud service:

   ```shell
   export SOURCE="postgres://<user>:<password>@<source host>:<source port>/<db_name>"
   export TARGET="postgres://tsdbadmin:<PASSWORD>@<HOST>:<PORT>/tsdb?sslmode=require"
   ```

   You find the connection information for your Tiger Cloud service in the configuration file you downloaded when you created the service.

## Align the version of TimescaleDB on the source and target

1. **Ensure that the source and target databases are running the same version of TimescaleDB**

   1. Check the version of TimescaleDB running on your Tiger Cloud service:

      ```shell
      psql $TARGET -c "SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';"
      ```

   2. Update the TimescaleDB extension in your source database to match the target service:

      If the TimescaleDB extension is the same version on the source database and target service, you do not need to do this.

      ```shell
      psql $SOURCE -c "ALTER EXTENSION timescaledb UPDATE TO '<version here>';"
      ```

      For more information and guidance, see [Upgrade TimescaleDB](/docs/deploy/self-hosted/upgrades/index.md).

2. **Ensure that the Tiger Cloud service is running the PostgreSQL extensions used in your source database**

   1. Check the extensions on the source database:

      ```shell
      psql $SOURCE -c "SELECT * FROM pg_extension;"
      ```

   2. For each extension, enable it on your target Tiger Cloud service:

      ```shell
      psql $TARGET -c "CREATE EXTENSION IF NOT EXISTS <extension name> CASCADE;"
      ```

## Migrate the roles from TimescaleDB to your Tiger Cloud service

Roles manage database access permissions. To migrate your role-based security hierarchy to your Tiger Cloud service:

1. **Dump the roles from your source database**

   Export your role-based security hierarchy. `<db_name>` is the same database name that you used in `$SOURCE`.

   ```shell
   pg_dumpall -d "$SOURCE" \
     -l <db_name> \
     --quote-all-identifiers \
     --roles-only \
     --file=roles.sql
   ```

   If you only use the default `postgres` role, this step is not necessary.

2. **Remove roles with superuser access**

   Tiger Cloud services do not support roles with superuser access. Run the following script to remove statements, permissions, and clauses that require superuser access from `roles.sql`:

   ```shell
   sed -i -E \
   -e '/CREATE ROLE "postgres";/d' \
   -e '/ALTER ROLE "postgres"/d' \
   -e '/CREATE ROLE "tsdbadmin";/d' \
   -e '/ALTER ROLE "tsdbadmin"/d' \
   -e 's/(NO)*SUPERUSER//g' \
   -e 's/(NO)*REPLICATION//g' \
   -e 's/(NO)*BYPASSRLS//g' \
   -e 's/GRANTED BY "[^"]*"//g' \
   roles.sql
   ```

3. **Dump the source database schema and data**

   The `pg_dump` flags remove superuser access and tablespaces from your data. When you run `pg_dump`, monitor the run time: [a long-running `pg_dump` can cause issues](/docs/build/tips-and-tricks/troubleshoot-import-ingest/index.md).

   ```shell
   pg_dump -d "$SOURCE" \
   --format=plain \
   --quote-all-identifiers \
   --no-tablespaces \
   --no-owner \
   --no-privileges \
   --file=dump.sql
   ```

   To dramatically reduce the time taken to dump the source database, use multiple connections. For more information, see [dumping with concurrency](/docs/build/tips-and-tricks/troubleshoot-import-ingest/index.md) and [restoring with concurrency](/docs/build/tips-and-tricks/troubleshoot-import-ingest/index.md).
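As a hedged sketch of the concurrent approach: a plain-format dump cannot be parallelized, but the directory format can, using the `--jobs` flag of `pg_dump` and `pg_restore`. The directory name `dump_dir` and the job count of 8 are illustrative assumptions:

```shell
# Sketch only: a parallel dump and restore using the directory format.
# Assumes $SOURCE and $TARGET are set as above.
pg_dump -d "$SOURCE" \
  --format=directory \
  --jobs=8 \
  --quote-all-identifiers \
  --no-tablespaces \
  --no-owner \
  --no-privileges \
  --file=dump_dir
pg_restore -d "$TARGET" --jobs=8 --no-owner --no-privileges dump_dir
```

If you use this instead of the plain-format dump, run the `timescaledb_pre_restore` and `timescaledb_post_restore` calls with `psql` before and after `pg_restore`, rather than in a single `psql` invocation.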

## Upload your data to the target Tiger Cloud service

This command uses the `timescaledb_pre_restore` and `timescaledb_post_restore` functions to put your database in the correct state.

```shell
psql $TARGET -v ON_ERROR_STOP=1 --echo-errors \
-f roles.sql \
-c "SELECT timescaledb_pre_restore();" \
-f dump.sql \
-c "SELECT timescaledb_post_restore();"
```

## Validate your Tiger Cloud service and restart your app

1. **Update the table statistics**

   ```shell
   psql $TARGET -c "ANALYZE;"
   ```

2. **Verify the data in the target Tiger Cloud service**

   Check that your data is correct, and returns the results that you expect.

3. **Enable any Tiger Cloud features you want to use**

   Now manually enable any additional Tiger Cloud features you want to use, such as [hypertables](/docs/learn/hypertables/understand-hypertables/index.md), [hypercore](/docs/learn/columnar-storage/understand-hypercore/index.md), or [data retention](/docs/learn/data-lifecycle/data-retention/about-data-retention/index.md), while your database is still offline.

4. **Reconfigure your app to use the target database, then restart it**
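The verification in step 2 can be as simple as comparing a few counts. A hypothetical spot check, with `<table>` standing in for one of your own tables:

```shell
# Hypothetical spot check; replace <table> with a real table name.
# Both commands should return the same count.
psql $SOURCE -t -c "SELECT count(*) FROM <table>;"
psql $TARGET -t -c "SELECT count(*) FROM <table>;"
```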

And that is it, you have migrated your data from a self-hosted instance running TimescaleDB to a Tiger Cloud service.

This section shows you how to move your data from self-hosted PostgreSQL to a Tiger Cloud service using `pg_dump` and `psql` from Terminal.

Migration from PostgreSQL moves the data only. You must manually enable Tiger Cloud features like [hypertables](/docs/learn/hypertables/understand-hypertables/index.md), [hypercore](/docs/learn/columnar-storage/understand-hypercore/index.md), or [data retention](/docs/learn/data-lifecycle/data-retention/about-data-retention/index.md) after the migration is complete, while your database is still offline.

## Prepare to migrate

1. **Take the applications that connect to the source database offline**

   The duration of the migration is proportional to the amount of data stored in your database. By disconnecting your app from your database you avoid any possible data loss.

2. **Set your connection strings**

   These variables hold the connection information for the source database and target Tiger Cloud service:

   ```shell
   export SOURCE="postgres://<user>:<password>@<source host>:<source port>/<db_name>"
   export TARGET="postgres://tsdbadmin:<PASSWORD>@<HOST>:<PORT>/tsdb?sslmode=require"
   ```

   You find the connection information for your Tiger Cloud service in the configuration file you downloaded when you created the service.

## Align the extensions on the source and target

1. **Ensure that the Tiger Cloud service is running the PostgreSQL extensions used in your source database**

   1. Check the extensions on the source database:

      ```shell
      psql $SOURCE -c "SELECT * FROM pg_extension;"
      ```

   2. For each extension, enable it on your target Tiger Cloud service:

      ```shell
      psql $TARGET -c "CREATE EXTENSION IF NOT EXISTS <extension name> CASCADE;"
      ```

## Migrate the roles from PostgreSQL to your Tiger Cloud service

Roles manage database access permissions. To migrate your role-based security hierarchy to your Tiger Cloud service:

1. **Dump the roles from your source database**

   Export your role-based security hierarchy. `<db_name>` is the same database name that you used in `$SOURCE`.

   ```shell
   pg_dumpall -d "$SOURCE" \
     -l <db_name> \
     --quote-all-identifiers \
     --roles-only \
     --file=roles.sql
   ```

   If you only use the default `postgres` role, this step is not necessary.

2. **Remove roles with superuser access**

   Tiger Cloud services do not support roles with superuser access. Run the following script to remove statements, permissions, and clauses that require superuser access from `roles.sql`:

   ```shell
   sed -i -E \
   -e '/CREATE ROLE "postgres";/d' \
   -e '/ALTER ROLE "postgres"/d' \
   -e '/CREATE ROLE "tsdbadmin";/d' \
   -e '/ALTER ROLE "tsdbadmin"/d' \
   -e 's/(NO)*SUPERUSER//g' \
   -e 's/(NO)*REPLICATION//g' \
   -e 's/(NO)*BYPASSRLS//g' \
   -e 's/GRANTED BY "[^"]*"//g' \
   roles.sql
   ```

3. **Dump the source database schema and data**

   The `pg_dump` flags remove superuser access and tablespaces from your data. When you run `pg_dump`, monitor the run time: [a long-running `pg_dump` can cause issues](/docs/build/tips-and-tricks/troubleshoot-import-ingest/index.md).

   ```shell
   pg_dump -d "$SOURCE" \
   --format=plain \
   --quote-all-identifiers \
   --no-tablespaces \
   --no-owner \
   --no-privileges \
   --file=dump.sql
   ```

   To dramatically reduce the time taken to dump the source database, use multiple connections. For more information, see [dumping with concurrency](/docs/build/tips-and-tricks/troubleshoot-import-ingest/index.md) and [restoring with concurrency](/docs/build/tips-and-tricks/troubleshoot-import-ingest/index.md).
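To see what the clean-up script in step 2 does, here is a self-contained sketch run against a two-line sample instead of a real `roles.sql`. The role `app_rw` is made up for illustration:

```shell
# Create a sample roles file: one superuser-only line to drop, and one
# ordinary role carrying attributes that require superuser to set.
printf '%s\n' \
  'CREATE ROLE "postgres";' \
  'CREATE ROLE "app_rw" NOSUPERUSER NOREPLICATION NOBYPASSRLS LOGIN;' \
  > roles_sample.sql

# The same expressions as the clean-up script, applied to the sample.
# GNU sed is assumed, as in the prerequisites.
sed -i -E \
  -e '/CREATE ROLE "postgres";/d' \
  -e 's/(NO)*SUPERUSER//g' \
  -e 's/(NO)*REPLICATION//g' \
  -e 's/(NO)*BYPASSRLS//g' \
  roles_sample.sql

# The "postgres" line is gone and app_rw keeps only LOGIN.
cat roles_sample.sql
```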

## Upload your data to the target Tiger Cloud service

```shell
psql $TARGET -v ON_ERROR_STOP=1 --echo-errors \
-f roles.sql \
-f dump.sql
```

## Validate your Tiger Cloud service and restart your app

1. **Update the table statistics**

   ```shell
   psql $TARGET -c "ANALYZE;"
   ```

2. **Verify the data in the target Tiger Cloud service**

   Check that your data is correct, and returns the results that you expect.

3. **Enable any Tiger Cloud features you want to use**

   Migration from PostgreSQL moves the data only. Now manually enable Tiger Cloud features like [hypertables](/docs/learn/hypertables/understand-hypertables/index.md), [hypercore](/docs/learn/columnar-storage/understand-hypercore/index.md) or [data retention](/docs/learn/data-lifecycle/data-retention/about-data-retention/index.md) while your database is offline.

4. **Reconfigure your app to use the target database, then restart it**

And that is it, you have migrated your data from a self-hosted instance running PostgreSQL to a Tiger Cloud service.

To migrate your data from an Amazon RDS/Aurora PostgreSQL instance to a Tiger Cloud service, you extract the data to an intermediary EC2 Ubuntu instance in the same AWS region as your RDS/Aurora PostgreSQL instance. You then upload your data to a Tiger Cloud service. To make this process as painless as possible, ensure that the intermediary machine has enough CPU and disk space to rapidly extract and store your data before uploading to Tiger Cloud.

Migration from RDS/Aurora PostgreSQL moves the data only. You must manually enable Tiger Cloud features like [hypertables](/docs/learn/hypertables/understand-hypertables/index.md), [data compression](/docs/learn/columnar-storage/understand-hypercore/index.md), or [data retention](/docs/learn/data-lifecycle/data-retention/about-data-retention/index.md) after the migration is complete, while your database is still offline.

This section shows you how to move your data from a PostgreSQL database running in an Amazon RDS/Aurora PostgreSQL instance to a Tiger Cloud service using `pg_dump` and `psql` from Terminal.

## Create an intermediary EC2 Ubuntu instance

1. **Select the RDS/Aurora instance to migrate**

   In <https://console.aws.amazon.com/rds/home#databases:>, select the RDS/Aurora PostgreSQL instance to migrate.

2. **Click `Actions` > `Set up EC2 connection`**

   Press `Create EC2 instance` and use the following settings:

   - **AMI**: Ubuntu Server.
   - **Key pair**: use an existing pair or create a new one that you will use to access the intermediary machine.
   - **VPC**: by default, this is the same as the database instance.
   - **Configure Storage**: adjust the volume to at least the size of the RDS/Aurora PostgreSQL instance you are migrating from. You can reduce the space used by your data on Tiger Cloud using [Hypercore](/docs/learn/columnar-storage/understand-hypercore/index.md).

3. **Click `Launch instance`, then connect via SSH**

   AWS creates your EC2 instance. Click `Connect to instance` > `SSH client` and follow the instructions to create the connection to your intermediary EC2 instance.

## Install the psql client tools on the intermediary instance

1. **Connect to your intermediary EC2 instance. For example:**

   ```shell
   ssh -i "<key-pair>.pem" ubuntu@<EC2 instance's Public IPv4>
   ```

2. **On your intermediary EC2 instance, install the PostgreSQL client.**

   ```shell
   sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
   wget -qO- https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo tee /etc/apt/trusted.gpg.d/pgdg.asc &>/dev/null
   sudo apt update
   sudo apt install postgresql-client-16 -y # Match the client major version to your source DB; 16 here is for PostgreSQL 16.
   psql --version && pg_dump --version
   ```

   Keep this terminal open; you need it to connect to the RDS/Aurora PostgreSQL instance for migration.

## Set up secure connectivity between your RDS/Aurora PostgreSQL and EC2 instances

1. **Select the RDS/Aurora instance to migrate**

   In <https://console.aws.amazon.com/rds/home#databases:>, select the RDS/Aurora PostgreSQL instance to migrate.

2. **Open the inbound rules for the security group**

   Scroll down to `Security group rules (1)` and select the `EC2 Security Group - Inbound` group. The `Security Groups (1)` window opens. Click the `Security group ID`, then click `Edit inbound rules`.

   ![Creating a security group rule to enable RDS/Aurora to EC2 connection](/docs/_astro/rds-add-security-rule-to-ec2-instance.BXp5di3z.svg)

3. **On your intermediary EC2 instance, get your local IP address:**

   ```shell
   ec2metadata --local-ipv4
   ```

   You need this IP address to enable access to your RDS/Aurora PostgreSQL instance from your EC2 instance.

4. **Add inbound rule for your EC2 instance**

   In `Edit inbound rules`, click `Add rule`, then create a PostgreSQL `TCP` rule granting access to the local IP address for your EC2 instance. Then click `Save rules`.

   ![Adding an inbound security rule for the EC2 instance](/docs/_astro/rds-add-inbound-rule-for-ec2-instance.BElzIqvO_Z2Ld7z.webp)

## Test the connection between your RDS/Aurora PostgreSQL and EC2 instances

1. **Select the RDS/Aurora instance to migrate**

   In <https://console.aws.amazon.com/rds/home#databases:>, select the RDS/Aurora PostgreSQL instance to migrate.

2. **Create the source connection string**

   On your intermediary EC2 instance, use the values of `Endpoint`, `Port`, `Master username`, and `DB name` to create the PostgreSQL connectivity string for the `SOURCE` variable.

   ![Recording the RDS endpoint, port, and VPC details](/docs/_astro/migrate-source-rds-instance.BHStcVYP.svg)

   ```shell
   export SOURCE="postgres://<Master username>:<Master password>@<Endpoint>:<Port>/<DB name>"
   ```

   The value of `Master password` was supplied when this RDS/Aurora PostgreSQL instance was created.

3. **Test your connection:**

   ```shell
   psql -d $SOURCE
   ```

   You are connected to your RDS/Aurora PostgreSQL instance from your intermediary EC2 instance.

## Migrate your data to your Tiger Cloud service

To securely migrate data from your RDS instance:

## Prepare to migrate

1. **Take the applications that connect to the RDS instance offline**

   The duration of the migration is proportional to the amount of data stored in your database. By disconnecting your app from your database you avoid any possible data loss. Also ensure that your source RDS instance is not receiving any DML queries.

2. **Connect to your intermediary EC2 instance**

   For example:

   ```shell
   ssh -i "<key-pair>.pem" ubuntu@<EC2 instance's Public IPv4>
   ```

3. **Set your connection strings**

   These variables hold the connection information for the RDS instance and target Tiger Cloud service:

   ```shell
   export SOURCE="postgres://<Master username>:<Master password>@<Endpoint>:<Port>/<DB name>"
   export TARGET="postgres://tsdbadmin:<PASSWORD>@<HOST>:<PORT>/tsdb?sslmode=require"
   ```

   You find the connection information for `SOURCE` in your RDS configuration, and for `TARGET` in the configuration file you downloaded when you created the Tiger Cloud service.
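Before dumping, you can confirm that nothing is still writing to the source. A sketch, assuming `$SOURCE` is set as above:

```shell
# List non-idle sessions on the source database; apart from this
# session itself, the result should be empty before you start the dump.
psql $SOURCE -c "SELECT pid, state, query FROM pg_stat_activity WHERE datname = current_database() AND state <> 'idle';"
```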

## Align the extensions on the source and target

1. **Ensure that the Tiger Cloud service is running the PostgreSQL extensions used in your source database**

   1. Check the extensions on the source database:

      ```shell
      psql $SOURCE -c "SELECT * FROM pg_extension;"
      ```

   2. For each extension, enable it on your target Tiger Cloud service:

      ```shell
      psql $TARGET -c "CREATE EXTENSION IF NOT EXISTS <extension name> CASCADE;"
      ```

## Migrate roles from RDS to your Tiger Cloud service

Roles manage database access permissions. To migrate your role-based security hierarchy to your Tiger Cloud service:

1. **Dump the roles from your RDS instance**

   Export your role-based security hierarchy. If you only use the default `postgres` role, this step is not necessary.

   ```shell
   pg_dumpall -d "$SOURCE" \
     --quote-all-identifiers \
     --roles-only \
     --no-role-passwords \
     --file=roles.sql
   ```

   AWS RDS does not allow you to export passwords with roles. You assign passwords to these roles when you have uploaded them to your Tiger Cloud service.

2. **Remove roles with superuser access**

   Tiger Cloud services do not support roles with superuser access. Run the following script to remove statements, permissions and clauses that require superuser permissions from `roles.sql`:

   ```shell
   sed -i -E \
   -e '/CREATE ROLE "postgres";/d' \
   -e '/ALTER ROLE "postgres"/d' \
   -e '/CREATE ROLE "rds/d' \
   -e '/ALTER ROLE "rds/d' \
   -e '/TO "rds/d' \
   -e '/GRANT "rds/d' \
   -e 's/(NO)*SUPERUSER//g' \
   -e 's/(NO)*REPLICATION//g' \
   -e 's/(NO)*BYPASSRLS//g' \
   -e 's/GRANTED BY "[^"]*"//g' \
   roles.sql
   ```

3. **Upload the roles to your Tiger Cloud service**

   ```shell
   psql -X -d "$TARGET" \
     -v ON_ERROR_STOP=1 \
     --echo-errors \
     -f roles.sql
   ```

4. **Manually assign passwords to the roles**

   AWS RDS does not export passwords with roles. For each role, run the following command to manually assign a password:

   ```shell
   psql $TARGET -c "ALTER ROLE <role name> WITH PASSWORD '<highly secure password>';"
   ```

## Migrate data from your RDS instance to your Tiger Cloud service

1. **Dump the data from your RDS instance to your intermediary EC2 instance**

   The `pg_dump` flags remove superuser access and tablespaces from your data. When you run `pg_dump`, monitor the run time: [a long-running `pg_dump` can cause issues](/docs/migrate/troubleshooting/#dumping-and-locks/index.md).

   ```shell
   pg_dump -d "$SOURCE" \
   --format=plain \
   --quote-all-identifiers \
   --no-tablespaces \
   --no-owner \
   --no-privileges \
   --file=dump.sql
   ```

   To dramatically reduce the time taken to dump the RDS instance, use multiple connections. For more information, see [dumping with concurrency](/docs/migrate/troubleshooting/#dumping-with-concurrency/index.md) and [restoring with concurrency](/docs/migrate/troubleshooting/#restoring-with-concurrency/index.md).

2. **Upload your data to your Tiger Cloud service**

   ```shell
   psql -d $TARGET -v ON_ERROR_STOP=1 --echo-errors \
     -f dump.sql
   ```

## Validate your Tiger Cloud service and restart your app

1. **Update the table statistics**

   ```shell
   psql $TARGET -c "ANALYZE;"
   ```

2. **Verify the data in the target Tiger Cloud service**

   Check that your data is correct, and returns the results that you expect.

3. **Enable any Tiger Cloud features you want to use**

   Migration from PostgreSQL moves the data only. Now manually enable Tiger Cloud features like [hypertables](/docs/learn/hypertables/understand-hypertables/index.md), [hypercore](/docs/learn/columnar-storage/understand-hypercore/index.md) or [data retention](/docs/learn/data-lifecycle/data-retention/about-data-retention/index.md) while your database is offline.

4. **Reconfigure your app to use the target database, then restart it**

And that is it, you have migrated your data from an RDS/Aurora PostgreSQL instance to a Tiger Cloud service.

This section shows you how to move your data from a Managed Service for TimescaleDB instance to a Tiger Cloud service using `pg_dump` and `psql` from Terminal.

## Prepare to migrate

1. **Take the applications that connect to the source database offline**

   The duration of the migration is proportional to the amount of data stored in your database. By disconnecting your app from your database you avoid any possible data loss.

2. **Set your connection strings**

   These variables hold the connection information for the source database and target Tiger Cloud service:

   ```shell
   export SOURCE="postgres://<user>:<password>@<source host>:<source port>/<db_name>"
   export TARGET="postgres://tsdbadmin:<PASSWORD>@<HOST>:<PORT>/tsdb?sslmode=require"
   ```

   You find the connection information for your Tiger Cloud service in the configuration file you downloaded when you created the service.

## Align the version of TimescaleDB on the source and target

1. **Ensure that the source and target databases are running the same version of TimescaleDB**

   1. Check the version of TimescaleDB running on your Tiger Cloud service:

      ```shell
      psql $TARGET -c "SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';"
      ```

   2. Update the TimescaleDB extension in your source database to match the target service:

      If the TimescaleDB extension is the same version on the source database and target service, you do not need to do this.

      ```shell
      psql $SOURCE -c "ALTER EXTENSION timescaledb UPDATE TO '<version here>';"
      ```

      For more information and guidance, see [Upgrade TimescaleDB](/docs/deploy/self-hosted/upgrades/index.md).

2. **Ensure that the Tiger Cloud service is running the PostgreSQL extensions used in your source database**

   1. Check the extensions on the source database:

      ```shell
      psql $SOURCE -c "SELECT * FROM pg_extension;"
      ```

   2. For each extension, enable it on your target Tiger Cloud service:

      ```shell
      psql $TARGET -c "CREATE EXTENSION IF NOT EXISTS <extension name> CASCADE;"
      ```

## Migrate the roles from TimescaleDB to your Tiger Cloud service

Roles manage database access permissions. To migrate your role-based security hierarchy to your Tiger Cloud service:

1. **Dump the roles from your source database**

   Export your role-based security hierarchy. `<db_name>` is the same database name that you used in `$SOURCE`.

   ```shell
   pg_dumpall -d "$SOURCE" \
     -l <db_name>  \
     --quote-all-identifiers \
     --roles-only \
     --no-role-passwords \
     --file=roles.sql
   ```

   MST does not allow you to export passwords with roles. You assign passwords to these roles when you have uploaded them to your Tiger Cloud service.

2. **Remove roles with superuser access**

   Tiger Cloud services do not support roles with superuser access. Run the following script to remove statements, permissions and clauses that require superuser permissions from `roles.sql`:

   ```shell
   sed -i -E \
   -e '/DROP ROLE IF EXISTS "postgres";/d' \
   -e '/DROP ROLE IF EXISTS "tsdbadmin";/d' \
   -e '/CREATE ROLE "postgres";/d' \
   -e '/ALTER ROLE "postgres"/d' \
   -e '/CREATE ROLE "rds/d' \
   -e '/ALTER ROLE "rds/d' \
   -e '/TO "rds/d' \
   -e '/GRANT "rds/d' \
   -e '/GRANT "pg_read_all_stats" TO "tsdbadmin"/d' \
   -e 's/(NO)*SUPERUSER//g' \
   -e 's/(NO)*REPLICATION//g' \
   -e 's/(NO)*BYPASSRLS//g' \
   -e 's/GRANTED BY "[^"]*"//g' \
   -e '/CREATE ROLE "tsdbadmin";/d' \
   -e '/ALTER ROLE "tsdbadmin"/d' \
   -e 's/WITH ADMIN OPTION,/WITH /g' \
   -e 's/WITH ADMIN OPTION//g' \
   -e 's/GRANTED BY ".*"//g' \
   -e '/GRANT "pg_.*" TO/d' \
   -e '/CREATE ROLE "_aiven";/d' \
   -e '/ALTER ROLE "_aiven"/d' \
   -e '/GRANT SET ON PARAMETER "pgaudit\.[^"]+" TO "_tsdbadmin_auditing"/d' \
   -e '/GRANT SET ON PARAMETER "anon\.[^"]+" TO "tsdbadmin_group"/d' \
   roles.sql
   ```

3. **Dump the source database schema and data**

   The `pg_dump` flags remove superuser access and tablespaces from your data. When you run `pg_dump`, monitor the run time: [a long-running `pg_dump` can cause issues](/docs/build/tips-and-tricks/troubleshoot-import-ingest/index.md).

   ```shell
   pg_dump -d "$SOURCE" \
   --format=plain \
   --quote-all-identifiers \
   --no-tablespaces \
   --no-owner \
   --no-privileges \
   --file=dump.sql
   ```

   To dramatically reduce the time taken to dump the source database, use multiple connections. For more information, see [dumping with concurrency](/docs/build/tips-and-tricks/troubleshoot-import-ingest/index.md) and [restoring with concurrency](/docs/build/tips-and-tricks/troubleshoot-import-ingest/index.md).

## Upload your data to the target Tiger Cloud service

This command uses the `timescaledb_pre_restore` and `timescaledb_post_restore` functions to put your database in the correct state.

1. **Upload your data**

   ```shell
   psql $TARGET -v ON_ERROR_STOP=1 --echo-errors \
   -f roles.sql \
   -c "SELECT timescaledb_pre_restore();" \
   -f dump.sql \
   -c "SELECT timescaledb_post_restore();"
   ```

2. **Manually assign passwords to the roles**

   MST does not export passwords with roles. For each role, run the following command to manually assign a password:

   ```shell
   psql $TARGET -c "ALTER ROLE <role name> WITH PASSWORD '<highly secure password>';"
   ```

## Validate your Tiger Cloud service and restart your app

1. **Update the table statistics**

   ```shell
   psql $TARGET -c "ANALYZE;"
   ```

2. **Verify the data in the target Tiger Cloud service**

   Check that your data is correct, and returns the results that you expect.

3. **Enable any Tiger Cloud features you want to use**

   Now manually enable any additional Tiger Cloud features you want to use, such as [hypertables](/docs/learn/hypertables/understand-hypertables/index.md), [hypercore](/docs/learn/columnar-storage/understand-hypercore/index.md), or [data retention](/docs/learn/data-lifecycle/data-retention/about-data-retention/index.md), while your database is still offline.

4. **Reconfigure your app to use the target database, then restart it**

And that is it, you have migrated your data from a Managed Service for TimescaleDB instance to a Tiger Cloud service.
