Migrate with downtime
Migrate a hypertable or an entire database to Tiger Cloud with native PostgreSQL commands using pg_dump and pg_restore
You use downtime migration to move less than 100GB of data from a self-hosted database to a Tiger Cloud service.
Downtime migration uses the native PostgreSQL pg_dump and pg_restore commands.
If you are migrating from self-hosted TimescaleDB, this method works for hypertables compressed into the columnstore without having
to convert the data back to the rowstore before you begin.
If you want to migrate more than 400GB of data, create a Tiger Console support request, or send us an email at support@tigerdata.com saying how much data you want to migrate. We pre-provision your Tiger Cloud service for you.
However, downtime migration of large amounts of data also takes a long time. For more than 100GB of data, best practice is to follow live migration.
This page shows you how to move your data from a self-hosted database to a Tiger Cloud service using shell commands.
Prerequisites
Best practice is to use an Ubuntu EC2 instance hosted in the same region as your Tiger Cloud service as your migration machine: that is, the machine you run the commands on to move your data from your source database to your target Tiger Cloud service.
Before you move your data:
- Create a target Tiger Cloud service.

  Each Tiger Cloud service has a single PostgreSQL instance that supports the most popular extensions. Tiger Cloud services do not support tablespaces, and there is no superuser associated with a service. Best practice is to create a Tiger Cloud service with at least 8 CPUs for a smoother experience. A higher-spec instance can significantly reduce the overall migration window.

- To ensure that maintenance does not run while migration is in progress, best practice is to adjust the maintenance window.

- Install the PostgreSQL client tools on your migration machine.

  This includes `psql`, `pg_dump`, and `pg_dumpall`.

- Install the GNU implementation of `sed`.

  Run `sed --version` on your migration machine. GNU sed identifies itself as GNU software; BSD sed returns `sed: illegal option -- -`.
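To check programmatically, you can rely on the fact that only GNU sed understands `--version`. A minimal sketch:

```shell
# Probe for GNU sed, which the role-cleanup scripts on this page require.
# BSD sed (the macOS default) rejects --version, so the probe fails there.
if sed --version 2>/dev/null | grep -q "GNU sed"; then
  echo "GNU sed detected"
else
  echo "GNU sed not detected"
fi
```

On macOS, Homebrew packages GNU sed as `gnu-sed` (invoked as `gsed`); adapt the scripts on this page accordingly if you go that route.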
Migrate to Tiger Cloud
To move your data from a self-hosted database to a Tiger Cloud service:
This section shows you how to move your data from self-hosted TimescaleDB to a Tiger Cloud service
using pg_dump and psql from Terminal.
Prepare to migrate
- Take the applications that connect to the source database offline
The duration of the migration is proportional to the amount of data stored in your database. By disconnecting your app from your database you avoid any possible data loss.
- Set your connection strings
These variables hold the connection information for the source database and target Tiger Cloud service:
```shell
export SOURCE="postgres://<user>:<password>@<source host>:<source port>/<db_name>"
export TARGET="postgres://tsdbadmin:<PASSWORD>@<HOST>:<PORT>/tsdb?sslmode=require"
```

You find the connection information for your Tiger Cloud service in the configuration file you downloaded when you created the service.
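Before kicking off a long dump, it can save time to smoke-test both connection strings. A minimal sketch, assuming `$SOURCE` and `$TARGET` are exported as above:

```shell
# Run a trivial query against each database and report which side,
# if either, is unreachable.
check_connection() {
  if psql "$1" -c "SELECT 1;" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "unreachable: $1"
  fi
}

check_connection "$SOURCE"
check_connection "$TARGET"
```

If either line reports `unreachable`, fix connectivity before moving on; a failure halfway through a multi-hour dump is much more expensive.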
Align the version of TimescaleDB on the source and target
- Ensure that the source and target databases are running the same version of TimescaleDB
Check the version of TimescaleDB running on your Tiger Cloud service:
```shell
psql $TARGET -c "SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';"
```

Update the TimescaleDB extension in your source database to match the target service:
If the TimescaleDB extension is the same version on the source database and target service, you do not need to do this.
```shell
psql $SOURCE -c "ALTER EXTENSION timescaledb UPDATE TO '<version here>';"
```

For more information and guidance, see Upgrade TimescaleDB.
- Ensure that the Tiger Cloud service is running the PostgreSQL extensions used in your source database
Check the extensions on the source database:
```shell
psql $SOURCE -c "SELECT * FROM pg_extension;"
```

For each extension, enable it on your target Tiger Cloud service:
```shell
psql $TARGET -c "CREATE EXTENSION IF NOT EXISTS <extension name> CASCADE;"
```
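The two commands above can be combined into a loop that copies the extension list across in one pass. This is a sketch, and it assumes every extension installed on the source is also available on Tiger Cloud:

```shell
# List the extension names on the source (-t -A strips headers and
# alignment), then create each one on the target. Built-in extensions
# such as plpgsql already exist there, so IF NOT EXISTS makes the
# statement a no-op for them.
psql "$SOURCE" -t -A -c "SELECT extname FROM pg_extension;" |
while read -r ext; do
  psql "$TARGET" -c "CREATE EXTENSION IF NOT EXISTS \"$ext\" CASCADE;"
done
```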
Migrate the roles from TimescaleDB to your Tiger Cloud service
Roles manage database access permissions. To migrate your role-based security hierarchy to your Tiger Cloud service:
- Dump the roles from your source database
Export your role-based security hierarchy.
`<db_name>` has the same value as `<db_name>` in `$SOURCE`. I know, it confuses me as well. If you only use the default `postgres` role, this step is not necessary.

```shell
pg_dumpall -d "$SOURCE" \
  -l <db_name> \
  --quote-all-identifiers \
  --roles-only \
  --file=roles.sql
```

- Remove roles with superuser access
Tiger Cloud services do not support roles with superuser access. Run the following script to remove statements, permissions, and clauses that require superuser permissions from `roles.sql`:

```shell
sed -i -E \
  -e '/CREATE ROLE "postgres";/d' \
  -e '/ALTER ROLE "postgres"/d' \
  -e '/CREATE ROLE "tsdbadmin";/d' \
  -e '/ALTER ROLE "tsdbadmin"/d' \
  -e 's/(NO)*SUPERUSER//g' \
  -e 's/(NO)*REPLICATION//g' \
  -e 's/(NO)*BYPASSRLS//g' \
  -e 's/GRANTED BY "[^"]*"//g' \
  roles.sql
```

- Dump the source database schema and data
The
`pg_dump` flags remove superuser access and tablespaces from your data. When you run `pg_dump`, check the run time: a long-running `pg_dump` can cause issues.

```shell
pg_dump -d "$SOURCE" \
  --format=plain \
  --quote-all-identifiers \
  --no-tablespaces \
  --no-owner \
  --no-privileges \
  --file=dump.sql
```

To dramatically reduce the time taken to dump the source database, use multiple connections. For more information, see dumping with concurrency and restoring with concurrency.
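As a concrete example of dumping with concurrency: `pg_dump` can write a directory-format dump over multiple connections. The sketch below wraps it in a helper so the job count is easy to tune; the default of 8 jobs is an assumption, match it to the CPUs on your migration machine.

```shell
# Dump in directory format using parallel jobs.
# Usage: parallel_dump "$SOURCE" dump_dir 8
parallel_dump() {
  pg_dump -d "$1" \
    --format=directory \
    --jobs="${3:-8}" \
    --quote-all-identifiers \
    --no-tablespaces \
    --no-owner \
    --no-privileges \
    --file="$2"
}
```

A directory-format dump is restored with `pg_restore` rather than `psql`, for example `pg_restore -d "$TARGET" --jobs=8 --no-owner --no-privileges dump_dir`; if your source runs TimescaleDB, still call `timescaledb_pre_restore()` before and `timescaledb_post_restore()` after the restore.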
Upload your data to the target Tiger Cloud service
This command uses the `timescaledb_pre_restore()` and `timescaledb_post_restore()` functions to put your database in the correct state.
```shell
psql $TARGET -v ON_ERROR_STOP=1 --echo-errors \
  -f roles.sql \
  -c "SELECT timescaledb_pre_restore();" \
  -f dump.sql \
  -c "SELECT timescaledb_post_restore();"
```

Validate your Tiger Cloud service and restart your app
- Update the table statistics
```shell
psql $TARGET -c "ANALYZE;"
```

- Verify the data in the target Tiger Cloud service
Check that your data is correct, and returns the results that you expect.
- Enable any Tiger Cloud features you want to use
Migration from PostgreSQL moves the data only. Now manually enable Tiger Cloud features like hypertables, hypercore or data retention while your database is offline.
- Reconfigure your app to use the target database, then restart it
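One way to make the data check above concrete is to compare row counts on both sides. A minimal helper, assuming `$SOURCE` and `$TARGET` are still exported and the table exists in both databases:

```shell
# Compare row counts for one table on the source and the target.
# $1 is a fully qualified table name, for example public.conditions.
compare_counts() {
  src=$(psql "$SOURCE" -t -A -c "SELECT count(*) FROM $1;")
  tgt=$(psql "$TARGET" -t -A -c "SELECT count(*) FROM $1;")
  if [ "$src" = "$tgt" ]; then
    echo "$1: OK ($src rows)"
  else
    echo "$1: MISMATCH source=$src target=$tgt"
  fi
}

# Repeat for each table that matters, for example:
# compare_counts public.conditions
```

`public.conditions` is a hypothetical table name; a matching count is a sanity check, not a full data diff.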
And that is it, you have migrated your data from a self-hosted instance running TimescaleDB to a Tiger Cloud service.
This section shows you how to move your data from self-hosted PostgreSQL to a Tiger Cloud service
using pg_dump and psql from Terminal.
Migration from PostgreSQL moves the data only. You must manually enable Tiger Cloud features like hypertables, hypercore or data retention after the migration is complete. You enable Tiger Cloud features while your database is offline.
Prepare to migrate
- Take the applications that connect to the source database offline
The duration of the migration is proportional to the amount of data stored in your database. By disconnecting your app from your database you avoid any possible data loss.
- Set your connection strings
These variables hold the connection information for the source database and target Tiger Cloud service:
```shell
export SOURCE="postgres://<user>:<password>@<source host>:<source port>/<db_name>"
export TARGET="postgres://tsdbadmin:<PASSWORD>@<HOST>:<PORT>/tsdb?sslmode=require"
```

You find the connection information for your Tiger Cloud service in the configuration file you downloaded when you created the service.
Align the extensions on the source and target
- Ensure that the Tiger Cloud service is running the PostgreSQL extensions used in your source database
Check the extensions on the source database:
```shell
psql $SOURCE -c "SELECT * FROM pg_extension;"
```

For each extension, enable it on your target Tiger Cloud service:
```shell
psql $TARGET -c "CREATE EXTENSION IF NOT EXISTS <extension name> CASCADE;"
```
Migrate the roles from PostgreSQL to your Tiger Cloud service
Roles manage database access permissions. To migrate your role-based security hierarchy to your Tiger Cloud service:
- Dump the roles from your source database
Export your role-based security hierarchy.
`<db_name>` has the same value as `<db_name>` in `$SOURCE`. I know, it confuses me as well. If you only use the default `postgres` role, this step is not necessary.

```shell
pg_dumpall -d "$SOURCE" \
  -l <db_name> \
  --quote-all-identifiers \
  --roles-only \
  --file=roles.sql
```

- Remove roles with superuser access
Tiger Cloud services do not support roles with superuser access. Run the following script to remove statements, permissions, and clauses that require superuser permissions from `roles.sql`:

```shell
sed -i -E \
  -e '/CREATE ROLE "postgres";/d' \
  -e '/ALTER ROLE "postgres"/d' \
  -e '/CREATE ROLE "tsdbadmin";/d' \
  -e '/ALTER ROLE "tsdbadmin"/d' \
  -e 's/(NO)*SUPERUSER//g' \
  -e 's/(NO)*REPLICATION//g' \
  -e 's/(NO)*BYPASSRLS//g' \
  -e 's/GRANTED BY "[^"]*"//g' \
  roles.sql
```

- Dump the source database schema and data
The
`pg_dump` flags remove superuser access and tablespaces from your data. When you run `pg_dump`, check the run time: a long-running `pg_dump` can cause issues.

```shell
pg_dump -d "$SOURCE" \
  --format=plain \
  --quote-all-identifiers \
  --no-tablespaces \
  --no-owner \
  --no-privileges \
  --file=dump.sql
```

To dramatically reduce the time taken to dump the source database, use multiple connections. For more information, see dumping with concurrency and restoring with concurrency.
Upload your data to the target Tiger Cloud service
```shell
psql $TARGET -v ON_ERROR_STOP=1 --echo-errors \
  -f roles.sql \
  -f dump.sql
```

Validate your Tiger Cloud service and restart your app
- Update the table statistics
```shell
psql $TARGET -c "ANALYZE;"
```

- Verify the data in the target Tiger Cloud service
Check that your data is correct, and returns the results that you expect.
- Enable any Tiger Cloud features you want to use
Migration from PostgreSQL moves the data only. Now manually enable Tiger Cloud features like hypertables, hypercore or data retention while your database is offline.
- Reconfigure your app to use the target database, then restart it
And that is it, you have migrated your data from a self-hosted instance running PostgreSQL to a Tiger Cloud service.
To migrate your data from an Amazon RDS/Aurora PostgreSQL instance to a Tiger Cloud service, you extract the data to an intermediary EC2 Ubuntu instance in the same AWS region as your RDS/Aurora PostgreSQL instance. You then upload your data to a Tiger Cloud service. To make this process as painless as possible, ensure that the intermediary machine has enough CPU and disk space to rapidly extract and store your data before uploading to Tiger Cloud.
Migration from RDS/Aurora PostgreSQL moves the data only. You must manually enable Tiger Cloud features like hypertables, data compression or data retention after the migration is complete. You enable Tiger Cloud features while your database is offline.
This section shows you how to move your data from a PostgreSQL database running in an Amazon RDS/Aurora PostgreSQL instance to a
Tiger Cloud service using pg_dump and psql from Terminal.
Create an intermediary EC2 Ubuntu instance
- Select the RDS/Aurora instance to migrate
In https://console.aws.amazon.com/rds/home#databases, select the RDS/Aurora PostgreSQL instance to migrate.
- Click Actions > Set up EC2 connection

  Press Create EC2 instance and use the following settings:

- AMI: Ubuntu Server.
- Key pair: use an existing pair or create a new one that you will use to access the intermediary machine.
- VPC: by default, this is the same as the database instance.
- Configure Storage: adjust the volume to at least the size of the RDS/Aurora PostgreSQL instance you are migrating from. You can reduce the space used by your data on Tiger Cloud using Hypercore.
- Click Launch instance, then connect via SSH

  AWS creates your EC2 instance. Click Connect to instance > SSH client and follow the instructions to create the connection to your intermediary EC2 instance.
Install the psql client tools on the intermediary instance
- Connect to your intermediary EC2 instance. For example:
```shell
ssh -i "<key-pair>.pem" ubuntu@<EC2 instance's Public IPv4>
```

- On your intermediary EC2 instance, install the PostgreSQL client.
```shell
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget -qO- https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo tee /etc/apt/trusted.gpg.d/pgdg.asc &>/dev/null
sudo apt update
sudo apt install postgresql-client-16 -y # postgresql-client-16 matches a PostgreSQL 16 source; adjust to your source DB version.
psql --version && pg_dump --version
```

Keep this terminal open, you need it to connect to the RDS/Aurora PostgreSQL instance for migration.
Set up secure connectivity between your RDS/Aurora PostgreSQL and EC2 instances
- Select the RDS/Aurora instance to migrate
In https://console.aws.amazon.com/rds/home#databases, select the RDS/Aurora PostgreSQL instance to migrate.
- Open the inbound rules for the security group
Scroll down to Security group rules (1) and select the EC2 Security Group - Inbound group. The Security Groups (1) window opens. Click the Security group ID, then click Edit inbound rules.

- On your intermediary EC2 instance, get your local IP address:
```shell
ec2metadata --local-ipv4
```

Bear with me on this one, you need this IP address to enable access to your RDS/Aurora PostgreSQL instance.
- Add inbound rule for your EC2 instance
In Edit inbound rules, click Add rule, then create a PostgreSQL TCP rule granting access to the local IP address for your EC2 instance. Then click Save rules.
Test the connection between your RDS/Aurora PostgreSQL and EC2 instances
- Select the RDS/Aurora instance to migrate
In https://console.aws.amazon.com/rds/home#databases, select the RDS/Aurora PostgreSQL instance to migrate.
- Create the source connection string
On your intermediary EC2 instance, use the values of Endpoint, Port, Master username, and DB name to create the PostgreSQL connection string for the `SOURCE` variable.

```shell
export SOURCE="postgres://<Master username>:<Master password>@<Endpoint>:<Port>/<DB name>"
```

The value of Master password was supplied when this RDS/Aurora PostgreSQL instance was created.

- Test your connection:

```shell
psql -d $SOURCE
```

You are connected to your RDS/Aurora PostgreSQL instance from your intermediary EC2 instance.
Migrate your data to your Tiger Cloud service
To securely migrate data from your RDS instance:
Prepare to migrate
- Take the applications that connect to the RDS instance offline
The duration of the migration is proportional to the amount of data stored in your database.
By disconnecting your app from your database you avoid any possible data loss. You should also ensure that your source RDS instance is not receiving any DML queries.

- Connect to your intermediary EC2 instance
For example:
```shell
ssh -i "<key-pair>.pem" ubuntu@<EC2 instance's Public IPv4>
```

- Set your connection strings
These variables hold the connection information for the RDS instance and target Tiger Cloud service:
```shell
export SOURCE="postgres://<Master username>:<Master password>@<Endpoint>:<Port>/<DB name>"
export TARGET="postgres://tsdbadmin:<PASSWORD>@<HOST>:<PORT>/tsdb?sslmode=require"
```

You find the connection information for `SOURCE` in your RDS configuration, and for `TARGET` in the configuration file you downloaded when you created the Tiger Cloud service.
Align the extensions on the source and target
- Ensure that the Tiger Cloud service is running the PostgreSQL extensions used in your source database
Check the extensions on the source database:
```shell
psql $SOURCE -c "SELECT * FROM pg_extension;"
```

For each extension, enable it on your target Tiger Cloud service:
```shell
psql $TARGET -c "CREATE EXTENSION IF NOT EXISTS <extension name> CASCADE;"
```
Migrate roles from RDS to your Tiger Cloud service
Roles manage database access permissions. To migrate your role-based security hierarchy to your Tiger Cloud service:
- Dump the roles from your RDS instance
Export your role-based security hierarchy. If you only use the default
`postgres` role, this step is not necessary.

```shell
pg_dumpall -d "$SOURCE" \
  --quote-all-identifiers \
  --roles-only \
  --no-role-passwords \
  --file=roles.sql
```

AWS RDS does not allow you to export passwords with roles. You assign passwords to these roles when you have uploaded them to your Tiger Cloud service.
- Remove roles with superuser access
Tiger Cloud services do not support roles with superuser access. Run the following script to remove statements, permissions and clauses that require superuser permissions from
`roles.sql`:

```shell
sed -i -E \
  -e '/CREATE ROLE "postgres";/d' \
  -e '/ALTER ROLE "postgres"/d' \
  -e '/CREATE ROLE "rds/d' \
  -e '/ALTER ROLE "rds/d' \
  -e '/TO "rds/d' \
  -e '/GRANT "rds/d' \
  -e 's/(NO)*SUPERUSER//g' \
  -e 's/(NO)*REPLICATION//g' \
  -e 's/(NO)*BYPASSRLS//g' \
  -e 's/GRANTED BY "[^"]*"//g' \
  roles.sql
```

- Upload the roles to your Tiger Cloud service
```shell
psql -X -d "$TARGET" \
  -v ON_ERROR_STOP=1 \
  --echo-errors \
  -f roles.sql
```

- Manually assign passwords to the roles
AWS RDS does not export passwords with roles. For each role, use the following command to manually assign a password:
```shell
psql $TARGET -c "ALTER ROLE <role name> WITH PASSWORD '<highly secure password>';"
```
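To see which roles still need passwords, you can parse them out of `roles.sql` and generate an `ALTER ROLE` template for each. A sketch, assuming `roles.sql` from the earlier step is in the current directory:

```shell
# Extract the role names from roles.sql and print an ALTER ROLE template
# for each; review the output, fill in real passwords, then run it
# against $TARGET with psql.
grep -oE '^CREATE ROLE "[^"]+"' roles.sql \
  | sed -E 's/CREATE ROLE "([^"]+)"/\1/' \
  | while read -r role; do
      echo "ALTER ROLE \"$role\" WITH PASSWORD '<highly secure password>';"
    done
```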
Migrate data from your RDS instance to your Tiger Cloud service
- Dump the data from your RDS instance to your intermediary EC2 instance
The
`pg_dump` flags remove superuser access and tablespaces from your data. When you run `pg_dump`, check the run time: a long-running `pg_dump` can cause issues.

```shell
pg_dump -d "$SOURCE" \
  --format=plain \
  --quote-all-identifiers \
  --no-tablespaces \
  --no-owner \
  --no-privileges \
  --file=dump.sql
```

To dramatically reduce the time taken to dump the RDS instance, use multiple connections. For more information, see dumping with concurrency and restoring with concurrency.
- Upload your data to your Tiger Cloud service
```shell
psql -d $TARGET -v ON_ERROR_STOP=1 --echo-errors \
  -f dump.sql
```
Validate your Tiger Cloud service and restart your app
- Update the table statistics
```shell
psql $TARGET -c "ANALYZE;"
```

- Verify the data in the target Tiger Cloud service
Check that your data is correct, and returns the results that you expect.
- Enable any Tiger Cloud features you want to use
Migration from PostgreSQL moves the data only. Now manually enable Tiger Cloud features like hypertables, hypercore or data retention while your database is offline.
- Reconfigure your app to use the target database, then restart it
And that is it, you have migrated your data from an RDS/Aurora PostgreSQL instance to a Tiger Cloud service.
This section shows you how to move your data from a Managed Service for TimescaleDB instance to a
Tiger Cloud service using pg_dump and psql from Terminal.
Prepare to migrate
- Take the applications that connect to the source database offline
The duration of the migration is proportional to the amount of data stored in your database. By disconnecting your app from your database you avoid any possible data loss.
- Set your connection strings
These variables hold the connection information for the source database and target Tiger Cloud service:
```shell
export SOURCE="postgres://<user>:<password>@<source host>:<source port>/<db_name>"
export TARGET="postgres://tsdbadmin:<PASSWORD>@<HOST>:<PORT>/tsdb?sslmode=require"
```

You find the connection information for your Tiger Cloud service in the configuration file you downloaded when you created the service.
Align the version of TimescaleDB on the source and target
- Ensure that the source and target databases are running the same version of TimescaleDB
Check the version of TimescaleDB running on your Tiger Cloud service:
```shell
psql $TARGET -c "SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';"
```

Update the TimescaleDB extension in your source database to match the target service:
If the TimescaleDB extension is the same version on the source database and target service, you do not need to do this.
```shell
psql $SOURCE -c "ALTER EXTENSION timescaledb UPDATE TO '<version here>';"
```

For more information and guidance, see Upgrade TimescaleDB.
- Ensure that the Tiger Cloud service is running the PostgreSQL extensions used in your source database
Check the extensions on the source database:
```shell
psql $SOURCE -c "SELECT * FROM pg_extension;"
```

For each extension, enable it on your target Tiger Cloud service:
```shell
psql $TARGET -c "CREATE EXTENSION IF NOT EXISTS <extension name> CASCADE;"
```
Migrate the roles from TimescaleDB to your Tiger Cloud service
Roles manage database access permissions. To migrate your role-based security hierarchy to your Tiger Cloud service:
- Dump the roles from your source database
Export your role-based security hierarchy.
`<db_name>` has the same value as `<db_name>` in `$SOURCE`. I know, it confuses me as well.

```shell
pg_dumpall -d "$SOURCE" \
  -l <db_name> \
  --quote-all-identifiers \
  --roles-only \
  --no-role-passwords \
  --file=roles.sql
```

MST does not allow you to export passwords with roles. You assign passwords to these roles when you have uploaded them to your Tiger Cloud service.
- Remove roles with superuser access
Tiger Cloud services do not support roles with superuser access. Run the following script to remove statements, permissions and clauses that require superuser permissions from
`roles.sql`:

```shell
sed -i -E \
  -e '/DROP ROLE IF EXISTS "postgres";/d' \
  -e '/DROP ROLE IF EXISTS "tsdbadmin";/d' \
  -e '/CREATE ROLE "postgres";/d' \
  -e '/ALTER ROLE "postgres"/d' \
  -e '/CREATE ROLE "rds/d' \
  -e '/ALTER ROLE "rds/d' \
  -e '/TO "rds/d' \
  -e '/GRANT "rds/d' \
  -e '/GRANT "pg_read_all_stats" TO "tsdbadmin"/d' \
  -e 's/(NO)*SUPERUSER//g' \
  -e 's/(NO)*REPLICATION//g' \
  -e 's/(NO)*BYPASSRLS//g' \
  -e 's/GRANTED BY "[^"]*"//g' \
  -e '/CREATE ROLE "tsdbadmin";/d' \
  -e '/ALTER ROLE "tsdbadmin"/d' \
  -e 's/WITH ADMIN OPTION,/WITH /g' \
  -e 's/WITH ADMIN OPTION//g' \
  -e 's/GRANTED BY ".*"//g' \
  -e '/GRANT "pg_.*" TO/d' \
  -e '/CREATE ROLE "_aiven";/d' \
  -e '/ALTER ROLE "_aiven"/d' \
  -e '/GRANT SET ON PARAMETER "pgaudit\.[^"]+" TO "_tsdbadmin_auditing"/d' \
  -e '/GRANT SET ON PARAMETER "anon\.[^"]+" TO "tsdbadmin_group"/d' \
  roles.sql
```

- Dump the source database schema and data
The
`pg_dump` flags remove superuser access and tablespaces from your data. When you run `pg_dump`, check the run time: a long-running `pg_dump` can cause issues.

```shell
pg_dump -d "$SOURCE" \
  --format=plain \
  --quote-all-identifiers \
  --no-tablespaces \
  --no-owner \
  --no-privileges \
  --file=dump.sql
```

To dramatically reduce the time taken to dump the source database, use multiple connections. For more information, see dumping with concurrency and restoring with concurrency.
Upload your data to the target Tiger Cloud service
This command uses the `timescaledb_pre_restore()` and `timescaledb_post_restore()` functions to put your database in the correct state.
- Upload your data
```shell
psql $TARGET -v ON_ERROR_STOP=1 --echo-errors \
  -f roles.sql \
  -c "SELECT timescaledb_pre_restore();" \
  -f dump.sql \
  -c "SELECT timescaledb_post_restore();"
```

- Manually assign passwords to the roles
MST does not export passwords with roles. For each role, use the following command to manually assign a password:
```shell
psql $TARGET -c "ALTER ROLE <role name> WITH PASSWORD '<highly secure password>';"
```
Validate your Tiger Cloud service and restart your app
- Update the table statistics
```shell
psql $TARGET -c "ANALYZE;"
```

- Verify the data in the target Tiger Cloud service
Check that your data is correct, and returns the results that you expect.
- Enable any Tiger Cloud features you want to use
Migration from PostgreSQL moves the data only. Now manually enable Tiger Cloud features like hypertables, hypercore or data retention while your database is offline.
- Reconfigure your app to use the target database, then restart it
And that is it, you have migrated your data from a Managed Service for TimescaleDB instance to a Tiger Cloud service.