---
title: Upload a file using the terminal | Tiger Data Docs
description: Upload CSV, MySQL, and Parquet files from your machine into your Tiger Cloud service using the terminal
---

This guide shows you how to upload **CSV**, **MySQL**, and **Parquet** files from a source machine into your Tiger Cloud service using the terminal. Use the section below that matches your file type. Terminal-based imports are ideal for large files, scripting, and automation, and they avoid the size limits of the [Console upload](/docs/migrate/import-console/index.md).

## Before you start

### Prerequisites

To follow the steps on this page:

- Create a [Tiger Cloud service](/docs/get-started/quickstart/create-service/index.md) and find your [connection details](/docs/integrate/find-connection-details/index.md).

- Prepare data files on a machine that can reach your service (your laptop, a jump host, or a server):

  - **For CSV:** `psql` (or another PostgreSQL client) installed.
  - **For MySQL:** Access to the MySQL source (for example, `mysqldump` or the `mysql` client) and `psql` for the target.
  - **For Parquet:** A way to convert Parquet to CSV if you use `COPY` (for example, Python or `parquet-tools`), or a loader that supports Parquet, plus `psql` or [timescaledb-parallel-copy](https://github.com/timescale/timescaledb-parallel-copy) for bulk loading.

### Supported formats and limits

- **CSV:** Use PostgreSQL `COPY` or `\copy` for high-throughput bulk loading. UTF-8 encoding is recommended; set the delimiter and header options to match your file.
- **MySQL:** Export from MySQL (for example, with `mysqldump` or a CSV export), then import into your service with `psql`, or with `pg_restore` for PostgreSQL-format dumps. For ongoing sync, see [Sync from PostgreSQL](/docs/migrate/livesync-for-postgresql/index.md) or the migration guides.
- **Parquet:** Convert to CSV and then use `COPY` or timescaledb-parallel-copy, or use a Parquet-capable ETL tool that can write to PostgreSQL.

### Tips

- Use the same PostgreSQL major version as your target when running `pg_dump`/`pg_restore` for dumps.
- For large CSV or Parquet loads, [timescaledb-parallel-copy](https://github.com/timescale/timescaledb-parallel-copy) can speed up imports; a sketch follows.
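
For example, a minimal parallel load might look like this. The table name `my_data`, the file `data.csv`, and the connection values are placeholders; flag behavior can vary between versions, so run `timescaledb-parallel-copy --help` to confirm.

```bash
timescaledb-parallel-copy \
  --connection "host=HOST port=PORT user=USER password=PASSWORD sslmode=require" \
  --db-name DATABASE \
  --table my_data \
  --file data.csv \
  --workers 4 \
  --skip-header
```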

## How to upload a file

Choose the section that matches your source file type and follow the steps.

### From CSV

To load a CSV file into your service from the terminal:

1. **Get your connection string**

   Use your [Tiger Cloud connection details](/docs/integrate/find-connection-details/index.md) and set a connection string, for example: `postgres://USER:PASSWORD@HOST:PORT/DATABASE?sslmode=require`
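
   For example, keeping the connection string in a shell variable (the name `TARGET` is just a convention used here):

   ```bash
   export TARGET="postgres://USER:PASSWORD@HOST:PORT/DATABASE?sslmode=require"
   # Quick connectivity check before loading data
   psql "$TARGET" -c "SELECT version();"
   ```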

2. **Create the target table (if needed)**

   Define a table that matches your CSV columns and types. Example:

   ```sql
   CREATE TABLE my_data (
     time   TIMESTAMPTZ,
     symbol TEXT,
     price  DOUBLE PRECISION,
     volume BIGINT
   );
   ```

3. **Load the CSV with COPY or \copy**

   - **From your local machine:** Use `\copy` in `psql` (client-side; file must be on your machine):

   ```bash
   psql "postgres://USER:PASSWORD@HOST:PORT/DATABASE?sslmode=require" -c "\copy my_data FROM 'path/to/file.csv' WITH (FORMAT csv, HEADER true, DELIMITER ',');"
   ```

   - **From a server that can reach the database:** Use `COPY` in SQL (server-side; the file path is resolved on the database server), as sketched below, or run `\copy` from a client that has the file. For very large files, consider [timescaledb-parallel-copy](https://github.com/timescale/timescaledb-parallel-copy) for parallel loading.
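
   On a managed Tiger Cloud service you typically cannot place files on the database server, so `\copy` is the usual choice. For completeness, a server-side `COPY` looks like this (the file path is hypothetical and must exist on the server):

   ```sql
   -- Server-side COPY: the file path is resolved on the database server
   COPY my_data FROM '/var/lib/data/file.csv' WITH (FORMAT csv, HEADER true);
   ```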

4. **Verify the import**

   Query the table in `psql` or the [SQL editor](https://console.cloud.tigerdata.com) to confirm row counts and sample data.
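
   For example, with the `my_data` table from step 2:

   ```sql
   -- Confirm the row count and inspect a few recent rows
   SELECT count(*) FROM my_data;
   SELECT * FROM my_data ORDER BY time DESC LIMIT 5;
   ```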

### From MySQL

To get data from MySQL into your Tiger Cloud service using the terminal:

1. **Export data from MySQL**

   - **Option A: CSV:** Export tables to CSV (for example, with `SELECT ... INTO OUTFILE` or a script that writes CSV), then load with `\copy` or `COPY` as in the **From CSV** section; see the sketch after this list.
   - **Option B: SQL dump:** Use `mysqldump` to create a SQL dump. Note that MySQL dump syntax differs from PostgreSQL, so you may need to convert it, or load the schema and data in separate steps (for example, schema first, then data via CSV).
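
   As a sketch, exporting a hypothetical table `mydb.my_data` to CSV from within MySQL (requires the `FILE` privilege, and the output path is restricted by the server's `secure_file_priv` setting):

   ```sql
   -- Runs on the MySQL server; the file is written server-side
   SELECT *
   INTO OUTFILE '/var/lib/mysql-files/my_data.csv'
   FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
   LINES TERMINATED BY '\n'
   FROM mydb.my_data;
   ```

   `INTO OUTFILE` does not emit a header row, so load the result with `HEADER false`.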

2. **Get your Tiger Cloud connection string**

   Use your [Tiger Cloud connection details](/docs/integrate/find-connection-details/index.md): `postgres://USER:PASSWORD@HOST:PORT/DATABASE?sslmode=require`

3. **Create the target schema and tables**

   Create tables in your Tiger Cloud service that match the data you exported (same column names and compatible types). Adjust types as needed (for example, MySQL `DATETIME` → `TIMESTAMPTZ`).
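
   A minimal sketch for a hypothetical MySQL table with `DATETIME`, `VARCHAR`, and `INT` columns:

   ```sql
   -- PostgreSQL target: MySQL DATETIME -> TIMESTAMPTZ,
   -- VARCHAR(n) -> TEXT, INT -> INTEGER
   CREATE TABLE my_data (
     created_at TIMESTAMPTZ,
     name       TEXT,
     quantity   INTEGER
   );
   ```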

4. **Import the data**

   - If you exported to **CSV**, use `psql` with `\copy` or `COPY` as in the **From CSV** section.
   - If you have a **MySQL dump**, use a conversion step or tool (for example, [pgloader](https://pgloader.io/) for MySQL → PostgreSQL) to load it into your service; a sketch follows this list. For schema-only or custom dumps, restore the schema first, then load the data (for example, via CSV) to avoid syntax differences.
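
   As a sketch, a direct MySQL-to-PostgreSQL load with pgloader; both connection URLs are placeholders, and you should check the pgloader documentation for connection-string and option details:

   ```bash
   pgloader mysql://MYSQL_USER:MYSQL_PASSWORD@MYSQL_HOST/MYSQL_DB \
     "postgres://USER:PASSWORD@HOST:PORT/DATABASE?sslmode=require"
   ```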

5. **Verify the import**

   Query the tables in `psql` or the [SQL editor](https://console.cloud.tigerdata.com) to confirm data.

### From Parquet

To load a Parquet file into your service from the terminal:

1. **Convert Parquet to CSV (if using COPY)**

   PostgreSQL `COPY` does not read Parquet directly. Convert the Parquet file to CSV (for example, with Python/pandas, `parquet-tools`, or another tool), then follow the **From CSV** steps. Example with Python:

   ```bash
   python -c "
   import pandas as pd
   df = pd.read_parquet('data.parquet')
   df.to_csv('data.csv', index=False)
   "
   ```

2. **Get your connection string**

   Use your [Tiger Cloud connection details](/docs/integrate/find-connection-details/index.md): `postgres://USER:PASSWORD@HOST:PORT/DATABASE?sslmode=require`

3. **Create the target table**

   Define a table that matches the Parquet/CSV columns and types (infer from the Parquet schema or the CSV header).
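
   If you want to inspect the Parquet schema first, a quick sketch with `pyarrow` (assuming it is installed):

   ```bash
   python -c "
   import pyarrow.parquet as pq
   # Print column names and types from the Parquet file's metadata
   print(pq.read_schema('data.parquet'))
   "
   ```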

4. **Load the data**

   Use `\copy` or `COPY` as in the **From CSV** section, or use [timescaledb-parallel-copy](https://github.com/timescale/timescaledb-parallel-copy) for large files. Point the command at the converted CSV.

5. **Verify the import**

   Query the table in `psql` or the [SQL editor](https://console.cloud.tigerdata.com) to confirm row counts and sample data.

Your data is now in your Tiger Cloud service. For time-series tables, consider converting them to [hypertables](/docs/learn/hypertables/understand-hypertables/index.md) for better performance.

## Troubleshooting

- **Permission denied:** Ensure your database user has `USAGE` and `CREATE` on the target schema and `INSERT` on the target tables.
- **Encoding errors:** Use UTF-8 for CSV files and for the client connection encoding (`client_encoding`) to avoid corrupted data.
- **Connection timeouts:** For large imports, use a stable network and increase timeouts if your client supports it.
- **Foreign key errors:** Create and load tables in dependency order, or temporarily defer constraint checks during the import; see the sketch below.
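
For example, if your foreign keys are declared `DEFERRABLE` (they are not by default), you can defer checks until commit; the table and file names here are placeholders:

```sql
-- In psql: FK checks run at COMMIT instead of per-row
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
\copy parent FROM 'parent.csv' WITH (FORMAT csv, HEADER true)
\copy child FROM 'child.csv' WITH (FORMAT csv, HEADER true)
COMMIT;
```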

## Summary

You can upload CSV, MySQL, and Parquet data into your Tiger Cloud service from the terminal: from CSV with `COPY`/`\copy`, from MySQL by exporting to CSV or using a migration tool, and from Parquet by converting to CSV and then loading. For a UI-based upload, use [Upload a file using the Console](/docs/migrate/import-console/index.md); for continuous sync from S3, use [Sync from S3](/docs/migrate/livesync-for-s3/index.md).

## Related

- [Upload a file using Tiger Console](/docs/migrate/import-console/index.md): Upload CSV, Parquet, and text files via the web UI.
- [Sync data from S3](/docs/migrate/livesync-for-s3/index.md): Continuously sync CSV and Parquet from an S3 bucket.
- [Sync data from PostgreSQL](/docs/migrate/livesync-for-postgresql/index.md): Continuously replicate from a PostgreSQL database.
- [Live import from a database](/docs/migrate/live-migration/index.md): Migrate with minimal or no downtime.
