<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[Tiger Data Blog]]></title>
        <description><![CDATA[Insights, product updates, and tips from TigerData (Creators of TimescaleDB) engineers on Postgres, time series & AI. IoT, crypto, and analytics tutorials & use cases.]]></description>
        <link>https://www.tigerdata.com/blog</link>
        <image>
            <url>https://www.tigerdata.com/icon.ico</url>
            <title>Tiger Data Blog</title>
            <link>https://www.tigerdata.com/blog</link>
        </image>
        <generator>RSS for Node</generator>
        <lastBuildDate>Tue, 07 Apr 2026 09:52:15 GMT</lastBuildDate>
        <atom:link href="https://www.tigerdata.com/blog" rel="self" type="application/rss+xml"/>
        <ttl>60</ttl>
        <item>
            <title><![CDATA[Our One-Year Journey to a Unified Postgres Data Infrastructure on AWS]]></title>
            <description><![CDATA[See how Tiger Data and AWS deliver a unified Postgres data layer with time-series, vector search, native S3 ingest, and Iceberg lakehouse integration.]]></description>
            <link>https://www.tigerdata.com/blog/one-year-journey-unified-postgres-data-infrastructure-aws</link>
            <guid isPermaLink="true">https://www.tigerdata.com/blog/one-year-journey-unified-postgres-data-infrastructure-aws</guid>
            <category><![CDATA[AWS]]></category>
            <category><![CDATA[PostgreSQL]]></category>
            <dc:creator><![CDATA[Vineeth Pothulapati]]></dc:creator>
            <pubDate>Tue, 02 Dec 2025 13:00:30 GMT</pubDate>
            <media:content medium="image" url="https://timescale.ghost.io/blog/content/images/2025/12/tigerdata-aws-thumbnail.png">
            </media:content>
<content:encoded><![CDATA[<p>Data infrastructure requirements are changing. Modern applications combine operational transactions, time-series data, events, and vectors, all while integrating with the lakehouse backbone for model training, historical analytics, and enterprise insights.</p><p>Developers building modern applications start with Postgres. It’s familiar, flexible, and handles transactions well. And every framework, ORM, and service on AWS speaks fluent Postgres.</p><p>But as workloads evolve, teams outgrow what vanilla Postgres can comfortably handle. Companies ingest more data, expand real-time analytics, store longer histories, and increasingly support AI-driven features. To accommodate new requirements, developers incorporate specialized data stores alongside the extended pipelines required to keep data in sync. Over time, data drifts, operational burden increases, and developer velocity tanks as teams spend more time gluing databases together and maintaining complex architectures rather than building new applications.</p><p><strong>Developers on AWS want a data layer that unifies transactional, analytical, and agentic workloads on top of Postgres and integrates seamlessly with the broader AWS ecosystem.</strong></p><p>That’s why we’re partnering with AWS: to jointly solve data fragmentation and deliver the unified data infrastructure developers have been asking for. 
Through our <a href="https://www.tigerdata.com/blog/tiger-data-aws-forge-unified-postgres-platform-for-developers-devices-ai-agents"><u>Strategic Collaboration Agreement</u></a> announced last week, we’re working with AWS to provide this unified infrastructure to every team building modern applications.</p><p>In this blog post, we’re sharing the progress we’ve made over the last 12 months to make this architecture native to AWS, including two new releases: the Tiger Lake public beta and the general availability of the S3 Connector.</p><h2 id="postgres-extended-for-modern-workloads-on-aws">Postgres, Extended for Modern Workloads on AWS</h2><p>Developers on AWS already rely on Postgres to power their operational systems. Tiger Data builds on that foundation by extending Postgres to support time-series, vector and full-text search, advanced analytics, and tight lakehouse integration, all while fitting naturally into the AWS ecosystem through our growing collaboration.</p><p>The result is Postgres that does everything developers wish Postgres could do, without introducing new query languages, new operational paradigms, or new systems to manage.</p><p>At the core of this extended engine are three pillars:</p><p><a href="https://www.tigerdata.com/timescaledb"><strong><u>TimescaleDB</u></strong></a> brings a purpose-built time-series engine directly into Postgres. It handles massive ingest rates, automatic partitioning, hybrid row–column storage with 90%+ compression, SIMD-accelerated scans, and incremental materialized views, and it keeps recent data fast while automatically tiering older data to Amazon S3. 
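</p><p>As a brief sketch of what that looks like in practice (the <code>metrics</code> table, its columns, and the intervals below are hypothetical), a regular table becomes an automatically partitioned, compressed hypertable with a few SQL calls:</p><pre><code class="language-SQL">-- Hypothetical sensor table; names and intervals are illustrative.
CREATE TABLE metrics (
    time        TIMESTAMPTZ NOT NULL,
    device_id   TEXT        NOT NULL,
    temperature DOUBLE PRECISION
);

-- Convert it into a hypertable, automatically partitioned by time.
SELECT create_hypertable('metrics', 'time');

-- Enable columnar compression, segmented by device for better ratios.
ALTER TABLE metrics SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id'
);

-- Compress chunks older than seven days in the background.
SELECT add_compression_policy('metrics', INTERVAL '7 days');
</code></pre><p>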
It also includes Hyperfunctions, a rich set of SQL analytics functions for things like statistical aggregates, percentile approximations, gap-filling, and downsampling, so teams can run advanced time-series analytics directly in Postgres without external systems or pipelines.</p><p><a href="https://github.com/timescale/pgvectorscale"><strong><u>pgvectorscale</u></strong></a> introduces high-performance vector search with a DiskANN index, supporting large, high-dimensional embedding workloads with fast filtering and optimized storage. Instead of deploying a separate vector database, developers can store embeddings alongside time-series and relational context, enabling AI-driven features on top of unified data.</p><p><a href="https://www.tigerdata.com/docs/use-timescale/latest/extensions/pg-textsearch"><strong><u>pg_textsearch</u></strong></a> provides modern full-text search powered by a BM25 ranking model and a memtable architecture that enables fast incremental indexing and low-latency search, right inside Postgres.</p><p>Layered on top of these engines is deep integration with the AWS platform for secure connectivity, streaming and batch ingest, observability, analytics, AI, and billing, which has been a key area of focus for us over the last 12 months.</p><h2 id="a-year-of-deepening-aws-integrations">A Year of Deepening AWS Integrations</h2><p>The past 12 months of collaboration with AWS were shaped around one theme:</p><p><strong>Make Tiger Cloud feel native inside AWS architectures.</strong></p><p>We approached this holistically, spanning secure connectivity, observability, billing, ingest, and finally lakehouse interoperability.</p><figure class="kg-card kg-image-card"><img src="https://timescale.ghost.io/blog/content/images/2025/12/2025-dec-01-AWS--Diagram.png" class="kg-image" alt="Tiger Cloud feels native inside AWS architectures." 
loading="lazy" width="2000" height="1241" srcset="https://timescale.ghost.io/blog/content/images/size/w600/2025/12/2025-dec-01-AWS--Diagram.png 600w, https://timescale.ghost.io/blog/content/images/size/w1000/2025/12/2025-dec-01-AWS--Diagram.png 1000w, https://timescale.ghost.io/blog/content/images/size/w1600/2025/12/2025-dec-01-AWS--Diagram.png 1600w, https://timescale.ghost.io/blog/content/images/size/w2400/2025/12/2025-dec-01-AWS--Diagram.png 2400w" sizes="(min-width: 720px) 720px"></figure><h3 id="fast-ingest">Fast Ingest</h3><p>Ingest has been one of the biggest areas of investment this year. Across different industries, AWS customers generate time-series data in many ways: IoT fleets send telemetry through IoT Core or drop device exports into S3, financial firms stream market data through Kafka or Amazon MSK, and application teams accumulate large volumes of event or log data that regularly land in S3. These are different workloads and different customers, but they all shared the same pain: getting time-stamped data into Postgres required a patchwork of custom pipelines, consumers, and periodic backfills that were hard to scale and harder to maintain.</p><p>To simplify this, we spent the past year building native ingestion paths for Kafka / Amazon MSK, RDS for PostgreSQL, and Aurora PostgreSQL—all currently in beta—so that streams and operational data can flow directly into Tiger Cloud without bespoke glue code.</p><p>With our Postgres source connectors, customers can replicate existing time-series tables from RDS or Aurora into Tiger Cloud, where they’re converted into optimized hypertables for high-ingest workloads and fast queries, all without modifying their application or schema.</p><p><strong>S3 Connector Is Now Generally Available</strong></p><p><strong>Alongside these efforts, we also introduced a new S3 Connector, which has quickly become one of the most common ingest paths for AWS users. And today, we’re announcing its general availability. 
</strong>It continuously loads Parquet and CSV files from S3 into hypertables, handles late-arriving files, and makes historical data immediately available for real-time analytics without the Glue jobs, Lambdas, or custom ingestion services teams used to build.</p><p><strong>Taken together, these capabilities create a cleaner, more AWS-native ingest model:</strong> whether your time-series data originates from IoT devices, market data feeds, event systems, or existing Postgres deployments, it can now flow directly into Tiger Cloud without additional code or operational overhead.</p><h3 id="lakehouse-interoperability">Lakehouse Interoperability</h3><p>For many AWS customers, Amazon S3 serves as the foundation of their analytics and AI platforms, the place where governed, long-term datasets live and where engines like Athena, EMR, or SageMaker expect to read data. Earlier this year, we introduced Tiger Lake in private beta, our Apache Iceberg integration, to make it easy for teams to expose curated operational and time-series data from Tiger Cloud directly into their S3-based lakehouse.</p><p>Tiger Lake works by using Postgres change data capture (CDC) to track every insert, update, and delete in source tables or hypertables. It then converts those changes into Iceberg-compliant commits and writes them to Amazon S3 via the S3 Tables interface. Because the output is a native Iceberg table stored in S3, AWS analytics and AI services can immediately query or train on the data using their existing tooling, with no batch exports, Spark pipelines, or glue code required. 
Operational changes in Tiger Cloud flow directly into the lakehouse as versioned Iceberg snapshots.</p><p><strong>Tiger Lake Is Now Public Beta</strong></p><p><strong>Today we’re announcing that Tiger Lake is available in public beta, enabling any table or hypertable in Tiger Cloud to be continuously published as an Iceberg table on S3.</strong></p><p>Tiger Cloud powers the real-time, high-ingest workloads that vanilla Postgres struggles with, while your AWS analytics and AI stack reads the same data through Iceberg on S3. It’s a natural bridge between operational Postgres workflows and the lakehouse architectures that drive analytics, ML, and enterprise intelligence on AWS.</p><h3 id="secure-connectivity">Secure Connectivity</h3><p>Customers want their databases to plug into AWS environments in the same way they already connect to RDS, Aurora, or Redshift to ensure their data and applications remain secure. Tiger Cloud has supported VPC peering for years, making private, single-VPC deployments straightforward. At the beginning of this year, our new Transit Gateway support expanded that pattern to multi-account, multi-VPC organizations. Today, customers can connect Tiger Cloud using well-understood AWS constructs, without VPNs, proxies, or public endpoints.</p><h3 id="observability">Observability</h3><p>Observability has been AWS-native in Tiger Cloud for a long time. Tiger Cloud integrates directly with Amazon CloudWatch for both metrics and logs, so teams can monitor their database using the same tooling they rely on for EC2, EKS, Lambda, MSK, and the rest of their AWS environment.</p><p>Tiger Cloud streams operational metrics into CloudWatch Metrics and sends structured logs to CloudWatch Logs, making it easy to build dashboards, set alarms, and satisfy compliance requirements without new tooling.</p><h3 id="billing">Billing</h3><p>Finally, we wanted the commercial experience to feel as seamless as the technical one. 
Tiger Cloud is fully integrated into AWS Marketplace, allowing customers to use the same procurement paths they already use for other AWS services.</p><p>For teams that want to get started quickly, Tiger Cloud supports pay-as-you-go billing directly through their AWS account. There’s no new vendor onboarding or separate invoice; usage simply appears on the existing monthly AWS bill.</p><p>For larger organizations with specific architectural, security, or cost requirements, we also support private offers, giving enterprises the ability to secure annual commitments, customized pricing, and tailored deployment guidance, all handled through AWS Marketplace.</p><h2 id="case-study-speedcast%E2%80%99s-unified-architecture-on-aws">Case Study: Speedcast’s Unified Architecture on AWS</h2><p>Speedcast runs a global telecom network for remote industries, combining satellite and terrestrial links to keep ships, rigs, and NGOs online.</p><p>Previously, Speedcast had to juggle separate geospatial, relational, and time-series stores plus aging SCADA systems, stitching them together with fragile ETL pipelines that slowed insights and raised operational risk. With Tiger Lake in their AWS + Tiger Data stack, Speedcast dropped custom scripts and batching, using native integrations between their data lakehouse and Tiger Cloud to move toward a continuous data integration pipeline with Tiger Cloud at the center.</p><p>With Tiger Cloud as the “spider at the center of the web,” operations, data scientists, and customers now have a single, authoritative data source instead of hunting for data across systems. Platforms, dashboards, and events are powered by the same database in real time, regardless of workload pattern, with every system able to communicate with Apache Iceberg.</p><p><strong>“We stitched together Kafka, Flink, and custom code to stream data from Postgres to Iceberg. 
It worked, but it was fragile and high-maintenance,” said Kevin Otten, Director of Technical Architecture at Speedcast. “Tiger Lake replaces all of that with native infrastructure. It’s the architecture we wish we had from day one.”</strong></p><p>As Speedcast plans for service expansions and continues installing Starlink terminals globally beyond the 12,000 already deployed, Tiger Lake’s ingest pipeline will scale with them. For example, Speedcast can monitor usage patterns and spot emerging service-area outages in real time, before customers feel the impact. When a new service ticket is generated, Speedcast can drill into location, usage, and history with a single SQL query instead of bouncing between silos, reducing the time to resolution.</p><p>Read the <a href="https://www.tigerdata.com/blog/how-speedcast-built-a-global-communications-network-on-tiger-lake"><u>full case study</u></a> in our blog.</p><h2 id="bringing-your-data-workloads-together-on-aws">Bringing Your Data Workloads Together on AWS</h2><p>Modern applications shouldn’t require four different databases, half a dozen pipelines, and constant backfills just to keep data consistent. With Tiger Cloud and AWS, you get a unified Postgres engine that handles real-time ingest, high-performance time-series, vector search, and lakehouse integration—all inside the AWS architecture you already trust.</p><p>This is the future we’re building with AWS: simpler stacks, fewer moving parts, and one Postgres data layer for operational, analytical, and agentic workloads.</p><p>You can get started in minutes through the <a href="https://aws.amazon.com/marketplace/seller-profile?id=seller-wbtecrjp3kxpm"><u>AWS Marketplace</u></a> or sign up directly at <a href="http://tigerdata.com"><u>tigerdata.com</u></a> for free. We can’t wait to see what you build.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Migrate From AWS RDS for PostgreSQL to Timescale]]></title>
            <description><![CDATA[Database migration doesn’t have to be hard. Read this step-by-step guide on how to migrate your database from AWS RDS to Timescale.]]></description>
            <link>https://www.tigerdata.com/blog/how-to-migrate-from-aws-rds-for-postgresql-to-timescale</link>
            <guid isPermaLink="true">https://www.tigerdata.com/blog/how-to-migrate-from-aws-rds-for-postgresql-to-timescale</guid>
            <category><![CDATA[Amazon RDS]]></category>
            <category><![CDATA[PostgreSQL]]></category>
            <dc:creator><![CDATA[Vineeth Pothulapati]]></dc:creator>
            <pubDate>Tue, 09 Jan 2024 15:04:50 GMT</pubDate>
            <media:content medium="image" url="https://timescale.ghost.io/blog/content/images/2024/01/Migrating-from-Amazon-RDS-to-Timescale-Cloud_hero.jpg">
            </media:content>
            <content:encoded><![CDATA[<p>Database migration refers to the process of moving data from one database to another. It's a vital and complex operation requiring careful planning and execution. Unsurprisingly, developers tend to avoid it, as migrating a database involves changing the database architecture or transferring data from one platform or format to another.&nbsp;</p><p>At Timescale, we see many devs looking for an <a href="https://www.tigerdata.com/learn/alternatives-to-rds" rel="noreferrer">alternative to AWS RDS for PostgreSQL</a>. Drawn by performance and a transparent pricing model (no one has the time to <a href="https://timescale.ghost.io/blog/understanding-rds-pricing/"><u>understand RDS pricing</u></a>), they often land at Timescale. When trying to help them, we couldn’t help but notice how lengthy and complex it is to migrate from RDS.</p><p>But what if you had a playbook to assist you along the way? If you’re embarking on a journey through uncharted territory, having a map makes all the difference. That’s what this blog post is: a step-by-step overview of the migration process from RDS to Timescale using live migrations. This migration workflow ensures a smooth transition (even for heavy workloads) while helping you avoid common pitfalls and harness our features. Think of it as your GPS for database migration. </p><h2 id="steps-for-a-successful-database-migration">Steps for a Successful Database Migration</h2><p>Migrating a database is a complex process due to all the moving parts you’ll need to handle to ensure a seamless and safe transition of your data. 
A successful database migration typically involves several steps:</p><ul><li><strong>Assessment</strong>: understand your current database's size, complexity, and dependencies.</li><li><strong>Planning</strong>: decide on a migration strategy, schedule downtime if necessary, and prepare a backup plan in case something goes wrong.</li><li><strong>Enable hypertables</strong>: to leverage Timescale’s performance optimizations (compression, continuous aggregates, and data tiering), enable hypertables on your time-series tables.</li><li><strong>Data migration</strong>: move both the existing data and the data arriving during the migration from the old database to the new one with minimal downtime.</li><li><strong>Testing</strong>: rigorously test the new database with your applications to ensure everything works as expected.</li><li><strong>Cutover</strong>: finally, switch over from the old database to the new one, which usually involves a period of downtime. (If you follow our playbook, we ensure minimal downtime.)</li></ul><p>Our playbook covers all the bases for a successful migration and streamlines it, allowing you to skip some of these steps. For example, we devised a way for you to enable one of Timescale’s most popular features, <a href="https://docs.timescale.com/use-timescale/latest/hypertables/about-hypertables/"><u>hypertables</u></a>, in your target database while migrating your data. This means all data is automatically partitioned as it arrives in Timescale, saving you precious time.</p><p>This helpful aid is already part of our <a href="https://timescale.ghost.io/blog/migrating-a-terabyte-scale-postgresql-database-to-timescale-with-zero-downtime/"><u>live migrations</u></a> solution, a database migration workflow for Timescale based on pg_dump/pg_restore (for schema) and PostgreSQL logical decoding (for live data). 
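</p><p>Enabling a hypertable on the target is itself a single SQL call; as a minimal sketch (the <code>conditions</code> table and <code>time</code> column are hypothetical), run against the target once the schema exists:</p><pre><code class="language-SQL">-- Run on the target Timescale service; names are illustrative.
-- Incoming rows are then partitioned into time-based chunks automatically.
SELECT create_hypertable('conditions', 'time');
</code></pre><p>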
But now, we have accelerated the process even more, reducing the number of steps and making it even more straightforward. Let’s jump into it.</p><h2 id="steps-for-migrating-from-aws-rds-to-timescale">Steps for Migrating From AWS RDS to Timescale</h2><p>For a detailed step-by-step guide to migrating your database(s) from Amazon RDS for PostgreSQL to Timescale, we recommend you read our <a href="https://docs.timescale.com/migrate/latest/playbooks/rds-timescale-live-migration/"><u>documentation</u></a>. However, here is a high-level overview of the process. You’ll have to follow these steps:</p><h3 id="1-create-a-timescale-instance">1.&nbsp;Create a Timescale instance</h3><p>Sign up or log in to <a href="https://console.cloud.timescale.com/signup"><u>Timescale Cloud</u></a> and click on "Create service." Choose a Time-series service with your desired CPU and memory plan. We recommend a minimum of 4 CPUs and 8 GB of memory for migration. After the service is created, click "Download the cheatsheet" to obtain an SQL file that contains the login details for your new service. Alternatively, you can copy the details directly from this page. After copying your password, click "I stored my password, go to service overview" at the bottom of the page. Once your service is ready to use, it will be labeled as "Running" in the Service Overview, displayed in green.</p><h3 id="2-collect-information-from-your-aws-rds-instance">2.&nbsp;Collect information from your AWS RDS instance</h3><p>The AWS RDS management console is extensive, containing numerous details. Our customers often tell us that locating the required details and taking action in AWS is time-consuming, requiring manual navigation of various screens and documentation to find the relevant information. 
Additionally, AWS has its own intricacies, such as parameter groups, which require engineers to invest effort in understanding the concepts and gathering the required information.</p><p>To prepare an AWS RDS instance for migration, you need the following information about the instance as the first step:</p><ol><li>Endpoint</li><li>Port</li><li>Master username</li><li>Master password</li><li>VPC</li><li>DB instance parameter group</li></ol><h3 id="3-update-the-following-parameters-of-the-aws-rds-for-postgresql-instance">3.&nbsp;Update the following parameters of the AWS RDS for PostgreSQL instance</h3><p>The live migration offered by Timescale is powered by logical decoding, which consumes Write-Ahead Logs (WAL) to synchronize real-time changes happening on the source database during the migration. Therefore, you need to set the <code>wal_level</code> to <code>logical</code>.&nbsp;</p><p>Additionally, you must set <code>old_snapshot_threshold</code> to <code>-1</code> to preserve the snapshots until the migration is complete. Not setting this parameter to <code>-1</code> can cause errors in the logical decoding process, as the database performs periodic maintenance checks. You need to update the following configurations:</p><ol><li><code>wal_level</code>: set to <code>logical</code></li><li><code>old_snapshot_threshold</code>: set to <code>-1</code></li></ol><h3 id="4-prepare-the-intermediate-machine-to-start-the-migration-process">4.&nbsp;Prepare the intermediate machine to start the migration process</h3><p>To migrate a database from a source to a target, an intermediate machine is necessary. This machine is responsible for pulling data from the source database and pushing it to the target database. Specific migration tools are required to perform actions on both the source and target databases to complete the migration. 
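</p><p>After applying the parameter group changes from step 3 (note that <code>wal_level</code> is a static parameter on RDS, so a reboot is required for it to take effect), you can confirm both settings from any psql session against the source:</p><pre><code class="language-SQL">-- Run on the AWS RDS source instance; expected values in comments.
SHOW wal_level;              -- logical
SHOW old_snapshot_threshold; -- -1
</code></pre><p>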
The intermediate machine will host these migration tools throughout the entire migration process.</p><p>The recommended steps are the following:</p><ol><li>Set up an EC2 instance in the same region as your source and target databases.</li><li>Configure your EC2 instance to connect to your AWS RDS instance. This involves updating the security group of your RDS instance to allow the connection from the EC2 instance.</li></ol><h3 id="5-perform-the-database-migration">5.&nbsp;Perform the database migration</h3><p>Now that the source database (AWS RDS instance) is ready for migration, an intermediate machine is prepared to execute the migration, and the target database (your Timescale instance) is up and running, we can proceed with the actual migration process.</p><ol><li>Set the source database URI (i.e., the AWS RDS instance) and the target database URI (i.e., the Timescale instance).</li></ol><p><code>export TARGET=&lt;target_db_uri&gt;</code></p><p><code>export SOURCE=&lt;source_db_uri&gt;</code></p><ol start="2"><li>Execute the following command to initiate the migration. We have packaged all the necessary tools for live migration in a Docker image. Simply run the command below and wait for the migration to complete.</li></ol><pre><code class="language-bash">docker run --rm -dit --name live-migration \
  -e PGCOPYDB_SOURCE_PGURI=$SOURCE \
  -e PGCOPYDB_TARGET_PGURI=$TARGET \
  -v ~/live-migration:/opt/timescale/ts_cdc \
  timescale/live-migration:v0.0.1
</code></pre>
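<p>While the migration runs (and again before cutover), a quick sanity check on the target confirms that tables are arriving and have been converted to hypertables; the <code>conditions</code> table name below is illustrative:</p><pre><code class="language-SQL">-- Run on the target Timescale instance.
SELECT hypertable_name, num_chunks
FROM timescaledb_information.hypertables;

-- Compare a table's row count against the same query on the source.
SELECT count(*) FROM conditions;
</code></pre>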
<p>During this step, the live migration workflow will copy existing data from the source database to the target database, as well as replicate ongoing changes (change data capture) from the source to the target. The copying process occurs sequentially, beginning with the existing data and then applying ongoing changes. The output logs of this step will display the migration progress and provide prompts to verify the target database before transitioning applications to use the target database as the primary.</p><ol start="3"><li>Verify the data consistency between the source and target databases once the live migration is synchronized. Once you are confident that the data consistency checks have been successful, proceed with the application switchover to the target database by promoting it as the primary database.</li></ol><p>And you’re set! Welcome to Timescale!</p><h2 id="why-choose-timescale-over-rds">Why Choose Timescale Over RDS</h2><p>If you’re reading this article because you’re contemplating a migration from RDS to Timescale, know that moving your data will not only be easy but will also come with the following benefits:</p><h3 id="44-faster-data-ingestion">44% faster data ingestion</h3><p><a href="https://timescale.ghost.io/blog/timescale-cloud-vs-amazon-rds-postgresql-up-to-350-times-faster-queries-44-faster-ingest-95-storage-savings-for-time-series-data/"><u>During our 16-thread ingestion benchmark</u></a>, where we inserted nearly one billion rows of data, we observed impressive results when comparing Timescale and RDS. Timescale outperformed RDS by 32% with 4 vCPUs and by 44% with 8 vCPUs. 
Both systems had the same I/O performance configured on their gp3 disk.</p><figure class="kg-card kg-image-card"><img src="https://timescale.ghost.io/blog/content/images/2024/01/How-to-Migrate-from-AWS-RDS-for-PostgreSQL-to-Timescale-ingest-performance.png" class="kg-image" alt="A line graph displaying Timescale's superior ingest performance compared to RDS" loading="lazy" width="1116" height="753" srcset="https://timescale.ghost.io/blog/content/images/size/w600/2024/01/How-to-Migrate-from-AWS-RDS-for-PostgreSQL-to-Timescale-ingest-performance.png 600w, https://timescale.ghost.io/blog/content/images/size/w1000/2024/01/How-to-Migrate-from-AWS-RDS-for-PostgreSQL-to-Timescale-ingest-performance.png 1000w, https://timescale.ghost.io/blog/content/images/2024/01/How-to-Migrate-from-AWS-RDS-for-PostgreSQL-to-Timescale-ingest-performance.png 1116w" sizes="(min-width: 720px) 720px"></figure><h3 id="up-to-350x-faster-queries">Up to 350x faster queries</h3><p>Query performance optimization is crucial in a time-series database, especially when powering real-time dashboards. The <a href="https://github.com/timescale/tsbs?ref=timescale.com"><u>Time Series Benchmark Suite</u></a> (TSBS), which we use to run our benchmarks, includes a variety of queries of varying complexity. For the RDS benchmark, we conducted 100 runs of each query on 4 vCPU instance types and recorded the results.</p><p>The table demonstrates that Timescale outperforms Amazon RDS consistently, often by more than 100x. In some cases, Timescale performs over 350x better, with no degradation for any query type. The table below displays the data for 4 vCPU instances, but similar results were observed across all CPU types tested. 
With larger instance sizes, you can achieve even better results.</p><figure class="kg-card kg-image-card"><img src="https://timescale.ghost.io/blog/content/images/2024/01/How-to-migrate-from-AWS-RDS-to-Timescale---median-query-timings-table.png" class="kg-image" alt="A table with the benchmarked queries and the Timescale and RDS times in milliseconds. Timescale outperformed RDS in all these queries." loading="lazy" width="1352" height="1824" srcset="https://timescale.ghost.io/blog/content/images/size/w600/2024/01/How-to-migrate-from-AWS-RDS-to-Timescale---median-query-timings-table.png 600w, https://timescale.ghost.io/blog/content/images/size/w1000/2024/01/How-to-migrate-from-AWS-RDS-to-Timescale---median-query-timings-table.png 1000w, https://timescale.ghost.io/blog/content/images/2024/01/How-to-migrate-from-AWS-RDS-to-Timescale---median-query-timings-table.png 1352w" sizes="(min-width: 720px) 720px"></figure><h3 id="95-storage-savings">95% storage savings</h3><p>In our benchmark, all data, except for the most recent partition, is compressed into our <a href="https://timescale.ghost.io/blog/building-columnar-compression-in-a-row-oriented-database/"><u>native columnar format</u></a>. This format utilizes advanced algorithms to significantly reduce the storage space required for the benchmark's CPU table. Despite compression, you can still access the data as usual, but with the added advantages of a smaller size and a columnar structure.</p><p>Reducing storage usage can result in smaller volumes, lower costs, and faster access. 
We achieved a 95% reduction, from 159 GB to 8.6 GB—these numbers are not uncommon for real customer workloads.</p><figure class="kg-card kg-image-card"><img src="https://timescale.ghost.io/blog/content/images/2024/01/How-to-Migrate-from-AWS-RDS-for-PostgreSQL-to-Timescale-total-database-size.png" class="kg-image" alt="Timescale dramatically compresses database size compared to PostgreSQL: the graph shows two bars, one with 8.6 GB for TimescaleDB and another with 159 GB for Postgres" loading="lazy" width="1006" height="648" srcset="https://timescale.ghost.io/blog/content/images/size/w600/2024/01/How-to-Migrate-from-AWS-RDS-for-PostgreSQL-to-Timescale-total-database-size.png 600w, https://timescale.ghost.io/blog/content/images/size/w1000/2024/01/How-to-Migrate-from-AWS-RDS-for-PostgreSQL-to-Timescale-total-database-size.png 1000w, https://timescale.ghost.io/blog/content/images/2024/01/How-to-Migrate-from-AWS-RDS-for-PostgreSQL-to-Timescale-total-database-size.png 1006w" sizes="(min-width: 720px) 720px"></figure><h3 id="and-even-more-features">And even more features</h3><figure class="kg-card kg-image-card"><img src="https://timescale.ghost.io/blog/content/images/2024/01/How-to-Migrate-from-AWS-RDS-for-PostgreSQL-to-Timescale-more-features.png" class="kg-image" alt="A summary of more Timescale benefits for time savings, performance at scale, and cost-efficiency—all built on Postgres. 
" loading="lazy" width="1123" height="558" srcset="https://timescale.ghost.io/blog/content/images/size/w600/2024/01/How-to-Migrate-from-AWS-RDS-for-PostgreSQL-to-Timescale-more-features.png 600w, https://timescale.ghost.io/blog/content/images/size/w1000/2024/01/How-to-Migrate-from-AWS-RDS-for-PostgreSQL-to-Timescale-more-features.png 1000w, https://timescale.ghost.io/blog/content/images/2024/01/How-to-Migrate-from-AWS-RDS-for-PostgreSQL-to-Timescale-more-features.png 1123w" sizes="(min-width: 720px) 720px"></figure><p>Check out the <a href="https://timescale.ghost.io/blog/timescale-cloud-vs-amazon-rds-postgresql-up-to-350-times-faster-queries-44-faster-ingest-95-storage-savings-for-time-series-data/" rel="noreferrer">Timescale vs. Amazon RDS PostgreSQL benchmark blog post</a> for more details.</p><h2 id="next-steps">Next Steps</h2><p>If you’re considering migrating from AWS RDS but still want to keep the PostgreSQL you know and love, this playbook provides a good overview of how quick and easy it is to migrate to Timescale using the live migrations solution. We hope it will help you reach your desired destination: a high-performance database that is still PostgreSQL, just faster, with predictable, unambiguous billing. If you’re still on the fence, <a href="https://console.cloud.timescale.com/signup"><u>create a free Timescale account</u></a> and try it out for 30 days.</p><p>We will continue to iterate on our migration strategies—we have more aside from live migrations, targeted at different needs and use cases—to ensure that moving to Timescale is just as easy as using our cloud database platform. 
Read our documentation for more details:</p><ul><li><a href="https://docs.timescale.com/migrate/latest/playbooks/rds-timescale-live-migration/"><u>Live migrations from AWS RDS for PostgreSQL to Timescale</u></a></li><li><a href="https://docs.timescale.com/migrate/latest/playbooks/rds-timescale-pg-dump/"><u>Pg_dump/pg_restore from AWS RDS for PostgreSQL to Timescale</u></a></li><li><a href="https://docs.timescale.com/migrate/latest/live-migration/?ref=timescale.com"><u>Live migrations for any PostgreSQL to Timescale</u></a></li></ul>]]></content:encoded>
        </item>
    </channel>
</rss>