Dec 02, 2025

Data infrastructure requirements are changing. Modern applications combine operational transactions, time-series, events, and vector data, all while integrating with the lakehouse backbone that powers model training, historical analytics, and enterprise insights.
Developers building modern applications start with Postgres. It’s familiar, flexible, and handles transactions well. And every framework, ORM, and service on AWS speaks fluent Postgres.
But as workloads evolve, teams outgrow what vanilla Postgres can comfortably handle. Companies ingest more data, expand real-time analytics, store longer histories, and increasingly support AI-driven features. To accommodate these new requirements, developers bolt on specialized data stores, along with the extra pipelines required to keep data in sync. Over time, data drifts, operational burden increases, and developer velocity tanks as teams spend more time gluing databases together and maintaining complex architectures than building new applications.
Developers on AWS want a data layer that unifies transactional, analytical and agentic workloads on top of Postgres and integrates seamlessly within the broader AWS ecosystem.
That’s why we’re partnering with AWS: to jointly solve data fragmentation and deliver the unified data infrastructure developers have been asking for. Through our Strategic Collaboration Agreement announced last week, we’re working with AWS to provide this unified infrastructure to every team building modern applications.
In this blog post, we’re sharing the progress we’ve made over the last 12 months to make this architecture native to AWS, including the announcement of two new releases: Tiger Lake in public beta and the S3 Connector in general availability.
Developers on AWS already rely on Postgres to power their operational systems. Tiger Data builds on that foundation by extending Postgres to support time-series, vector and full-text search, advanced analytics, and tight lakehouse integration, all while fitting naturally into the AWS ecosystem through our growing collaboration.
The result is Postgres that does everything developers wish Postgres could do, without introducing new query languages, new operational paradigms, or new systems to manage.
At the core of this extended engine are four pillars: the three Postgres extensions below, plus Tiger Lake, the lakehouse integration covered later in this post.
TimescaleDB brings a purpose-built time-series engine directly into Postgres. It provides massive ingest rates, automatic partitioning, hybrid row–column storage with 90%+ compression, SIMD-accelerated scans, and incremental materialized views, and it keeps recent data fast while automatically tiering older data to Amazon S3. It also includes Hyperfunctions, a rich set of SQL analytics functions for statistical aggregates, percentile approximations, gap-filling, and downsampling, so teams can run advanced time-series analytics directly in Postgres without external systems or pipelines (see the short SQL sketch below).
pgvectorscale introduces high-performance vector search with a DiskANN index, supporting large, high-dimensional embedding workloads with fast filtering and optimized storage. Instead of deploying a separate vector database, developers can store embeddings alongside time-series and relational context, enabling AI-driven features on top of unified data.
pg_textsearch provides modern full-text search powered by a BM25 ranking model and a memtable architecture that enables fast incremental indexing and low-latency search, right inside Postgres.
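To make the first two pillars concrete, here is a minimal SQL sketch. The device_metrics table and its columns are hypothetical, and the snippet assumes the timescaledb, timescaledb_toolkit, and vectorscale extensions are available (they ship with Tiger Cloud; elsewhere they may need to be installed):

```sql
-- A minimal sketch; the device_metrics table and its columns are hypothetical.
CREATE EXTENSION IF NOT EXISTS timescaledb;
CREATE EXTENSION IF NOT EXISTS timescaledb_toolkit;  -- hyperfunctions
CREATE EXTENSION IF NOT EXISTS vectorscale CASCADE;  -- pgvectorscale (pulls in pgvector)

CREATE TABLE device_metrics (
    time      TIMESTAMPTZ NOT NULL,
    device_id TEXT        NOT NULL,
    cpu_usage DOUBLE PRECISION,
    embedding VECTOR(768)
);

-- TimescaleDB: automatic time partitioning via a hypertable.
SELECT create_hypertable('device_metrics', 'time');

-- Hyperfunctions: hourly buckets with an approximate p95 per device.
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       approx_percentile(0.95, percentile_agg(cpu_usage)) AS p95_cpu
FROM device_metrics
WHERE time > now() - INTERVAL '24 hours'
GROUP BY bucket, device_id
ORDER BY bucket;

-- pgvectorscale: DiskANN index for similarity search over the embeddings.
CREATE INDEX ON device_metrics USING diskann (embedding vector_cosine_ops);
```

The same table holds relational context, time-series measurements, and embeddings, so the time-series rollup and the vector index operate on a single copy of the data.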
Layered on top of these engines is deep integration with the AWS platform for secure connectivity, streaming and batch ingest, observability, analytics, AI, and billing. That integration has been a key area of focus for us over the last 12 months.
The past 12 months of collaboration with AWS were shaped around one theme:
Make Tiger Cloud feel native inside AWS architectures.
We approached this holistically, spanning secure connectivity, observability, billing, ingest, and finally lakehouse interoperability.

Ingest has been one of the biggest areas of investment this year. Across different industries, AWS customers generate time-series data in many ways: IoT fleets send telemetry through IoT Core or drop device exports into S3, financial firms stream market data through Kafka or Amazon MSK, and application teams accumulate large volumes of event or log data that regularly land in S3. These are different workloads and different customers, but they have all shared the same pain: getting time-stamped data into Postgres required a patchwork of custom pipelines, consumers, and periodic backfills that were hard to scale and harder to maintain.
To simplify this, we spent the past year building native ingestion paths for Kafka / Amazon MSK, RDS for PostgreSQL, and Aurora PostgreSQL (all currently in beta), so that streams and operational data can flow directly into Tiger Cloud without bespoke glue code.
With our Postgres source connectors, customers can replicate existing time-series tables from RDS or Aurora into Tiger Cloud, where they’re converted into optimized hypertables for high-ingest workloads and fast queries, all without modifying their application or schema.
S3 Connector Is Now Generally Available
Alongside these efforts, we also introduced a new S3 Connector, which has quickly become one of the most common ingest paths for AWS users. And today, we’re announcing its general availability. It continuously loads Parquet and CSV files from S3 into hypertables, handles late-arriving files, and makes historical data immediately available for real-time analytics without the Glue jobs, Lambdas, or custom ingestion services teams used to build.
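Once the connector is running, the loaded files are just rows in Postgres: backfilled history and freshly streamed data live in the same hypertable and answer the same queries. Here is a rough sketch, using a hypothetical trades hypertable as the connector’s destination:

```sql
-- Hypothetical destination hypertable kept up to date by the S3 Connector.
-- Late-arriving files simply show up as additional rows; the query doesn't change.
SELECT time_bucket('5 minutes', time) AS bucket,
       symbol,
       avg(price)              AS avg_price,
       max(price) - min(price) AS price_range
FROM trades
WHERE time > now() - INTERVAL '7 days'
GROUP BY bucket, symbol
ORDER BY bucket;
```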
Taken together, these capabilities create a cleaner, more AWS-native ingest model: whether your time-series data originates from IoT devices, market data feeds, event systems, or existing Postgres deployments, it can now flow directly into Tiger Cloud without additional code or operational overhead.
For many AWS customers, Amazon S3 serves as the foundation of their analytics and AI platforms, the place where governed, long-term datasets live and where engines like Athena, EMR or SageMaker expect to read data. Earlier this year, we introduced Tiger Lake in private beta, our Apache Iceberg integration, to make it easy for teams to expose curated operational and time-series data from Tiger Cloud directly into their S3-based lakehouse.
Tiger Lake works by using Postgres change data capture (CDC) to track every insert, update, and delete in source tables or hypertables. It then converts those changes into Iceberg-compliant commits and writes them to Amazon S3 via the S3 Tables interface. Because the output is a native Iceberg table stored in S3, AWS analytics and AI services can immediately query or train on the data using their existing tooling, with no batch exports, Spark pipelines, or glue code required. Operational changes in Tiger Cloud flow directly into the lakehouse as versioned Iceberg snapshots.
Tiger Lake Is Now Public Beta
Today we’re announcing that Tiger Lake is available in public beta, enabling any table or hypertable in Tiger Cloud to be continuously published as an Iceberg table on S3.
Tiger Cloud powers the real-time, high-ingest workloads that vanilla Postgres struggles with, while your AWS analytics and AI stack reads the same data through Iceberg on S3. It’s a natural bridge between operational Postgres workflows and the lakehouse architectures that drive analytics, ML, and enterprise intelligence on AWS.
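To sketch what the consumption side can look like: assuming the published Iceberg table is reachable from Amazon Athena in your account’s catalog as lake.device_metrics (the schema, table, and column names here are illustrative only), a standard Athena query reads the live copy, and Iceberg’s versioned snapshots allow time travel in engines that support it:

```sql
-- Query the Iceberg table published by Tiger Lake from Amazon Athena
-- (or any other Iceberg-aware engine). Names are illustrative.
SELECT date_trunc('hour', time) AS hour,
       device_id,
       avg(cpu_usage) AS avg_cpu
FROM lake.device_metrics
WHERE time > current_timestamp - INTERVAL '1' DAY
GROUP BY 1, 2
ORDER BY 1;

-- Iceberg snapshots are versioned, so the table can also be read as of an earlier point in time.
SELECT count(*)
FROM lake.device_metrics
FOR TIMESTAMP AS OF (current_timestamp - INTERVAL '1' HOUR);
```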
Customers want their databases to plug into AWS environments the same way they already connect to RDS, Aurora, or Redshift, so their data and applications remain secure. Tiger Cloud has supported VPC peering for years, making private, single-VPC deployments straightforward. At the beginning of this year, our new Transit Gateway support expanded that pattern to multi-account, multi-VPC organizations. Today, customers can connect to Tiger Cloud using well-understood AWS constructs, without VPNs, proxies, or public endpoints.
Observability has been AWS-native in Tiger Cloud for a long time. Tiger Cloud integrates directly with Amazon CloudWatch for both metrics and logs, so teams can monitor their database using the same tooling they rely on for EC2, EKS, Lambda, MSK, and the rest of their AWS environment.
Tiger Cloud streams operational metrics into CloudWatch Metrics and sends structured logs to CloudWatch Logs, making it easy to build dashboards, set alarms, and satisfy compliance requirements without new tooling.
Finally, we wanted the commercial experience to feel as seamless as the technical one. Tiger Cloud is fully integrated into AWS Marketplace, allowing customers to use the same procurement paths they already use for other AWS services.
For teams that want to get started quickly, Tiger Cloud supports pay-as-you-go billing directly through their AWS account. There’s no new vendor onboarding or separate invoice; usage simply appears on the existing monthly AWS bill.
For larger organizations with specific architectural, security, or cost requirements, we also support private offers, giving enterprises the ability to secure annual commitments, customized pricing, and tailored deployment guidance, all handled through AWS Marketplace.
Speedcast runs a global telecom network for remote industries, combining satellite and terrestrial links to keep ships, rigs, and NGOs online.
Previously, Speedcast had to juggle separate geospatial, relational, and time-series stores plus aging SCADA systems, stitching them together with fragile ETL pipelines that slowed insights and raised operational risk. With Tiger Lake in their AWS + Tiger Data stack, Speedcast dropped the custom scripts and batching, using native integrations between Tiger Cloud and their data lakehouse to move toward a continuous data integration pipeline with Tiger Cloud at the center.
With Tiger Cloud as the “spider at the center of the web,” operations, data scientists, and customers now have a single, authoritative data source instead of hunting for data across systems. Platforms, dashboards, and events are powered by the same database in real time, regardless of workload pattern, with every system able to interoperate through Apache Iceberg.
"We stitched together Kafka, Flink, and custom code to stream data from Postgres to Iceberg. It worked, but it was fragile and high-maintenance,” said Kevin Otten, Director of Technical Architecture at Speedcast. “Tiger Lake replaces all of that with native infrastructure. It’s the architecture we wish we had from day one."
As Speedcast plans for service expansions and grows beyond 12,000 Starlink terminals globally, Tiger Lake’s ingest pipeline will scale with them. For example, Speedcast can monitor usage patterns and spot emerging service-area outages in real time, before customers feel the impact. When a new service ticket is generated, Speedcast can drill into location, usage, and history with a single SQL query instead of bouncing between silos, reducing time to resolution.
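As an illustration of that single-query workflow (the terminals and terminal_usage tables and their columns are hypothetical, not Speedcast’s actual schema), a support engineer could pull a terminal’s location and its last 24 hours of usage in one statement:

```sql
-- Illustrative only: hypothetical schema, not Speedcast's.
SELECT t.terminal_id,
       t.site_name,
       t.latitude,
       t.longitude,
       time_bucket('15 minutes', u.time) AS bucket,
       avg(u.throughput_mbps)            AS avg_throughput_mbps,
       count(*) FILTER (WHERE u.dropped) AS dropout_intervals
FROM terminals t
JOIN terminal_usage u ON u.terminal_id = t.terminal_id
WHERE t.terminal_id = 'TERM-00123'
  AND u.time > now() - INTERVAL '24 hours'
GROUP BY t.terminal_id, t.site_name, t.latitude, t.longitude, bucket
ORDER BY bucket;
```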
Read the full case study in our blog.
Modern applications shouldn’t require four different databases, half a dozen pipelines, and constant backfills just to keep data consistent. With Tiger Cloud and AWS, you get a unified Postgres engine that handles real-time ingest, high-performance time-series, vector search, and lakehouse integration—all inside the AWS architecture you already trust.
This is the future we’re building with AWS: simpler stacks, fewer moving parts, and one Postgres data layer for operational, analytical, and agentic workloads.
You can get started in minutes through the AWS Marketplace or sign up directly at tigerdata.com for free. We can’t wait to see what you build.