Oct 24, 2025

Your coding agent just shipped what looks to be a perfect migration. All tests pass. The schema refactor is elegant. It's Friday at 4pm.
Are you brave enough to deploy?
Most developers aren't. And for good reason. That AI-generated migration hasn't touched real production data. You don't know about the edge cases lurking in 40 million rows. You can't predict whether the backfill will take 5 minutes or 5 hours. And you definitely don't know about that legacy integration from 2019 storing user IDs in a weird format.
So you wait until Monday. The agent did its job in minutes. You'd spend the weekend worrying.
That's why we built forks into Agentic Postgres: so your agents can experiment freely without risking production. Zero-copy forks powered by our new Fluid Storage technology give you a working copy of your live database that is isolated, fast, and cheap, making it safe to test, debug, and optimize against real production data.
Agentic Postgres gives AI agents a playground—and forks make it safe for developers to let them play.
In this post, we'll show how forks transform how you build and deploy in the AI era: from testing AI-generated migrations, to debugging performance issues, to spinning up realistic preview environments for every PR.
Database forks are exactly what they sound like: take your database at a point in time and create a working copy. Schemas, users, hypertables, data—everything. The parent keeps running untouched while you experiment on the clone.
The magic is in how we make this fast and cheap.
Traditional database clones copy everything, which is slow and expensive. We built something different: Fluid Storage, a copy-on-write block-storage architecture that lets production and fork share the same physical data. Only pages modified after the fork-point get stored separately.
This means:
- Forks are created in seconds to minutes, not the hours a full copy of a large database would take.
- A fork only stores the pages it modifies, so you pay for changed data rather than a second full copy.
- Work on a fork never touches the parent, which keeps serving traffic untouched.
Right now, Fluid Storage powers our free services, with a broader rollout coming soon. But you can use forks today through our CLI and GitHub Actions.
Let's see what this unlocks.
Your AI coding agent just analyzed your schema and suggested a performance optimization: convert the user_id column from VARCHAR to BIGINT for better join performance and storage efficiency. The migration script looks reasonable and your test suite passes. But you're about to run this against a table with 40 million rows in production.
For zero downtime, the coding agent generated a multi-step migration: add new column, backfill data, update app to write to both, switch reads to new column, drop old column. Solid approach: the agent knows that altering a column type otherwise requires an ACCESS EXCLUSIVE lock, which blocks all other reads and writes to the table. But test data doesn't reveal the edge cases. You don't know if the backfill will take 5 minutes or 5 hours. You can't predict lock contention during the dual-write phase. The agent also doesn't know about that legacy integration from 2019 that stored user_ids in a weird format, which the migration needs to fix as well.
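On paper, that plan might look something like the sketch below. This is a hedged illustration only: the orders table name, the batch bounds, and the final rename-and-swap are assumptions for the sake of the example, not the agent's actual script.
-- Illustrative sketch: table name, batch bounds, and swap order are assumptions
-- Step 1: add the new column alongside the old one
ALTER TABLE orders ADD COLUMN user_id_bigint BIGINT;
-- Step 2: backfill in batches so no single statement holds locks for long
UPDATE orders
SET user_id_bigint = user_id::BIGINT
WHERE user_id_bigint IS NULL
  AND id BETWEEN 1 AND 100000;  -- repeat per batch
-- Step 3: the application dual-writes to both columns, then reads switch over
-- Step 4: once nothing reads the VARCHAR column, drop it and swap names
ALTER TABLE orders DROP COLUMN user_id;
ALTER TABLE orders RENAME COLUMN user_id_bigint TO user_id;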
Here's how forking fixes this:
# Fork production to test the migration
tiger service fork prod-db-x8unopqbx3 --now
# Fork created: test-db-k3mxp9wq7r
# Run the AI-generated migration on the fork
tiger db connect test-db-k3mxp9wq7r -- -f migrations/0042_optimize_user_lookups.sql
# Migration failed! Found 2,847 rows with non-numeric user_ids
You just caught a production incident before it happened. Those 2,847 rows with non-numeric IDs would've broken the migration mid-backfill.
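The fix depends on what that legacy format actually looks like. As a hedged sketch (the legacy- prefix and regexes below are assumptions, not the real v2 script), the backfill can cast what's clean, normalize what it recognizes, and leave the rest for review:
-- Illustrative sketch: the legacy prefix and regexes are assumptions
-- First, see what the odd rows actually look like
SELECT user_id, COUNT(*)
FROM orders
WHERE user_id !~ '^[0-9]+$'
GROUP BY user_id
ORDER BY COUNT(*) DESC;
-- Then backfill defensively: cast clean values, normalize the known
-- legacy format, and leave anything else NULL for manual review
UPDATE orders
SET user_id_bigint = CASE
  WHEN user_id ~ '^[0-9]+$' THEN user_id::BIGINT
  WHEN user_id ~ '^legacy-[0-9]+$' THEN split_part(user_id, '-', 2)::BIGINT
  ELSE NULL
END
WHERE user_id_bigint IS NULL;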
With the migration updated to handle the edge case, you test again on a fresh fork:
# Test the fixed migration on a clean fork
tiger service fork prod-db-x8unopqbx3 --now
# Fork created: test-db-j8kqp2mn5x
tiger db connect test-db-j8kqp2mn5x \
  -f migrations/0042_optimize_user_lookups_v2.sql
# Success! Migration completed in 4m 23s
# New index improves query performance by 85%
# Clean up
tiger service delete test-db-j8kqp2mn5x
tiger service delete test-db-k3mxp9wq7r
Now you merge with confidence.
For CI/CD, add this to your GitHub Actions workflow and every migration gets tested against production data before merge:
- name: Fork Database
  id: fork
  uses: timescale/fork-service@v1
  with:
    project_id: ${{ secrets.TIGERDATA_PROJECT_ID }}
    service_id: ${{ secrets.PROD_SERVICE_ID }}
    api_key: ${{ secrets.TIGERDATA_API_KEY }}
    forking-strategy: last-snapshot
    cleanup: true

- name: Test migration
  env:
    DATABASE_URL: postgres://tsdbadmin:${{ steps.fork.outputs.initial_password }}@${{ steps.fork.outputs.host }}:${{ steps.fork.outputs.port }}/tsdb?sslmode=require
  run: alembic upgrade head
The cleanup: true option automatically deletes the fork when the workflow completes, which makes it perfect for PR-based testing.
Every PR tests against production data.
Every migration is validated before it touches prod. Your CI/CD pipeline just became your safety net.
You have a slow query killing production performance. Your AI agent suggests three different optimization approaches, but testing on toy data won't reveal real-world performance characteristics. And for safety, you can't experiment directly on production.
Fork production and let your agent analyze the real thing:
# Fork production for optimization work
tiger service fork prod-db-x8unopqbx3 --now
# Fork created: optimize-db-m4xtp8wn2k
Now your AI agent can run EXPLAIN ANALYZE on actual data, test different index strategies, and benchmark improvements against real table sizes. All safely isolated from production.
❯ claude
> This query is slow in production:
SELECT u.*, COUNT(o.id) FROM users u
LEFT JOIN orders o ON u.id = o.user_id
WHERE u.created_at > '2024-01-01'
GROUP BY u.id;
⏺ Let me analyze this query...
Current Issues:
- No index on orders.user_id for the JOIN
- Missing index on users.created_at for the WHERE filter
- GROUP BY forcing sort on potentially millions of rows
Recommended Fix:
-- Index the join key so the LEFT JOIN no longer seq-scans orders
CREATE INDEX CONCURRENTLY idx_orders_user_id
ON orders(user_id);
-- Partial index for the date filter on users
CREATE INDEX CONCURRENTLY idx_users_created_at
ON users(created_at) WHERE created_at > '2024-01-01';
Expected improvement: ~90% faster execution by replacing sequential
scans with index scans on the join key and the date filter.
Want me to generate the full migration?
The agent creates the indexes on your fork and runs the query:
Before: 2,847ms
After: 127ms (95% improvement)
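If you want to reproduce the comparison yourself on the fork, it's the same query run under EXPLAIN before and after the indexes exist (a minimal sketch; your timings will depend on your data):
-- Run on the fork, not production; compare plans and timings
-- with and without the indexes in place
EXPLAIN (ANALYZE, BUFFERS)
SELECT u.*, COUNT(o.id)
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
WHERE u.created_at > '2024-01-01'
GROUP BY u.id;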
Perfect. Now validate the migration end-to-end on a fresh fork before deploying:
tiger service fork prod-db-x8unopqbx3 --now
# Fork created: test-db-n7xtp2wn4k
tiger db connect test-db-n7xtp2wn4k -f migrations/0043_optimize_user_query.sql
# ✓ Migration completed successfully
# Deploy with confidence
git add migrations/0043_optimize_user_query.sql
git commit -m "Optimize user query - tested on production fork"
Your agent analyzed real data distribution, tested multiple strategies, and verified the fix. All without touching production. Production-tested solutions in development time.
For both examples, you may have noticed we used the --now flag when forking the database. It's one of three forking strategies, each with trade-offs suited to different scenarios:
--now (Latest Data)
Creates a fork from the current database state. Every transaction up to the moment you run the command is included.
Use this when testing migrations that depend on the absolute latest data, debugging a production issue happening right now, or when AI agents need to analyze current state.
Trade-off: May take longer for very active databases (minutes instead of seconds)
tiger service fork prod-db-x8unopqbx3 --now
--last-snapshot (Fast Path)
Forks your database from the most recent snapshot (up to an hour old). Much faster for large databases.
Use this for CI/CD pipelines where speed matters, preview environments for feature review, iterative testing where a few minutes of stale data is fine, or when you're creating multiple forks rapidly.
Trade-off: Might miss very recent changes
tiger service fork prod-db-x8unopqbx3 --last-snapshot
--to-timestamp (Point-in-Time Recovery)
Fork from a specific point in time. Perfect for investigating issues that occurred at a specific moment or reproducing bugs from production.
tiger service fork prod-db-x8unopqbx3 --to-timestamp 2025-10-01T15:29:00Z
Use this when debugging an incident that happened at a specific time, testing how a migration would have performed against historical data, or reproducing a reported bug from a specific date.
Forks turn your database into a sandbox: safe, reproducible, disposable.
No more waiting until Monday for migrations you could deploy Friday. No more crossing your fingers when AI-generated optimizations touch production. No more "works on my machine" when your machine has 1,000 rows and production has 40 million.
Install the Tiger CLI:
# macOS
brew install timescale/tap/tiger
# Linux/WSL
curl -fsSL https://cli.tiger.cloud/install.sh | sh
# Authenticate
tiger auth login
Basic workflow:
# Create a new database service
tiger service create
# List your services
tiger service list
# Fork an existing database
tiger service fork <service-id> --now
# Use the fork for testing (requires psql to be installed)
tiger db connect <fork-id>
# Clean up when done
tiger service delete <fork-id>
With Agentic Postgres, every agent gets a playground, and every developer gets peace of mind.
Fork production. Let your agents cook. Validate everything. Then deploy.
It's Friday at 4pm. Your migration is tested against real data. You have the weekend free.
Ready to make Friday deploys boring? Install the Tiger CLI, add the GitHub Action to your workflows, or check out the documentation.
About the authors
Matty Stratton
Matty Stratton is the Head of Developer Advocacy and Docs at Tiger Data, a well-known member of the DevOps community, founder and co-host of the popular Arrested DevOps podcast, and a global organizer of the DevOpsDays set of conferences.
Matty has over 20 years of experience in IT operations and is a sought-after speaker internationally, presenting at Agile, DevOps, and cloud engineering focused events worldwide. Demonstrating his keen insight into the changing landscape of technology, he recently changed his license plate from DEVOPS to KUBECTL.
He lives in the Chicagoland area and has three awesome kids and two Australian Shepherds, whom he loves just a little bit more than he loves Diet Coke.
Jascha Beste
Jascha Beste is a Software Engineer on the agent team at Tiger Data, where he focuses on making AI development as smooth and intuitive as possible. Based in Berlin, he's the builder behind an internal AI agent that transformed how Tiger Data's sales team scores leads and researches prospects: think customized deep research that actually works. Now, he's bringing those lessons to developers everywhere, working on Tiger Data's CLI and developer experience, particularly around fast database forks that let teams iterate without friction.
Jascha's philosophy is simple: startups thrive on rapid iteration. He's passionate about building developer experiences and tooling so well-designed that they get out of your way and let you ship faster, not slower. It's this same mission that drives Kviklet, his side project and co-founded startup, which bridges the gap between smooth development, operations, and compliance.
Before joining Tiger Data, Jascha worked across various startups, always wearing many hats and feeling the pain of product orgs that slow down as they scale. When he's not optimizing developer workflows, you'll find Jascha climbing the League of Legends ranks as a diamond ADC or, when Berlin's fleeting summer arrives, out on the water sailing.
Noah Hein
Noah Hein is a Senior Product Marketing Engineer at TigerData, where he helps developers understand, adopt, and succeed with the fastest PostgreSQL platform for real‑time and analytical workloads. Day‑to‑day, he translates deep technical capabilities—like hypertables, hypercore compression, and continuous aggregates—into clear product narratives and customer stories that drive adoption and growth.
Before joining TigerData, Noah spent several years on the “builder” side of the house as both a founding engineer and an educator. He co‑created Latent Space’s three‑week AI Engineering Fundamentals course and has taught hundreds of engineers how to apply LLMs in production. Noah frequently speaks on AI‑data convergence topics; at the first ever AI Engineer Summit he led the “AI Engineering 101” workshop, walking participants through hands‑on projects.
Outside of work, Noah tries to help more people land jobs with his side project JobMosaic. When he’s not crafting launch posts, you’ll find him experimenting with edge‑AI devices, tinkering with homelab Postgres clusters, or giving impromptu botany lessons to anyone who will listen.