
The Best Databases for IoT in 2026: A Practical Comparison

By Tiger Data Team

Updated at Apr 7, 2026

Originally Published on Aug. 1, 2024

    A single industrial sensor sampling at one-second intervals generates 86,400 data points per day. Scale that to a fleet of 10,000 sensors and you are looking at 864 million rows daily. General-purpose databases were not designed for this write pattern. They handle it for a while, then they don't.
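    The arithmetic behind those numbers, as a quick sanity check you can run at any SQL prompt:

```sql
SELECT 24 * 60 * 60         AS samples_per_sensor_per_day,  -- 86,400
       24 * 60 * 60 * 10000 AS rows_per_day;                -- 864,000,000
```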

    IoT is a database problem in its own right. The workload has four defining characteristics:

    • High-frequency time-stamped writes that arrive continuously from devices.

    • Cardinality at device-ID scale, where every sensor has a unique identifier.

    • Long retention windows with aggressive compression requirements, because storage costs compound fast.

    • Frequent reads for dashboards and alerts that must query across time ranges without scanning every row.

    The pain point that separates serious IoT deployments from prototypes is context joining. Sensor readings alone are not useful. You need to join them with device metadata: firmware version, location, owner, calibration dates. Purpose-built time-series databases without SQL make this impossible or force you into a second database for the relational side. This is where PostgreSQL-based solutions have a structural advantage, and it is the reason Tiger Data built its time-series engine as a PostgreSQL extension rather than a standalone database.

    A note on perspective: Tiger Data makes a time-series database built on PostgreSQL. This guide reflects that perspective. Where Tiger Data is not the best fit, this guide says so.

    What to Look For in an IoT Database

    Before comparing specific databases, here are the criteria that matter most for IoT workloads. Each one appears in the individual database profiles below.

    Ingestion throughput. Sustained write rates (rows per second) matter more than peak latency for IoT. Most IoT devices communicate over MQTT, the dominant protocol for constrained devices and low-bandwidth networks. Whether your database integrates with MQTT brokers natively or requires an intermediary like Telegraf determines how much pipeline engineering sits between your devices and your data.

    Cardinality handling. High-cardinality tag sets (unique device IDs, sensor serial numbers, location codes) cause index bloat in traditional time-series databases. InfluxDB's documented cardinality limitations in versions 1.x and 2.x are the most visible example of this failure mode. Any database you choose should handle millions of unique series without requiring workarounds.

    Data compression. Sensor data compresses well because it is repetitive and predictable. When evaluating any sensor data database, compression ratios directly determine how much history you can keep online and what your storage bill looks like. Tiger Data's columnstore compression achieves up to 98% size reduction in production deployments, and customer benchmarks consistently show 87-97% compression across IoT workloads.

    Query language. SQL vs. Flux vs. InfluxQL vs. proprietary. The developer community has a clear preference: SQL is the query language most teams already know, most tools already support, and most hires already have experience with. InfluxDB's move toward SQL in version 3.0 after deprecating Flux confirms this trend.

    Schema flexibility. IoT devices evolve. New sensors get added, firmware updates change payload structures, device types expand. Rigid schemas break production deployments. PostgreSQL's ALTER TABLE and Tiger Data's hypertable design handle schema evolution without downtime. Some alternatives require schema redesigns when device types change.
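    As a concrete sketch, schema evolution on a hypertable is ordinary DDL (the table and column names here are hypothetical):

```sql
-- New firmware starts reporting humidity; add the column in place.
ALTER TABLE sensor_readings ADD COLUMN humidity DOUBLE PRECISION;
-- Existing rows return NULL for the new column; updated devices populate it.
```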

    Retention policies and downsampling. Raw sensor data at one-second intervals is useful for troubleshooting but expensive to store indefinitely. Tiger Data offers continuous aggregates for automatic downsampling and chunk-based retention policies for rolling expiration. These are native features. Note which alternatives have equivalents and which require external tooling.

    Deployment model. Fully managed cloud vs. self-hosted changes the operational math. Tiger Cloud handles infrastructure, backups, and upgrades. For teams running IoT at scale, the operational overhead of self-hosting a database is real engineering cost that does not appear on the sticker price.

    Ecosystem fit. Grafana, Telegraf, Home Assistant, Node-RED. These integrations are community-driven, but they determine adoption velocity. A database that works with your existing monitoring and visualization stack saves weeks of integration work.

    These criteria apply to each database profiled below.

    The Best Databases for IoT: An Overview

    "Best" depends on your workload, your team's SQL expertise, and whether you need managed cloud or prefer self-hosting. A two-person startup running Home Assistant sensors has different requirements than an energy company monitoring 10,000 field devices across three continents. The comparison table below provides a starting point. The detailed profiles that follow apply the evaluation criteria from the previous section to each database individually.

    | Database | Best For | Query Language | Deployment | Open Source |
    |---|---|---|---|---|
    | Tiger Cloud | Fleet IoT, SQL teams, mixed workloads | SQL (PostgreSQL) | Managed cloud | Yes (TimescaleDB) |
    | InfluxDB | InfluxDB-native stacks | InfluxQL / SQL (v3) | Cloud + self-hosted | Partially |
    | QuestDB | High-throughput ingestion, OSS | SQL | Self-hosted / Enterprise BYOC | Yes |
    | CrateDB | Distributed SQL, industrial | SQL | Self-hosted / cloud | Yes |
    | TDengine | Industrial IoT, edge | SQL-like (TDengine SQL) | Self-hosted / edge | Yes |
    | Apache IoTDB | Enterprise industrial IoT | IoTQL / SQL | Self-hosted | Yes |
    | MongoDB | Document-model IoT, flexible schema | MQL | Managed + self-hosted | Partially |

    Grafana Cloud and Prometheus are not databases. They are a metrics and visualization layer often deployed alongside these databases. If you arrived here from a Grafana context, you are likely looking for a backend data store. Tiger Cloud and InfluxDB both work as Grafana data sources.

    Note on AWS Timestream: AWS Timestream LiveAnalytics was closed to new customers in June 2025 and is no longer available for new IoT deployments. Amazon Timestream for InfluxDB (Amazon's managed InfluxDB service) is the surviving AWS-native time-series option.

    Tiger Cloud

    Tiger Data is a managed PostgreSQL-based time-series database built on the open-source TimescaleDB extension. Full disclosure: this is our product, and this guide reflects that perspective.

    What it is: A fully managed cloud service that extends PostgreSQL with time-series primitives. The open-source TimescaleDB extension is free for self-hosting. Tiger Cloud adds managed infrastructure, automated backups, and consumption-based pricing with independent compute and storage scaling.

    Best use case: Fleet-scale IoT where SQL joins across device metadata are required. Energy and utilities telemetry. Any team that wants to avoid managing database infrastructure while keeping full SQL and PostgreSQL ecosystem access.

    IoT-specific features:

    • Hypertables partition sensor data by time automatically. You create a standard PostgreSQL table, convert it to a hypertable, and TimescaleDB handles chunk management under the hood. Queries stay bounded as data grows.

    • Continuous aggregates pre-compute rollups for dashboard queries. Instead of re-scanning millions of raw sensor readings for "average temperature over the last 7 days by device," the aggregate updates incrementally. Hierarchical continuous aggregates let you build hourly rollups from raw data, then daily rollups from the hourly layer.

    • Columnstore compression achieves up to 98% size reduction for sensor data. 

    • Chunk-based data retention drops data older than a configured threshold automatically. No manual cleanup scripts.

    • Tiered storage moves older chunks to low-cost object storage while keeping them queryable via SQL.
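    The features above map to a handful of SQL calls. A minimal sketch, assuming the TimescaleDB extension is installed and using a hypothetical `conditions` table:

```sql
-- Standard PostgreSQL table for sensor readings (names are illustrative).
CREATE TABLE conditions (
  time        TIMESTAMPTZ NOT NULL,
  device_id   TEXT        NOT NULL,
  temperature DOUBLE PRECISION
);

-- Convert it to a hypertable; TimescaleDB manages time-based chunks from here.
SELECT create_hypertable('conditions', 'time');

-- Continuous aggregate: hourly averages per device, refreshed incrementally.
CREATE MATERIALIZED VIEW conditions_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket, device_id;

-- Compress chunks older than 7 days; drop raw data after 90 days.
ALTER TABLE conditions SET (timescaledb.compress);
SELECT add_compression_policy('conditions', INTERVAL '7 days');
SELECT add_retention_policy('conditions', INTERVAL '90 days');
```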

    MQTT ingestion path: The most common production pipeline is Telegraf with the MQTT Consumer input plugin writing to Tiger Cloud via the PostgreSQL output or InfluxDB line protocol. Telegraf is the de facto IoT ingestion standard (InfluxData built it; it works equally well with Tiger Cloud). For a detailed implementation guide, see From MQTT to SQL: A Practical Guide to Sensor Data Ingestion.
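    A minimal Telegraf configuration of that shape might look like the following sketch (broker address, topic, and connection string are placeholders; plugin options per the Telegraf plugin documentation):

```toml
[[inputs.mqtt_consumer]]
  servers = ["tcp://broker.example.com:1883"]
  topics = ["sensors/+/telemetry"]
  data_format = "json"

[[outputs.postgresql]]
  connection = "host=<tiger-cloud-host> port=5432 user=tsdbadmin dbname=tsdb sslmode=require"
```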

    Production IoT customers: Energy and telemetry companies including NextEra, Octave, Easee, and Flogistix run Tiger Data in production for fleet-scale sensor monitoring.

    Strengths: Full SQL with JOINs across device metadata and sensor readings in a single query. PostgreSQL ecosystem access (pgvector for ML pipelines, PostGIS for location data). No schema migration pain. Managed operations. Mutable time-series data (updates and deletes work, unlike write-once databases).

    Limitations: Not the best choice if your team is deeply invested in InfluxDB-native tooling with no SQL expertise and no plans to migrate. Cloud-first. Teams requiring air-gapped on-premises deployment should evaluate the open-source TimescaleDB path separately.

    InfluxDB

    Version fragmentation is the first thing to understand. InfluxDB 1.x, 2.x, and 3.0 are architecturally different products. Cloud Serverless and Cloud Dedicated are separate offerings. Teams migrating between versions should research compatibility carefully before committing.

    What it is: A purpose-built time-series database owned by InfluxData. 

    Pricing: Open core. Cloud Serverless and Cloud Dedicated available. Self-hosted options exist for older versions.

    Query language: InfluxQL for 1.x. Flux for 2.x (now being deprecated). SQL support in 3.0.

    Best use case: Teams already running InfluxDB 1.x or 2.x where migration cost outweighs the benefits of switching. Telegraf-native pipelines tightly coupled to InfluxDB-specific inputs.

    Strengths: Strong brand recognition. The Telegraf ecosystem is the largest collection of input plugins for data collection, and it works with most databases on this list (not just InfluxDB). MQTT Telegraf plugins are well-maintained. Large community. Strong documentation.

    Limitations: Cardinality limits in 1.x and 2.x are well-documented. Community reports significant performance degradation at high unique tag cardinalities, which is a direct problem for IoT workloads where every device has a unique ID. Flux created a skills silo and is now being deprecated. InfluxDB 3.0 is a substantial rewrite, and migration from 2.x is non-trivial. Data deletion is difficult in older versions, which is a compliance problem for IoT workloads that require GDPR deletions or data corrections. No native JOIN across device metadata tables.

    InfluxDB is a legitimate choice for teams already using it. The limitations above are trade-offs, not disqualifications. For a deeper technical comparison, see TimescaleDB vs. InfluxDB.

    QuestDB

    What it is: An open-source SQL time-series database from QuestDB with a column-oriented storage engine built for high-throughput ingestion.

    Pricing: Open-source self-hosted. QuestDB Enterprise is available as a BYOC (Bring Your Own Cloud) managed option. The previous QuestDB Cloud managed service has been discontinued.

    Query language: Standard SQL with time-series extensions. No proprietary query language.

    Best use case: Teams prioritizing raw ingestion speed and open-source self-hosting. QuestDB has a strong tutorial ecosystem for MQTT, Raspberry Pi, and Arduino projects, making it a solid entry point for IoT prototyping and smaller deployments.

    Strengths: QuestDB publishes competitive ingestion benchmarks, and community reports suggest it performs well at high write rates. SQL-native from the start, so there is no query language migration path to worry about. Clean API. Good developer experience.

    Limitations: Smaller ecosystem and community than InfluxDB or Tiger Data. No native JOIN to external metadata sources without extension work, which means joining sensor readings with device metadata requires application-level logic or a second database. The Enterprise BYOC model requires more operational involvement than a fully managed service like Tiger Cloud or InfluxDB Cloud.

    CrateDB

    What it is: A distributed SQL database from CrateDB with PostgreSQL wire-protocol compatibility, targeting IoT and machine data at scale.

    Pricing: Open-source self-hosted. CrateDB Cloud (managed) available.

    Query language: Standard SQL.

    Best use case: Distributed IoT workloads where horizontal sharding across multiple nodes is a hard requirement. Industrial and manufacturing use cases with large datasets spread across regions.

    Strengths: Distributed-first architecture. Horizontal scaling works without deep operator expertise. PostgreSQL-compatible SQL means teams familiar with PostgreSQL can get started quickly. Handles very large datasets across nodes. Good fit for multi-region industrial deployments.

    Limitations: PostgreSQL compatibility is not complete. Some PostgreSQL features and extensions (including pgvector for ML pipelines and PostGIS for location data) are not available. Tiger Data's continuous aggregates and columnstore compression have no CrateDB equivalent. The ecosystem and documentation depth trail InfluxDB and Tiger Data. CrateDB has been publishing IoT comparison content actively (most recently in February 2026), but community traction remains limited.

    TDengine

    What it is: An open-source database from TDengine explicitly targeting Industrial IoT and edge computing deployments. It originated with a Chinese enterprise vendor (context, not a disqualifier) and has a growing international presence.

    Pricing: Open source + enterprise edition.

    Query language: TDengine SQL, a SQL dialect that covers most common operations but is not ANSI SQL-compliant.

    Best use case: Industrial IoT with native MQTT and OPC UA protocol support. Edge computing deployments where data needs to be processed locally before syncing to a central database. Manufacturing, energy, and utilities at the protocol level.

    Strengths: Native MQTT and OPC UA support without requiring an intermediary. Built-in edge-to-cloud replication. The "supertable" concept handles high-cardinality device data by defining a template schema that each device inherits. Explicit IIoT design throughout.

    Limitations: Smaller Western ecosystem. TDengine SQL is not standard SQL, which means existing PostgreSQL or MySQL tooling does not transfer directly. Fewer integrations with Western observability tooling. Grafana integration exists but is less mature than the PostgreSQL and InfluxDB data source plugins that Tiger Data and InfluxDB rely on. The enterprise support model is less established outside Asia.

    Apache IoTDB

    What it is: An enterprise-grade, open-source time-series database designed specifically for Industrial IoT, governed as an Apache Software Foundation project.

    Pricing: Open source. Enterprise support available via commercial vendors.

    Query language: IoTQL and SQL.

    Best use case: Large-scale enterprise industrial deployments in manufacturing, utilities, and transportation. Organizations that require Apache governance and open-source provenance for procurement or compliance.

    Strengths: Apache Software Foundation governance provides a credibility signal for enterprise procurement. Handles very large-scale enterprise industrial data. Active development. Strong adoption in Chinese manufacturing and energy enterprises.

    Limitations: Steep learning curve. IoTQL is non-standard, so existing SQL skills do not transfer directly. Western community and documentation are smaller than alternatives on this list. Operational complexity for self-hosting is higher than managed alternatives.

    When to Consider MongoDB or Other General-Purpose Databases

    Not every IoT project needs a purpose-built time-series database.

    MongoDB works when: IoT data payloads are heavily document-oriented with irregular, schema-flexible structures. The team already has MongoDB expertise. Query patterns are document-retrieval-first rather than time-range aggregation-first. Write volume is moderate (hundreds of rows per second, not millions).

    MongoDB struggles when: Write throughput exceeds hundreds of thousands of rows per second. Retention compression is critical for cost control. Time-range queries and aggregations dominate the access pattern. These workloads push MongoDB beyond its design center.

    A note on Prometheus and Grafana: These are monitoring and visualization tools, not IoT databases. If you arrived here from a Grafana setup, what you are looking for is a backend data store. Tiger Cloud and InfluxDB both connect to Grafana as data sources. Tiger Data uses the PostgreSQL data source plugin, which works with Grafana's time-series panel types out of the box.

    MQTT and IoT Database Ingestion

    MQTT (originally short for MQ Telemetry Transport) is a lightweight publish-subscribe protocol designed for constrained devices and low-bandwidth networks. It is the dominant protocol for IoT sensor data transmission. Devices publish to topics; a broker (Mosquitto, EMQX, HiveMQ) receives the messages; and a subscriber writes the data to the database.

    Getting MQTT data into your database is a solved problem, but the specific path varies by database.

    Telegraf pattern (most common). Telegraf's MQTT Consumer input plugin subscribes to your MQTT broker topics, parses payloads, batches messages, and writes to Tiger Cloud via the PostgreSQL output plugin or InfluxDB line protocol. This is the most battle-tested path for production IoT. Telegraf handles connection management, buffering, and retry logic. The MQTT + Telegraf + Tiger Cloud stack is used in production by energy telemetry customers and is a supported path. For a step-by-step implementation guide, see From MQTT to SQL.
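    A minimal telegraf.conf sketch of this pattern. The broker address, topic filter, payload layout, and connection string are all placeholders to adapt:

```toml
[[inputs.mqtt_consumer]]
  servers = ["tcp://mqtt.example.com:1883"]   # placeholder broker
  topics = ["sensors/#"]
  data_format = "json_v2"                     # match your payload format

  [[inputs.mqtt_consumer.json_v2]]
    [[inputs.mqtt_consumer.json_v2.field]]
      path = "value"
      type = "float"

[[outputs.postgresql]]
  # Placeholder Tiger Cloud connection string.
  connection = "postgres://user:password@host:5432/tsdb?sslmode=require"
```

    Telegraf batches writes and retries on failure; for high-throughput fleets, tune metric_batch_size and metric_buffer_limit in the agent section.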

    Direct subscription pattern. Application code subscribes to the MQTT broker, parses payloads, and writes via a PostgreSQL client library. More flexible than Telegraf, but you own connection management, buffering, and error handling.
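    A minimal sketch of the direct pattern in Python, using paho-mqtt and psycopg2 as stand-ins. The topic layout (sensors/<device_id>/<metric>), the JSON payload shape, the readings table, and both connection strings are assumptions to adapt:

```python
import json
from datetime import datetime, timezone

def parse_payload(topic: str, payload: bytes) -> tuple:
    """Map one MQTT message to a (time, device_id, metric, value) row.

    Assumes topics shaped like 'sensors/<device_id>/<metric>' and JSON
    payloads like {"ts": 1700000000, "value": 21.5} -- adjust for your fleet.
    """
    _, device_id, metric = topic.split("/", 2)
    doc = json.loads(payload)
    ts = datetime.fromtimestamp(doc["ts"], tz=timezone.utc)
    return (ts, device_id, metric, float(doc["value"]))

if __name__ == "__main__":
    # Third-party wiring, kept out of the importable part:
    # pip install paho-mqtt psycopg2-binary
    import paho.mqtt.client as mqtt   # paho-mqtt 1.x constructor style below
    import psycopg2

    conn = psycopg2.connect("postgresql://user:password@host:5432/tsdb")
    conn.autocommit = True

    def on_message(client, userdata, msg):
        # Row-at-a-time INSERT for clarity; batch writes in production.
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO readings (time, device_id, metric, value) "
                "VALUES (%s, %s, %s, %s)",
                parse_payload(msg.topic, msg.payload),
            )

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("mqtt.example.com", 1883)
    client.subscribe("sensors/#")
    client.loop_forever()
```

    This is where the pattern's cost shows: reconnect logic, buffering during database outages, and batched inserts are all on you, which is exactly what Telegraf provides out of the box.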

    Which databases have native MQTT support:

    • TDengine has built-in MQTT and OPC UA connectors. No intermediary required.

    • InfluxDB uses Telegraf (built by InfluxData) as its primary MQTT ingestion path.

    • Tiger Cloud, QuestDB, and CrateDB all use Telegraf or direct subscription. No native MQTT broker integration.

    EMQX deserves a mention as the MQTT broker vendor with the strongest documentation for database integration. If you are evaluating MQTT brokers alongside databases, EMQX publishes integration guides for most databases on this list.

    IoT Data Retention and Compression

    At one-second intervals, a single sensor generates roughly 86,400 rows per day. A fleet of 10,000 sensors generates approximately 864 million rows per day. Uncompressed, this becomes a storage cost problem within weeks.
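    The arithmetic, with compression factored in (the 100-byte average row size is an assumption; measure your own schema):

```python
# One reading per second, per sensor.
ROWS_PER_SENSOR_PER_DAY = 24 * 60 * 60        # 86,400
FLEET_SIZE = 10_000

rows_per_day = ROWS_PER_SENSOR_PER_DAY * FLEET_SIZE   # 864,000,000 rows/day

def daily_storage_gb(row_bytes: int = 100, compression: float = 0.0) -> float:
    """Estimated storage per day in GB. row_bytes is an assumed average;
    compression is the fraction saved (0.95 = a 95% size reduction)."""
    return rows_per_day * row_bytes * (1.0 - compression) / 1e9

print(round(daily_storage_gb(), 1))                   # 86.4 GB/day uncompressed
print(round(daily_storage_gb(compression=0.95), 2))   # 4.32 GB/day at 95%
```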

    Three retention strategies handle this at production scale:

    Chunk-based expiration. Drop data older than a configured threshold automatically. Tiger Data's hypertables support this natively via retention policies. Set a policy once, and chunks older than your retention window are dropped on schedule. No cron jobs, no cleanup scripts.

    Continuous aggregates plus raw drop. The most common production pattern for dashboard-heavy IoT applications. Keep one-minute or one-hour rollups indefinitely via continuous aggregates. Drop raw data after 30-90 days. Dashboards query the aggregates for "average temperature over the last 7 days by device" without scanning millions of raw rows.

    Tiered storage. Keep recent data in high-performance storage. Move older chunks to low-cost object storage automatically while keeping them queryable via SQL. Tiger Cloud supports this through its tiered storage architecture.
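    The first two strategies combine naturally. A sketch in TimescaleDB SQL, assuming a hypothetical readings hypertable with time, device_id, and temperature columns:

```sql
-- Hourly rollup, kept indefinitely; refreshes incrementally.
CREATE MATERIALIZED VIEW readings_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM readings
GROUP BY bucket, device_id;

-- Keep the rollup current on a schedule.
SELECT add_continuous_aggregate_policy('readings_hourly',
    start_offset      => INTERVAL '3 hours',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour');

-- Raw chunks dropped on a rolling 90-day window.
SELECT add_retention_policy('readings', INTERVAL '90 days');
```

    Dashboards then query readings_hourly, and the 90-day raw window exists only for drill-down and corrections.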

    Compression comparison across databases: Tiger Data's columnstore compression achieves up to 98% size reduction for repetitive sensor data, with production IoT customers consistently reporting 87-97% compression ratios. InfluxDB's TSM compression is effective, but cardinality limits in 1.x and 2.x can force data restructuring that offsets compression gains. QuestDB uses columnar storage with competitive compression. CrateDB's compression is adequate for most workloads but lacks the continuous aggregate equivalent for reducing query load on rolled-up data.

    For dashboards querying historical sensor data, continuous aggregates eliminate the need to scan raw data. The aggregate pre-computes the result, updates incrementally as new data arrives, and returns query results in milliseconds rather than seconds.

    Decision Framework: How to Choose

    Choose Tiger Cloud if:

    • Your team knows SQL and wants to avoid learning a proprietary query language

    • Sensor data needs to JOIN with device metadata, user accounts, or location data in the same query

    • You need a fully managed cloud service with committed SLAs and no database ops overhead

    • You are building a telemetry database for fleet operations, energy monitoring, or industrial equipment where compression and long-term retention are priorities

    • You want one database for time-series data, vector search (pgvector), and relational data

    • Your team is building on PostgreSQL already and wants to add time-series capability without changing the stack

    Choose InfluxDB if:

    • Your team is already running InfluxDB 1.x or 2.x and migration cost outweighs the benefits

    • The pipeline is Telegraf-native and tightly coupled to InfluxDB-specific inputs

    • You are evaluating InfluxDB 3.0 and want to stay within the InfluxData ecosystem

    • SQL is not yet a priority (though InfluxDB 3.0 is moving toward SQL)

    Choose QuestDB if:

    • Maximum ingestion throughput is the primary requirement

    • The team wants open-source self-hosted without vendor lock-in

    • SQL is required but fully managed cloud is not a priority (QuestDB Enterprise uses a BYOC model)

    • Budget is constrained and open-source licensing matters

    Choose TDengine if:

    • The workload is Industrial IoT with native MQTT or OPC UA at the edge

    • Edge-to-cloud replication is a hard requirement

    • The team is comfortable with TDengine SQL and a less Western-centric ecosystem

    Choose Apache IoTDB if:

    • Apache Software Foundation governance is a procurement or compliance requirement

    • The scale is enterprise industrial (manufacturing, utilities, transportation)

    • The team has resources for operational complexity and a longer learning curve

    Choose a general-purpose database (MongoDB, PostgreSQL without TimescaleDB) if:

    • IoT payloads are heavily irregular and document-oriented, making column schemas difficult

    • Write throughput is low (hundreds of rows per second, not millions)

    • The team has deep existing expertise in the general-purpose database and IoT time-series is a secondary use case

    Migrating to Tiger Cloud from a Legacy Historian or TSDB

    Migrations are a real pain point for industrial IoT teams. Here are the four most common paths.

    From InfluxDB (1.x or 2.x). Telegraf can write simultaneously to InfluxDB and Tiger Cloud during a parallel run. Schema mapping from InfluxDB measurements and tags to hypertable columns requires planning. This is not an automatic conversion. The InfluxDB line protocol is supported by Tiger Cloud, which simplifies the ingestion side.
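    The parallel run can be sketched as one Telegraf agent with two outputs (hosts and credentials are placeholders; 2.x deployments would use the outputs.influxdb_v2 plugin instead):

```toml
# Existing InfluxDB 1.x target.
[[outputs.influxdb]]
  urls = ["http://influxdb.internal:8086"]
  database = "telemetry"

# New Tiger Cloud target, written to in parallel.
[[outputs.postgresql]]
  connection = "postgres://user:password@host:5432/tsdb?sslmode=require"
```

    Run both outputs until the Tiger Cloud side is validated against production queries, then retire the InfluxDB output.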

    From OSIsoft PI or Wonderware (legacy historians). These are OPC UA-native systems. TDengine and Apache IoTDB have native OPC UA connectors. Tiger Cloud requires an intermediary: Telegraf with the OPC UA input plugin, or a custom bridge. This is a real engineering effort, not a weekend migration. Plan accordingly.

    From PostgreSQL (without TimescaleDB). The simplest migration path. Existing tables can be converted to hypertables. Schema and data stay in place. The main change is configuring the time-based partitioning strategy and enabling compression.
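    The conversion itself is one call; migrate_data moves the rows already in the table into time-partitioned chunks (table and column names are illustrative):

```sql
-- Convert an existing PostgreSQL table in place.
SELECT create_hypertable('readings', 'time', migrate_data => TRUE);
```

    For large tables this rewrites existing data and can take a while, so schedule it in a maintenance window; compression and retention policies can be added afterward.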

    From MongoDB. Requires schema design work to move from the document model to a relational/time-series schema. Worth doing if write throughput is becoming a bottleneck or time-range queries are slow, but it is not trivial.

    For enterprise migration planning, see Tiger Data's migration documentation and the energy and telemetry page for production deployment references.

    FAQ: Best Database for IoT

    What is the best database for IoT sensor data?

    It depends on workload, existing stack, future scalability needs, and team context. Tiger Data’s Tiger Cloud (built on PostgreSQL) is the strongest multi-model (general-purpose and time-series) choice for teams that need SQL JOINs across sensor and device metadata, long-term retention with compression, and a managed cloud service. InfluxDB is the default for teams already invested in that ecosystem. QuestDB is the strongest open-source option for raw ingestion throughput.

    Can I use PostgreSQL for IoT data?

    Yes. PostgreSQL with the TimescaleDB extension (the basis of Tiger Data) is a well-established pattern for IoT. The Home Assistant community has broad adoption of the TimescaleDB + PostgreSQL + Grafana stack for sensor logging. Standard PostgreSQL without TimescaleDB works at low volumes but shows performance degradation at high write rates and large data volumes. See Storing IoT Data: 8 Reasons Why You Should Use PostgreSQL for the full technical case.

    What database does MQTT use?

    MQTT is a protocol, not a database. It handles message transport from devices to a broker. The broker (Mosquitto, EMQX, HiveMQ) then needs to write to a database. Telegraf is the most common bridge: it subscribes to the MQTT broker and writes to Tiger Cloud, InfluxDB, QuestDB, or other targets via configured output plugins.

    Is InfluxDB good for IoT?

    InfluxDB is a common choice for IoT, particularly for teams already using Telegraf. However, InfluxDB has documented cardinality limitations in versions 1.x and 2.x, Flux (the 2.x query language) is being deprecated, and InfluxDB 3.0 is a significant rewrite with a non-trivial migration path from 2.x. Teams evaluating InfluxDB for new deployments should research the version differences carefully before committing.

    What is the best open-source database for IoT?

    QuestDB and TimescaleDB (the open-source project underlying Tiger Data) are the two strongest options. QuestDB optimizes for ingestion throughput with SQL. TimescaleDB optimizes for compression, continuous aggregates, and SQL compatibility with the full PostgreSQL ecosystem. TDengine and Apache IoTDB are strong options for Industrial IoT specifically.

    How do I store MQTT data in a database?

    The most common production pattern is: MQTT broker (Mosquitto or EMQX) to Telegraf (MQTT Consumer input plugin) to your database (Tiger Cloud, InfluxDB, QuestDB). Telegraf handles topic subscription, payload parsing, and batched writes. For teams that want more control, direct MQTT subscription in application code writing via a PostgreSQL or HTTP client is also viable. See From MQTT to SQL: A Practical Guide to Sensor Data Ingestion for the full implementation walkthrough.

    How do I handle IoT data retention without exploding storage costs?

    The standard pattern is a two-tier strategy: keep raw data for 30-90 days (depending on compliance requirements), maintain continuous aggregates (hourly or daily rollups) indefinitely, and drop raw chunks on a rolling schedule. Tiger Data's retention policies and continuous aggregates automate this. For longer-term cost reduction, tiered storage moves older chunks to object storage automatically.

    What is the best database for Industrial IoT (IIoT)?

    TDengine and Apache IoTDB are both built explicitly for IIoT with native OPC UA and MQTT support. Tiger Data is a strong choice for IIoT teams that prioritize SQL, cloud-managed operations, and integration with analytics tools. Energy and utilities companies including NextEra and Easee run Tiger Data in production. The choice depends on whether protocol-native edge capabilities (TDengine, IoTDB) or SQL analytics depth (Tiger Data) is the primary requirement.

    Can I use MongoDB for IoT?

    MongoDB works for IoT when data payloads are document-oriented and irregular, write volume is moderate, and the team is already on MongoDB. It is not well-suited for high-frequency time-series ingestion, compression-critical retention, or time-range aggregation queries at scale. Teams hitting MongoDB performance limits on IoT workloads typically migrate to a purpose-built TSDB. For a deeper comparison, see How to Store Time-Series Data in MongoDB and Why That's a Bad Idea.

    What happened to AWS Timestream?

    AWS Timestream LiveAnalytics was closed to new customers in June 2025 and is no longer an option for new IoT deployments. Amazon Timestream for InfluxDB (also known as Amazon Managed Service for InfluxDB) is the surviving AWS-managed time-series option. Teams currently on Timestream LiveAnalytics should evaluate migration paths.

    Does Tiger Data support Grafana?

    Yes. Tiger Data connects to Grafana via the PostgreSQL data source plugin. This is the same plugin used by the Home Assistant community's TimescaleDB + Grafana setup. Dashboards query Tiger Cloud via standard SQL, and time-series panel types work with Tiger Data's time-indexed hypertable schema.

    What is a hypertable?

    A hypertable is Tiger Data's (and TimescaleDB's) core abstraction for time-series data. It is a standard PostgreSQL table that is automatically partitioned by time under the hood. From a query perspective, it behaves like a regular table. SQL queries work unchanged. The partitioning enables fast time-range queries, efficient chunk-level compression, and chunk-based retention policies without manual partition management.

    Tiger Data is the company behind TimescaleDB, the open-source PostgreSQL extension for time-series data. Try Tiger Cloud free or explore the best time-series databases compared for a broader comparison beyond IoT.