Nov 05, 2025

Posted by Rajdeep Sharma
Mechademy monitors the critical assets of some of the world's largest oil, gas, and energy companies, including 6% of the world's LNG production, where every downtime event can cost millions.
Their flagship platform, Turbomechanica, fuses physics-based turbomachinery models with domain-informed machine learning to build hybrid digital twins of compressors, turbines, and full refrigeration trains. These twins continuously reconcile expected and observed behavior, detect early degradation, and prescribe fixes that improve uptime by 2–10% and deliver ~15× ROI across fleets exceeding 2.5 million horsepower of driver power.
In Mechademy’s early days, the mission was clear: build a platform capable of modeling the real-world behavior of complex industrial assets and orchestrating data between machine learning and physics-based engines to create true digital twins.
At that stage, the priority was speed and flexibility, not perfection. The focus was on enabling the rapid design of complex data structures that could represent equipment behavior and support the development of diagnostic hypotheses. The goal was to collect as much operational data as possible and experiment, learning how to evolve diagnostics that could automatically detect issues with varying signatures and temporal patterns.
Given that philosophy, MongoDB made perfect sense. Its flexible document model allowed rapid iteration on changing data structures without rigid schema constraints. Combined with the MERN stack, it empowered the team to move fast, prototype ideas quickly, and focus on data orchestration rather than database design.
However, as the diagnostics framework matured, the data model itself became a bottleneck. Diagnostic tests began requiring time-aligned data at multiple resolutions: 15-second raw streams, 1-minute aggregates, and hourly summaries, depending on the test type. Since MongoDB didn’t support time-series data natively at the time, the team implemented manual bucketing strategies to emulate time-series performance. Over time, these workarounds ballooned into deeply nested aggregation pipelines that were increasingly brittle and expensive to operate.
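To illustrate the pattern (the collection layout, field names, and pipeline below are hypothetical, not Mechademy’s actual schema): emulating multi-resolution time series in MongoDB typically means storing raw samples in per-sensor, per-hour bucket documents and re-aggregating them at query time with pipelines like this.

```python
# Illustrative sketch of manual time-series bucketing in MongoDB (pymongo).
# Names and values are hypothetical, not Mechademy's actual data model.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
readings = client.telemetry.sensor_buckets

# One document per sensor per hour, with raw 15-second samples nested inside.
bucket_doc = {
    "sensor_id": "compressor-7/discharge-temp",
    "bucket_start": datetime(2025, 11, 5, 10, 0, tzinfo=timezone.utc),
    "samples": [
        {"ts": datetime(2025, 11, 5, 10, 0, 0, tzinfo=timezone.utc), "value": 412.6},
        {"ts": datetime(2025, 11, 5, 10, 0, 15, tzinfo=timezone.utc), "value": 413.1},
        # ... one entry every 15 seconds
    ],
}
readings.insert_one(bucket_doc)

# Serving a diagnostic test that needs 1-minute averages means unwinding the
# nested samples and re-grouping them on every query.
pipeline = [
    {"$match": {"sensor_id": "compressor-7/discharge-temp"}},
    {"$unwind": "$samples"},
    {"$group": {
        "_id": {
            "sensor_id": "$sensor_id",
            "minute": {"$dateToString": {"date": "$samples.ts", "format": "%Y-%m-%dT%H:%M"}},
        },
        "avg_value": {"$avg": "$samples.value"},
    }},
    {"$sort": {"_id.minute": 1}},
]
one_minute_series = list(readings.aggregate(pipeline))
```

Every new resolution or test type adds more unwind-and-group stages of this kind, which is where the tuning and maintenance burden comes from.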
What had once been fast and flexible now required constant tuning and maintenance to remain performant.
The challenges weren’t just architectural; they were operational and financial. As diagnostic workloads scaled, MongoDB’s resource utilization skyrocketed. Even for small tenants processing around 10,000 tests every half hour, CPU utilization hovered above 95%, and query targeting (the ratio of documents scanned to documents returned) routinely exceeded 1,000.
To keep up, Mechademy vertically scaled clusters from M20 to M50 in just 6–8 months, but each upgrade only brought limited breathing room. Every new diagnostic capability demanded more complex queries and higher performance thresholds, leading to an unsustainable cycle of scaling and reengineering.
At that point, the team faced a critical decision: keep scaling and patching MongoDB, or move to a database built for time-series workloads from the ground up.
The move to Tiger Data wasn’t about replacing NoSQL; it was about realigning the data infrastructure with Mechademy’s evolving mission: to process massive time-series workloads efficiently, reliably, and at scale.
Tiger Cloud offered exactly what the platform needed: native time-series hypertables, continuous aggregates for serving multiple data resolutions, and built-in compression, all on standard PostgreSQL.
The transition also simplified the surrounding architecture. Mechademy introduced a unified data ingestion and orchestration layer, using DLT for structured data loading and Celery for distributed job scheduling, so that sensor data from multiple sources could be cleaned, transformed, and streamed directly into Tiger Cloud. From there, continuous aggregates and compression handled data retention and resolution without manual intervention.
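A minimal sketch of what such an ingestion layer can look like, assuming a Redis broker and illustrative table and source names (fetch_sensor_rows is a hypothetical extractor, not part of Mechademy’s codebase):

```python
# Sketch of a Celery task that uses dlt to load cleaned sensor rows into a
# PostgreSQL/TimescaleDB destination. Broker URL, table names, and the
# fetch_sensor_rows helper are assumptions for illustration only.
import dlt
from celery import Celery

app = Celery("ingestion", broker="redis://localhost:6379/0")


def fetch_sensor_rows(source: str) -> list[dict]:
    # Hypothetical extractor: pull and clean the latest readings from one
    # source (historian, OPC-UA gateway, flat files, etc.).
    return [
        {"ts": "2025-11-05T10:00:00Z",
         "sensor_id": "compressor-7/discharge-temp",
         "value": 412.6},
    ]


@app.task
def ingest_source(source: str) -> None:
    """Scheduled per source; dlt handles schema inference and loading."""
    pipeline = dlt.pipeline(
        pipeline_name="sensor_ingestion",
        destination="postgres",  # Tiger Cloud exposes a standard Postgres endpoint
        dataset_name="telemetry",
    )
    rows = fetch_sensor_rows(source)
    pipeline.run(rows, table_name="sensor_readings", write_disposition="append")
```

Each source becomes one scheduled task; once rows land in the database, continuous aggregates and compression take over aggregation and retention.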
What had once been a network of brittle pipelines became a single, predictable, self-managing system.
The results were immediate and transformative.
On the same tenant that once required an M50 MongoDB cluster, Mechademy now processes 10,000,000 diagnostic tests every half hour on an M20-equivalent TimescaleDB cluster.
CPU and memory utilization remain stable, maintenance overhead is near zero, and compression has drastically reduced storage costs.
Hypertables and continuous aggregates eliminated a massive amount of operational complexity. Adding a new diagnostic test with a new data-resolution requirement is now a simple configuration change, not a new service or migration plan.
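As a rough illustration of what that configuration change can look like (table, column, and policy parameters below are assumptions, not Mechademy’s actual setup), a new resolution amounts to one continuous aggregate definition plus a refresh policy against the Tiger Cloud Postgres endpoint:

```python
# Sketch of adding a new data resolution in TimescaleDB via psycopg2.
# All names and intervals are illustrative.
import psycopg2

conn = psycopg2.connect("postgresql://user:pass@host:5432/tsdb")
conn.autocommit = True  # continuous aggregates cannot be created inside a transaction
cur = conn.cursor()

# Raw 15-second readings live in a hypertable (created once at setup).
cur.execute("""
CREATE TABLE IF NOT EXISTS sensor_readings (
    ts        TIMESTAMPTZ NOT NULL,
    sensor_id TEXT        NOT NULL,
    value     DOUBLE PRECISION
);
""")
cur.execute("SELECT create_hypertable('sensor_readings', 'ts', if_not_exists => TRUE);")

# A new diagnostic test needs 10-minute data? Define one continuous aggregate.
cur.execute("""
CREATE MATERIALIZED VIEW IF NOT EXISTS sensor_10min
WITH (timescaledb.continuous) AS
SELECT time_bucket('10 minutes', ts) AS bucket,
       sensor_id,
       avg(value) AS avg_value,
       max(value) AS max_value
FROM sensor_readings
GROUP BY bucket, sensor_id;
""")

# Keep it refreshed automatically; compression handles the raw table's footprint.
cur.execute("""
SELECT add_continuous_aggregate_policy('sensor_10min',
    start_offset      => INTERVAL '1 hour',
    end_offset        => INTERVAL '10 minutes',
    schedule_interval => INTERVAL '10 minutes');
""")
cur.execute("""
ALTER TABLE sensor_readings SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'sensor_id');
""")
cur.execute("SELECT add_compression_policy('sensor_readings', INTERVAL '7 days');")
```

Diagnostics then query sensor_10min like any other table, which is the kind of continuous-aggregate read compared against the raw hypertable in the benchmarks below.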
| Query Type | Performance Improvement | Memory / Scan Reduction |
|---|---|---|
| Base table | 66% faster | Optimal memory usage |
| 1-minute Continuous Aggregate | 18% faster | 45% less data scanned |
| 10-minute Continuous Aggregate | 81% faster | Dramatic efficiency gains |
| 1-hour Continuous Aggregate | 95% faster | Outstanding planning |
Today, Tiger Data serves as the foundation for Mechademy’s time-series architecture, powering everything from real-time customer dashboards to large-scale analytics and diagnostics. The team continues to expand its use of compression, tiered storage, and continuous aggregates, and is exploring managed data-lake integrations as Tiger’s ecosystem evolves.
The shift from MongoDB to Tiger wasn’t just a technical migration. It was a strategic transformation. The migration allowed Mechademy to move from managing infrastructure to delivering intelligence, scaling diagnostics seamlessly while cutting costs and complexity.