Dec 04, 2025

This is an installment of our “Community Member Spotlight” series, in which we invite our customers to share their work, spotlight their success, and inspire others with new ways to use technology to solve problems.
In this edition, Danny Burrows, Vice President of Software Engineering at Flogistix by Flowco, an Oil & Gas vapor recovery business, shares how Tiger Data improved performance while drastically cutting costs for their sensor data collection and real-time analytics.
I’m Danny Burrows, Vice President of Software Engineering and Data Intelligence at Flogistix by Flowco. Flogistix is an Oil & Gas field service company focused on vapor recovery. We maximize well performance by capturing and processing fugitive emissions, turning them into usable substrates and keeping damaging gases out of the atmosphere, and data is central to how we do it.
What that means in practice is that we design, build, manufacture, and maintain a large fleet of compression equipment across U.S. oil and gas fields. Our equipment packages weigh anywhere from 8,000 to 250,000 pounds and operate in remote locations. My team supports ~250 field technicians and a fleet that’s grown to roughly 3,700 pieces of equipment. We ingest operational metrics continuously, approximately 100 data points every minute across the entire fleet, and then turn those data points into decisions that keep equipment healthy, production efficient, and crews safe.
Before Tiger Data, our data layer couldn’t keep up with how we operate.
We needed a system our team could run with existing skills, one that scaled for time-series data, exceeded our customer SLAs, and gave field technicians operational and historical data insights on the fly.
Our evaluation criteria for replacing our existing tech stack were simple and non-negotiable: data rolloff, aggregations, and compression. We needed clean data rolloff from hot to cool to cold so operations could understand current state instantly while longer-horizon reporting tolerated looser SLAs. We also had to package results and cut down on the noise: nobody wants minute-by-minute plots over six months, and we weren’t about to stream that volume over HTTP to the front end, so both precomputed and on-the-fly summaries were essential. And with the sheer quantity of telemetry we ingest, compression had to keep storage lean without hurting query performance.
In short: speed, performance, and cost, delivered through rolloff policies, smart aggregations, and aggressive compression.
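To make that concrete, here’s a rough sketch of how those three criteria map onto the TimescaleDB SQL interface that Tiger Data’s platform builds on. The table, columns, and intervals below are illustrative assumptions, not our production schema or settings.

```sql
-- Illustrative telemetry table (names and types are assumptions).
CREATE TABLE equipment_metrics (
    time    TIMESTAMPTZ      NOT NULL,
    unit_id TEXT             NOT NULL,
    metric  TEXT             NOT NULL,
    value   DOUBLE PRECISION,
    PRIMARY KEY (unit_id, metric, time)
);

-- Partition by time so chunks can be compressed and dropped independently.
SELECT create_hypertable('equipment_metrics', 'time');

-- Compression: segment by unit and metric so per-unit scans stay fast,
-- and compress chunks once they're a week old.
ALTER TABLE equipment_metrics SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'unit_id, metric',
    timescaledb.compress_orderby   = 'time DESC'
);
SELECT add_compression_policy('equipment_metrics', INTERVAL '7 days');

-- Rolloff: drop raw minute-level rows after a year; summaries live on below.
SELECT add_retention_policy('equipment_metrics', INTERVAL '12 months');

-- Precomputed summaries: hourly rollups that dashboards read instead of raw rows.
CREATE MATERIALIZED VIEW equipment_metrics_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       unit_id,
       metric,
       avg(value) AS avg_value,
       min(value) AS min_value,
       max(value) AS max_value
FROM equipment_metrics
GROUP BY bucket, unit_id, metric
WITH NO DATA;

-- Keep the rollup fresh on a schedule instead of recomputing on every query.
SELECT add_continuous_aggregate_policy('equipment_metrics_hourly',
    start_offset      => INTERVAL '3 days',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '30 minutes');
```

In this sketch, the raw rows roll off after a year while the hourly aggregate persists, so the aggregate effectively becomes the long-horizon record: the hot-to-cool-to-cold behavior we were after.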
Based on our evaluation criteria, Tiger Data uniquely fit the bill.

Here’s the high-level flow we implemented to keep the edge resilient and the cloud fast, with minimal operational overhead. It’s designed to buffer through spotty connectivity, compress and summarize efficiently, and isolate analytics from real-time ingest so dashboards stay responsive as the fleet scales.
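One part of that flow worth sketching is how buffered edge data gets replayed safely once connectivity returns. This isn’t our exact mechanism, just a common store-and-forward pattern: writes are idempotent, so a batch that only partially landed can simply be resent. It assumes the hypothetical equipment_metrics table above, whose composite primary key is what makes the replay safe.

```sql
-- Replaying a buffered batch after a connectivity gap (values are made up).
-- Rows that already landed on an earlier attempt are skipped, so the edge
-- agent can retry the whole batch without creating duplicates.
INSERT INTO equipment_metrics (time, unit_id, metric, value)
VALUES
    ('2025-12-04 10:01:00+00', 'unit-0421', 'discharge_pressure_psi', 182.4),
    ('2025-12-04 10:02:00+00', 'unit-0421', 'discharge_pressure_psi', 181.9)
ON CONFLICT (unit_id, metric, time) DO NOTHING;
```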
Within the first month of real usage, the combination of compression and continuous aggregates (CAGGs) eliminated our worst pain points around cost and query latency. Reliability of inbound data jumped (no more “mystery gaps”), and long-range queries became practical because teams pivoted to CAGGs for most views while raw data remained available for deep dives.
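As a rough illustration of those two paths, using the hypothetical hourly aggregate sketched earlier: dashboards read the rollup, and investigations drop down to the raw hypertable.

```sql
-- Long-range dashboard view: six months of hourly rollups for one unit,
-- served from the continuous aggregate instead of raw minute-level rows.
SELECT bucket, avg_value, max_value
FROM equipment_metrics_hourly
WHERE unit_id = 'unit-0421'
  AND metric = 'discharge_pressure_psi'
  AND bucket > now() - INTERVAL '6 months'
ORDER BY bucket;

-- Deep dive: raw data for a narrow window when investigating a specific event.
SELECT time, value
FROM equipment_metrics
WHERE unit_id = 'unit-0421'
  AND metric = 'discharge_pressure_psi'
  AND time BETWEEN '2025-12-01 00:00+00' AND '2025-12-02 00:00+00'
ORDER BY time;
```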
By working with Tiger Data, we modernized our data pipeline to ingest at the edge, keep dashboards fast with continuous aggregates, and retain raw data for deep analysis. With native compression and tiering, our data reliability jumped from ~95% to well above 99%, and by using standard SQL on Postgres, we’ve cut costs and boosted developer velocity.
We’ll keep tightening CAGG refresh policies, add indexes on our most-hit aggregates, and continue tuning compression windows as ingest patterns evolve. We’re also exploring new data tiering strategies and broader replica usage for heavy analytics.
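In TimescaleDB terms, that ongoing tuning looks roughly like the following sketch; the intervals and index definition are placeholders, and a common way to change a refresh or compression policy is to remove it and re-add it with the new settings.

```sql
-- Tighten the refresh cadence on the hourly rollup.
SELECT remove_continuous_aggregate_policy('equipment_metrics_hourly');
SELECT add_continuous_aggregate_policy('equipment_metrics_hourly',
    start_offset      => INTERVAL '1 day',
    end_offset        => INTERVAL '15 minutes',
    schedule_interval => INTERVAL '15 minutes');

-- Index the aggregate the way dashboards actually filter it.
CREATE INDEX ON equipment_metrics_hourly (unit_id, metric, bucket DESC);

-- Adjust the compression window as ingest patterns change.
SELECT remove_compression_policy('equipment_metrics');
SELECT add_compression_policy('equipment_metrics', INTERVAL '3 days');
```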