Jul 23, 2025
Speedcast is a leading communications and IT services provider, delivering critical communications services to the Maritime, Energy, Mining, Media, Telecom, Cruise, NGO, Government, and Enterprise sectors. To ensure reliable connectivity for customers, Speedcast provides services through hybrid network solutions, offering a combination of low-Earth-orbit (LEO) satellites, geostationary (GEO) satellites, and terrestrial equipment that constantly reports on link quality, bandwidth usage, latency, and device health. Speedcast uses its recently enhanced SIGMA network management platform to deliver connectivity and other services that meet strict service-level agreements. To that end, Speedcast must ingest, analyze, and visualize a massive volume of telemetry data in real time. That’s where Tiger Cloud comes in.
As the Director of Technical Architecture, I oversee the end-to-end data architecture that powers Speedcast’s global infrastructure, from ingesting terminal information (e.g., Starlink and Eutelsat OneWeb data) to building real-time insights into the operational platforms that allow us to optimize and boost overall network reliability.
In the past, combining satellite feeds, IoT telemetry, and terrestrial-link metrics meant juggling separate geospatial, relational, and time-series data sources, in addition to a patchwork of aging SCADA systems. Combining siloed data sources required fragile Extract, Transform, Load (ETL) pipelines that delayed insight and increased operational risk.
Today, we ingest about 20 gigabytes per hour, and that number keeps climbing as we continue to grow the business. Instead of relying on brittle ETL pipelines, we stream every data point in real time via Confluent Cloud Kafka into Tiger Cloud. That gives us a unified back end for tools such as Speedcast’s customer-facing Compass Portal, delivering real-time updates and loading over a million rows of data for visualization in mere seconds. In practice, Tiger Cloud gives us one write path and one query interface.
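To make that write path concrete, here is a minimal sketch of how streamed telemetry could land in a Tiger Cloud (TimescaleDB-compatible) hypertable. The table and column names are illustrative only, not Speedcast’s actual schema.

```sql
-- Illustrative telemetry table; names, columns, and units are hypothetical.
CREATE TABLE terminal_telemetry (
    time            TIMESTAMPTZ       NOT NULL,
    terminal_id     TEXT              NOT NULL,
    link_type       TEXT              NOT NULL,  -- e.g. 'LEO', 'GEO', 'terrestrial'
    latency_ms      DOUBLE PRECISION,
    bandwidth_mbps  DOUBLE PRECISION,
    link_quality    DOUBLE PRECISION
);

-- Partition by time so high-rate Kafka ingest and recent-data queries stay fast.
SELECT create_hypertable('terminal_telemetry', 'time');
```

A Kafka sink then writes directly into this table, so the same rows that feed dashboards are the rows that were just ingested, with no intermediate copy.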
With Tiger Lake now part of the TigerData ecosystem, we get even deeper native integration between Speedcast’s Data Lakehouse and Tiger Cloud. Pushing the boundaries further, we can discard custom scripts and painful batch processes, enabling a true shift-left mentality in our code.
What this means is that we can shift away from the traditional medallion architecture’s linear Bronze-Silver-Gold path toward a more continuous data integration pipeline with Tiger Cloud at the center.
This allows various platforms, dashboards, and event triggers to work against the same database in real time, no matter the workload pattern, with every system able to communicate with Apache Iceberg.
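As one illustration of that continuous approach (a sketch, not Speedcast’s actual pipeline), a TimescaleDB-style continuous aggregate keeps a rollup current as raw data streams in, where a medallion setup would run a scheduled Bronze-to-Silver batch job. The view and columns below are hypothetical and assume the telemetry table sketched earlier.

```sql
-- Hypothetical rollup that stays current as raw telemetry streams in,
-- replacing a scheduled Bronze-to-Silver batch transformation.
CREATE MATERIALIZED VIEW terminal_health_5m
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('5 minutes', time) AS bucket,
    terminal_id,
    avg(latency_ms)     AS avg_latency_ms,
    max(latency_ms)     AS max_latency_ms,
    avg(bandwidth_mbps) AS avg_bandwidth_mbps
FROM terminal_telemetry
GROUP BY bucket, terminal_id;

-- A refresh policy keeps the rollup materialized continuously.
SELECT add_continuous_aggregate_policy('terminal_health_5m',
    start_offset      => INTERVAL '1 hour',
    end_offset        => INTERVAL '5 minutes',
    schedule_interval => INTERVAL '5 minutes');
```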
As we plan for service expansions and continue installing beyond our current 12,000 Starlink terminals globally, Tiger Lake’s ingest pipeline scales with us, letting us monitor usage patterns and spot emerging service-area outages in real time, before customers feel the impact.
My primary goal is to increase data reliability. Networks fail far more often than databases, so the surest way to raise reliability is to simplify the architecture. Making Tiger Lake the hub of Speedcast’s data plane lets us focus on data analytics rather than on keeping data in sync. With Tiger Lake acting as the “spider at the center of the web,” every stakeholder, from operations engineers and data scientists to customers, has access to authoritative data from a single source instead of hunting across multiple systems.
One of our core use cases for Tiger Lake is to increase visibility, reduce the number of tickets generated, and shorten time to resolution. We are always hunting for service-area outages, which means sifting through a steady stream of analytical data and tickets to spot anything that might threaten service reliability. Because Tiger Cloud sits at the center of our stack, pulling in real-time satellite performance, IoT sensor readings, and network-health metrics, we have all the context we need in one place. When an alert fires, we can drill into location data, usage patterns, and historical incidents with a single SQL query instead of bouncing between silos. That speed matters; the sooner we understand the full picture, the sooner we can judge whether an alert is noise or requires action.
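A sketch of what such a single-query drill-down could look like, with entirely hypothetical table and column names: one join pulls the last hour of link health, the site’s location, and related past incidents together for the terminal that raised the alert.

```sql
-- Hypothetical drill-down for one alerting terminal: last hour of link health
-- alongside past line-of-sight incidents recorded at the same site.
SELECT
    s.site_name,
    s.latitude,
    s.longitude,
    avg(t.latency_ms)             AS avg_latency_1h,
    avg(t.bandwidth_mbps)         AS avg_bandwidth_1h,
    count(DISTINCT i.incident_id) AS similar_past_incidents
FROM terminal_telemetry t
JOIN sites s
  ON s.terminal_id = t.terminal_id
LEFT JOIN incidents i
  ON i.site_id = s.site_id
 AND i.root_cause = 'line_of_sight'
WHERE t.terminal_id = 'TERM-0001'
  AND t.time > now() - INTERVAL '1 hour'
GROUP BY s.site_name, s.latitude, s.longitude;
```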
Consider the line-of-sight gaps we observe in the canyons of Sweden and Norway. When we receive alerts of service outages in this region, we can immediately compare the ticket against years of satellite-position data and past outages to see if it is a known, self-resolving issue. If so, we downgrade or close the ticket before it clutters the queue. Consolidating data in Tiger Lake results in fewer duplicate tickets, faster root-cause analysis, and engineers who can stay focused on more impactful tickets in the queue.
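As a hedged example of the kind of check this history enables (table and column names are again hypothetical), a single query can tell us what share of past outages at the affected site resolved themselves quickly, which is a strong hint that the new ticket can be downgraded.

```sql
-- Hypothetical check: what share of outages at this site over the past two
-- years resolved themselves within 15 minutes?
SELECT
    count(*) FILTER (WHERE resolved_at - opened_at < INTERVAL '15 minutes')::float
        / NULLIF(count(*), 0) AS self_resolve_ratio
FROM outage_tickets
WHERE site_id = 'SITE-NORDIC-042'
  AND opened_at > now() - INTERVAL '2 years';
```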
What’s next for Speedcast? We’re looking ahead to AI-driven incident response, specifically using filtered vector search for automatic root-cause analysis. When a large volume of tickets comes in, filtering by metadata tags that identify each satellite, site location, and other qualifiers becomes a simple SQL query. By quickly filtering through layers of history, issues, and patterns to triage incident tickets, we can reduce the noise for engineers and improve overall efficiency.
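A rough sketch of what such a filtered vector search could look like on a Postgres-compatible database with the pgvector extension; the tables, columns, and metadata tags here are hypothetical, and new_ticket is assumed to be a one-row table holding the incoming ticket’s embedding and tags.

```sql
-- Hypothetical filtered vector search: find the five most similar past
-- incidents, restricted by metadata tags, using the pgvector extension.
SELECT
    i.incident_id,
    i.summary,
    i.embedding <=> n.embedding AS distance   -- cosine distance to the new ticket
FROM incident_embeddings i
CROSS JOIN new_ticket n
WHERE i.satellite_id = n.satellite_id         -- metadata filters narrow the search
  AND i.site_id      = n.site_id
ORDER BY distance
LIMIT 5;
```

Combining the metadata filters with the similarity ordering is what keeps the search cheap: the database only ranks candidates that already match the satellite and site in question.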