---
title: "How Speedcast Built a Global Communications Network on Tiger Lake"
published: 2025-07-23T12:35:55.000-04:00
updated: 2026-01-23T16:52:59.000-05:00
excerpt: "Discover how Speedcast built a unified global communications network using Tiger Lake, processing 20GB/hour of real-time satellite and IoT data to reduce support tickets and improve service reliability across 12,000+ terminals worldwide."
tags: Dev Q&A, Tiger Lake, Time Series Data, real time analytics
authors: Kevin Otten, Nicole Bahr
---

> **TimescaleDB is now Tiger Data.**

## About the Company

[Speedcast](https://www.speedcast.com/) is a leading communications and IT services provider, delivering critical communications services to the Maritime, Energy, Mining, Media, Telecom, Cruise, NGO, Government, and Enterprise sectors. To ensure reliable connectivity for customers, Speedcast provides services through hybrid network solutions, offering a combination of low-earth-orbit (LEO) and geostationary (GEO) satellites plus terrestrial equipment that constantly reports on link quality, bandwidth usage, latency, and device health. Speedcast utilizes its recently enhanced SIGMA network management platform to deliver connectivity and other services that meet strict service-level agreements. To that end, Speedcast must ingest, analyze, and visualize a massive volume of telemetry data in real time. That’s where Tiger Cloud comes in.

## About the Team

### _Kevin Otten, Director of Technical Architecture at Speedcast_

As the Director of Technical Architecture, I oversee the end-to-end data architecture that powers Speedcast’s global infrastructure, from ingesting terminal information (e.g., Starlink and Eutelsat OneWeb data) to building real-time insights into our operational platforms so we can optimize and boost overall network reliability.

## About the Project 

In the past, combining satellite feeds, IoT telemetry, and terrestrial-link metrics meant juggling separate geospatial, relational, and time-series data sources, in addition to a patchwork of aging SCADA systems. Combining siloed data sources required fragile Extract, Transform, Load (ETL) pipelines that delayed insight and increased operational risk.

Today, we ingest about 20 gigabytes per hour, and that number keeps rising as we continue to grow our business. Instead of relying on brittle ETL pipelines, we stream every data point in real time via Confluent Cloud Kafka into Tiger Cloud. This gives us a unified back end for tools such as Speedcast’s Compass Portal for customers, which delivers real-time updates and loads over a million rows of data for visualization in mere seconds. Tiger Cloud truly gives us one write path and one query interface.
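As a rough illustration of that single write path (a sketch, not Speedcast’s actual pipeline), telemetry consumed from a Kafka topic can be buffered into fixed-size batches before each bulk insert into the database; the record fields and batch size below are hypothetical:

```python
from itertools import islice

def batch_records(stream, batch_size=5000):
    """Group an incoming record stream into fixed-size batches for bulk insert.

    In a real consumer loop, `stream` would be messages from a Kafka topic and
    each yielded batch would become one multi-row INSERT into the database.
    """
    it = iter(stream)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Hypothetical telemetry records as they might arrive from Kafka.
records = [{"terminal_id": i % 3, "latency_ms": 40 + i} for i in range(12)]

batches = list(batch_records(records, batch_size=5))
print([len(b) for b in batches])  # → [5, 5, 2]
```

Batching the write path this way keeps insert overhead low at high ingest rates while every consumer still writes to the same single destination.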

With [Tiger Lake](https://docs.tigerdata.com/use-timescale/latest/tigerlake/) now part of the [Tiger Data ecosystem](https://www.tigerdata.com/blog/tiger-lake-a-new-architecture-for-real-time-analytical-systems-and-agents), we gain deeper native integration between Speedcast’s data lakehouse and Tiger Cloud. Pushing boundaries further, we can discard custom scripts and painful batching processes, enabling a true shift-left mentality in code.

This means we can shift away from the traditional medallion architecture’s linear ‘Bronze - Silver - Gold’ path toward a more continuous data integration pipeline with Tiger Cloud at the center.

This will allow various platforms, dashboards, and event triggers to work against the same database in real time – no matter the workload pattern – with every system able to communicate through Apache Iceberg.

Powered by TimescaleDB on AWS, Tiger Cloud runs on Amazon EC2 with S3 tiered data storage.

![Tiger Data Speedcast](https://storage.ghost.io/c/6b/cb/6bcb39cf-9421-4bd1-9c9d-fa7b6755ba0e/content/images/2025/07/2025-July-18-speedcast-diagram.png)

As we plan for service expansions and continue installing beyond our current **12,000 Starlink Terminals** globally, Tiger Lake’s ingest pipeline scales with us, letting us monitor usage patterns and spot emerging service-area outages in real time before customers feel the impact.
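The kind of emerging-outage check described above can be sketched as a simple aggregation: flag any service area where the share of offline terminals in a recent window crosses a threshold. This is a minimal, self-contained illustration with hypothetical area names and fields, not Speedcast’s detection logic:

```python
from collections import defaultdict

def emerging_outages(heartbeats, threshold=0.5):
    """Flag service areas where the offline share of terminals exceeds `threshold`.

    `heartbeats` is one recent window of per-terminal status reports.
    """
    totals = defaultdict(int)
    offline = defaultdict(int)
    for hb in heartbeats:
        totals[hb["area"]] += 1
        if not hb["online"]:
            offline[hb["area"]] += 1
    return {area for area in totals if offline[area] / totals[area] > threshold}

# Hypothetical heartbeat window: two of three north-sea terminals are offline.
heartbeats = [
    {"area": "north-sea", "online": False},
    {"area": "north-sea", "online": False},
    {"area": "north-sea", "online": True},
    {"area": "gulf", "online": True},
    {"area": "gulf", "online": True},
]
print(emerging_outages(heartbeats))  # → {'north-sea'}
```

In production this would be a continuous aggregate over the telemetry stream rather than an in-memory pass, but the shape of the check is the same.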

My primary goal is to increase data reliability. Networks fail far more often than databases, so the surest way to raise reliability is to simplify the architecture. Making Tiger Lake the hub of Speedcast’s data plane lets us focus on data analytics instead of keeping data in sync. With Tiger Lake acting as the “spider at the center of the web,” every stakeholder – from operations engineers and data scientists to customers – has access to authoritative data from a single source instead of hunting across multiple systems.

## Ticket Reduction Use Case

One of our core use cases for Tiger Lake is to increase visibility, reduce the number of tickets generated, and shorten the time to resolution. We are always hunting for service-area outages, which means sifting through a steady stream of analytical data and tickets to spot anything that might threaten service reliability. Because Tiger Cloud sits at the center of our stack, pulling in real-time satellite performance, IoT sensor readings, and network-health metrics, we have all the context we need in one place. When an alert fires, we can drill into location data, usage patterns, and historical incidents with a single SQL query instead of bouncing between silos. That speed matters; the sooner we understand the full picture, the sooner we can judge whether an alert is noise or requires action.
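To show the shape of that single-query drill-down, here is a minimal sketch using SQLite as a stand-in; in production this would be one query against Tiger Cloud, and the table and column names here are hypothetical:

```python
import sqlite3

# In-memory stand-in for alert, terminal-location, and incident-history tables
# that would normally live side by side in the central database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE alerts(terminal_id INT, message TEXT);
    CREATE TABLE terminals(terminal_id INT, region TEXT);
    CREATE TABLE incidents(terminal_id INT, resolved INT);
    INSERT INTO alerts VALUES (1, 'high latency');
    INSERT INTO terminals VALUES (1, 'scandinavia');
    INSERT INTO incidents VALUES (1, 1), (1, 1);
""")

# One query joins the firing alert with its location and past incident count.
row = conn.execute("""
    SELECT a.message, t.region, COUNT(i.terminal_id) AS past_incidents
    FROM alerts a
    JOIN terminals t USING (terminal_id)
    LEFT JOIN incidents i USING (terminal_id)
    GROUP BY a.terminal_id
""").fetchone()
print(row)  # → ('high latency', 'scandinavia', 2)
```

The point is that because all three kinds of data share one database, context-gathering collapses from three system lookups into one join.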

Consider line-of-sight gaps we observe in the canyons of Sweden and Norway. When we receive alerts of service outages in this region, we can immediately compare the ticket against years of satellite-position data and past outages to see if it is a known, self-resolving issue. If so, we downgrade or close the ticket before it clutters the queue. Consolidating data in Tiger Lake results in fewer duplicate tickets, faster root-cause analysis, and engineers who can stay focused on more impactful tickets in the queue.
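The known-issue check described above can be sketched as matching a new alert against historical self-resolving outage windows at the same site. All field names and the tolerance below are hypothetical, and real matching would happen in SQL against years of position data:

```python
def is_known_self_resolving(alert, history, tolerance_min=10):
    """Return True if the alert matches a recurring, self-resolving outage
    pattern at the same site (e.g., a daily line-of-sight gap)."""
    for past in history:
        if (past["site"] == alert["site"]
                and past["self_resolved"]
                and abs(past["minute_of_day"] - alert["minute_of_day"]) <= tolerance_min):
            return True
    return False

# Hypothetical history: a canyon site loses line of sight around 10:15 daily.
history = [
    {"site": "norway-canyon-7", "minute_of_day": 615, "self_resolved": True},
]

alert = {"site": "norway-canyon-7", "minute_of_day": 620}
print(is_known_self_resolving(alert, history))  # → True
```

An alert that matches can be downgraded or auto-closed before it clutters the queue; one that does not match goes to an engineer as usual.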

## Future Plans: Machine Learning and AI

What’s next for Speedcast? We’re looking ahead to AI-driven incident response, specifically using filtered vector search for automatic root-cause analysis. When a large volume of tickets is generated, identifying each satellite, site location, and other qualifiers via metadata tags becomes a simple SQL query. By quickly filtering through layers of history, issues, and patterns when triaging incident tickets, we can reduce noise for engineers and improve overall efficiency.
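Filtered vector search combines the two steps mentioned above: narrow candidates by metadata tags first, then rank the survivors by embedding similarity. The toy two-dimensional embeddings and tag names below are illustrative only; a real deployment would use a vector index inside the database:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def filtered_vector_search(query_vec, tickets, satellite, top_k=1):
    """Filter tickets by a metadata tag first, then rank by embedding similarity."""
    candidates = [t for t in tickets if t["satellite"] == satellite]
    ranked = sorted(candidates,
                    key=lambda t: cosine(query_vec, t["embedding"]),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical historical tickets with metadata tags and embeddings.
tickets = [
    {"id": 1, "satellite": "LEO-42", "embedding": [1.0, 0.0]},
    {"id": 2, "satellite": "LEO-42", "embedding": [0.0, 1.0]},
    {"id": 3, "satellite": "GEO-7",  "embedding": [1.0, 0.0]},
]

best = filtered_vector_search([1.0, 0.1], tickets, satellite="LEO-42")
print([t["id"] for t in best])  # → [1]
```

Filtering by metadata before the similarity ranking is what keeps triage fast at scale: the expensive comparison only runs over tickets that could plausibly match.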