The Speed of Light is Not Negotiable: A Plex Latency Analysis

TL;DR: Hosting Plex's API servers in eu-west-1 (Ireland) adds ~200ms of latency for US users because light only travels so fast, and this absolutely matters for interactive applications. Here's the data to prove it.

What Are We Even Looking At?

Let's start with the basics. This analysis is based on a HAR file (HTTP Archive format) — essentially a detailed recording of every single network request your browser makes when loading a page. Think of it as a black box recorder for web traffic. When you open your browser's developer tools and load a page, you can export a HAR file that captures:

  • Every URL requested
  • How long each phase of each request took (DNS lookup, TCP connection, SSL handshake, server processing, data transfer)
  • Response codes, sizes, timings — the whole enchilada

HAR files are the standard format for debugging web performance issues because they give you millisecond-level visibility into what's actually happening on the network. No guessing, no hand-waving — just timestamps and measurements.

For this analysis, I captured a HAR of app.plex.tv loading in my browser over a 46-second window, which generated 195 HTTP requests to 86 unique URLs.

The Setup

  • When: October 22, 2025, 00:11:44 → 00:12:30 UTC (45.659 seconds of captured traffic)
  • Where: US East/Central region (me) → eu-west-1 (Plex's API servers)
  • What: Normal Plex web app usage — loading the interface, fetching metadata, pulling thumbnails, etc.
  • How: Chrome DevTools with cache disabled to get a realistic "first load" experience

The Methodology (AKA: I Did My Homework)

I wrote a parser to analyze the HAR file and extract meaningful metrics. Here's what we're measuring:

Key Metrics

TTFB (Time To First Byte): This is the wait timing in HAR parlance — the time from when the browser finishes sending the request to when it receives the first byte of the response. This includes:

  • Network round-trip time to reach the server
  • Server processing time
  • Network round-trip time back

TTFB is the single best proxy for "how far away is this server?" because it captures the fundamental constraint: round-trip time.

Total Time: End-to-end request duration (entry.time in HAR), including DNS lookup, TCP connection, SSL handshake, request transmission, TTFB, and response download.

Phase Breakdown: DNS resolution time, TCP connection time, SSL/TLS handshake time, request send time, wait time (TTFB), and response receive time.
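
These all come straight out of the HAR's per-entry `timings` object. Here's a minimal sketch of the extraction step (not the full parser, but it shows where each metric lives), assuming the standard HAR 1.2 layout, where a phase value of -1 means "not applicable":

```python
import json
from urllib.parse import urlparse

def load_har_timings(path):
    """Pull per-request timing metrics out of a HAR 1.2 capture."""
    with open(path) as f:
        entries = json.load(f)["log"]["entries"]

    rows = []
    for e in entries:
        t = e["timings"]            # per-phase timings; -1 means "not applicable"
        u = urlparse(e["request"]["url"])
        rows.append({
            "host": u.hostname,
            "path": u.path,
            "ttfb_ms": t["wait"],   # "wait" = TTFB: request sent -> first response byte
            "total_ms": e["time"],  # end-to-end duration, all phases included
            "dns_ms": max(t.get("dns", -1), 0),
            "tcp_ms": max(t.get("connect", -1), 0),
            "ssl_ms": max(t.get("ssl", -1), 0),
        })
    return rows
```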

Grouping Strategy

I grouped requests by their effective TLD+1 (the "real" domain — e.g., clients.plex.tv, together.plex.tv, images.plex.tv). But here's the critical distinction:

  • plex.tv domains: These are Plex's API/web servers, hosted in AWS eu-west-1 (Ireland)
  • plex.direct domains: These are local direct connections to your Plex Media Server via "wormhole" routing — essentially LAN-speed connections to your own hardware

This distinction is crucial because it gives us a perfect control group. Both request types are going through the same browser, same network connection, same everything — except one is local and one is roughly 3,700 miles away in Ireland.
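
The bucketing itself is a few lines. A sketch, operating on the rows from the parser above (note: proper eTLD+1 extraction needs the Public Suffix List, e.g. via the `tldextract` package, but plain suffix matching is enough for these two domains):

```python
from statistics import mean, median

def bucket(host):
    """Classify a hostname as remote API traffic or local direct traffic."""
    if host and host.endswith(".plex.direct"):
        return "plex.direct"   # local "wormhole" route to your own server
    if host and (host == "plex.tv" or host.endswith(".plex.tv")):
        return "plex.tv"       # remote API servers in eu-west-1
    return "other"             # CDN-served static assets, third parties, etc.

def summarize(rows):
    groups = {}
    for r in rows:
        groups.setdefault(bucket(r["host"]), []).append(r["ttfb_ms"])
    return {g: {"requests": len(v), "avg_ttfb": mean(v), "median_ttfb": median(v)}
            for g, v in groups.items()}
```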

The Results: Physics Doesn't Care About Your Infrastructure

The High-Level Numbers

plex.tv (Ireland):        Avg TTFB = 232.4 ms
plex.direct (Local):      Avg TTFB = 33.2 ms
Delta:                    +199.2 ms

Let's be crystal clear about what this means: On average, every API call to plex.tv takes 200ms longer than a local call simply because of distance.

Breaking Down plex.tv Performance

Metric                  Value
Requests Analyzed       80 (38 unique URLs)
Average TTFB            232.4 ms
Median TTFB             88.9 ms
95th Percentile TTFB    498.0 ms
Average Total Time      283.4 ms

Handshake Overhead (average):

  • DNS: 8.0 ms
  • TCP Connect: 48.0 ms
  • SSL/TLS: 22.3 ms
  • Total Handshake: 78.4 ms

The handshake overhead is actually quite reasonable — about 78ms on average. This is TCP and TLS doing their job. The problem isn't the handshake; it's the distance.

Breaking Down plex.direct Performance (Local)

Metric                  Value
Requests Analyzed       114 (48 unique URLs)
Average TTFB            33.2 ms
Median TTFB             13.6 ms
95th Percentile TTFB    105.7 ms
Average Total Time      134.3 ms

Handshake Overhead (average): ~11.9 ms total

These are LAN-speed requests. They're fast because they're going to a server in my house, not a server across the Atlantic Ocean.

Let's Talk About Some Specific URLs

The Slowest Offenders (Top 10)

URL                                                     Requests   Avg TTFB     p95 TTFB
clients.plex.tv/api/v2/shared_servers/owned/accepted    1          5,128.9 ms   5,128.9 ms
together.plex.tv/rooms                                  2          919.5 ms     1,678.9 ms
clients.plex.tv/api/users                               1          637.0 ms     637.0 ms
community.plex.tv/api                                   6          460.1 ms     1,503.3 ms
images.plex.tv/photo                                    1          446.3 ms     446.3 ms
clients.plex.tv/photo/:/transcode                       1          420.6 ms     420.6 ms
clients.plex.tv/devices.xml                             1          398.8 ms     398.8 ms
clients.plex.tv/api/v2/user/view_state_sync             1          382.3 ms     382.3 ms
clients.plex.tv/api/home/users                          2          359.4 ms     477.6 ms
clients.plex.tv/api/invites/requests                    2          269.0 ms     320.8 ms

Notice something? Every single one is a plex.tv domain. These aren't slow because the servers are overloaded or the code is bad — they're slow because they're far away.

The Fastest Endpoints

URL                                                   Requests   Avg TTFB
analytics.plex.tv/collect/event                       3          0.0 ms
10.0.x.x.plex.direct:49374/statistics/resources       10         4.4 ms
10.0.x.x.plex.direct:49374/status/sessions            8          13.6 ms
10.0.x.x.plex.direct:49374/photo/:/transcode          38         14.6 ms
10.0.x.x.plex.direct:49374/statistics/bandwidth       46         15.2 ms

The pattern is obvious: local requests (plex.direct) are consistently an order of magnitude faster. (The analytics endpoint's 0.0 ms reading is the one exception, and it's almost certainly a fire-and-forget beacon that records no wait time, not a genuinely instant transatlantic round trip.)

The Physics of Why This Matters

Here's where we need to talk about the speed of light, because apparently this is controversial.

The Theoretical Minimum

The great-circle distance from Chicago (roughly US Central) to Dublin, Ireland is approximately 3,660 miles (5,890 km).

Light in fiber-optic cable travels at roughly 200,000 km/s (about 2/3 the speed of light in a vacuum, due to the refractive index of the fiber).

Theoretical minimum one-way trip: 5,890 km ÷ 200,000 km/s ≈ 29.5 ms

Theoretical minimum round-trip time (RTT): ≈ 59 ms

That's the absolute physical minimum for a round trip, assuming:

  • Perfectly straight fiber
  • No routing hops
  • No processing time
  • No queuing delays
  • Light literally cannot go faster
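
If you want to check the arithmetic yourself:

```python
C_FIBER_KM_PER_S = 200_000   # light in silica fiber, ~2/3 of c in vacuum
DISTANCE_KM = 5_890          # great-circle Chicago -> Dublin, approximately

one_way_ms = DISTANCE_KM / C_FIBER_KM_PER_S * 1000
print(f"one-way: {one_way_ms:.1f} ms, RTT floor: {2 * one_way_ms:.1f} ms")
# roughly 29.5 ms one way, 59 ms round trip
```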

The Reality

Real-world internet routing is not a straight line. Undersea cables don't take the great circle path. Packets get routed through multiple hops. There's processing delay at each router. You're probably going through 15-20 hops with queuing delays at each one.

A realistic RTT for US Central → Ireland is more like 100-150ms for an optimal path. Add in server processing time, TCP overhead, and normal internet variability, and you get... exactly what we measured: ~200ms average latency penalty.

Why This Kills Interactive Performance

Interactive applications (like, say, a web UI) need to feel responsive. Research consistently shows that:

  • 0-100ms: Feels instant
  • 100-300ms: Perceptible delay but tolerable
  • 300-1000ms: Sluggish, annoying
  • 1000ms+: Task flow is broken

Every API call to plex.tv is starting with a 200ms penalty before the server even starts processing the request. Chain a few of these together (which the Plex web UI absolutely does during initialization), and you've got a sluggish interface.
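
A toy simulation makes the chaining cost visible. This is not Plex's actual call graph; it's three hypothetical dependent calls, each paying the measured ~200ms penalty:

```python
import asyncio
import time

RTT = 0.2  # the measured ~200 ms average penalty per plex.tv call

async def api_call(name):
    await asyncio.sleep(RTT)   # stand-in for one transatlantic round trip
    return name

async def serial():
    for name in ("user", "servers", "invites"):   # hypothetical call chain
        await api_call(name)                      # each call waits on the last

async def parallel():
    await asyncio.gather(*(api_call(n) for n in ("user", "servers", "invites")))

for fn in (serial, parallel):
    start = time.perf_counter()
    asyncio.run(fn())
    print(f"{fn.__name__}: {time.perf_counter() - start:.2f}s")
# serial comes out around 0.6s, parallel around 0.2s: dependencies multiply the penalty
```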

Look at the data:

  • clients.plex.tv/api/v2/shared_servers/owned/accepted: 5.1 seconds
  • together.plex.tv/rooms: 920ms average
  • community.plex.tv/api: 460ms average

These aren't outliers — they're the expected result of hosting interactive API endpoints an ocean away from your users.

But What About CDNs and Edge Networks?

This is where someone inevitably says "but CloudFront!" or "but Cloudflare!"

CDNs are great for static content: images, videos, JavaScript bundles, CSS files. They cache this content at edge locations close to users. That's why the app.plex.tv/desktop/static/ requests are fast — they're served from a US-based CDN edge.

But CDNs cannot cache dynamic API responses that:

  • Require authentication
  • Return user-specific data
  • Change based on real-time state
  • Need to be consistent across requests

All those clients.plex.tv/api/* endpoints? They're hitting origin servers in Ireland every single time. There's no magic technology that makes this faster without moving the origin servers closer.
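
The distinction is mechanical, not magical. A shared cache decides roughly like this (a heavy simplification of RFC 9111 freshness rules, with hypothetical header values):

```python
def edge_can_cache(headers):
    """Simplified shared-cache decision: only non-private responses with
    some freshness lifetime can be served from an edge without hitting origin."""
    cc = headers.get("Cache-Control", "").lower()
    if "private" in cc or "no-store" in cc:
        return False   # user-specific: every request goes to Ireland
    return "max-age" in cc or "s-maxage" in cc or "Expires" in headers

print(edge_can_cache({"Cache-Control": "public, max-age=31536000"}))  # static JS -> True
print(edge_can_cache({"Cache-Control": "private, no-cache"}))         # /api/users -> False
```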

What Could Plex Do?

  1. Deploy regional API endpoints: Put origin servers in us-east-1, us-west-2, eu-west-1, ap-southeast-2, etc. Route users to the nearest region.

  2. Split critical path from non-critical: The initial page load needs user data immediately. Social features, recommendations, and analytics can be lazy-loaded.

  3. Aggressive caching: Some API responses can be cached (feature flags, system status, static metadata) even if they expire quickly; a short-TTL sketch follows this list.

  4. GraphQL with field-level caching: Allow clients to request exactly what they need and cache partial responses.
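
For point 3, even a tiny time-to-live cache changes the math, because a 60-second TTL turns N identical 200ms fetches into one. A sketch (the `fetch_flags` function is hypothetical):

```python
import time

class TTLCache:
    """Short-TTL cache for responses that tolerate brief staleness."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}                      # key -> (expires_at, value)

    def get(self, key, fetch):
        now = time.monotonic()
        entry = self.store.get(key)
        if entry and entry[0] > now:
            return entry[1]                  # fresh hit: no 200 ms round trip
        value = fetch()                      # miss or stale: pay the trip once
        self.store[key] = (now + self.ttl, value)
        return value

flags = TTLCache(ttl_seconds=60)
# flags.get("feature_flags", fetch_flags)   # hypothetical fetcher hitting plex.tv
```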

But here's the thing: all of these are non-trivial engineering efforts. They require changes to architecture, deployment processes, and potentially application logic. They cost money and engineering time.

I'm not saying Plex should or shouldn't do these things — I'm saying that hosting in a single region far from your users has a measurable, significant performance cost, and that cost is dictated by physics.

Caveats and Limitations

Let's be intellectually honest about what this analysis does and doesn't show:

  • Single capture window: This is one 46-second capture at one point in time. Internet conditions vary.
  • Sample size: While 195 requests is substantial, it's not exhaustive.
  • No control over routing: I can't control which specific path my packets take or what the traffic conditions are on those paths.
  • Server load unknown: If the Plex servers happened to be under heavy load during this capture, that would inflate TTFB numbers.
  • My local network: I'm assuming my local network and ISP connection are reasonably fast and stable, which they are, but YMMV.

That said, the pattern is clear and consistent. The delta between local and remote requests sits right where physics plus real-world routing says it should.

The Bottom Line

When you host interactive API endpoints in eu-west-1 and serve users in the US:

  1. You pay a ~200ms latency penalty due to the speed of light and internet routing
  2. This penalty is unavoidable without moving the servers closer
  3. This penalty is measurable in real-world usage
  4. This penalty affects user experience for interactive applications

This isn't speculation. This isn't "it should be fine." This is measured, timestamped, undeniable data showing exactly what the performance cost is.

Physics doesn't care about your infrastructure decisions. The speed of light is not negotiable.


Appendix: The Full Data

URL Distribution by Domain

plex.tv domains (remote, Ireland):

  • 80 requests across 38 unique URLs
  • Includes: clients.plex.tv, together.plex.tv, community.plex.tv, images.plex.tv, features.plex.tv, discover.provider.plex.tv, metadata.provider.plex.tv

plex.direct domains (local):

  • 114 requests across 48 unique URLs
  • These are RFC 1918 addresses (10.0.x.x, 172.18.x.x) hitting the local Plex Media Server

Statistical Distribution (plex.tv)

Percentile     TTFB         Total Time
p50 (median)   88.9 ms      154.4 ms
p75            218.8 ms     281.2 ms
p90            401.0 ms     427.8 ms
p95            498.0 ms     737.3 ms
p99            2,593.3 ms   2,669.3 ms

The tail latencies are particularly brutal. The p99 is over 2.5 seconds for TTFB alone.

Statistical Distribution (plex.direct)

Percentile     TTFB       Total Time
p50 (median)   13.6 ms    19.5 ms
p75            19.4 ms    31.8 ms
p90            27.4 ms    83.6 ms
p95            105.7 ms   159.0 ms
p99            607.2 ms   646.6 ms

Even the p99 for local requests (607ms) is faster than the average for remote requests (232ms).
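
One note on method: percentiles can be computed several ways. The nearest-rank definition below is one common choice, sketched for anyone who wants to reproduce the tables (exact interpolation methods differ between tools):

```python
import math

def percentile(values, p):
    """Nearest-rank percentile: the smallest value with at least p% of
    the data at or below it."""
    ordered = sorted(values)
    k = math.ceil(p / 100 * len(ordered)) - 1
    return ordered[max(k, 0)]

sample = [13.6, 19.4, 27.4, 105.7, 607.2]   # toy values, not the capture
print(percentile(sample, 95))               # -> 607.2
```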

Glossary

  • HAR (HTTP Archive): A JSON-formatted archive of web browser interaction with websites, capturing detailed timing information for every network request
  • TTFB (Time To First Byte): The time from sending a request to receiving the first byte of the response; reflects network RTT plus server processing time
  • RTT (Round Trip Time): The time for a signal to travel from source to destination and back
  • p50/p95/p99: Percentiles. p95 means 95% of requests were at or below this value; useful for understanding tail latency
  • eTLD+1: Effective top-level domain plus one label (e.g., plex.tv, google.com)
  • Origin Server: The actual server hosting the application logic, as opposed to CDN edge caches

Analysis generated from HAR capture of app.plex.tv on 2025-10-22. Raw data, parsing scripts, and full methodology available on request.

For those still skeptical: I encourage you to capture your own HAR file and run your own analysis. The data speaks for itself.
