Performance · January 5, 2026 · 9 min read

Fixing Ping & Jitter:
A Technical Deep Dive

Network latency optimization for applications that can't afford to miss a frame. From TCP tuning to edge deployment — a complete walkthrough for engineers who need real numbers, not vague advice.

Latency is the gap between action and response. For most web applications, 200ms is acceptable. For real-time applications — games, financial platforms, collaborative editors, video call infrastructure — 200ms is broken. Understanding where that latency comes from and how to systematically reduce it is the difference between a product that feels alive and one that fights you.

Layer 1: Physical Distance and the Speed of Light

The absolute floor on latency is set by physics. Light travels through fiber optic cable at roughly 200,000 km/s. The US is about 4,800 km wide; a coast-to-coast fiber path takes ~24ms one-way. New York to London is about 70ms RTT. These numbers are not negotiable — they're determined by routing distance.
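The propagation floor is simple arithmetic. A quick sketch (distances are rough great-circle figures; real fiber paths are longer, which is why measured RTTs land above these minimums):

```javascript
// Propagation delay through fiber at ~200,000 km/s (about 2/3 of c in vacuum).
const FIBER_SPEED_KM_PER_MS = 200; // 200,000 km/s = 200 km/ms

function oneWayMs(distanceKm) {
  return distanceKm / FIBER_SPEED_KM_PER_MS;
}

function rttMs(distanceKm) {
  return 2 * oneWayMs(distanceKm);
}

console.log(oneWayMs(4800)); // US coast-to-coast: 24 (ms, one-way)
console.log(rttMs(5600));    // NY–London great-circle: 56 (ms RTT, minimum)
```

Routing overhead on real paths pushes the NY–London figure from the 56ms theoretical floor to the ~70ms you actually measure.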

Implication: If your user base is geographically distributed and your server is in one region, you cannot fix the distance penalty with application code. You need servers or edge nodes in multiple regions. Everything else in this article assumes you've addressed geographic load balancing.

Layer 2: TCP's Hidden Overhead

The SYN-ACK cost

Every new TCP connection requires a three-way handshake before any data can be sent. For a NY→London connection at ~70ms RTT, that's a full round trip of setup time before your first byte lands. Fix: keep connections alive. Use persistent WebSocket connections rather than polling.
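The handshake cost compounds with polling because every request pays it again. A back-of-envelope sketch (hypothetical numbers, counting setup time only and ignoring server processing):

```javascript
// Pure connection-setup cost of sending N messages at a given RTT (ms).
function totalHandshakeCostMs(messages, rttMs, persistent) {
  if (persistent) return rttMs;  // one handshake, connection reused for every message
  return messages * rttMs;       // one three-way handshake per message
}

console.log(totalHandshakeCostMs(100, 70, false)); // polling: 7000 ms of pure setup
console.log(totalHandshakeCostMs(100, 70, true));  // persistent: 70 ms, paid once
```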

Nagle's Algorithm

By default, TCP buffers small packets and waits until it has a full segment. This can add 40–200ms of artificial delay to small messages — exactly the kind you send in real-time apps.

```javascript
// Disable Nagle's Algorithm in Node.js
const net = require('net');

const socket = new net.Socket();
socket.setNoDelay(true); // Sets TCP_NODELAY

// For WebSocket servers (ws library):
wss.on('connection', (ws) => {
  ws._socket.setNoDelay(true);
});
```

TCP Head-of-line blocking

TCP guarantees ordered delivery. If a packet is lost, TCP stops delivering subsequent packets until the lost one is retransmitted — even if those packets have already arrived. For video or game state where a dropped frame is better than a delayed one, this is harmful. The fix is UDP for game state and media streams, with your own lightweight reliability layer for packets that actually need it.
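That reliability layer can be very small: tag every packet with a sequence number, track only the packets flagged reliable, and retransmit those until acknowledged. A minimal transport-agnostic sketch (the `send` callback and packet shape are illustrative, not a specific protocol):

```javascript
// Minimal selective-reliability bookkeeping for a UDP-style channel.
class ReliableChannel {
  constructor(send) {
    this.send = send;         // (packet) => void, e.g. a dgram socket.send wrapper
    this.seq = 0;
    this.unacked = new Map(); // seq -> packet, tracked for reliable packets only
  }

  // Unreliable: fire and forget (game state, media frames)
  sendUnreliable(payload) {
    this.send({ seq: this.seq++, reliable: false, payload });
  }

  // Reliable: tracked until the peer acknowledges the sequence number
  sendReliable(payload) {
    const packet = { seq: this.seq++, reliable: true, payload };
    this.unacked.set(packet.seq, packet);
    this.send(packet);
    return packet.seq;
  }

  onAck(seq) {
    this.unacked.delete(seq);
  }

  // Call on an interval: resend anything still unacked
  retransmit() {
    for (const packet of this.unacked.values()) this.send(packet);
  }
}
```

In practice you'd add retransmit backoff and a retry cap; dropped unreliable packets need nothing, because the next state update supersedes them.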

Layer 3: Measuring Jitter Correctly

Jitter isn't just annoying — it's often more damaging than high average latency. A consistent 80ms feels smooth. An average 50ms with spikes to 300ms feels broken.
```javascript
// Client-side jitter estimation (RFC 3550)
let lastArrival = null;
let jitterEstimate = 0;

ws.onmessage = (event) => {
  const now = Date.now();
  const data = msgpack.decode(event.data);
  if (lastArrival !== null) {
    const interArrival = now - lastArrival;
    // Deviation from the expected tick interval, smoothed with a 1/16 gain
    const diff = Math.abs(interArrival - TICK_INTERVAL_MS);
    jitterEstimate += (diff - jitterEstimate) / 16;
  }
  lastArrival = now;
};
```

Layer 4: Edge Deployment

Once application-layer optimizations are done, geographic proximity is the highest-leverage remaining lever:

  1. CDN for static assets — HTML, JS, CSS served from the nearest edge node. Not relevant for game state, but critical for your client's initial load.
  2. Anycast routing — The same IP address is announced from multiple data centers, and BGP delivers each client's packets (including UDP) to the nearest one. Used by major game backends.
  3. Cloudflare Workers / Durable Objects — Stateful compute at the network edge in 300+ cities. For WebSocket-based apps where state can be scoped per-room, this gives sub-10ms latency to most users without managing a multi-region server fleet yourself.
The Optimization Sequence

  1. Baseline: instrument server-side processing time per action type.
  2. Remove I/O from the hot path.
  3. Switch to binary serialization.
  4. Add delta encoding.
  5. Set TCP_NODELAY.
  6. Evaluate geographic distribution.

Each step is measurable — define your target ("under 50ms, under 20ms jitter at p95") before you start, and stop when you hit it.
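Binary serialization and delta encoding pair naturally: the delta step sends only fields that changed since the last acknowledged state, and the serializer then packs a much smaller object. A minimal sketch of the delta step (the field names are hypothetical):

```javascript
// Compute a shallow delta: only the fields whose values changed.
function delta(prev, next) {
  const out = {};
  for (const key of Object.keys(next)) {
    if (prev[key] !== next[key]) out[key] = next[key];
  }
  return out;
}

const prevState = { x: 10, y: 20, hp: 100, name: "p1" };
const nextState = { x: 12, y: 20, hp: 100, name: "p1" };

console.log(delta(prevState, nextState)); // { x: 12 } — one field instead of four
```

The receiver merges each delta into its last known state; pair this with periodic full snapshots so a single lost delta can't desync a client indefinitely.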

Latency optimization without a target is just tinkering. Every step in this sequence produces measurable results. Define your threshold, measure it, fix the biggest contributor, repeat. Most teams see 40–70% jitter reduction from the application-layer fixes alone — before touching a single server or CDN setting.

Is Your Real-Time Stack Slow Where It Matters?

A Venom-Audit identifies the specific bottlenecks adding latency and jitter in your application layer — with benchmarks, not guesses.

Book an Audit →