May Your Buffer Never Bloat
Guess what: The Internet is getting bloated. “Buffer-bloated,” specifically.
Buffer bloat is a big thing in the lives of the people who work on network protocols and big-iron router stuff. Some even smoosh it into one word: Bufferbloat.
In theory and in practice, buffers are meant to smooth things out – to level the flow. They’re short-term storage for packets – the envelopes of data transmission – as they move from source to destination.
But when buffers get overrun with bits, they themselves cause delays. The remedy becomes the culprit. That’s buffer bloat.
Buffer bloat is on the rise because of how much video we’re sending and receiving over the Internet – from the two-minute clip shot on a smartphone, to the Netflix stream, to the live video coming from whichever webcam to wherever we are.
Every time a packet transits a network, it runs into buffers. The “big iron” routers that run the Internet juggle billions of packets, from thousands of different places, all the time.
Their job is to see where each packet is going, and to find the best route to get it there. As routers route, packets pile up in buffers – more so during heavy volume.
Video is heavy to begin with, relative to a phone call or a web page request. And think about how much more video you’re doing on your phones and tablets than you did two years ago.
It’s getting to be a problem because the usual elixir – “throw more bandwidth at it!” – isn’t enough. At issue is a tenet of how stuff moves over the Internet, using TCP/IP (Transmission Control Protocol over Internet Protocol), which requires acknowledgements for the data sent. (In the lingo, they go by “acks.”)
Turns out that round trip time (“RTT”) impacts network performance as much as, or more than, available bandwidth. Latency trumps capacity.
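A quick back-of-envelope shows why. A TCP sender can have at most one window’s worth of unacknowledged data in flight per round trip, so throughput tops out at roughly window ÷ RTT, no matter how fat the pipe. A minimal sketch (the 64 KB window is just TCP’s classic default, picked here for illustration):

```python
def max_tcp_throughput(window_bytes, rtt_seconds):
    """Upper bound on TCP throughput: one window of data per round trip."""
    return window_bytes / rtt_seconds  # bytes per second

# Classic 64 KB window over a 100 ms round trip:
bits_per_sec = max_tcp_throughput(64 * 1024, 0.100) * 8
print(f"{bits_per_sec / 1e6:.1f} Mbit/s")  # ~5.2 Mbit/s, however big the pipe
```

Double the delay and you halve the ceiling – which is exactly what bloated buffers do.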
Help is on the way, of course, under a general mantle of “active queue management.”
One remedy, developed by Google, is a protocol called “SPDY” (pronounced “speedy,” and not an acronym for anything). It aims to improve on HTTP (Hypertext Transfer Protocol) by using fewer TCP connections.
Ever get a “connection timeout” when loading a web page? (Browsers typically cap active HTTP connections at six per host. Who knew?) SPDY fixes that by multiplexing (smooshing) multiple requests onto a single existing connection. Also in SPDY: compression and prioritization mechanisms. Some browsers (Chrome, Firefox, Opera) already use it.
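The multiplexing idea fits in a few lines: instead of opening one connection per request, tag each request’s chunks with a stream ID and interleave the tagged frames over a single connection, so the receiver can sort them back out. A toy sketch – the frame layout here is invented for illustration, not SPDY’s actual wire format:

```python
from itertools import zip_longest

def multiplex(streams):
    """Interleave chunks from several logical streams over one 'connection',
    tagging each frame with its stream ID so the far end can demultiplex."""
    frames = []
    for round_of_chunks in zip_longest(*streams.values()):
        for stream_id, chunk in zip(streams.keys(), round_of_chunks):
            if chunk is not None:  # a stream that ran out sends nothing
                frames.append((stream_id, chunk))
    return frames

# Three "requests" share one connection instead of three:
streams = {1: ["GET /a", "body-a"], 2: ["GET /b"], 3: ["GET /c", "body-c"]}
print(multiplex(streams))
```

One slow request no longer blocks the others from getting started, because its frames simply take turns with everyone else’s.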
Another goes by “CoDel,” for “controlled delay.” People pronounce it as a word: “Coddle.” Its inventors, Kathleen Nichols (girl power!) and Van Jacobson, describe it as a “no knobs” way of keeping delays low, even during big traffic bursts. They published a chewy paper about it called “Controlling Queue Delay,” available here: http://bit.ly/LDPNUp.
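The “no knobs” core of CoDel boils down to this: watch how long each packet sat in the queue, and start dropping only when that delay stays above a small target (5 milliseconds) for a full interval (100 milliseconds) – then drop more frequently until the delay recovers. A simplified sketch of that decision logic, condensed from the idea in the Nichols/Jacobson paper rather than a faithful reimplementation:

```python
import math

TARGET = 0.005    # 5 ms: acceptable standing queue delay
INTERVAL = 0.100  # 100 ms: a worst-case round-trip-time estimate

class CoDelSketch:
    """Toy version of CoDel's drop decision. 'sojourn' is how long the
    packet at the head of the queue waited; 'now' is the current time."""
    def __init__(self):
        self.first_above = None  # deadline set when delay first exceeds TARGET
        self.dropping = False
        self.drop_next = 0.0
        self.count = 0           # drops in the current dropping state

    def should_drop(self, sojourn, now):
        if sojourn < TARGET:                 # delay is fine: reset everything
            self.first_above = None
            self.dropping = False
            return False
        if self.first_above is None:         # delay just went high: start clock
            self.first_above = now + INTERVAL
            return False
        if not self.dropping and now >= self.first_above:
            self.dropping = True             # high for a full interval: drop
            self.count = 1
            self.drop_next = now + INTERVAL / math.sqrt(self.count)
            return True
        if self.dropping and now >= self.drop_next:
            self.count += 1                  # still high: drop faster
            self.drop_next = now + INTERVAL / math.sqrt(self.count)
            return True
        return False
```

The trick is that it measures time spent in the queue, not queue length – so it needs no tuning for link speed, hence “no knobs.”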
Discussing the paper with Ars Technica writer Iljitsch van Beijnum last May, Jacobson dropped this tasty tidbit: “Things would probably go fastest if we had some interested party who would apply it, for example in the cable data edge network.”
Sounds like a gauntlet thrown! Either that, or, maybe the Internet needs a Fitbit.
This column originally appeared in the Platforms section of Multichannel News.