The Tech Side of Net Neutrality: QoS
by Leslie Ellis // February 20 2006
By now it’s fairly clear that the high-stakes commotion known as “network neutrality vs. network diversity” is really just a more genteel version of “dumb pipe” vs. “smart pipe.”
What’s perhaps not as clear are the technologies angling into the heart of the matter — specifically “Quality of Service,” or “QoS,” the subject of this week’s translation.
But first, a quick review of the “neutrality v. diversity” scuffle. On one side are the Dumb Pipe (“neutrality”) contenders. All bits are created equal, they say. All broadband networks should be “neutral.” No bits should be blocked or disrespected in any way.
The arguments for “network neutrality” come from companies like Amazon.com, Google, and Vonage.
On the other side is the Smart Pipe (“diversity”) crowd — those who built, own and maintain broadband networks. That means cable and telephone companies. (Oh, the irony.) They maintain that all bits are not created equal. Some need special attention, in order to work well. It’s not about blocking or degrading anything. It’s about flow control.
At the technological heart of all this is that awkward term: “QoS,” short for “quality of service.” As a descriptor, it’s just as tacitly negative as “network neutrality” and “network diversity” are tacitly positive.
Here’s why. If Service Offering A (voice) has “quality of service,” does that mean Service Offerings B and C (web surfing and email, let’s say) lack quality?
But wait, there’s more: The generally accepted term for services that don’t need special handling is “best effort,” which reeks of low expectations.
QoS, despite its lackluster name, is the part of network management that handles flow control. It goes like this: There are pipes. They are fat. The carrying capacity of a modern HFC (hybrid fiber-coax) plant, fully digitized, is something like 5.2 Gigabits-per-second worth of reconfigurable, re-usable speed.
Many bits ride inside the fat pipe. Some are the bits that constitute a request for a web page. Others are email bits. Some are video bits. Some are voice bits.
QoS sorts the flow inside the pipe. Its job is to make sure each bit gets what it needs, from the pipe, in order to get where it’s going — and work as expected, when it gets there.
QoS is already in use by those cable operators offering voice-over-IP services. Reason: Voice bits need to get where they’re going fast, and in order, because conversations are live. Disorderly arrival messes with the quality of the call.
How QoS Works
QoS works differently, depending on the direction bits are traveling.
In the downstream (toward homes) direction, QoS works by prioritizing packets. Bits can be tagged with a priority, from zero to seven. Most voice-over-IP bits, for instance, carry a higher priority than, say, web surfing bits.
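That zero-to-seven tagging can be sketched as a toy priority queue. This is illustrative only (real cable gear does this in silicon, per the DOCSIS specs); the packet labels and priority numbers here are just examples:

```python
from dataclasses import dataclass, field
import heapq

# Toy model of downstream priority handling: packets carry a
# priority tag from 0 (best effort) to 7, and higher-priority
# packets drain from the queue first.

@dataclass(order=True)
class Packet:
    sort_key: int                       # heapq is a min-heap, so we store -priority
    label: str = field(compare=False)   # label doesn't affect ordering

def enqueue(queue, priority, label):
    heapq.heappush(queue, Packet(-priority, label))

queue = []
enqueue(queue, 5, "voice sample")      # VoIP bits: tagged high
enqueue(queue, 0, "web page request")  # best effort
enqueue(queue, 1, "email")

drained = [heapq.heappop(queue).label for _ in range(len(queue))]
print(drained)  # voice drains ahead of the best-effort traffic
```

The point of the sketch: nothing gets blocked. Everything drains; the tags only decide who goes first.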
In the upstream direction, the QoS toolkit is bigger, because the upstream path is notoriously skinny and mean. Congestion is plausible. That’s why upstream QoS includes five different ways for bits to “reserve” a ride. They go by names like “unsolicited grant service,” “non-real-time polling,” and “UGS with activity detection” — which, happily for you, fall outside the scope of this translation.
It’s Like Priority Mail
QoS has a known parallel. It’s trite, but it works: The U.S. postal system.
When Customer Jane wants to send a piece of paper with words on it to someone somewhere else, she has two options. She can stamp an envelope, and drop it in the mailbox. It’ll get there when it gets there. Best effort.
Or, she can go to the post office, and spend a little extra for priority handling. Maybe she wants it there the next day. Or two days from today.
In that case, Customer Jane parted with the extra cash for priority handling. Notably, the “network diversity” side doesn’t expect Customer Jane to pick up that tab. They realize that it’s hard to justify additional fees, atop existing broadband fees, to add priority handling to the bits that constitute an advanced service.
More, they’re interested in developing a new batch of customers. Let’s say, for the sake of plausible argument, that it’s any of the growing list of companies using broadband pipes to stream video, or offer voice services. Vonage is a known example.
What’s for sale? Network features, built into the plant. Like QoS — the “priority handling” for advanced services. The pitch would go something like this: If you want to send your video bits over the network we built and support, and you want those video bits to act like video bits, not best-effort bits, we can help you with that.
That’s money talk. And that’s why there’s this big debate. It’s less about whether pipes are dumb or smart, and more about who pays for priority handling.
This column originally appeared in the Broadband Week section of Multichannel News.
IPTV at ET: The Devil is (Still) in the Details
by Leslie Ellis // February 06 2006
Every January, usually right after the Consumer Electronics Show, cable’s technologists pack their bags and head to the Society of Cable Telecommunications Engineers’ annual Conference on Emerging Technologies.
Attending ET means sitting in a darkened room for two days, listening to technical people deliver technical papers. Translations, context, and interstitials came from VIP moderators Tony Werner, CTO of Liberty Global, and Chris Bowick, CTO of Cox, who each anchored a half-day session.
Rising up from the dense thicket of terminology this year was an ever-tantalizing subject: How telco-styled IPTV, or Internet Protocol Television, differs from cable-delivered video.
The goods came from Nimrod Ben-Natan, a vice president in Harmonic Inc.’s Convergent Systems Division. Although Harmonic sold a pile of gear to Verizon last year, through Tellabs, Ben-Natan’s remarks focused more on video delivered over telco DSL (digital subscriber line) networks, like what SBC is putting together.
Note: Telco video over DSL travels as IP. Thus, in this context, IPTV is shorthand for sending TV over DSL.
And with no further ado, here’s his take on the Two Big Things that make video over DSL different from “traditional” digital cable networks. And you can bet your bippy, as my mother would say, that these two things will be heavily marketed by IPTV purveyors, like SBC.
Fast Channel Change
Number one is that much-touted fast channel change. How fast is fast? Less than 200 milliseconds, technically. Visually, 200 milliseconds looks as fast as channel changes used to look, back when the F-connector plugged into the back of the TV set. Before digital boxes.
If you’ve been following the IPTV action, you’ve heard this one before. Lots. So far, what’s said about how it works is a plausible-enough explanation: The tuners aren’t nested inside the box. They’re up in the network somewhere. (Make a left at the pedestal, go a couple miles.)
Here’s more on how it works, technically. Say that’s Consumer Jane, there on the couch. She gets digital video from her local telephone company, which sends it to her over a DSL connection.
Remote in hand, Jane thumb-surfs a channel-up. The set-top (or media center) that came with the service issues what’s known as a “join” request. It wants to dip into a pre-cached set of video frames.
The request zings up the phone wire, to that buffer. Maybe it’s in an “edge aggregator,” or maybe it’s in the “D-SLAM,” or Digital Subscriber Line Access Multiplexer. (The latter is telco-speak for the thing that sits between a bunch of incoming DSL lines, and a fast Internet backbone.)
Either way, the tuners aren’t inside anything at Jane’s house.
The bits that make up the video frame in the buffer zing back into the box. The channel Jane requested appears — and it appears fast enough to make known alternatives look slow.
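The join-and-buffer flow above can be sketched in a few lines. Everything here is illustrative: the channel names, the frame labels, and the `join` function are made-up stand-ins, not telco APIs:

```python
# Toy sketch of a network-side channel change: the "tuner" is a
# pre-cached buffer of video frames sitting up in the network,
# and a channel change is just a request against that cache.

edge_buffer = {
    "ESPN": ["I-frame", "P-frame", "P-frame"],  # frames pre-cached at the edge
    "CNN":  ["I-frame", "P-frame"],
}

def join(channel):
    """Set-top issues a 'join'; cached frames come back immediately."""
    frames = edge_buffer.get(channel)
    if frames is None:
        return []            # channel not cached at this edge
    return list(frames)      # decode can start right away, from the I-frame

frames = join("ESPN")
print(frames[0])  # playback starts without waiting for the next full frame
```

The speed comes from skipping the tune-demodulate-stabilize chores: the cache already holds a decodable starting frame.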
The boxes used by cable and satellite operators work differently. On-board tuners work by literally jumping frequencies, each time somebody invokes a zap with the remote control. Then, they need to demodulate the incoming signal, stabilize it, and deal with any error correction activities. If the processing chip isn’t beefy enough, that to-do list can bog down. Symptom: Slower zapper action.
Let’s switch to Consumer Bob on game day. He’s watching his favorite team. On the same screen, in smaller video boxes, are three other games. Or, maybe he’s watching three other camera angles of the same game.
A heavy helping of picture-in-picture screens is the second feature unique to video over DSL, Ben-Natan said at the conference.
Technically, here’s what happens: Each smaller video box gets filled with a lower resolution, 200 kilobit-per-second stream of video. Using advanced compression, ordinary (not high-def) streams earmarked for DSL video weigh about a Megabit per second. Those smaller, picture-in-picture boxes can get skinnier streams because, well, they’re smaller.
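The back-of-envelope math for Consumer Bob’s game-day screen, using the numbers above (one full stream plus three picture-in-picture streams):

```python
# Numbers from the column: a main stream at ~1 Mbps, plus three
# 200 kbps picture-in-picture streams.
main_stream_kbps = 1000
pip_stream_kbps = 200
pip_count = 3

total_kbps = main_stream_kbps + pip_count * pip_stream_kbps
print(total_kbps)  # total load for four simultaneous pictures, in kbps
```

Four simultaneous pictures, and the whole thing still weighs well under two Megabits per second.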
Serving video multi-taskers with cable or satellite boxes is generally limited by the number of tuners in those boxes. That places picture-in-picture near the top of the benefit list for IPTV’s “tunerless” architecture.
Telco video isn’t without challenges, though, Ben-Natan said. The most notable: bandwidth. Supporting a home with two HD sets, each eating up 6 Mbps, removes more than half of the available bandwidth on a 20 Mbps connection, the current deployable maximum for advanced DSL gear.
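The squeeze is easy to quantify with the column’s numbers:

```python
# Two HD sets at 6 Mbps each, against a 20 Mbps DSL ceiling.
hd_sets = 2
hd_mbps = 6
link_mbps = 20

used_mbps = hd_sets * hd_mbps
remaining_mbps = link_mbps - used_mbps
print(used_mbps, remaining_mbps)  # Mbps consumed by HD, Mbps left over
```

That leftover has to carry everything else in the house: the data connection, the voice service, and any standard-def streams.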
Another IPTV challenge: Picture quality. When shelf space is an issue, and advanced compression techniques are available to squeeze the bits out of something, the tendency is to crank the dial to “max squeeze.” But max squeeze can trigger quality issues.
Plus, because most DSL-styled video efforts use constant bit rate (CBR) encoding, it’s difficult to agree on one bit rate that can be applied to all forms of video content. A high definition movie, for instance, might be encoded for delivery at 6 Mbps — but if the next channel is high-def sports, maybe 9 Mbps is a better number.
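The one-rate-fits-all problem can be shown with the column’s own example rates. The channel list below is illustrative:

```python
# One fixed CBR rate for the whole lineup, checked against what
# each kind of content actually needs (rates from the column).
needed_mbps = {"HD movie": 6, "HD sports": 9}

cbr_rate_mbps = 6  # a single constant bit rate for everything
shortchanged = [name for name, need in needed_mbps.items()
                if need > cbr_rate_mbps]
print(shortchanged)  # content the fixed rate would starve
```

Pick a rate that suits the movie, and the fast-motion sports channel gets starved; pick one that suits sports, and the movie wastes bits.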
Lastly, there’s the matter of scale. Piling tuners for instant channel change into the network may indeed make things faster — but each new box, hurling channel change commands into the network, adds another straw onto the camel’s back.
In closing: The good thing about technical conferences is the possibility of seeing that devil, lurking in all those details. The bad thing about technical conferences is waiting for the right details, depending on what devil you’re after. If your devil is “IPTV,” this year’s ET Conference delivered.
This column originally appeared in the Broadband Week section of Multichannel News.