Adaptive Bit Rates and Picture Quality
by Leslie Ellis // December 5, 2011
Ever wonder what could happen to picture quality when a given screen is displaying a “downshifted” stream of video, sent using adaptive bit rate techniques?
I did, and was glad to soak up a session about it at the recent SCTE Cable-Tec Expo. Short version: Arris CTO Tom Cloonan and colleague Jim Allen built an emulator in their lab, to sample what happens when different types of traffic get smooshed together on the IP plant.
Refresher: Tons-o-video moving over the Internet. Unprecedented growth. Uses a lot of bandwidth, comparatively. Everyone’s working on it – by adding IP bandwidth, and by working the end points. The “clients,” in the lingo, meaning your other screens – laptops, tablets, smart phones.
From a bandwidth perspective, here in the twilight of 2011 (and the eve of big channel bonding), adding more IP bandwidth means going beyond the 2 to 4 downstream digital channels reserved for broadband and voice-over-IP services. (Watch for this to rise to 12-18 bonded channels, in the next few years.)
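For the code-inclined, here's a back-of-envelope sketch of what that bonding math looks like. The per-channel figure is my own assumption (roughly 38 Mbps of usable payload per 6 MHz QAM-256 downstream channel); the column itself doesn't give per-channel numbers.

```python
# Back-of-envelope bonded-channel capacity. The ~38 Mbps usable payload
# per 6 MHz QAM-256 downstream channel is an assumption on my part,
# not a figure from the column.

PER_CHANNEL_MBPS = 38  # approx. usable DOCSIS downstream per channel

def bonded_capacity_mbps(channels: int) -> int:
    """Rough aggregate downstream capacity for a bonding group."""
    return channels * PER_CHANNEL_MBPS

# Today's 2-4 bonded channels vs. the 12-18 the column anticipates:
print(bonded_capacity_mbps(4))   # 4 x 38 = 152
print(bonded_capacity_mbps(16))  # 16 x 38 = 608
```

In other words, the jump from 4 to 16 bonded channels quadruples the IP pipe — which is what makes room for all that video traffic in the first place.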
Consequently, and inevitably, video service providers will start increasing the types of traffic sent over the IP (Internet Protocol) part of the plant. That means plain old web browsing, plus whatever’s moving “over the top” on the public Internet, plus the newer “managed IP” video services.
On the client (screen) side of the equation, adaptive bit rate streaming (a.k.a. “fragmented” streaming) is big. It works by chunking video streams into segments encoded at different bit rates – in the Arris experiments, 3, 2.1, 1.5 and 1 Mbps – so that if bandwidth isn’t available to play the bigger chunk, the client can request a smaller chunk next.
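That downshift decision can be sketched in a few lines. The bit rate ladder below matches the rates used in the Arris experiments, but the selection rule itself is a simplified illustration of mine, not the logic from the paper.

```python
# A minimal sketch of adaptive bit rate chunk selection. The ladder
# matches the rates in the Arris experiments; the selection rule is
# a simplified illustration, not the paper's actual algorithm.

BITRATE_LADDER_MBPS = [3.0, 2.1, 1.5, 1.0]  # highest to lowest

def pick_next_chunk(measured_throughput_mbps: float) -> float:
    """Choose the highest-rate chunk the measured bandwidth can sustain."""
    for rate in BITRATE_LADDER_MBPS:
        if measured_throughput_mbps >= rate:
            return rate
    # Heavy congestion: fall all the way to the lowest rung.
    return BITRATE_LADDER_MBPS[-1]

# Example: with 1.8 Mbps of headroom, the client asks for the 1.5 Mbps chunk.
print(pick_next_chunk(1.8))
```

Real clients layer buffer levels and history on top of this, but the gist is the same: measure, then shift down (or up) a rung.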
Which brings us back to the question of what happens, on your various screens, when network congestion forces a downshift in video bit rate.
Nice descriptive language in this wheelhouse, by the way. Example: Things that can go wrong crop up as “rendering engine starvation” and “video resolution dithering.”
Both conditions stem from network congestion — the former when the software in the end point device (tablet, TV) doesn’t get enough bits; the latter when not enough bits arrive to render a good quality picture, causing the screen to “dither” between 1080p and lower resolutions.
Also factored into the simulator: An “aggressiveness factor,” to assess who does what when bandwidth does become available. As it turns out, some clients are more aggressive than others – meaning they jump up to a higher-resolution chunk, lickety-split.
Generally speaking, though, the simulator found that most adaptive streaming protocols back off quickly in times of congestion. Sort of a digital cacophony of “after you.” “No, after you.” “No, after YOU.”
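Here's one way to picture that "after you" dynamic in code. Everything here — the function names, the 0-to-1 aggressiveness scale, the probabilistic upshift — is my own illustrative assumption, not the model from the Arris paper: clients back off a rung immediately under congestion, but climb back up only as fast as their aggressiveness allows.

```python
import random

# A hypothetical sketch of the "aggressiveness factor" idea: back off
# quickly under congestion, climb back up probabilistically. The names
# and the 0-1 aggressiveness scale are assumptions, not the paper's model.

LADDER = [1.0, 1.5, 2.1, 3.0]  # Mbps, lowest to highest

def next_rung(current: float, congested: bool, aggressiveness: float,
              rng: random.Random) -> float:
    """Pick the next bit rate rung for one client."""
    i = LADDER.index(current)
    if congested:
        return LADDER[max(i - 1, 0)]   # everyone backs off quickly
    if rng.random() < aggressiveness:  # aggressive clients upshift fast
        return LADDER[min(i + 1, len(LADDER) - 1)]
    return current                     # polite clients wait: "after you"
```

Run a few clients through that loop and the pattern from the simulator shows up: under congestion they all step down in unison, but on the way back up, the aggressive ones grab the bigger chunks first.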
This just scratches the surface of the 34-page paper, and companion presentation, titled “Competitive Analysis of Adaptive Video Streaming Implementations.” For more, contact the SCTE (www.scte.org).
This column originally appeared in the Platforms section of Multichannel News.