By now, hopefully, you’ve heard that there’s a new chapter coming in cable modems. It’s the latest iteration in the specification known by technologists as “DOCSIS 3.1” for “Data Over Cable Service Interface Specification.”
DOCSIS 3.1 is a doozy — both in terms of what it will do for broadband capacity, and the sheer density of the tech talk that surrounds it.
Hey! Let’s face it. “QAM” is a little long in the tooth, as impressively nerdy industrial tech-talk goes. Not to worry. With 3.1, you too can impress your friends and colleagues by blurting out 3.1-speak like “Orthogonal Frequency Division Multiplexing with Low Density Parity Check.”
Feature-wise, DOCSIS 3.1 is so crammed with improvements that some of us wondered why it didn’t qualify as “DOCSIS 4.0.” (Answer: To thwart any misperceptions from the investor community about “forklift upgrades.”)
First off: DOCSIS 3.1 matters. It was devised because of the billowing consumer demand for broadband — 50% and higher compound annual growth, since about 2009. Think about that: In the history of consumable goods, nothing else has grown at a sustained rate of 50%, year over year.
DOCSIS 3.1 basics: When complete (2013) and in market (2014?), it will expand the industry’s downstream and upstream carrying capacity for digital IP traffic by 50%.
“Half as much again” is always a big deal, especially for that spectrally anemic upstream signal path.
Also impressive about DOCSIS 3.1: It could enable connection speeds of 10 Gigabits per second (Gbps). Note: Don’t inhale too deeply on this one. It’s 10 Gbps if and only if all other channels on a system are empty. No analog, no SD or HD video, no broadband, no voice.
Let’s get back to the tech-talk of 3.1. What makes for these enormous gains in IP capacity and speed is a new (to cable) form of modulation called “OFDM” (see above). OFDM, when coupled with a new (to cable) form of forward error correction (LDPC), brings the 50% efficiency gains.
OFDM is widely used by mobile carriers, because they’re already pretty bandwidth-challenged (ship any video from your phone lately?). It works by chopping the typical 6 MHz digital cable channel into smaller “subcarriers,” in the lingo. That’s good for both transmission and dealing with impairments.
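To make the “chopping” concrete, here’s a back-of-the-envelope sketch in Python. The 25 kHz and 50 kHz figures are the two subcarrier spacings DOCSIS 3.1 defines; the 6 MHz framing is just the familiar legacy channel width, used here for scale.

```python
# How OFDM "chops" a familiar 6 MHz channel into subcarriers.
# DOCSIS 3.1 defines two subcarrier spacings: 25 kHz and 50 kHz.
CHANNEL_HZ = 6_000_000  # one legacy 6 MHz cable channel

for spacing_hz in (25_000, 50_000):
    subcarriers = CHANNEL_HZ // spacing_hz
    print(f"{spacing_hz // 1000} kHz spacing: {subcarriers} subcarriers per 6 MHz")
```

Each of those narrow subcarriers can be modulated and error-corrected on its own, which is why impairments in one sliver of spectrum no longer wreck the whole channel.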
That’s the basics of DOCSIS 3.1 — why it matters, and how to talk about it with aplomb. Watch for it to be a major undercurrent of the 2013 cable-tech scene.
This column originally appeared in the Platforms section of Multichannel News.
It happens about every decade, and the third one is almost upon us: A new standard for video compression, bound to make video shipping better.
It’s called “HEVC,” for “High Efficiency Video Coding.” You’ll see it demo into the industrial mainstream at the 2013 Consumer Electronics Show, in January, and into your handhelds and TVs about a year after that.
The skinny (heh): Another doubling of how much video can be stuffed into the same space as what’s stuffable using today’s best compression techniques. Or, flip it around: Spend the same number of bits on a better-looking stream. Same bandwidth, better quality.
HEVC improves upon H.264 (also known as “AVC” and “MPEG-4”), which improved on MPEG-2, the granddaddy of digital video compression, dating back to the earliest digital set-top boxes (circa 1995).
With each new compression chapter, efficiency roughly doubled: HEVC is 2x better than H.264/MPEG-4; H.264 is 2x better than MPEG-2. It follows that HEVC is 4x better than what’s inside millions of already-fielded digital set-tops.
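The doubling math, sketched in Python. The 15 Mbps MPEG-2 HD starting point is a typical figure, assumed here only to make the arithmetic concrete:

```python
# Each compression generation roughly halves the bitrate needed for
# the same picture. The 15 Mbps MPEG-2 HD figure is a typical value,
# chosen only for illustration.
mpeg2_mbps = 15.0
h264_mbps = mpeg2_mbps / 2   # H.264/MPEG-4: ~2x better than MPEG-2
hevc_mbps = h264_mbps / 2    # HEVC: ~2x better again, so ~4x vs MPEG-2

print(f"MPEG-2: {mpeg2_mbps:.1f} Mbps")
print(f"H.264:  {h264_mbps:.1f} Mbps")
print(f"HEVC:   {hevc_mbps:.2f} Mbps ({mpeg2_mbps / hevc_mbps:.0f}x vs MPEG-2)")
```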
Who benefits most: Mobile carriers, already vexed with trying to keep up with how much video we’re shipping to each other from our camera-bejeweled handhelds.
Another potential beneficiary: Over-the-top video providers (think Netflix, Amazon, Hulu, etc.), which will likely opt for the “more bits” stance. Capacity? Eh! To them, bandwidth is free. Why bother with conservation?
No reason the home team (multichannel video providers) can’t look happily upon HEVC, too. With the pursuit of “all-IP” (Internet Protocol) networks comes the ability to harness the goods of that world. HEVC isn’t by definition an “IP thing,” but it’ll play sooner and with more gusto on the IP side of the plant.
What’s different between HEVC and H.264/MPEG-4: Nothing huge. Both use the same core techniques. (Advanced class: Block-based motion compensation, entropy coding, predictive coding, quantization, and the familiar lineup of I-frames, B-frames and P-frames.)
Mostly, HEVC makes those existing compression ingredients more flexible. Recall that compression is all about finding and removing redundancies in pictures. In H.264, motion blocks were fixed in size; in HEVC, they’re variably sized.
Instead of encoding an entire yellow wall, frame after frame, for instance, HEVC can “mark” it for reconstitution as such on the end screen (“yellow wall here,” in a gross oversimplification).
The tradeoff is computational intensity, up 35% to 50%, particularly on the decode end: TVs and handhelds. But computational complexity rides along with Moore’s Law: Processors are already 10x stronger than they were when MPEG-4/H.264 came out, 10-ish years ago.
At last month’s IBC, encoder maker Elemental Technologies showed attendees its HEVC work in two ways. One demo showed a 1080p HD stream compressed to 5.2 Mbps — which “weighs” about 8 Mbps when compressed with H.264. Another showed side-by-side 1080p streams, HEVC and H.264, both compressed to 5.2 Mbps. The point: To show off the additional picture quality afforded by HEVC in the same amount of bandwidth.
Pretty nifty, by all accounts so far.
This column originally appeared in the Platforms section of Multichannel News.
Here’s something happening in the tech background that rattles the origins of television: The undoing of the 6 MHz channel spacing, common to broadcast and cable television since the 1940s.
What’s going on? Progress, in the form of advanced modulation and distribution techniques (here’s that migration to IP again) seeking to wring every literal bit of capacity on communications networks.
But first, a little background. Why are video channels sized at 6 MHz, anyway? Set the way-back machine to 1941, when the National Television Systems Committee (NTSC) made a plan for black-and-white TV channel distribution. (Color came along 12 years later.)
The NTSC’s work at the time was to define how much bandwidth it would take to move broadcast TV from stations to homes. Answer: 4.2 MHz, with an extra 1.8 MHz for modulation (the process of imprinting a video channel onto the carrier that moves it) and guard band (so info from one TV channel didn’t smear into any adjacent channels).
And, here we are, seven decades later, still using “size six” channel widths.
For that reason, 6 MHz is to the video engineer what the inch is to the carpenter: An enormously familiar, tried-and-true unit of measure. So, as cable lingo goes, 6 MHz is the good old wagon: Steady, reliable, fundamental.
Then, digital happened. Video channels, after being digitized, could be squished down – hello, compression – such that many more fit in the space of the original analog channel width. Ten to 12 standard definition, two to three high definition, goes the math, for video compressed with MPEG-2.
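The per-channel math, as a quick sketch: divide one channel’s digital payload by typical MPEG-2 stream rates. The 38.8 Mbps figure is the familiar payload of one 256-QAM downstream channel; the per-stream rates are typical values, assumed here for illustration.

```python
# Why "10 to 12 SD, two to three HD" fits in one 6 MHz channel.
# 38.8 Mbps is the familiar 256-QAM downstream payload; stream
# rates below are typical MPEG-2 values, used for illustration.
CHANNEL_MBPS = 38.8
SD_MBPS = 3.5    # typical MPEG-2 standard-definition stream
HD_MBPS = 15.0   # typical MPEG-2 high-definition stream

print(f"SD streams per channel: ~{int(CHANNEL_MBPS // SD_MBPS)}")
print(f"HD streams per channel: ~{int(CHANNEL_MBPS // HD_MBPS)}")
```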
That made the thinking change. If all channels are available digitally, and can be compressed not just with MPEG-2 but with MPEG-4 and, beyond it, H.265/HEVC, and if the carrying capacity for digital is measured in Megabits per second (Mbps), not so much Megahertz (MHz), then is it still relevant to think in those old-fashioned, analog, size-six chunks?
Probably not, but don’t watch for any weird flash-cut that erases 6 MHz spacing. Nor should you anticipate a different official sizing, like a 3 MHz channel or a 1 MHz channel.
Instead, and as traditional 6 MHz channels get bonded together to make larger passageways for IP-based services, we’ll wind up with several really big “channels” – 24 MHz, 48 MHz, and so on – with differing “service flows” of voice, video and data running inside them. Ultimate big-size channel? Depends on the upper spectral boundary, but upwards of 700 MHz, anyway.
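A sketch of why bonding adds up, again assuming the familiar ~38.8 Mbps payload of a single 256-QAM downstream channel (the bonded widths are the ones mentioned above):

```python
# Bonding legacy 6 MHz channels into wider IP "passageways."
# Assumes ~38.8 Mbps payload per 256-QAM downstream channel,
# the familiar per-channel figure; widths are illustrative.
MBPS_PER_CHANNEL = 38.8

for bonded_mhz in (24, 48):
    channels = bonded_mhz // 6
    total_mbps = channels * MBPS_PER_CHANNEL
    print(f"{bonded_mhz} MHz = {channels} channels bonded: ~{total_mbps:.0f} Mbps")
```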
When does this happen? As with the transition to all-IP, the forces driving it will continue pushing and pulling – until one day, they’ll be gone, and we won’t really notice the difference. But your engineers will, and they know how to get you there. They’re already doing it.
This column originally appeared in the Platforms section of Multichannel News.
A few weeks ago, an engineering elder called to pose this bit of industrial wisdom: “For the last 20 years, we’ve seen the monetization of Moore’s Law. From here on out, we’ll see the monetization of Shannon’s Law.”
Haven’t heard of Shannon? Welcome to this week’s translation.
First off, one important distinction: There are laws, and then there are “laws.” Think laws of gravity, motion and thermodynamics here. Not legal law, or laws of unintended consequences, or marketing lingo that sounds peppier with “law” in the title.
In that sense, Moore’s Law isn’t technically a law. Shannon’s Law is: a physical law, meaning it’s true, universal, simple, absolute, and stable.
Moore’s Law is more of an economic observation, eponymized by Gordon Moore, co-founder of Intel Corp., who wrote a paper in 1965 observing that the number of transistors (processing power) within chips was doubling at a regular clip (later pegged at roughly every two years). It’s still roughly true.
By contrast, and more relevant every “connected” day, is Shannon’s Law. It’s named for Claude Shannon, who did his work 20 years before Moore, in the 1940s.
Shannon’s Law defines “the theoretical maximum rate at which error-free digits can be transmitted over a bandwidth-limited channel in the presence of noise.” (It comes with an equation but we’ll spare you the math.)
In other words, Shannon figured out a way to calculate how much stuff can be crammed over a broadband network, without problems, even when there is noise, which there always is.
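The equation the column spares you is the Shannon–Hartley formula, C = B × log2(1 + S/N): capacity equals bandwidth times the log of one plus the signal-to-noise ratio. A minimal Python sketch, with illustrative bandwidth and signal-to-noise numbers:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Theoretical error-free ceiling of a noisy, band-limited channel."""
    snr_linear = 10 ** (snr_db / 10)  # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# One 6 MHz channel at a few illustrative signal-to-noise ratios:
for snr_db in (30, 35, 40):
    mbps = shannon_capacity_bps(6_000_000, snr_db) / 1e6
    print(f"6 MHz at {snr_db} dB SNR: ~{mbps:.1f} Mbps ceiling")
```

Note what the formula says: for a fixed channel width, the only way to raise the ceiling is a cleaner signal (better SNR) or more spectrum, which is exactly the tradeoff the column describes.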
The dramatic rise in broadband usage – upwards of 50% compound annual growth – is true on fixed and mobile networks. In London last week, some social media outlets got bogged down because of all the gadgetry trying to send Olympics pictures and videos. We are gunking up networks.
Which is why it’s important to be able to calculate throughput maximums on data networks. And to be able to ease the situation – by adding spectrum, or mitigating noise.
In cable tech circles, invoking Shannon usually means you’re having a conversation about upstream (home to headend) signaling. It’s why there’s so much talk about advanced modulation, and finding ways to make that slender spectral area carry more stuff.
Will Shannon’s Law get monetized like Moore’s Law did, with a fury of investment and development that lasted a half century? Let’s hope so, for the sake of clear connections and unclogged networks.
This column originally appeared in the Platforms section of Multichannel News.
In this late 2008 interview at Cox’s Atlanta headquarters, Chris Bowick discusses the “then and now” scene for network upgrades. Especially important: Considering customers when implementing upgrade procedures. (Yes, engineers DO think about customers. 🙂) Directed and produced by the fabulous David Knappe, with the equally fabulous Joe Bondulich on camera and lighting.
Video courtesy Multichannel News.
Upgrading bandwidth to 1 GHz almost always leads to discussion about amplifier spacing — a task that can be tedious and time-consuming. Chris explains why NOT having to respace is important. He also explains the company’s strategic “Eon” program. Directed and produced by the fabulous David Knappe, with the equally fabulous Joe Bondulich on camera and lighting.
Video courtesy Multichannel News.
Comcast CTO Tony Werner discusses the company’s communications campaign about its transition to digital terminal adapters (DTAs), and offers tips for engineers considering similar bandwidth upgrades.
Video courtesy Multichannel News.
Ron Wolfe, Senior Product Marketing Manager of Big Band Networks, explains why marketers — at cable operators and program networks — can use switching to their advantage. Upshot: Bandwidth conservation, addressable advertising, and serving ethnic neighborhoods with native language programming are all within reach. Aired during the 2007 CTAM Summit to assure sound slumber.
Video courtesy The Cable Channel.
Leading into the 2007 Cable Show, I sat down with Tony Werner, CTO of Comcast, to discuss some of the hot issues of the time: Switched digital video, HDTV, and OCAP.
Video courtesy Multichannel News.
At the 2006 SCTE Cable-Tec Expo, I moderated the annual CTO Panel, which included Marwan Fawaz, then in transition between Adelphia and Charter (“Chartelphia!”), Dave Fellows/Comcast, Paul Liao/Panasonic, and Vince Roberts/Disney-ABC. In this first section, we discuss video, bandwidth, and switched digital video.
Video courtesy SCTE.
© 2000-2016 translation-please.com. All Rights Reserved.