Current Events in Advanced Video Compression
by Leslie Ellis // April 30, 2007
It was way back in the summer of 2004 when this column last checked in on advanced video compression. Back then, it “wasn’t yet ready for prime time.” For digital video service providers, doing it meant deploying a new line of digital boxes.
Generally speaking, that’s a deal-breaker: The deployed base of boxes can’t do it? Good luck. Thanks for playing.
And here we are, three years later — and two weeks after the show floor of the National Association of Broadcasters (NAB) teemed with makers of advanced video encoders. From the looks of things, a lot of innovation dollars are pouring into the making of better video squishers.
Why is that? One big driver is AT&T, which is building its residential video delivery service to run over DSL (digital subscriber line), which runs over twisted-pair phone wires. DSL pipes aren’t as roomy as hybrid fiber-coax (HFC) or fiber-to-the-home pipes.
To get video through DSL, especially HDTV, it needs serious squeeze. More than today's workhorse format, MPEG-2, can deliver.
Programmers and MPEG-4
At the other end of this story are the program networks. Say you live in that world, and you want to supply your content to AT&T. Then you learn that AT&T wants your stuff compressed to a much greater extent than you currently offer.
If that’s you, then someone in your company is probably looking at advanced video encoders — maybe for your company, maybe to see what could happen if your pictures pinch through someone else’s compressors.
At the same time, you’re probably wondering if this means you have to send the stuff two ways: the existing way (MPEG-2), and the new way, MPEG-4, which also goes by the reasonably synonymous names AVC and H.264. (Microsoft’s VC-1 is a separate but comparable advanced codec, often mentioned in the same breath.)
But wait! Say the encoder suppliers. There’s a sexy economic angle here. Satellite transponders are expensive — upwards of $125,000 per month, each. What if you could send everything up in MPEG-4, then convert it back to MPEG-2 on the ground, if you need to? And if you don’t, just send it along.
The meat of the pitch: Mash your merchandise down twice as much, before it goes up into space. Save serious cash on satellite costs. (And you need encoders to do it.)
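The economics of that pitch come down to simple division. Here is a back-of-the-envelope sketch; the $125,000-per-month transponder cost is from above, but the transponder capacity, stream count, and per-stream bitrates are illustrative assumptions, not quoted figures:

```python
import math

# Assumed figures for illustration only; the monthly cost is the
# column's figure, the rest are round-number assumptions.
TRANSPONDER_COST = 125_000   # dollars per month, per transponder
CAPACITY_MBPS = 36.0         # assumed usable payload per transponder

def monthly_cost(n_streams, mbps_per_stream):
    """Cost of enough whole transponders to carry all the streams."""
    transponders = math.ceil(n_streams * mbps_per_stream / CAPACITY_MBPS)
    return transponders * TRANSPONDER_COST

mpeg2 = monthly_cost(10, 18.0)  # ten HD feeds in MPEG-2: 180 Mbps, 5 transponders
mpeg4 = monthly_cost(10, 9.0)   # same feeds at half the rate: 90 Mbps, 3 transponders
savings = mpeg2 - mpeg4         # $250,000/month in this toy scenario
```

Halving the bitrate doesn't quite halve the bill, because transponders rent in whole units, but the savings are still the kind of number that gets a CFO's attention.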
For the technical people who oversee the making of video content, though, such ideas tend to elicit this loud and constant refrain: “Be careful with the encoding. It’s my baby you’re squishing.” Primarily, they’re concerned about the amount of digital man-handling that will happen to their products, en route to all those discerning eyeballs.
Let’s take it from the perspective of a digitized movie. It starts life on a digital videotape, which means it’s already been compressed once, from the studio source feed, to fit the tape. Next, it gets compressed in the MPEG-2 format, for transport.
Let’s say it then goes to an aggregator, where it is “up-rated” back to roughly the size it was when it was on tape, then re-coded into the MPEG-4 format. That’s four manipulations.
Either direction — converting MPEG-4 material back into MPEG-2, or MPEG-2 material up into MPEG-4 — requires encoders.
What About the Box?
And then there’s that pesky box issue. The stark realities of legacy gear apply here: Today’s deployed base of digital boxes just doesn’t come with chips that can decode MPEG-4 streams.
Ask around, though, and you’ll hear more and more operators say the boxes they deploy after July of this year, or by ’08 at the latest, will contain chips capable of decoding both MPEG-2 and MPEG-4 pictures.
Some cable technologists say that MPEG-4 will enter the scene as a means to compress HD pictures. That takes care of the box takeup issue: If you sign up for HD, after July, you theoretically get the box that has the dual decoder chip.
Then there’s the issue of sending video in yet another way, which requires bandwidth. Less bandwidth, sure, but bandwidth.
To that, some on the cable tech side are mulling whether they could send the MPEG-4 material through the video switches they’re gearing up to install. The logic: Only send what is requested. If what’s requested is an HD stream, pluck it off in MPEG-4 and send it through the switch.
Over time, advanced compression phases in.
A quick review on digital video compression: In general, it works by removing the parts of a picture that remain the same, one frame to the next. This is true of MPEG-4 and MPEG-2.
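The frame-to-frame idea can be sketched in a few lines. This is a toy illustration, not real MPEG (actual codecs work on motion-compensated blocks, transforms, and entropy coding), but it shows how little actually changes between two similar frames:

```python
# Toy illustration of temporal redundancy: express frame 2 as a
# difference against frame 1 and count the pixels that changed.
frame1 = [
    [10, 10, 10, 10],
    [10, 50, 50, 10],
    [10, 50, 50, 10],
    [10, 10, 10, 10],
]
# Frame 2: the bright block slid one pixel right; background unchanged.
frame2 = [
    [10, 10, 10, 10],
    [10, 10, 50, 50],
    [10, 10, 50, 50],
    [10, 10, 10, 10],
]

def delta(prev, cur):
    """Pixel-wise difference; zeros mark pixels an encoder can skip."""
    return [[c - p for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, cur)]

d = delta(frame1, frame2)
changed = sum(1 for row in d for v in row if v != 0)
# Only 4 of the 16 pixels changed; the other 12 come along "for free".
```

Advanced codecs like MPEG-4 wring more out of the same principle, with finer block sizes and smarter motion search, which is where all those extra "knobs" live.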
The kind everyone (cable, satellite, and Verizon) uses now is MPEG-2. It’s what makes it possible for 10 standard definition video streams, or two high definition video streams, to fit into the spectral space taken by one analog video channel.
The kind that’s coming has, in essence, way more knobs. It can crank the same video down at least twice as much, so far, for the same quality. Maybe more.
At NAB, the bragging rights among encoder suppliers centered on who could put the biggest squeeze on HD content. Some said they could go as low as 6 megabits per second (Mbps); some said 5 Mbps. One brave soul said 4 Mbps. (By contrast, an HD stream compressed with MPEG-2 weighs in at around 16 to 19 Mbps.)
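The stream-count claims follow from straight division. A 6 MHz cable channel running 256-QAM carries roughly 38.8 Mbps of payload (a standard cable figure); the per-stream rates below use the numbers above, except the standard-definition rate, which is an assumed typical value:

```python
QAM256_PAYLOAD = 38.8   # Mbps of payload in one 6 MHz channel at 256-QAM

def streams_per_channel(mbps_per_stream):
    """Whole streams that fit in one channel's payload."""
    return int(QAM256_PAYLOAD // mbps_per_stream)

sd_mpeg2 = streams_per_channel(3.75)  # ~10 SD streams, matching the column
hd_mpeg2 = streams_per_channel(18.0)  # 2 HD streams in MPEG-2
hd_mpeg4 = streams_per_channel(6.0)   # 6 HD streams at NAB's boldest 6 Mbps claim
```

That jump from two HD streams per channel to six is the whole business case in one line of arithmetic.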
Is there a picture quality difference? Depends on who you ask. Already, there are those who submit that video compressed with MPEG-4 looks better than video compressed with MPEG-2. Seems counter-intuitive: Squeeze it down twice as much, yet it looks better?
Again, they say. Lots more knobs.
Other video aficionados argue that it’s silly to say MPEG-4 is better than MPEG-2, or vice versa, because it depends on how well the encoder is made. Instead, it’s a matter of saying which implementation of MPEG-4 (or MPEG-2) you like best.
This time around, advanced video compression shows all signs of being ready for prime time — which means it’s probably time to start paying attention to the details.
This column originally appeared in the Technology section of Multichannel News.
Logical v. Physical Node Splitting
by Leslie Ellis // April 16, 2007
If you spend much time listening to cable tech people talk about their bandwidth expansion options, you’ve probably heard the one about the “node-split.” It’s one of the six or seven “tools in the toolbox” that operators are actively applying, to make sure they have enough shelf space for everything that’s coming.
Lately, though, the “node split” talk comes with a prefix. Some say “logical” — the “logical node split.” Others say “virtual node split.”
Just to make sure it’s confusing, the “virtual node split” has lots of aliases: “One-to-one combining,” “serving area group reduction,” and “laser fan-out reduction.”
Don’t be afraid. It’s just tech-talk. If you haven’t yet stumbled upon this language, chances are high that you soon will. This week’s translation sorts it out.
First things first. In this particular instance, “physical” refers to the actual fiber optic cables in a cable system, and where they physically run. “Logical” refers to the laser positioned at or near the center of the network, which transmits “downstream” into those fibers.
The Node-Split Two-Step
Tactically, logical node splits tend to precede physical node splits. It’s a bandwidth two-step. Partly, that’s because logical node splits happen “higher up” in the network — again, at that spot where lasers launch light into the fibers heading out toward neighborhoods. This can be at the headend, or at a regional “distribution hub.”
In the early days of hybrid fiber-coax (HFC) design, you see, laser transmitters were expensive. Digital services were just getting started, with very low subscription rates. For that reason, most operators configured their laser transmitters to split their load across multiple (usually four) 500-home nodes.
This means that right now, your cable modem is connected to a piece of coax, which connects to a 500-home node, which, along with three or so other 500-home nodes, links over fiber to a single transmit laser.
There’s a certain beauty to this, especially if you’re an efficiency buff. Say one side of town contains a college. Or a bunch of Cisco employees. Or both. Bandwidth usage is probably pretty high there. But on the other side of town, let’s assume that broadband service penetration and usage is comparatively low.
Say you have one node in the high-usage area, and three nodes in the low usage area. The campus node starts to get hot — way more people using way more bandwidth. What to do?
Step one is the logical node split: Get that node off the shared laser. Instantly, the campus node gets more bandwidth, and the three nodes still on the first laser get less congestion. (This costs about $2,000 for the laser, and is considered a part of normal headend maintenance.)
This sequence goes on, driven by usage, until each node is served by its own laser. That’s why some people call this “one-to-one combining”: one laser to one node.
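The process can be modeled as a simple data structure: each laser feeds a list of nodes, and a logical split moves a congested node onto a laser of its own. The node names here are made up for illustration:

```python
# Minimal model of logical node splitting. Start at 4:1 combining:
# one laser shared by four 500-home nodes.
lasers = [["campus", "eastside", "westside", "downtown"]]

def logical_split(lasers, hot_node):
    """Move hot_node off its shared laser onto a new, dedicated laser."""
    for group in lasers:
        if hot_node in group and len(group) > 1:
            group.remove(hot_node)
            lasers.append([hot_node])
            return lasers
    # Node already has its own laser: logical splits are exhausted,
    # and the next relief step is a physical split.
    return lasers

logical_split(lasers, "campus")
# The campus node now has a dedicated laser; the other three nodes
# share the first laser with one less neighbor.
```

Repeat until every list holds one node and you have arrived at 1:1 combining, at roughly $2,000 per new laser.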
Now let’s say you’ve completed all of the logical node splits you can do. You’re at 1:1 combining. And again, that campus node is redlining in usage.
What to do? Step two: The physical node split.
Physical node splitting means lighting up another strand of fiber between that node, and the headend. Technically, it involves putting a laser transmitter/receiver on each end of the glass, and it costs about $2,500, all in.
How much unlit glass is there, running out to nodes? When cable’s HFC networks were first being built, 20 or so years ago, engineers did have the presence of mind to leave some growing room, in the form of dark fibers. Generally speaking, four to six individual fibers run out to each node. One to two of them are usually in use before the first split needs to happen.
On split number one, the 500-home node becomes a 250-home node. Split number two halves that to a 125-home node. And so on, down to around 31 homes per node (split number four).
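The halving sequence above is just repeated division by two:

```python
def homes_per_node(initial_homes, splits):
    """Homes served per node after a number of physical node splits."""
    return initial_homes / (2 ** splits)

# Starting from a 500-home node, through four physical splits:
sequence = [homes_per_node(500, n) for n in range(5)]
# 500, 250, 125, 62.5, and finally about 31 homes per node
```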
Kind of makes Verizon’s 32-home serving areas seem a bit less daunting, no?
This column originally ran in the Technology section of Multichannel News.