Monthly Archives: October 2007
The Language of Video Encoding
by Leslie Ellis // October 29 2007
By now you’ve probably heard the one about the big cable operator with the big plan to boost digital capacity by 30% next year.
That’s good for the HDTV channel explosion, obviously. Happiness accelerator: No ditching of the 14 million or so digital boxes already working in people’s homes.
The big cable operator is Comcast. The capacity gain involves a compression improvement, disclosed partially at a recent investors’ conference.
The compression technique launches widely in January, so details will assuredly follow soon. What’s likely to be involved is a largely overlooked member of the bandwidth preservation family: The digital video encoder.
For that reason, the subject of this week’s translation is a brush-up on the language of video encoding. Chances are high that this topic will nudge its way into your conversational life very soon, especially if you follow cable’s shelf space situation with any fervor.
Know, going in, that this batch of improvements necessarily centers, at least in part, on the existing type of video compression, known as MPEG-2. (The “MPEG” stands for “Moving Pictures Experts Group.”) It also probably involves some new video processing techniques that are heavily focused on measurable video quality.
Know also that there’s really no way to apply newer types of compression, like MPEG-4, to a bandwidth problem without installing set-top boxes that know what to do with an incoming MPEG-4 stream.
Three terms tend to pop up repeatedly when talking about how to squeeze – encode – a digital video signal: “Dual pass,” “open loop vs. closed loop,” and “lossy vs. lossless.”
Let’s start with dual pass. Not surprisingly, it’s a way of compressing video in two swipes. Swipe one is the encoder’s best shot at using the components of the MPEG-2 standard to squish down a video.
Swipe two is almost always the secret sauce of the encoder manufacturer. It’s a full second look at the compressed stream, to find ways to squeeze the bit rate down even more.
Right now, with HDTV mostly a volume game, bit rate reduction often leads any compression discussion. The next HDTV chapter, though, will be about picture quality. The ideal encoder accomplishes a good squish without noticeably degrading the quality of the picture.
Fact: Most professional-grade encoders use “dual pass” techniques. The real action is in what they do within that second pass.
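For the mathematically curious, here is a deliberately tiny sketch of the classic two-pass idea, written in Python with made-up frame complexities and a made-up bit budget. It is illustration only, not any vendor’s secret sauce: pass one sizes up how hard each frame is to compress, and pass two spends the bit budget accordingly.

    # Toy dual-pass sketch (illustrative only; all numbers are invented).
    # Pass one measures how "busy" each frame is; pass two hands out a fixed
    # bit budget in proportion, so complex scenes get more bits than quiet ones.

    def first_pass(frames):
        # Stand-in for the analysis pass: score each frame's complexity.
        return [frame["complexity"] for frame in frames]

    def second_pass(complexities, total_bit_budget):
        # Allocate the budget proportionally to those scores.
        total = sum(complexities)
        return [total_bit_budget * c / total for c in complexities]

    # Pretend frames: a talking head, a slow pan, an explosion.
    frames = [{"complexity": 1.0}, {"complexity": 3.0}, {"complexity": 6.0}]
    budget = 4_000_000  # hypothetical bits available for these three frames

    print(second_pass(first_pass(frames), budget))
    # [400000.0, 1200000.0, 2400000.0] -- the explosion gets the lion's share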
Soon after “dual pass,” you’ll run into “closed loop” and “open loop.” The loop is the linkage (or not) between a video encoder and a statistical multiplexer.
Refresher: If the work of a video encoder is to squish one digital video stream, the work of the statmux is organizing lots of those squished streams for the ride toward homes. (Lingo translation: “Statmux” is tech-talk shorthand for “statistical multiplexer.” “Mux” is also acceptable. Both remove more than five syllables.)
A good statistical multiplexer is like my friend Diana, who can take one long look at the overhead bin on a small airplane, and at the pile of stuff needing stowage — then magically fit the hat box, guitar, duffels, crutches, rolling bags, ficus tree, parkas and backpacks into the bin.
The loop that’s being closed in a “closed loop” scenario is the one that’s created between the encoder and the statmux. If it’s closed, those two machines are working together to organize bits for the ride. If it’s open, they work independently. There are pros and cons to both.
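Here is one way to picture the difference, again as a small Python sketch with invented numbers. In the open-loop version, the encoders and the statmux ignore each other, so every program gets an equal slice of the channel; in the closed-loop version, the statmux knows how needy each encoder is at the moment and divides the channel accordingly. (The 38.8 Mbps figure is roughly the payload of one 256-QAM cable channel; the per-program “need” scores are pure fiction.)

    # Toy open-loop vs. closed-loop statmux (illustrative; figures are invented).

    CHANNEL_CAPACITY = 38_800_000  # ~payload of one 256-QAM channel, in bits/sec

    programs = {"news": 2.0, "golf": 1.0, "football": 5.0}  # relative need right now

    def open_loop(programs, capacity):
        # Open loop: no conversation between encoder and statmux;
        # every program gets the same fixed slice.
        share = capacity / len(programs)
        return {name: share for name in programs}

    def closed_loop(programs, capacity):
        # Closed loop: the statmux feeds demand back to the encoders,
        # so the bits follow whichever picture needs them most.
        total_need = sum(programs.values())
        return {name: capacity * need / total_need for name, need in programs.items()}

    print(open_loop(programs, CHANNEL_CAPACITY))    # everybody gets ~12.9 Mbps
    print(closed_loop(programs, CHANNEL_CAPACITY))  # football gets ~24.3 Mbps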
At some point, you may hear mention of “lossy.” (If you do, you’re hanging with the advanced class.) What’s lost in “lossy” compression is any exact path back to the original material. Lossy algorithms tend to make files smaller.
Lossless compression is a more mathematical, fully reversible kind of bit tossing, so that the underlying content can always be perfectly reconstructed. It’s sort of like when you use a “zip” program to shrink an important file you need to send to Harry, because otherwise Harry’s email server keeps kicking it back to you with a note that it’s too big.
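If you want to see “lossless” in action, Python’s built-in zlib module does zip-style compression in a couple of lines, and the decompressed result comes back bit-for-bit identical to the original:

    # Lossless compression in miniature, using Python's standard zlib module.
    import zlib

    original = b"CHAPTER ONE. " * 1000        # repetitive text compresses well
    compressed = zlib.compress(original)
    restored = zlib.decompress(compressed)

    print(len(original), len(compressed))     # 13,000 bytes shrinks to a few dozen
    assert restored == original               # lossless: a perfect reconstruction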
Encoder-speak is coming back into the high-tech vogue because necessity is the mother of invention: HDTV signals are huge. Bandwidth is precious. The installed base of digital boxes using MPEG-2 compression dwarfs the number of boxes that can “see” MPEG-4 signals.
Something had to be done.
This column originally appeared in the Technology section of Multichannel News.
A Programmer’s Guide to the “Up-Rez”
by Leslie Ellis // October 15 2007
Three things are indisputable about high definition television: More channels are coming, more homes can afford the HD sets to display them, and consumers harbor big opinions about which service gives the best – and worst – HD pictures.
Right now, those opinions are all over the place. Depending on the blog, the “hands down best” HD pictures come from AT&T. And DirecTV. And cable, and EchoStar, and Verizon.
Ditto for the “hands down worst.”
DirecTV is on the receiving end of most of the recent blog angst, mostly because it’s moving at ramming speed toward its year-end promise of 100 channels.
It’s risky, though, to take comfort in DirecTV’s woes.
Why? Because there is no official technical benchmark for what constitutes “an HDTV picture.” Ditch any daydreams about a button on the remote that lets you know what resolution you’re getting, like you can do on the PC to find out what broadband speed you’re really getting. So far, it doesn’t exist.
What we’re left with is … people’s opinions. Even the experts in picture resolution are quick to point out that quality is highly subjective. My eyes see differently than yours, and your eyes see differently than the person nearest to you right now. It’s a byproduct of being a human.
Plus, glitches in picture quality can’t easily be pinned to how a program is distributed – over satellite, over cable, over fiber, over copper. From the time a program is created to the time it shows up on your snazzy new flat-panel HD set, it’s probably been “touched,” meaning manipulated, at least four times.
Then there’s the simple fact that today’s larger, higher quality TV sets show glitches larger and more distinctly. (Aside: At the Consumer Electronics Show, in January, a chief technologist from a major program network said that his big “aha” was that in 2008, TV displays will outperform distribution networks, in terms of how much picture information they can display.)
But back to the blog buzz, which seems to center on what programs are “true HD,” versus an “up-rezzed” version. (“Up-rez” is HD shorthand for “up-resolution,” also known as “up conversion.”)
Here’s what that means: At any program network, right now, some fraction of its content library was mastered in an HD format. The rest was not. The latter category will need more bits, in order to “look good” on those big, beautiful HDTV displays. That’s the up-rez.
The extent to which a program or movie can be “up-rezzed” also depends on how it was stored. If it’s on film, you’re good. If it’s on videotape, not so good.
The process for creating master film reels in a digital, high definition format is known as “telecine” (pronounced “tele-sinny”). It usually starts with a clean-up, to remove any dirt, scratches, hair, or other visible glitches. It’s expensive and time consuming, but at the end, it’s true HD.
The process for up-rezzing videotape content is less accurate, and is part (part!) of the reason why some pictures look better than others on HD screens.
Up-resolution, as a technique, has two main components: Line-doubling, and interpolation.
Line-doubling is a method used on the vertical part of the picture, as it is “drawn” on the screen – more lines, more bits, more picture. Interpolation is the addition of bits within the horizontal lines, in a way that is hopefully creative enough to estimate what’s really happening in the picture. If the interpolator sees a line of red dots, maybe it adds another red dot, for instance.
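To make the guessing concrete, here is a miniature, hypothetical up-rez in Python, run on a six-pixel “picture.” Line-doubling repeats each line on the way down the screen; interpolation slips an averaged guess between each pair of neighboring samples. Real up-converters use much fancier math, but the guesswork is the same in spirit.

    # A miniature, hypothetical "up-rez" on a tiny grayscale picture.
    picture = [
        [10, 20, 30],
        [40, 50, 60],
    ]

    def line_double(rows):
        # More lines: repeat each line vertically.
        doubled = []
        for row in rows:
            doubled.append(list(row))
            doubled.append(list(row))
        return doubled

    def interpolate_row(row):
        # More samples per line: insert an averaged guess between neighbors.
        out = [row[0]]
        for left, right in zip(row, row[1:]):
            out.append((left + right) // 2)
            out.append(right)
        return out

    up_rezzed = [interpolate_row(row) for row in line_double(picture)]
    for row in up_rezzed:
        print(row)
    # [10, 15, 20, 25, 30] twice, then [40, 45, 50, 55, 60] twice:
    # twice the lines, nearly twice the samples per line, and every new value is a guess.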
The point is, up-rezzing is based on clever guessing, but it’s a guess. It is not “true HD.” From there, that up-rezzed show is compressed and sent along — where it might be compressed, uncompressed, and re-compressed a few times before it gets to the HDTV in front of Consumer Jane’s couch.
Right now, the name of the HD game is volume: Who has the most channels. The next chapter will probably be about quality.
Quality, in picture resolution, is about how much picture information there is, which depends on how many bits are used, which depends on how much bandwidth is available.
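Some back-of-the-envelope arithmetic, with admittedly rough figures, shows the squeeze: an uncompressed 1080-line picture runs north of a billion bits per second, while a commonly cited MPEG-2 HD slot is closer to 15 Mbps, a compression ratio on the order of 80 to 1.

    # Rough arithmetic on the HD squeeze (approximate, illustrative figures).
    width, height = 1920, 1080
    frames_per_sec = 30        # roughly what 1080i delivers in full pictures
    bits_per_pixel = 20        # ballpark for 10-bit 4:2:2 component video

    raw_bits_per_sec = width * height * frames_per_sec * bits_per_pixel
    typical_hd_slot = 15_000_000   # a commonly cited MPEG-2 HD rate, ~15 Mbps

    print(raw_bits_per_sec)                    # ~1.24 billion bits per second
    print(raw_bits_per_sec / typical_hd_slot)  # roughly 83:1 compression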
This column originally appeared in the Technology section of Multichannel News.
The Three Things That Nearly Always Hamstring New Tech Vendors in Cable
by Leslie Ellis // October 01 2007
It used to be that three blockers predictably conspired to gum up innovation and slow down new product introductions on the video side of the cable house.
One was the billing system. The bindweed of the back office, its tendency was to wind its way into all other mission-critical realities, like new service activation, customer care, even dispatch.
Two was “the guide.” Like a gas, it seeped into all available space within those early, already-constrained digital boxes. Plus it seemed to want to grow up to be the foundational platform for anything that showed up on the screen. It wanted to be the guide and the middleware.
Three was the conditional access and encryption system. Because the supply side for keeping content safe was split down the middle – between Motorola and Scientific-Atlanta – making changes meant constantly nudging two companies with mutually exclusive technologies.
The three blockers had three things in common: Making changes took too long, cost too much, and yielded less control than operators wanted.
That’s when the game changed. Maybe the time had come to take over for real, various operators said. Maybe it’s time to build our own (fill in the blank).
Fast forward to now: Billing still tilts toward “buy” in build v. buy. It’s painful enough just to modify the billing system. Changing it out? You might as well remove your own veins.
The guide and the conditional access mechanisms, however, did tip toward “build.”
Two things happened to the guide. One was the OpenCable Application Platform — what we used to call “OCAP.” It settled the question of the guide trying to double as the middleware: with OCAP as the middleware, the guide could go back to being the guide. Plus, OCAP solved another problem: how to give cable services a national footprint.
The guide also became a “build” item for some operators – notably Comcast and Time Warner Cable. Both made substantial investments in the “build your own” alternative, either by absorbing guide-oriented companies outright (Comcast), or by assembling brainpower in-house (Time Warner).
From the outside looking in, it’s difficult to quantify whether these “build” moves made costs drop and innovation accelerate. Here’s one way to look at it: Inventions like “Start Over,” “Look Back,” subscription VOD, and the ability to do a self-upgrade from the TV screen are all reasonably new. They came, in part, from the “build your own” guide camp.
A couple of things happened on the conditional access side, too. The biggest chapter, since the “good old days” of embedded security, was the CableCard. It achieved the goal of a national cable footprint, for those consumers who purchase a TV set with a CableCard slot, take it home, and decide to get a scrambled service.
Somewhere in all of that, an effort called “downloadable conditional access” (“DCAS”) was born, in the form of an operator-owned company, PolyCipher. PolyCipher is based in Denver, but incorporated in Delaware, as part of “NGNA” — the “Next Generation Network Architecture” joint venture between Comcast, Time Warner, and Cox.
To put this in perspective, NGNA is known within technical circles as a place where strategic necessities are turned into a plan for products. The channel bonding aspects of DOCSIS 3.0, for instance, were born within NGNA. So was the notion of modular headend gear for broadband data. Ditto for the set-top line Comcast calls “RNG” – initially short for “Real Next Generation,” then changed to “Residential Network Gateway,” depending on whose version of the story you hear.
The difference this time is that the “build” was done as a separate company — understandable, given that this is crypto stuff. But still, it doesn’t take that much of a logic leap to wonder if the work of PolyCipher fits into the “get it done, move on” model of the prior NGNA efforts.
In other words, design the DCAS chip, find someone to make it, participate in the tape-out (initial layout), get it into production – then retreat. Move on to the next strategic priority.
What’s better, build v. buy? It’s a controversial philosophy question that varies, depending on what’s being built or bought. At the very least, though, it probably means that those operators entrenched in the “build your own (fill in the blank)” camp are a bit more empathetic to the timing and cost issues once shouldered solely by their suppliers.
This column originally ran in the Technology Section of Multichannel News.