An Introductory Stroll Through HDTV’s Problem Areas
by Leslie Ellis // July 22, 2002
Pity technology’s new-again beauty, high definition television — that breathtakingly beautiful girl at the dance, so lovely that not a single boy can muster the courage to ask her to the floor.
Everybody wants HDTV, but nobody wants to pay for it. The reaction to it is rapturous — “Mesmerizing!” “Better than the eye can see!” — yet fewer than three of every 100 American homes contain an HDTV set. Of those, most are used more to view DVDs than to tune into the growing, but still slim, amount of HD television content.
From origination to destination, and at every junction along the way, HDTV has issues. Big, techno-political issues, rooted in fear and cost – the dynamic duo of “do nothing.”
This week’s translations will stroll the signal path of a broadcast HDTV transmission, with brief stops at the trouble zones. Subsequent columns will further explore the many strife points.
Troubles lurk right from the start of an HDTV program. First is cost. It’s high, to say the least. HD-equipped TV production trucks can run north of $500,000. Transmitting an HDTV show also means duplicate staff and resources, because it doesn’t supplant existing analog broadcasts; it runs alongside them.
Also omnipresent is the bulk of the HDTV signal. The extra information that justifies the high-definition label is plump – so much so that two HDTV channels can barely wriggle into the bandwidth used by 10 of today’s digitized cable channels. And bandwidth is neither infinite nor free.
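The squeeze can be sketched with back-of-the-envelope arithmetic. The bitrates below are assumptions based on typical figures of the era (roughly 19.4 Mbps for a broadcast HD stream, 3.75 Mbps for a digitized SD channel, 38.8 Mbps of payload in one 6 MHz cable channel), not numbers from the column itself:

```python
# Rough bandwidth arithmetic: how many HD or SD streams fit in one cable channel.
# All three rates are assumptions typical of early-2000s MPEG-2 encoding.
QAM256_PAYLOAD_MBPS = 38.8   # usable payload of one 256-QAM cable channel
HD_STREAM_MBPS = 19.4        # one broadcast-quality HD stream
SD_STREAM_MBPS = 3.75        # one digitized standard-definition channel

hd_per_qam = int(QAM256_PAYLOAD_MBPS // HD_STREAM_MBPS)   # whole streams only
sd_per_qam = int(QAM256_PAYLOAD_MBPS // SD_STREAM_MBPS)

print(f"One 256-QAM channel fits {hd_per_qam} HD streams or {sd_per_qam} SD streams")
```

Under those assumptions, the same 6 MHz slot that carries 10 SD channels carries just two HD channels — the “barely wriggle” of the paragraph above.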
When an over-the-air HDTV signal arrives at a cable headend, it needs processing known as “re-modulation.” That’s because sending stuff through the air is harsher than sending it over a wire. (Anticipating this, and adding information to compensate for it, is another reason HDTV signals are so stout.) For that reason, broadcasters use a method called “vestigial sideband” (VSB) to convey signals. Cable uses QAM, or quadrature amplitude modulation.
Engineers say it’s not a big deal to re-modulate from VSB to QAM. Being realists, though, they usually add that any signal conversion can introduce problems.
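A rough sketch of why the two methods carry different payloads: each modulation packs a different number of bits into every transmitted symbol, and the noisier over-the-air path then needs heavier error-correction overhead. The symbol rates below are assumptions drawn from the ATSC 8-VSB and ITU-T J.83 Annex B 256-QAM specifications, not from the column:

```python
# Symbol rate x bits-per-symbol gives the raw channel rate; forward error
# correction (FEC) overhead then shaves it down to the usable payload.
# Figures assumed from the ATSC 8-VSB and ITU-T J.83B 256-QAM specs.

def raw_rate_mbps(symbol_rate_msym: float, bits_per_symbol: int) -> float:
    """Raw (pre-FEC) bitrate in Mbps for a single modulated carrier."""
    return symbol_rate_msym * bits_per_symbol

# Broadcast: 8-VSB in a 6 MHz slot -- 3 bits/symbol, heavy FEC for the airwaves
vsb_raw = raw_rate_mbps(10.762, 3)   # ~32.3 Mbps raw; ~19.4 Mbps survives FEC
# Cable: 256-QAM in the same 6 MHz -- 8 bits/symbol, lighter FEC on the quiet wire
qam_raw = raw_rate_mbps(5.3605, 8)   # ~42.9 Mbps raw; ~38.8 Mbps survives FEC

print(f"8-VSB raw: {vsb_raw:.1f} Mbps, 256-QAM raw: {qam_raw:.1f} Mbps")
```

The re-modulation step itself just moves the payload from one carrier format to the other — which is why engineers call it routine, while still watching for conversion artifacts.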
And then there’s the consumer side of HDTV, where there are far more questions than answers, and the answers are hiding within unresolved arguments. Most, if not all, of the in-home HDTV problems have to do with the set itself. Is an HDTV set the same as a digital TV set? And for that matter, what’s digital about a “digital TV,” as described by consumer electronics makers and retail stores?
In these nascent, confusing days of HDTV, many consumers think they already own an HDTV set, but when the cable installer gets there with an HD-capable set-top box, they learn otherwise.
It seems that “digital,” as an adjective, is as watered-down as “new and improved,” especially when it comes to HDTV.
Part of this descriptive problem links to a decades-old argument of surprising intensity, given the blandness of the protagonists: Connectors. Yet it is precisely at the “gozintas” of HDTV that things get ugly.
In a huge oversimplification, there are two types of connectors that feed HD signals into HD displays: analog and digital. Of the digital, there are also two types: FireWire, also known as IEEE 1394, and “DVI,” for “digital visual interface.” (DVI also has a next-generation version known as HDMI, for High-Definition Multimedia Interface.)
FireWire makes it easy to daisy-chain more electronic things to the HDTV set – like digital recorders. This, of course, makes Hollywood queasy, not to mention vocal, about rights management and copy protection. FireWire also caps the advanced graphics that can accompany a show, because its throughput, while fast, isn’t fast enough for HD graphical overlays.
DVI solves those problems, but it precludes the attachment of the other stuff that consumer electronics companies would like to sell, which makes them unhappy.
And, while less of an issue now, matters of resolution linger, further muddying the understanding of HD. These are the “720p” and “1080i” tags, which describe lines of resolution and how they’re painted onto the screen: “p” for progressive scan (every line drawn in order), “i” for interlaced (odd lines, then even).
But wait, there’s more. Questions also linger around the notion of adapting a digitally encoded show from one resolution to another, often referred to as “up-rez’ing” or “down-rez’ing” an HDTV image. To “up-rez,” for example, is to add detail that wasn’t in the original, standard-definition, digital picture. It’s like trying to make a pineapple upside-down cake from two Twinkies and a can of fruit cocktail.
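A minimal sketch of the Twinkie problem, assuming simple linear interpolation as the scaling method (real scalers are fancier, but the principle holds): every “new” pixel is just a blend of old ones, so no genuine detail appears.

```python
# "Up-rez'ing" sketch: stretch a standard-definition scanline to more samples
# by linear interpolation. The new samples are weighted blends of the old
# ones -- no detail that wasn't in the original shows up. Purely illustrative.

def up_rez(line, new_len):
    """Linearly interpolate a list of pixel values to new_len samples."""
    old_len = len(line)
    out = []
    for i in range(new_len):
        pos = i * (old_len - 1) / (new_len - 1)   # fractional position in source
        lo = int(pos)
        hi = min(lo + 1, old_len - 1)
        frac = pos - lo
        out.append(line[lo] * (1 - frac) + line[hi] * frac)
    return out

sd_line = [0, 100, 0]        # one bright pixel between two dark ones
hd_line = up_rez(sd_line, 5)
print(hd_line)               # the "extra" pixels are just averages of neighbors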
Why care? Why now? Because if you work for one of the top-10 U.S. cable operators, you’re part of a commitment to deliver five HDTV channels, including broadcast transmissions, next year.
After 20 years in the making, the most tangible change in television since color may finally get its turn on the dance floor.
This column originally appeared in the Broadband Week section of Multichannel News.
To Gig-E or Not to Gig-E: Part 2
by Leslie Ellis // July 8, 2002
When we left off last time, we’d tapped into the meaning of this new thing called “Gigabit Ethernet,” or “Gig-E.” This time, a look inside the thought processes of the industry’s architecturally minded technologists, as they weigh whether Gig-E’s contributions to signal transportation make good sense.
Generally, Gig-E’s proponents see it as an inexpensive way to outfit cable plant for the billowing bandwidth needs of on-demand TV, beyond films. Detractors say that Gig-E may be neither as cheap nor as fast as promised.
The answers lie partly in a cost comparison between digital video storage and bandwidth, and partly in common sense about what works and doesn’t work for specific cable systems. Thus, this week’s translation will dwell largely on the philosophy of technology decisions.
To believe in the cost benefits of Gig-E is to believe in the inevitability of the many letters now preceding “-OD” (on demand) in industry conversations. There’s “V” and “SV,” the old standbys, for “video” and “subscription video.” But there’s also “FOD” (free on demand), “EOD” (everything on demand), “GOD” (games on demand) and “E-I-E-I-OD.” (kidding.)
Offering on-demand service is suddenly about a lot more than movies. TV shows, sporting events, short-format how-to clips, and any other digitized video material would also roost on those video on demand (VOD) servers, along with the movies. Consumers could watch what they wanted, when they wanted, with all the VCR-like features of VOD – fast forward, rewind, pause.
Today’s VOD offerings – again, mostly films – are usually stored on servers in distribution hubs. Each hub manages the flow of signals to and from about 200,000 homes passed by cable service. The hub is also the aggregation point for the 500-home nodes you always hear about when people talk system architecture.
Say a system has five such hubs. Making more on-demand stuff available for customers would mean duplicating all of that material, five times. Doing so is expensive and unwieldy, Gig-E people say. Maybe it’s cheaper to centralize the servers in one headend, and switch the video out over Gig-E to the hubs.
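The storage side of that trade-off can be sketched with hypothetical numbers; the library size below is an illustrative assumption, and only the five-hub count comes from the example above:

```python
# Hypothetical sketch of the duplication problem: one copy of the on-demand
# library per hub, versus one central copy switched out over Gig-E.
# LIBRARY_GB is an assumed, illustrative figure -- not from the column.

LIBRARY_GB = 2_000    # assumed on-demand library: films plus TV shows, in GB
HUBS = 5              # hubs in the example system

replicated_gb = LIBRARY_GB * HUBS    # every hub holds the full library
centralized_gb = LIBRARY_GB          # one headend copy, piped out on demand

print(f"Replicated: {replicated_gb} GB, centralized: {centralized_gb} GB "
      f"({replicated_gb - centralized_gb} GB saved)")
```

The savings scale linearly with hub count and library size, which is why the argument sharpens as “-OD” offerings grow beyond movies.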
Here’s where it makes sense to look a bit deeper. Let’s assume a 50% penetration of digital video service to that hub that passes 200,000 homes (some MSOs go much higher). You’re down to 100,000 homes, ready to watch TV and movies on demand.
This is where the math changes. Early on-demand experiments with video other than movies show radically different usage patterns. Most common VOD (i.e., movies-on-demand) models assume that at any given time, the network must be ready for 10% of the digital homes in that hub (10,000) to request a video stream at the same time. They call this the “peak simultaneous” usage rate.
Yet in TV on demand, usage peaks could go much, much higher. Nobody knows for sure how much higher – it’s too soon. Let’s go completely mad and say that at any one time, half of the people who could do on-demand TV viewing, would do on-demand TV viewing. That would mean a need to store and prepare as many as 50,000 streams at one time – and that’s just for one of the five hubs.
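Those stream counts, and the transport bandwidth they imply, can be sketched as follows; the 3.75 Mbps per-stream figure is an assumed MPEG-2 standard-definition rate, not one from the column:

```python
# Peak-usage arithmetic for the example hub: 200,000 homes passed, 50% digital
# penetration, then two peak-simultaneous assumptions -- the classic 10%
# movies-on-demand figure versus a worst-case 50% for TV on demand.
# The per-stream bitrate is an assumed MPEG-2 standard-definition rate.

HOMES_PASSED = 200_000
DIGITAL_PENETRATION = 0.5
STREAM_MBPS = 3.75                    # assumed MPEG-2 SD stream

digital_homes = int(HOMES_PASSED * DIGITAL_PENETRATION)   # homes able to order

for label, peak in [("movies-on-demand", 0.10), ("TV-on-demand worst case", 0.50)]:
    streams = int(digital_homes * peak)
    gbps = streams * STREAM_MBPS / 1_000
    print(f"{label}: {streams:,} streams, ~{gbps:.1f} Gbps out of this hub")
```

Even the conservative case implies tens of gigabits per second leaving a centralized headend, which is the transport problem Gig-E, and switching, are meant to address.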
The other side of the model is to ascertain how much it would cost to centralize all the servers in the headend, put in a fast transportation method between the headend and the hubs – like Gig-E – and fold in a video switching mechanism to use the available bandwidth more efficiently.
If you lived through cable’s early 1990s technology chapters, when cable and telcos were poised to raid each others’ core businesses, you’re probably close to a gasp right now. Switch the video? Isn’t that industrial blasphemy?
If you didn’t live through it, here’s the recap: Back when cable and the telcos were starting to square off, some telcos picked switched digital video (SDV) technology to get TV signals into homes. For cable technologists, SDV was an easy target, scoffed at as “gold-plated,” “frivolously expensive,” “unnecessary.”
But then again, VOD was little more than convention glitz back then. Video wasn’t yet digital. The hot sellers in set-tops were Scientific-Atlanta’s 8600x and General Instrument’s CFT-2200, both analog. Gig-E had yet to emerge as the grandchild of 10 Megabit-per-second Ethernet, the fastest flavor of its day.
There are no right or wrong answers yet to the question of Gig-E’s applicability in lots-on-demand systems. In hubs where space is tight, it may make sense to put the servers elsewhere; other local conditions will skew the logic every which way.
But it’s safe to say that Gig-E is worth consideration, and that switched video perhaps isn’t the pariah it once was.
This column originally appeared in the Broadband Week section of Multichannel News.