by Leslie Ellis // October 14 2002
An expression common to computer scientists is on the rise among cable technologists, and it’s a doozy. Usually it crops up in conversations among software engineers about digital video hardware and software.
The expression: “Abstraction layer.”
Abstraction layers are everywhere in software. Industrially, an abstraction layer is something software architects build. Its intent is to take something complicated, with many possible outcomes, and to put something simpler on top of it: one uniform way of doing the complicated thing, that works in lots of different places.
Clear as mud, right?
Let’s break it down further, starting with the word “abstraction.”
Outside of techno-interpretations, “abstraction” carries at least seven meanings (and that’s without consulting the Oxford English Dictionary). Each of its definitions seems only vaguely related to the next: There’s abstraction as in “lost in abstraction.” Or, abstraction as in a removal of something. And, abstraction as the inventive isolation of an object’s characteristics, like when sorting something into its genus or species.
The whole abstraction thing, then, is fairly cerebral, and it doesn’t get much better when hitched to “layer” and whisked into the lexicon of software engineering.
The general invisibility of software is why software people are experts at drawing piles of rectangles when explaining the “hows” of their world. Most of this stuff is hard to envision without the rectangles, layered to make a stack.
A fairly typical depiction will show a rectangle marked “set-top hardware” at the bottom, “operating system” above it, “middleware” above that, and “applications” at the top.
The north-south intersections of those four rectangles are where the abstraction layers do their work. Abstraction layers essentially say “do it,” instead of listing long instruction sets about how to do it.
Abstraction layers bring with them their own set of prefixes: Hardware abstraction layer, software abstraction layer, database abstraction layer, network abstraction layer. In every sense, the intent is to simplify, for the next software module in the stack, how to proceed with an activity.
So a hardware abstraction layer, in the case of the rectangle at the bottom, marked “set-top hardware,” will summarize for the box above it, marked “operating system,” how to proceed with a desired activity. And so on, up the stack.
If there weren’t abstraction layers, software programs would exist as big blobs, not all that re-useable, and not at all happy or speedy about handling the inevitable changes that happen during the course of business. Without an abstraction layer, then, deployments of whatever the advanced digital video product may be – VOD, SVOD, you name it – would wind up as massive, custom integration projects.
Example: The navigational part of a VOD system, which in the stack of rectangles would sit at the top, as an “application,” just wants to tune a movie. So its abstraction layer says to the one below it, “fetch me the stream for ‘Waking Ned Devine,’ please.”
It doesn’t say, “tune the following 6 MHz carrier, find me the program identifier for the MPEG-2 stream related to ‘Waking Ned Devine,’ isolate the index frames, begin filling the MPEG buffer, decompress the video, and get it on the screen, please.”
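For the code-inclined, the difference sketches out like this. It’s a toy, in Python; every class and method name below is invented for illustration, not lifted from any actual set-top stack:

```python
# The fussy lower layer: four steps, in order, or no movie.
class TunerHardware:
    def tune_carrier(self, freq_mhz):
        print(f"tuning the {freq_mhz} MHz carrier")

    def find_program_id(self, title):
        print(f"finding the MPEG-2 program identifier for {title}")
        return 0x1FF  # a made-up program ID

    def fill_buffer(self, program_id):
        print(f"filling the MPEG buffer from program {hex(program_id)}")

    def decompress_and_display(self):
        print("decompressing the video; picture on screen")


# The abstraction layer: one simple verb on top of four fussy ones.
class VideoAbstractionLayer:
    def __init__(self, hardware):
        self.hardware = hardware

    def fetch_stream(self, title, freq_mhz=555.0):
        self.hardware.tune_carrier(freq_mhz)
        pid = self.hardware.find_program_id(title)
        self.hardware.fill_buffer(pid)
        self.hardware.decompress_and_display()


# The application just says "do it."
guide = VideoAbstractionLayer(TunerHardware())
guide.fetch_stream("Waking Ned Devine")
```

The application never learns what a program identifier is, which is exactly the point.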
The kissing cousin to the abstraction layer is the “API,” or “application program interface.” APIs are the software tools used for the layers to talk to one another. People who work at the hardware level — engineering chips onto boards, and writing machine language code to make the chips do their work — need to know how to make all of that sensible to the next thing that needs it. In the case of the set-top box, the next thing that needs it is usually the operating system.
APIs, then, help the operating system know what’s below it. APIs at the middleware level, above the operating system, help applications developers know how to get to what’s below it, and so on.
Building abstraction layers, computer scientists assure, is an art. There are people whose entire lives are spent building abstractions. It’s not always perfect: Sometimes, getting to a high level of abstraction necessitates throwing away some good stuff, too.
Abstractions, as cerebral as they are, weren’t meant to be a confusion device developed by computer scientists to make the rest of us feel stupid. They were meant to simplify. They theoretically afford bigger portions of an R&D budget to go to the actual product, not to custom integration.
And, as cable executives up to the CEO level know all too well, anything that relieves the stacks of dollars going to custom integration is worth its weight in abstractions.
This column originally appeared in the Broadband Week section of Multichannel News.
by Leslie Ellis // September 30 2002
In a market reeling with jailed executives, growth-suppressing accounting changes, and a depressing increase in severance package discussions, any bit of good news is refreshing.
Which is why it was vaguely comforting to learn of a zero-cost way to double the upstream capacity for cable modems, as part of a longer-term move to a 12-fold increase.
As quirky as it is, the upstream signal path matters. It matters because every significant new service slated for launch on cable plant requires it. On-demand video needs it to pass along the customer-generated fast forward, rewind, play and pause commands inherent to the service. Telephony, whether IP or circuit switched, needs it to haul everything that happens after a person picks up the phone to make a call. Broadband Internet service needs it to carry requests, like for a Web page. And so on.
Yet the upstream path – also known as the reverse path, and the return plant – is skinny, especially compared to the forward, or “downstream,” signal path (from headends to homes).
How skinny? Picture a bookshelf. On it are six paperbacks, and 115 hardbacks. The ratio of paperbacks to hardbacks is roughly the same as upstream to downstream bandwidth, in a 750 MHz, two-way cable system.
Currently, engineers employ two architectural methods to better the upstream bandwidth experience for cable modem partakers. One is to “un-combine nodes,” and another is to “split nodes.”
Most operators, when they installed the gear to enable broadband Internet service, four or five years ago, combined as many as four 500-home nodes onto one port of the Cable Modem Termination System, or CMTS – the headend part of broadband Internet service. As penetration levels increase, and more people share the same slice of bandwidth, those nodes can be “un-combined,” by putting only two nodes, and then one node, per port. That’s node un-combining.
Node splitting happens when that 500-home node is halved, so that 250 homes share the upstream signal path.
And now, methods are emerging to double upstream throughput by stretching the width of the upstream channel, from 1.6 MHz to 3.2 MHz. The extra room doubles the carrying capacity of the upstream channel, from 2.5 Megabits per second (Mbps) to 5 Mbps.
On an operational level, it means that when a system calls in for the resources needed to split a node, it may be asked to first set the CMTS at 3.2 MHz channel spacing.
Engineers familiar with the doubling process call it a no-cost no-brainer: First, make sure nothing else is using the spectral chunk earmarked for the widening. Some set-tops, like those with impulse pay-per-view (IPPV) capabilities, use the upstream path, as does any telephony equipment. Once the coast is clear, type in a command that tells the CMTS to get wider. No cost; double the bandwidth.
The ability to stretch upstream channel widths is, in part, another plum salvaged from the remains of Excite@Home. Some MSOs didn’t have control over how CMTS units were configured or upgraded when they were @Home constituents; only now can they adjust CMTS parameters at will.
Splitting nodes and going to 3.2 MHz channel width means that the 5 Mbps of upstream throughput is available to half as many customers: Fewer customers get more bandwidth. Example: Splitting a 500-home node means that 50 homes, not 100 homes, in a 20% penetrated broadband Internet market, share the 5 Mbps.
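The sharing arithmetic is simple enough to sketch in a few lines of Python, using the column’s example numbers (the 20% take rate is a scenario, not a forecast):

```python
def modems_sharing(homes_in_node, penetration):
    """How many cable modems share one upstream channel."""
    return int(homes_in_node * penetration)

UPSTREAM_MBPS = 5.0  # 3.2 MHz channel width

for homes in (500, 250):  # before and after a node split
    modems = modems_sharing(homes, penetration=0.20)
    each = UPSTREAM_MBPS / modems * 1000
    print(f"{homes}-home node: {modems} modems share "
          f"{UPSTREAM_MBPS} Mbps ({each:.0f} kbps each, worst case)")
```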
And there are two other upstream throughput boosts on the horizon. One involves upshifting to 16-QAM (quadrature amplitude modulation), which, in a 3.2 MHz channel width, lifts the upstream carrying capacity of that channel to 10 Mbps.
The big kicker comes with DOCSIS 2.0, which somehow managed to harmonize two competing types of advanced modulation. Both use impressively nerdy descriptors: Synchronous Code Division Multiple Access (S-CDMA), and Frequency Agile Time Division Multiple Access (FA-TDMA).
With DOCSIS 2.0, the capacity of an upstream channel bolts to 30 Mbps, with noise safeguards. Noise management matters, a lot, because moving data that quickly tends to make it more susceptible to noise hits — and noise hits are legendary in the 5-40 MHz spectral range.
Put it all together, and the carrying capacity of the upstream signal path progresses like this: 2.5 Mbps upshifts to 5 Mbps, then 10 Mbps, then 30 Mbps, as a function of upstream channel width and modulation improvements.
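That progression falls out of two numbers: channel width and bits per symbol. A back-of-the-envelope sketch (the 0.8 factor is the DOCSIS ratio of upstream symbol rate to channel width; the results are raw rates, which shake out to the 2.5/5/10/30 figures once protocol overhead takes its cut):

```python
def upstream_mbps(width_mhz, bits_per_symbol):
    # raw rate = symbol rate x bits per symbol; e.g., a 1.6 MHz
    # channel carries about 1.28 million symbols per second
    return width_mhz * 0.8 * bits_per_symbol

steps = [
    ("today: 1.6 MHz, QPSK", 1.6, 2),
    ("widened: 3.2 MHz, QPSK", 3.2, 2),
    ("3.2 MHz, 16-QAM", 3.2, 4),
    ("DOCSIS 2.0: 6.4 MHz, 64-QAM", 6.4, 6),
]
for label, width, bits in steps:
    print(f"{label}: ~{upstream_mbps(width, bits):.1f} Mbps raw")
```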
Suddenly, cable’s 5-40 MHz upstream path isn’t just a curiosity because of the weird types of noise and interferences that lurk within it. Instead, today’s conversations about the upstream path center on efficiency: Which services need how much speed and space to thrive.
In other words, persistence is outweighing obstacles. It’s not a return to EBITDA accounting, nor an end to layoff anxieties. But it’s something.
This column originally appeared in the Broadband Week section of Multichannel News.
by Leslie Ellis // September 16 2002
Six months have passed since Excite@Home finally flamed out, and – like saplings nudging through blackened earth in wildfire country – new signs of healthy life are already emerging.
One such sign is the steady advance of “peering” agreements between cable providers and other, non-affiliated Internet service providers (ISPs).
Born in the Internet’s go-go days, peering is one of the few ideas of that era that didn’t wind up in the dot-com junkyard. To peer, in this sense, is to agree to link up, not to stare. You peer with someone, not at them.
Specifically, peering is what occurs after a cable operator, who offers broadband Internet service, realizes that he’s exchanging so much Internet traffic with someone, on such a regular basis, that it’s probably time to link to them directly.
The moment it becomes more expensive to pay someone to haul traffic to an entity than it is to link to that entity directly is the moment when peering discussions begin.
ISPs, regardless of breed (dial-up, cable modem or DSL) pay in the range of $150 per Megabit to move their customers’ traffic to its destination. (People with less traffic pay more; people with more traffic pay less.) The Megabit calculation is based on peak traffic times.
Example: Say you’re an ISP, and you’ve put the mechanisms in place to examine what tech people call “network flows.” That means you’ve purchased software that automatically monitors the traffic through your core routers. With it, you see how and where the data to and from your network is moving.
Over a period of time, you realize that you’re consistently sending lots of traffic to another ISP, and receiving similar amounts of traffic from that source. Maybe you’re sending 50 Mbps, at peak times, to them, and they’re sending 50 Mbps, at peak times, to you.
In this very simplified example, you’re both paying different transit providers. These are generally the long-haul companies, like AT&T, WorldCom, or Sprint. You’re paying a lot, actually: At $150 per Megabit, each of you is shelling out $7,500/month to get your traffic to and from each other.
Wouldn’t it be cheaper, you wonder, to make a direct link to each other?
Doing the math involves three main cost points. First is the price of getting your traffic to a mutually-agreeable exchange point, called a “NAP,” for “Network Access Point.” A NAP is a place – usually a nondescript room in a nondescript building – populated with racks of routers. Once you’re there, the second and third costs kick in: You’ll need to rent some rack space, and buy a port into the matrix of gear there that moves your customers’ data to and from the wider Internet.
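For the spreadsheet-inclined, the breakeven math sketches out like this. Every dollar figure besides the $150-per-Megabit transit price is invented for illustration:

```python
def monthly_transit_cost(peak_mbps, dollars_per_mbps=150):
    """What you pay long-haul carriers to move traffic today."""
    return peak_mbps * dollars_per_mbps

def monthly_peering_cost(transport_to_nap, rack_rental, port_fee):
    """The three cost points: getting to the NAP, rack space, a port."""
    return transport_to_nap + rack_rental + port_fee

transit = monthly_transit_cost(peak_mbps=50)            # $7,500/month
peering = monthly_peering_cost(transport_to_nap=3_000,  # hypothetical
                               rack_rental=1_000,       # hypothetical
                               port_fee=1_500)          # hypothetical
print(f"transit: ${transit:,}/month; peering: ${peering:,}/month; "
      f"savings: ${transit - peering:,}/month")
```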
If the math works – and if it looks like a peering breakeven point is within reach – the next step is to figure out where to do it. In cable terms, this is the tricky part, right now. The demise of Excite@Home caused all of its former constituents to build their own regional and national backbones, which don’t necessarily intersect.
Most MSOs are glad to have some control over their networks. (Mostly, they’re glad to have the whole @Home inferno behind them.) However, in @Home’s wake, all previously affiliated MSOs are lacking the backbone that at one time WAS their peering agreement.
This mutual access among MSOs doesn’t matter so much right now. There just isn’t a lot of activity in advanced IP applications that could dramatically benefit from a mutual, nationwide backbone, like the former @Home network.
As services like voice-over-IP telephony develop, though, peering agreements among MSOs will make more and more sense. Recall that one of the touted benefits of cable-delivered VoIP is the ability to handle a telephone call without ever touching (which is to say, paying) the public switched telephone network.
At the moment, Cox appears to be the most assertive in MSO-to-MSO peering. So far, it maintains or plans to maintain a presence in four NAPs, which are located in cities that map with its seven regional clusters. Other MSOs say that peering is on their radar.
As the discussions advance, haggling points will probably revolve around traffic sizing: the “but I’m sending you more than you’re sending to me” stuff that can temporarily stymie progress.
In the end, though, the benefits of peering will far outweigh the economics of maintaining networks that don’t touch each other. Peering saves money – for Cox, which averages about 6 Gigabytes of monthly traffic, the savings have run between $100 and $300 per Megabit, per month.
These days, anything that saves money is good. That’s why peering is poised to become a big part of broadband Internet discussions over the next few years.
This column originally appeared in the Broadband Week section of Multichannel News.
by Leslie Ellis // September 02 2002
Every so often, a new technical term tiptoes in through a side door. Before long, you hear it twice a week, then maybe three times a week. When it gets up to four times a day, it’s probably time to inspect this new expression a little more closely.
Meet “the monitor app,” a flourishing addition to the language of digital video software. It hails from the OpenCable side of the house, and specifically from its software work, known as “OCAP,” for “OpenCable Applications Platform.”
Given the mixed meanings of the word “monitor,” a quick distinction: This is monitor like hall monitor, not monitor like Harriet the Spy. “App” is short for “application.”
To know the monitor app is to clear your mind of everything you know about digital cable boxes, at least as a start. Think instead about the future consumer device that has a built-in digital set-top box. To simplify matters, let’s say it’s a TV.
It turns out that there are lots of decision points, mostly played out in software, on the road to the TV/set-top combination. Even if you cede the obvious to the TV’s control mechanisms – volume control, changing channels, controlling a built-in DVD player – there’s a ridiculously complicated matrix of stuff that needs attention.
In general, then, the monitor app is the shepherd of the cable-specific parts of that matrix. It takes care of bare basics. As a point of reference, the monitor app, in today’s digital boxes, is generally called a “resident app,” meaning that it’s omnipresent inside the box. It does things like fetching and displaying the volume banner or electronic program guide when invoked, and handling any other mechanisms that qualify as “settings.”
But the monitor app is more like a “co-resident app” in the TV/set-top combo, because the CE/cable combo unit itself is a sort of duplex. The monitor app gets downloaded from the cable operator when a customer brings the new TV/set-top home from the store, plugs it in, and wants to summon whichever premium services and applications are of interest.
Mostly, the monitor app is designed to let MSOs decide what to do when certain situations arise — like when security is required (premium apps), or when a rogue app tries to bring the network to its knees (think virus here).
If an application has to go on or come off at specific times (start/end boundaries), the monitor app makes sure it lives within its lifespan – a mechanism software people call “applications lifecycle management.”
If two different applications are elbowing for the same resource at the same time – a nook of memory, a kick from the processor – the monitor app mediates.
(This latter point makes consumer electronics manufacturers uneasy. They don’t want cable’s monitor app to futz with any of the mechanisms they consider “theirs.” And vice versa. This mutual-futzing worry is the crux of most of the issues that will arise between cable and the consumer electronics industry over the next few decades, as these combination devices evolve.)
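Those two jobs, lifecycle management and resource arbitration, can be sketched in a few dozen lines. Real monitor apps will be written in Java against the OCAP interfaces; the Python below is a toy, with every name invented:

```python
import time

class MonitorApp:
    """Toy sketch: lifecycle enforcement plus resource arbitration."""

    def __init__(self):
        self.schedule = {}        # app name -> (start, end) epoch seconds
        self.memory_owner = None  # which app holds the contested nook
        self.owner_priority = -1

    def register(self, app, start, end):
        self.schedule[app] = (start, end)

    def enforce_lifecycle(self, now=None):
        # Applications live only inside their start/end boundaries.
        now = now if now is not None else time.time()
        for app, (start, end) in self.schedule.items():
            print(f"{app}: {'running' if start <= now < end else 'stopped'}")

    def request_memory(self, app, priority):
        # Two apps elbowing for the same nook of memory: priority wins.
        if priority > self.owner_priority:
            self.memory_owner, self.owner_priority = app, priority
        return self.memory_owner

mon = MonitorApp()
mon.register("program-guide", start=0, end=2_000_000_000)
mon.register("sweeps-week-game", start=2_000_000_000, end=2_100_000_000)
mon.enforce_lifecycle()
print("contested memory goes to:", mon.request_memory("emergency-alert", 10))
```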
There’s assorted other language that pops up around the monitor app. The way it gets to its destination (the combo TV/set-top), for example, is within an impressively nerdy electronic table, called “XAIT,” for “Extended Applications Information Table.”
More lingo: The monitor behaves as an “unbound app,” which means it has no correlation to any channel or program that may be showing on the TV.
When an MSO’s monitor app gets to a combo TV/set-top, it loads itself into the types of memory chips designed to keep their contents, even without power. These chips are called “flash” memory. When a monitor app makes itself permanent in flash, it “flashes itself.” (Who says software engineers don’t have a sense of humor?)
Monitor apps specific to OCAP don’t exist yet. Consumer electronics manufacturers are developing prototypes that will likely appear by the year-end convention season. MSO technologists involved in OCAP are just now mulling who will write their specific monitor app. (Recall that one of the root reasons for OpenCable and OCAP in the first place was for MSOs to get a better sense of control over their own competitive destiny, by not placing themselves into proprietary vendor locks.)
Timing specifics aside, know that there are those who believe, with increasing fervor, that the intersection between consumer electronics devices and cable is non-negotiable. It fuels the race to innovation and price reduction – the missing two “must haves” in the industry’s ongoing duel with satellite providers.
Regardless, count on the monitor app to chew up big chunks of meeting time, once MSOs (and their food chain) start planning how to launch OpenCable and OCAP.
This column originally appeared in the Broadband Week section of Multichannel News.
by Leslie Ellis // August 19 2002
Lo and behold, just two columns into the intricate hues of digital and high-definition TV, a rule on the subject emerges from the Federal Communications Commission.
It says that big screen TV sets (>= 36″) made in and after 2005 must include circuitry, interchangeably referred to as “tuner” and “receiver,” that can recognize and administer digitally-transmitted, over-the-air TV signals. By 2007, the rule continues, all new TVs need a way to receive and display the stuff that comes in digitally, from the antenna on the roof.
In the fog of digital TV lingo, the FCC’s mandated tuners tune the stuff of “digital terrestrial television” — the digitized programs you get, or will get, from broadcast stations.
As with most other entertainment media, the very process of “going digital” usually includes methods to automate related business functions. Remember the column (4/15/02 edition, “How a Film Becomes a VOD”) about “metadata,” the data that describes other data? It discussed how a digitized, compressed film gets packed along with things like its promotional materials, trailer and airing dates, before it makes its journey to become a VOD offering.
Traditionally, those materials would either go on the sticker of the hand-delivered film can, or arrive under separate cover.
Digitally, they’re all in the bit stream.
The same is true of broadcast digital signals. The extra stuff that goes with a broadcaster’s digital signal goes by an acronym: “PSIP,” as in, rhymes with “key clip,” or “be hip.” PSIP stands for “Program and System Information Protocol.” It’s a standard, built by the Advanced Television Systems Committee. (For those who care, it is the ATSC’s A65/A standard.)
In a sense, PSIP is mega-metadata — an information sherpa, hauling a ton of descriptive data, organized into lots of tables – eight, at a minimum.
One table keeps track of time. Another holds ratings information. A third tells the tuner where to look to pluck the digital channel, or channels, out of the incoming bit stream. Yet another table contains data describing the programs on those channels. A minimum of four other tables, each with three hours worth of upcoming program information, make the TV smart enough to know what’s on digital broadcast stations for the next 12 hours.
Those tables get sliced into sections, and slotted into the MPEG-2 transport stream – picture an airplane, with 188 seats, one for each byte of info. It flies the chopped-up PSIP tables to rooftop antennas, and down into digital tuners inside TVs.
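To make the airplane metaphor concrete: each MPEG-2 transport packet carries a 4-byte header (the boarding pass, naming which table the payload belongs to) plus 184 bytes of seats. A simplified slicing sketch, which skips the real header’s continuity counters and section syntax:

```python
def packetize(table_section, pid):
    """Slice one PSIP table section into 188-byte transport packets."""
    PAYLOAD = 184  # 188 bytes minus the 4-byte header
    packets = []
    for i in range(0, len(table_section), PAYLOAD):
        chunk = table_section[i:i + PAYLOAD].ljust(PAYLOAD, b"\xff")
        header = bytes([0x47,               # MPEG-2 sync byte
                        (pid >> 8) & 0x1F,  # top bits of the packet ID
                        pid & 0xFF,         # rest of the packet ID
                        0x10])              # flags (simplified)
        packets.append(header + chunk)
    return packets

section = b"\x00" * 500                # a pretend 500-byte channel table
pkts = packetize(section, pid=0x1FFB)  # 0x1FFB: PSIP's base packet ID
print(len(pkts), "packets,", len(pkts[0]), "bytes each")  # 3 packets, 188 bytes
```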
Some of these tables matter to cable because they describe merchandise that may or may not be a part of carriage agreements.
Indeed, in the technical documentation that describes the PSIP standard, it is a foregone conclusion that the 19.2 Megabits per second of digital throughput, originally awarded to the broadcast industry for high definition television, will also be used to convey multiple, “standard definition” channels. Even NVOD applications are discussed.
Those standard definition channels are known in the PSIP standard as “minor channels,” as opposed to a “major channel,” which is whatever channel a particular broadcaster is on today, in analog. So if PBS is on channel 12, its extra programming goes on “12-1,” “12-2,” “12-3,” and so on. Channel 12 is the major channel. Anything after the delimiter (the dash, or the dot, or whatever is ultimately decided) is the minor channel. Both are held in a PSIP table called a “VCT,” for “Virtual Channel Table.”
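In data-structure terms, the VCT is just a lookup table, mapping a major-minor pair to wherever the program actually lives in the bit stream. A toy version, with made-up frequencies and program numbers:

```python
# A toy Virtual Channel Table.
vct = {
    (12, 1): {"name": "PBS main",   "carrier_mhz": 617.0, "program": 3},
    (12, 2): {"name": "PBS kids",   "carrier_mhz": 617.0, "program": 4},
    (12, 3): {"name": "PBS how-to", "carrier_mhz": 617.0, "program": 5},
}

def tune(major, minor):
    entry = vct[(major, minor)]
    print(f"{major}-{minor} ({entry['name']}): carrier "
          f"{entry['carrier_mhz']} MHz, program {entry['program']}")

tune(12, 2)  # the viewer punched in "12-2"
```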
If cable is the storefront, and its bandwidth the shelves, then PSIP looks sort of like a supply truck that pulls up out front to unload the thing you’ve agreed to put on a shelf – and with it, several other things that you perhaps didn’t agree to display.
Therein lies some concern: If those minor channels fall outside the bounds of a cable/broadcaster carriage agreement, can the DTV receiver tune them anyway? Probably. The fact that this kind of activity is technically possible is disturbing, to some cable strategists.
However, it probably also makes sense to consider the overall likelihood of PSIP as a Trojan horse. Broadcast TV, at least for now, is free. For broadcasters to fill the “minor channels,” they need more content. That takes resources and money. If the plan is to somehow forge a for-pay model, the logistics involved in setting up provisioning and billing systems, not to mention mitigating consumer backlash, are daunting.
Either way, expect to run into this “PSIP” term on a fairly regular basis, especially if you’re the one who will undertake operational, tactical or strategic liaisons with broadcasters and TV manufacturers.
This column originally appeared in the Broadband Week section of Multichannel News.
by Leslie Ellis // August 05 2002
Last time, we toured the route of the high definition television signal, pausing to peek at the rough spots along its way. This time, a more thorough look at what it takes, technically and operationally, to offer HDTV over cable.
Despite the techno-politics and daunting economics that have long occluded the migration to high definition television, the plumbing of it over cable isn’t all that difficult, technologists assure. Get the signal, manipulate it, stuff it into the transmission mechanism. At the house, a new box.
An HDTV signal arrives at the cable headend from one of two places: Satellite, or broadcast. Broadcasters increasingly send their television payload (both analog and digital) directly over a fiber link, although most still transmit their stuff over-the-air, for the sake of consumer TVs equipped to receive digital signals from the antenna on the roof.
Step two – signal manipulation – is fairly similar to what gets done to “standard definition” digital TV signals. “Standard definition” means a digitized and compressed version of regular old analog TV. SD, then, is what “digital TV” is today: Multiple channels, usually 10, of digitized and compressed TV, slotted into one 6 MHz channel.
HD is also a digitized signal; it also uses the MPEG-2 compression mechanism. The difference is, what’s being compressed contains a lot more information – more than 6x that of a “regular” digital video picture.
In that sense, then, and before we go any further, it’s useful to note that what’s “digital” about “digital TV” is the journey, not the destination. It’s the transmission mechanisms, not the set itself. TVs themselves are not “digital,” really. The vast majority, for example, don’t yet have a digital input connector.
A TV sold as “digital,” then, is a TV that contains the receiver circuitry to pluck a digitally-transmitted signal out of the air, or off of a wire, and display it.
So, when people talk about “digital HDTV” sets, they’re usually referring to high-end sets that can display the extra information that comes with a digitally-conveyed HDTV signal. HD sets are built to display a different type of “pixel” (shorthand for “picture element”) that’s square instead of rectangular, and they’re capable of rendering those pixels in a widescreen format similar to movie screens.
Before any HD channels can be squirted into the plant to travel to connected homes, a few bandwidth decisions must be made. Six times more picture information has a predictable effect on bandwidth: HDTV needs more.
In raw numbers, a compressed HDTV signal needs 19.2 Megabits per second (Mbps). By contrast, most SD signals currently take up about 3.5 Mbps.
As discussed many times before in this column, cable uses a modulation type called “QAM,” for “quadrature amplitude modulation,” to move digital signals from headend to home. The earliest form of QAM was 64-QAM, which affords about 27 Mbps of useable bandwidth. Today’s QAM implementations run at 256-QAM, which boost the rate to just under 39 Mbps.
Simple math (19.2 Mbps x 2) shows that two HDTV signals can fit into one 6 MHz channel modulated with 256-QAM, and that’s exactly what some MSOs are doing. In some cases, three HD channels can slip into a 256-QAM channel, depending on the source material. Talk also persists about further manipulating the incoming HD signal to pack even more bits into the transmission pipe.
(In practice, re-squeezing HDTV pictures will probably elicit skirmishes. Remember the early days of video compression, when 24 channels of video were going to fit snugly into one 6 MHz channel? That much snug affected picture quality, which made content creators grimace. Now, most operators don’t push more than 10 or so SD channels into one 6 MHz channel.)
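The packing arithmetic itself is quick to sketch (the 19.2 and 3.5 Mbps figures are the ones above; real encoders vary with the source material):

```python
QAM_MBPS = {"64-QAM": 27.0, "256-QAM": 38.8}  # useable bits per 6 MHz channel
HD_MBPS, SD_MBPS = 19.2, 3.5

for qam, capacity in QAM_MBPS.items():
    print(f"{qam}: {int(capacity // HD_MBPS)} HD streams, "
          f"or {int(capacity // SD_MBPS)} SD streams, per 6 MHz channel")
# 64-QAM: 1 HD or 7 SD; 256-QAM: 2 HD or 11 SD
```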
At the house, two things can happen, depending on which digital set-tops are in use for “regular” digital TV. Some suppliers, like Scientific-Atlanta, offer an “integrated” HD set-top. That means that the stuff that knows how to recognize and deal with HD signals is built-in. Others, like Motorola, offer a “sidecar” HD device. When HD signals enter the existing digital box, it sends them off to the sidecar for processing.
The output of the HD box, the last few feet of wiring that moves the HD signal from the box to the TV, and the input to the TV are perhaps the most contentious of HDTV’s techno-political conundrums. It’s about piracy. Digital pictures don’t degrade, and are thus easy prey for perfect copying. The contentious details of this discussion will fill a future column.
In short, cable technologists involved with HDTV launches are almost ho-hum when discussing the to-do list for launch. It’s no more remarkable, or unremarkable, most say, than any other new service launch.
So, if the devils of the HDTV transmission are indeed in the details, they’re not in the tactical particulars of launching the service.
This column originally appeared in the Broadband Week section of Multichannel News.
by Leslie Ellis // July 22 2002
Pity technology’s new-again beauty, high definition television — that breathtakingly beautiful girl at the dance, so lovely that not a single boy can muster the courage to ask her to the floor.
Everybody wants HDTV, but nobody wants to pay for it. The reaction to it is rapturous — “Mesmerizing!” “Better than the eye can see!” – yet fewer than three of every 100 American homes contain an HDTV set. Of those, most are used more to view DVDs than to tune in the growing, but still slim, amount of HD television content.
From origination to destination, and at every junction along the way, HDTV has issues. Big, techno-political issues, rooted in fear and cost – the dynamic duo of “do nothing.”
This week’s translations will stroll the signal path of a broadcast HDTV transmission, with brief stops at the trouble zones. Subsequent columns will further explore the many strife points.
Troubles lurk right from the start of an HDTV program. First is cost. It’s high, to say the least. HD-equipped TV production trucks can run north of $500,000. Transmitting an HDTV show costs in duplicate staff and resources, because it doesn’t naturally supplant existing, analog broadcasts.
Also omnipresent is the bulk of the HDTV signal. The extra information that justifies the high-definition label is plump – so much so that two HDTV channels can barely wriggle into the bandwidth used by 10 of today’s digitized cable channels. Yet bandwidth is not infinite, or free.
When an over-the-air HDTV signal arrives at a cable headend, it needs processing known as “re-modulation.” That’s because sending stuff through the air is harsher than sending it over a wire. (Anticipating this, and adding information to compensate for it, is another reason HDTV signals are so stout.) As a direct result, broadcasters use a method called “vestigial sideband (VSB)” to convey signals. Cable uses QAM, or quadrature amplitude modulation.
Engineers say it’s not a big deal to re-modulate from VSB to QAM. Being realists, though, they usually add that any signal conversion can introduce problems.
And then there’s the consumer side of HDTV, where there are way more questions than answers, and the answers are hiding within unresolved arguments. Most, if not all, of the in-home HDTV problems have to do with the set itself. Is an HDTV set the same as a digital TV set? And for that matter, what’s digital about a “digital TV,” as described by consumer electronics and retail stores?
In these nascent, confusing days of HDTV, many consumers think they already own an HDTV set, but when the cable installer gets there with an HD-capable set-top box, they learn otherwise.
It seems that “digital,” as an adjective, is as watered-down as “new and improved,” especially when it comes to HDTV.
Part of this descriptive problem links to a decades-old argument of surprising intensity, given the blandness of the protagonists: Connectors. Yet it is precisely at the “gozintas” of HDTV that things get ugly.
In a huge oversimplification, there are two types of connectors that feed HD signals into HD displays: Analog, and digital. Of the digital, there are two types: Firewire, also known as IEEE 1394, and “DVI,” for “digital visual interface.” (DVI also has a next-generation version known as HDMI, for High-Definition Multimedia Interface.)
Firewire makes it easy to daisy-chain more electronic things to the HDTV set – like digital recorders. This, of course, makes Hollywood queasy, not to mention vocal, about rights management and copy protection. Firewire also caps the types of advanced graphics that can accompany a show, because its speed capabilities, while fast, aren’t fast enough for HD graphical overlays.
DVI solves those problems, but it precludes the attachment of the other stuff that consumer electronics companies would like to sell, which makes them unhappy.
And, while less of an issue now, the matters of resolution linger, which further muddy the understanding of HD. These are the “720p” and “1080i” tags that describe lines of resolution, and how they’re painted on the screen.
But wait, there’s more. Questions also linger around the notion of adapting a digitally-encoded show from one resolution to another, often referred to as “up-rez’ing” or “down-rez’ing” an HDTV image. To “up-rez,” for example, is to add detail that wasn’t in the original, standard-definition, digital picture. It’s like trying to make a pineapple upside-down cake from two twinkies and a can of fruit cocktail.
Why care? Why now? Because if you work for one of the top-10 U.S. cable operators, you’re part of a commitment to deliver five HDTV channels, including broadcast transmissions, next year.
After 20 years in the making, it’s starting to look like the most tangible change in television since color may finally get its turn on the dance floor.
This column originally appeared in the Broadband Week section of Multichannel News.
by Leslie Ellis // July 08 2002
When we left off last time, we’d tapped into the meaning of this new thing called “Gigabit Ethernet,” or “Gig-E”. This time, a look inside the thought processes underway by the industry’s architecturally-minded technologists, about whether Gig-E’s contributions to signal transportation make good sense.
Generally, Gig-E’s proponents see it as an inexpensive way to outfit cable plant for the billowing bandwidth needs of on-demand TV, beyond films. Detractors say that Gig-E may not be as cheap, nor as fast, as promised.
The answers lie partly in a cost comparison between digital video storage and bandwidth, and partly in common sense about what works and doesn’t work for specific cable systems. Thus, this week’s translation will be largely the philosophy of technology decisions.
To believe in the cost benefits of Gig-E is to believe in the inevitability of the many letters now preceding “-OD” (on demand) in industry conversations. There’s “V” and “SV,” the old standbys, for “video” and “subscription video.” But there’s also “FOD” (free on demand), “EOD” (everything on demand), “GOD” (games on demand) and “E-I-E-I-OD.” (kidding.)
Offering on-demand service is suddenly about a lot more than movies. TV shows, sporting events, short-format how-to clips, and any other digitized video material would also roost on those video on demand (VOD) servers, along with the movies. Consumers could watch what they wanted, when they wanted, with all the VCR-like features of VOD – fast forward, rewind, pause.
Today’s VOD offerings – again, mostly films – are usually stored on servers in distribution hubs. Each hub manages the flow of signals to and from about 200,000 homes passed by cable service. The hub is also the aggregation point for the 500-home nodes you always hear about when people talk system architecture.
Say a system has five such hubs. Making more on-demand stuff available for customers would mean duplicating all of that material, five times. Doing so is expensive and unwieldy, Gig-E people say. Maybe it’s cheaper to centralize the servers in one headend, and switch the video out over Gig-E to the hubs.
Here’s where it makes sense to look a bit deeper. Let’s assume a 50% penetration of digital video service to that hub that passes 200,000 homes (some MSOs go much higher). You’re down to 100,000 homes, ready to watch TV and movies on demand.
This is where the math changes. Early on-demand experiments with video other than movies show radically different usage patterns. Most common VOD (i.e., movies-on-demand) models assume that at any given time, the network must be ready to send a video stream to 10% of the people in that hub (10,000 homes), all at the same time. They call this the “peak simultaneous” usage rate.
Yet in TV on demand, usage peaks could go much, much higher. Nobody knows for sure how much higher – it’s too soon. Let’s go completely mad and say that at any one time, half of the people who could do on-demand TV viewing, would do on-demand TV viewing. That would mean a need to store and prepare as many as 50,000 streams at one time – and that’s just for one of the five hubs.
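The gasp-inducing arithmetic, sketched; every percentage below is a scenario, not a measurement:

```python
def streams_needed(homes_passed, digital_penetration, peak_simultaneous):
    return int(homes_passed * digital_penetration * peak_simultaneous)

HOMES_PER_HUB = 200_000

movies = streams_needed(HOMES_PER_HUB, 0.50, 0.10)  # classic VOD model
tv_od = streams_needed(HOMES_PER_HUB, 0.50, 0.50)   # the "completely mad" case

for label, n in (("movies-on-demand", movies), ("TV-on-demand", tv_od)):
    print(f"{label}: {n:,} streams, ~{n * 3.5 / 1000:,.0f} Gbps "
          f"at 3.5 Mbps per stream")
```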
The other side of the model is to ascertain how much it would cost to centralize all the servers in the headend, put in a fast transportation method between the headend and the hubs – like Gig-E – and fold in a video switching mechanism to use the available bandwidth more efficiently.
If you lived through cable’s early 1990s technology chapters, when cable and telcos were poised to raid each others’ core businesses, you’re probably close to a gasp right now. Switch the video? Isn’t that industrial blasphemy?
If you didn’t live through it, here’s the recap: Back when cable and the telcos were starting to square off, some telcos picked switched digital video (SDV) technology to get TV signals into homes. For cable technologists, SDV was an easy target, scoffed at as “gold-plated,” “frivolously expensive,” “unnecessary.”
But then again, VOD was little more than convention glitz back then. Video wasn’t yet digital. Hot sellers in set-tops were Scientific-Atlanta’s 8600x, and General Instrument’s CFT-2200, both analog. Gig-E was yet to become the grandchild of 10 Megabit-per-second Ethernet, the fastest at that time.
There are no right or wrong answers yet to the question of Gig-E’s applicability in lots-on-demand systems. In hubs where space is tight, it may make sense to put the servers elsewhere; other local conditions will skew the logic every which way.
But it’s safe to say that Gig-E is worth consideration, and that switched video perhaps isn’t the pariah it once was.
This column originally appeared in the Broadband Week section of Multichannel News.
by Leslie Ellis // June 24 2002
Most who cleave to technological matters are occasionally guilty of making nouns out of verbs, like saying “transport” to describe things that move signals from one place to another. It goes like this, conversationally: “It’s part of the transport,” or “it’s a transport issue.”
In cable, when people say “transport,” they mean the devices and paths linking headends and distribution hubs, more so than they mean the “last mile” of coaxial cable conveying signals to homes.
Distribution hubs are part of cable’s regional network. They connect the headend to the last mile. One hub generally serves around 200,000 homes passed, and from it, signals are sent to smaller, 500-ish home nodes. Other industries would call this region a “metropolitan area network.” Links between a headend and its distribution hubs can span a few kilometers, or 40 kilometers, depending on geography.
“Transport” discussions invariably evoke the brainiac language of fiber optics: Wavelength division multiplexing (WDM), and dense wavelength division multiplexing (DWDM), for example. Both shove more bits into more “colors” of light, along the same fiber optic cable.
Despite the waning mound of capital dollars earmarked for network rebuilds and upgrades in U.S. cable systems, suppliers of transport equipment are always coming up with new paraphernalia. Using it, mercifully, doesn’t involve digging up streets. It usually means changing out the smallish metal boxes on each end of an installed length of fiber.
The logic works: Keep the expensive part of the investment right where it is. (The expensive part of building a shiny, new, $30,000 mile of hybrid-fiber coax plant is the labor required to suspend or bury cables.) Boost the ends with better stuff that can either accommodate more, go further, or both.
The new stuff of today’s transport discussions is “gigabit Ethernet.” Its shortened version is “Gig-E,” as in, rhymes with ziggy. “Giga” means billion, so gigabit Ethernet moves a billion bits each second. A bit is digital’s lowest common denominator. It’s the one or the zero, essentially – a binary digit, minus the “nary” and the “digi.”
Which brings us to the Ethernet part of Gig-E.
Conversationally, Ethernet is often the magic, if inscrutable, answer. How does it work, you say. It’s Ethernet, you’re told.
Aha.
As though saying “Ethernet,” with a solemn, knowing nod, will fill the knowledge gaps and make everything ok.
What is Ethernet? It’s a tried-and-true specification, now 29 years old, for hooking stuff together to work as a group. It started out with links between PCs and printers in offices, which came to be known as “local area networks.” Ethernet’s identifying characteristics are things like data rate (a billion bits per second, in this case), maximum link length (about 500 meters for coax, and 100 meters for twisted-pair copper), media type (coax, fiber, or twisted pair), and topology (the “shape” of the network – bus, star, point-to-point).
Ethernet also describes how interconnected things communicate. Say you’re a device that speaks Ethernet. You’re at a cocktail party with 30 other Ethernet devices. Mostly, everyone stands, sips, and listens (to nothing, because no one is talking.) Suddenly, you have something to say. If you happen to blurt your packets at the same time someone else is blurting packets, your blurts collide. You stop talking, wait a random amount of time (measured in micro-seconds), then repeat what you said. So does the other person. The random wait interval helps to assure that you don’t collide again.
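That cocktail-party protocol has a formal name, CSMA/CD (carrier sense multiple access with collision detection), and the random-wait trick simulates in a few lines. This is a cartoon of the backoff idea, not network-card firmware:

```python
import random

def backoff(device, attempt):
    """Pick a random wait after a collision; the range doubles per attempt."""
    slots = random.randint(0, 2 ** attempt - 1)
    print(f"{device}: collision on attempt {attempt}, waits {slots} slot(s)")
    return slots

random.seed(1)  # repeatable demo
attempt = 1
a, b = backoff("device-A", attempt), backoff("device-B", attempt)
while a == b:  # identical waits mean they collide again; widen the range
    attempt += 1
    a, b = backoff("device-A", attempt), backoff("device-B", attempt)
print("different waits; the retries get through")
```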
Couple gigabit speeds with Ethernet, and you’ve got a way to move digital information really fast, from one place to another.
Watch for discussions about Gig-E to heat up in lockstep with the notion of on-demand video, beyond films. Right now, most operators place video servers in multiple distribution hubs, and replicate the contents periodically. The thinking is that there won’t be sufficient storage, nor a way to quickly refresh the servers, when regular television (i.e., not just movies) shifts to on demand.
There’s ample reason to move carefully into Gig-E, our industry’s more cautious technologists suggest. One of the reasons Gig-E is suddenly warming to cable is the meltdown of the competitive local exchange carriers, or CLECs. When your customer base evaporates, you find a new one. Cable is the new one. That doesn’t necessarily mandate the use of Gig-E.
Also, some technologists warn: If it sounds too good to be true, it probably is. Particularly the most common rose that sweetens Gig-E discussions: It’s cheap. Commoditized.
Making sure it’s as cheap as everyone says involves modeling how bandwidth usage may change in the face of an on-demand TV surge. More on that next time.
This column originally appeared in the Broadband Week section of Multichannel News.
by Leslie Ellis // June 10 2002
Last time, we dissected how billing systems became inextricable from the many automated processes that run the cable business, and how that inextricability decelerates new service launches.
(The condensed version, in case you missed it: Nested within the “can we bill for it” question about any new service launch are piles of other questions. From customer acquisition, to workforce administration, to billing and collection, cable’s billing systems are the back office.)
This time, translations on three of the buzzwords that accompany the industry’s inevitable march toward transactional services: Mediation, rating, and settlement.
The shift toward transaction-oriented billing is inevitable because stasis is unacceptable. Or, as the saying goes: If you’re not growing, you’re dying. Business growth usually involves selling more things consumers want; consumers want instant gratification; instant gratification is more point-of-sale than monthly bill.
Which brings us to the heavy reality, again, of how hard it is to launch new services if the billing system isn’t nimble.
Shifting towards a transaction-oriented marketplace is considerably more complicated than, say, tacking $3.99 onto the monthly bill for a pay-per-view event. Maybe it’s deciding on a Monday to offer a free month of subscription video on demand (SVOD) on Friday, perhaps for all customers who watched more than two movies that month. Or giving customers a way to rent a combo phone/broadband Internet line for the guest room, because Uncle Bob, with the electronic toolbelt and the phone growing out of his ear, is headed in for the weekend. All the while, the new generation of transactions must know how to apply the right taxes to each service combination.
And that’s just for video and data. Adding voice services makes everything else look easy, billing aficionados say. Just adding a new telephony customer means sending alerts to the agencies of emergency/911, directory assistance, and the national databases that route calls. That’s before setting up the account for calling features, or making a way for a phone number to be returned to the local exchange carrier if a customer moves.
Suddenly, there’s lots more information that needs to be collected, from lots more equipment, in order to know what’s going on. The process of extracting the data necessary to compile a transaction is known in billing circles as “mediation.” Tactically, mediation culls the data inside the headend controllers, for broadcast video services, or from video servers, for on-demand services. Ditto for the CMTS and telephone switch, for data and phone activities.
But information is just information, without rules. That’s where rating comes to life: It establishes what rate to apply to a transaction, based on pre-established conditions. For telephony, mediation is what harvests a call detail record from the switch. Rating is what calculates the price of the call, based on time of day and rate per minute. Or, in a multiple ISP environment, mediation and rating are the enforcers of bandwidth-based contractual agreements (between the MSO and the ISP), to make sure bandwidth overages are captured.
It’s like a spreadsheet, in a way. The mediation data is what’s in the cell; the rating is the equation applied to that cell to come up with the answer.
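In miniature, and with invented tariffs, that spreadsheet looks like this:

```python
# Mediation: the raw usage record harvested from the switch (the cell).
call_detail_record = {
    "caller": "303-555-0100",
    "minutes": 12,
    "time_of_day": "peak",
}

# Rating: the equation applied to the cell.
RATE_PER_MINUTE = {"peak": 0.10, "off_peak": 0.05}  # hypothetical tariffs

def rate(cdr):
    return cdr["minutes"] * RATE_PER_MINUTE[cdr["time_of_day"]]

print(f"rated charge: ${rate(call_detail_record):.2f}")  # $1.20
```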
Then, there’s settlement: the collection and remission of monies not related to consumer billing. Example: Digital interactive customers who click on an enhanced ad and request something (thus making themselves a coveted “qualified lead”) are worth something. Or: A call from Denver to Tokyo may transit eight different networks. Each hop has a cost. In both cases, something has to collect or remit money.
Billing, as it is today, is complicated. Billing, as it needs to exist tomorrow, is more complicated – just like everything else. That’s why it’s probably time to learn the rudiments of relational databases (like plastics, they’re the future, son).
It’s also probably time to warm up to those people in the computer department who come to your aid when you suddenly can’t sync your Blackberry with the office mail server. Billing systems, and with them the entire back office, are their life. Know them, and you’ll know where you’re headed.
This column originally appeared in the Broadband Week section of Multichannel News.