by Leslie Ellis // February 16 2001
The storied history of misplaced market expectations would make for a useful healing element in the obituary of Lucent’s scuttled cable phone line, were anyone to write it.
For starters, Lucent is hardly the first big equipment supplier to choose the hokey-pokey as its cable technology dance. Intel Corp. and Hewlett-Packard both put their left foot in, then took their left foot out, of the cable modem business. Neither stayed long enough to shake it all about.
Likewise, there’s a history between cable technologists and the company now calling itself Lucent. In the early 1990s, when Lucent was Bell Laboratories, it developed a cable phone equipment line with its then-partner, Antec Corp. The gear, branded “cable loop carrier,” ultimately tied up too many research and development dollars and was cancelled.
Last week’s Lucent development is sad, as are most layoff byproducts. The people who were making its “Pathstar” and related cable phone products are smart and earnest. They re-entered the cable phone market in 1999 with believable contrition about the company’s previous efforts. But, Lucent apparently found itself at the unhappy intersection of financial bloat and market impatience. It took its left foot out.
When big technology suppliers pull out of cable, it usually reveals the unreality of expecting massive market penetration, really fast. H-P, which entered the cable modem market with gusto in 1994 — who can forget the gigantic kayak paddles distributed around the industry? — misinterpreted order sluggishness in late ’96 for data stasis. Hindsight shows a missed opportunity of some 4 million deployed modems in the U.S., as of the end of 2000.
Managed expectations lessen later disappointments, which brings us to cable’s current plans for voice over IP. Will it happen? Almost certainly. Why? Because all technology compass readings are pointing toward Internet Protocol; because it’s an incremental expense, and because it re-uses the data path the industry is already putting in place with cable modems. When? In four distinct and fairly predictable phases.
But first, some basics. Voice-over-IP contributes another awkward acronym to cable’s jargon cocktail: VoIP. (A few brave souls say it as a word – “voyp” – but end up sounding like an arrhythmic windshield wiper or, at best, a background artist for Laurie Anderson.)
Cable VoIP has a technical shepherd: The CableLabs “PacketCable” group. In a blatant simplification, PacketCable is a set of software-based mechanisms written to do exactly what today’s analog, circuit-switched phone network does, from dial tone to ring tone. Unlike other VoIP specification efforts, though, which address only portions of how to make a phone call work in IP, PacketCable maps out the entire journey. This is no minor task, yet much of the spec-writing work is already done.
Just as they did with DOCSIS, expect MSOs to vigilantly insist that vendors adhere to PacketCable. Already, MSOs are raising the red flag against any de facto standards attempts by the supplier community, warning that they’ll place their votes with purchase orders.
PacketCable has a technical prerequisite: DOCSIS 1.1. This is the first phase in cable’s VoIP work.
PacketCable needs DOCSIS 1.1 for its quality of service (QoS) features, so that calls placed over the cable IP path (today’s cable modem path) sound clear and synchronized, and parallel the grade of service you currently get when you talk on a wired phone.
There are four test waves scheduled this year for DOCSIS 1.1. Results will be announced on March 30, June 22, September 28, and December 21. It took three full waves for suppliers to earn compliance for DOCSIS 1.0-based gear, which is the stuff being deployed today. CableLabs is in its second test wave for 1.1-based gear, with results expected at the end of next month. If history repeats itself, it will be the June 22 round that produces the first certified 1.1 gear.
Add nine months or so for PacketCable tests (phase two), and cable’s foray into VoIP becomes a Spring ’02 phenomenon. Until then, lab tests and market trials. That’s phase three, which will also yield knowledge on how to make the service deployable across millions of subscribers. Phase four is the launches themselves.
Everything from phase two onward will likely vary somewhat, MSO by MSO. AT&T Broadband has indicated a preference for “lifeline” phone service, meaning the phone remains usable even if the power is out. Doing so requires shoring up the HFC plant to accommodate the powering needs of the VoIP gear when the power grid is out.
Others, like AOL Time Warner and Comcast, are more interested in voice as a sort of audio service that complements their existing data efforts.
That’s the who, what, when and why of cable VoIP. Next time, the “how.”
This column originally appeared in the Broadband Week section of Multichannel News.
by Leslie Ellis // February 05 2001
It’s not easy being an interactive trigger. Its journey is arduous, sometimes traversing as many as three distinctly different broadcast networks. On arrival, the trigger doesn’t always (or even often) get a warm welcome, because most of today’s set-tops aren’t equipped to see it on the threshold.
Even if the trigger is recognized and ushered into the box, its lifespan is short: An expiration date is imprinted into it. In the sliver of time before it dies, the trigger has one purpose: To be the bright, sparkly thing, the attention-getter, that coaxes TV viewers to point the remote at it, push the button, and initiate a tryst.
Like salmon, triggers are born with a mission: To make a difficult journey, perhaps with a tryst near the end. If they’re lucky. Then it’s over. (And we haven’t even gotten to the upstream part yet.)
Yet the interactive trigger, for dozens of interactive service providers, is the capstone in the bridge between today’s broadcast world, and tomorrow’s interactive, session-based world. Regulators are also curious about triggers – especially who may or may not block them. And, fears are already mounting about the potential to slip unauthorized triggers into a broadcast, prior to established business arrangements.
What on earth is a trigger? The word itself comes from the Advanced Television Enhancement Forum, or “ATVEF,” an ITV standards group spawned by Microsoft Corp., Intel Corp. and others. The idea was to bind interactivity into a TV broadcast. (Some now call this “program synchronous” interactivity.) The design goal was to make it quick and easy, and write once/run everywhere, for content developers.
Quick and easy usually means using existing standards, which in ATVEF’s case is HyperText Markup Language, or HTML. The “trigger” is the Web address where an interactive application is held. Because Internet pages look smeary on TV without some tweaking, the URL embedded in the trigger is usually a special destination, already tweaked for TV resolution.
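For the curious, the general shape of a trigger can be sketched in a few lines of code: an angle-bracketed URL, followed by bracketed attribute pairs such as the expiration date mentioned earlier. The URL and attribute values below are invented for illustration, and real ATVEF triggers also carry a checksum.

```python
import re

def parse_trigger(trigger: str) -> dict:
    """Split an ATVEF-style trigger string into its URL and attributes.

    Illustrative only; real triggers ride in VBI line 21 (analog)
    or an MPEG-2 private data stream (digital).
    """
    url_match = re.match(r"<([^>]+)>", trigger)
    if not url_match:
        raise ValueError("no <url> found")
    # Attributes look like [name:value][expires:yyyymmdd]...
    attrs = dict(re.findall(r"\[([^:\]]+):([^\]]*)\]", trigger))
    return {"url": url_match.group(1), "attributes": attrs}

# A made-up trigger, in the spirit of the compost example below:
t = parse_trigger("<http://example.com/compost>[name:Super-Hot offer][expires:20010401]")
print(t["url"])                    # http://example.com/compost
print(t["attributes"]["expires"])  # 20010401
```

A set-top doing this parse would check the expiration date before bothering to draw the icon.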
Triggers ride shotgun, and in real time, within the vertical blanking interval of an analog TV broadcast. In digital broadcasts, like those based on MPEG-2, triggers splice into a private data stream. Those regions also hold things like closed captioning information.
In some cases, the TV picture and any embedded triggers blast up to a satellite in space, then down to headend receivers on Earth. There, the trigger may get removed and re-spliced into the picture before riding in one of three signal paths – VBI, in-band, or out-of-band – over cable’s hybrid fiber-coax plant to a set-top box.
Say the set-top can recognize the trigger. Say it knows to translate it into an eye-catching icon, on the TV screen. Maybe it’s an offer for a sample of the Super-Hot compost activator Roger Swain is using on “Victory Garden.” (Hey. The writer gets to pick the example.) Say the viewer decides her own pile could use a kick, and clicks.
What happens next depends on the set-top. Units equipped with a two-way IP path (such as an embedded cable modem) would likely fling the request up the cable reverse path, using the IP signal path, through the companion CMTS (Cable Modem Termination System, the headend part of high-speed Internet systems). Destination, out of the CMTS: The server holding the Web link.
Maybe that server is in the headend; maybe it’s on another continent. Latency matters here, so that’s a good thing to ask about when talking to ITV suppliers. The viewer just acted impulsively while watching a show. For her, it’s about the show, not the impulse. The second it becomes about the impulse, it becomes annoying.
Set-tops not equipped with a two-way IP path have a tougher time with triggers. If the box is a two-way unit with impulse pay-per-view features, it can pass the trigger along – in due time. Most IPPV boxes rely on a polling technique, orchestrated at the headend. Essentially, a companion unit in the headend pings each set-top, one by one, in a big circle. If it were a conversation, it would go like this: Headend to box: “Got anything? No? Okay, catchya next time.” And so on, box by box. The round-trip is measured in hours, not seconds.
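The hours-long round-trip is simple arithmetic: polling boxes one by one means the cycle time is the number of boxes times the per-poll time. A quick sketch, with both numbers assumed purely for illustration:

```python
def polling_cycle_hours(num_boxes: int, seconds_per_poll: float) -> float:
    """Time for the headend to ask every box 'Got anything?' once."""
    return num_boxes * seconds_per_poll / 3600

# Assumed figures: 50,000 deployed boxes, one second per poll.
print(round(polling_cycle_hours(50_000, 1.0), 1))  # 13.9 (hours)
```

Even generous assumptions land in hours, which is why trigger responses over an IPPV-style return path can't feel interactive.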
Why should you care about triggers, especially with everyone saying 2001 is VOD’s year, with all the interactive stuff to follow? Two reasons, really. First, it never hurts to understand how things work. Second, triggers are likely to find their way into your system at some point.
If you want to be trigger-happy, be sure to ask ITV suppliers about the back-channel. Does it require a two-way IP path to a server? (If so, do you have one?) Is the server local, or remote? Is there anything proprietary in the mix?
Remember scale, too: What happens if zillions of people all click at the same time? Ask about latency. How long does it take from click to response? And, ask how the service maps onto your existing and future digital set-top deployments.
Triggers do work, by the way. Ask Mixed Signals Technologies or RespondTV, two of several providers using triggers regularly on the WebTV service.
Whether or not they work for you depends on your technology roadmap.
This column originally appeared in the Broadband Week section of Multichannel News.
by Leslie Ellis // January 22 2001
Anyone who ever attempted to track a technology through three days at the Consumer Electronics Show, followed by two days of the SCTE’s Emerging Technologies conference, knows that the contrasts can be just as interesting as the quest.
Such was the case with home networking glitz and grit at this year’s back-to-back technology events. At CES in Las Vegas, suppliers smoothly rallied around whichever home network type they’d chosen to endorse. There are several, which is part of the problem. In the digital glamour of it all, the home network somehow became a foregone conclusion.
The CES digital glitz showed stoves that solicited recipes from the Internet. Alarm clocks that tolled custom MP3 music or Internet newscasts from other networked sources. Plasma displays framed in wood, like a picture, and hung on simulated residential walls to display remotely stored artwork or photographs.
Home networking matters weren’t so blasé at Emerging Technologies in New Orleans, where cable technologists absorbed the technical grit of home networking’s many complexities. Technology issues around home networks are plentiful: Security, so that a shared apartment wall doesn’t make unwitting spies out of neighbors using wireless networks. Frequency allocation, so that microwaves and klystron lights, soon to become more prevalent in public and office buildings, don’t interfere with wireless home networks. And, debate lingers over which device becomes the head unit for the home network – the set-top, a souped-up cable modem, or a wholly new gateway unit.
Through it all was the niggling issue of how to support what will assuredly be a wide mix of home network types chosen by cable customers. Here’s a distillation:
Ethernet. Chances are, if you peer around behind your office PC to see how it’s connected to your local area network, you’ll see a thick wire attached to a connector that’s slightly wider than an RJ-11 phone jack. The wire is known as “category 5,” the connector is RJ-45, and you’re on either a 10 Mbps or 100 Mbps Ethernet connection.
Most existing home networks are Ethernet-based, and were installed by early adopter types to share computer peripherals: The laser printer, back when laser printers were really expensive, or the scanner. The problem with settling on Ethernet as a going-forward strategy to interconnect home entertainment, computing and appliances is that it usually violates the “no new wires” rule. Ethernet, as a rule, works better over the “cat 5” grade of wire. Most existing residences don’t have it.
HomePNA. The “PNA” stands for “Phoneline Networking Alliance”; HomePNA is a consortium of 70-plus companies rallying around the use of telephone wires to distribute information in a home network. The earliest work from this group is already on sale today – HPNA 1.0, which delivers data to connected devices at a speed of about 1 Mbps. Version 2.0 comes out this year, and delivers 11 Mbps. The plus is that most homes already have telephone wires nested in the walls, which means no new wires are required.
Wireless home networks: Wireless proponents tend to swirl around one of three types: Bluetooth, HomeRF, and 802.11b, an IEEE standard that is starting to become known as “Wi-Fi.” Each has more than 70 supplier proponents.
Bluetooth is slowest – around 768 kbps – and cheap, and is good for short-range stuff, like computer speakers that don’t need wires to play sound, or communication between PDAs and cell phones. Wi-Fi is faster (11 Mbps), more expensive, and spans longer distances (300 feet). HomeRF is in the middle – about 1.6 Mbps now, rising to 10 Mbps next summer. All three will face security and interference issues.
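Those speed differences become concrete if you ask how long each network takes to move a file. A quick calculation using the nominal peak rates above (real-world throughput is lower, especially with interference):

```python
# Nominal peak rates from the column, in bits per second.
rates_bps = {
    "Bluetooth": 768_000,
    "HomeRF": 1_600_000,
    "Wi-Fi (802.11b)": 11_000_000,
}

def transfer_seconds(megabytes: float, bps: int) -> float:
    """Seconds to move a file of the given size at the nominal rate."""
    return megabytes * 8_000_000 / bps  # 8 bits per byte

for name, bps in rates_bps.items():
    print(f"{name}: {transfer_seconds(5, bps):.0f} s for a 5 MB file")
```

The spread is roughly 52 seconds for Bluetooth versus about 4 seconds for Wi-Fi, which is why Bluetooth stays in the short-range, low-bandwidth niche.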
Powerline: Another multi-supplier consortium, “HomePlug,” aims to make existing power wires the distribution medium of choice for home networks. Powerline networking has long been a dream, but was never fast enough to do anything substantial; the latest group of proponents aims for 10 Mbps this year.
Why doesn’t somebody just build a box that includes every type of home network, and be prepared for whatever consumers buy? It’s happening. Intel Corp. is working on one, as is Boston-area startup Ucentric Systems, among others. (Note that there’s serious lexicon confusion around this device. Some call it a “residential gateway.” Others call it a “home server” or a “home media gateway.”)
Regardless of what name it ultimately takes, this sort of box makes sense, but simultaneously poses strategic risk. It has to do with the links to the broadband service provider. Most gateways have two: Cable, and DSL. What if the bundled customer happily interconnects all the stuff in the house to the gateway, then decides to switch to DSL? It’s a one-wire change, and you’ve just lost a good multi-pay customer. Of course, the converse is also true.
Me, I’m still holding out for the group that can sync my alarm clock with the coffeepot, so that if I happen to hit “snooze” a few times, for nine more minutes of sleep, I don’t awake to burnt coffee.
This column originally appeared in the Broadband Week section of Multichannel News.
by Leslie Ellis // January 08 2001
As the year of video on demand opens, it seems timely to respond to a mailbox-full of inquiries about the “hows” of VOD technology, architecture sizing, and related modulation implications.
If cable providers do what they’ve said they’ll do this year – lead interactive services with a fairly hefty push on VOD rollouts – then VOD will become the industry’s first major foray into session-based services. Watch for Time Warner to be the most active, igniting as many as half of its 38 regions with VOD by the end of 2001. AT&T Broadband, Comcast, Cox, Charter and the rest of the pack plan to be similarly aggressive in ’01 VOD launches.
Getting ready for VOD starts with understanding how much of what you’ll need, in server storage space, headend equipment, and bandwidth capacity.
If you believe that VOD simply requires servers and receptive set-tops, think again. There is more.
At the least, to properly offer VOD, you need: Video file servers; channel upconverters (to take the raw output of the servers and put it on a tuneable channel); quadrature amplitude modulation (QAM, pronounced “kwahm,” a commonly used digital modulation technique in cable); security methods; and set-tops capable of handling rentable VOD content. On the back-end, you need hooks into your billing software, to collect rental fees.
An architectural rule of thumb for VOD modeling is to begin with fully saturated homes-passed by cable services, on a per-node basis. Say you’ve already built your system to run fiber out to nodes serving 500 customers, and that, ultimately, you expect 80% of those homes to take basic cable services. That yields a starting point of 400 customers.
Next, assume that half of those homes will ultimately take a digital box. You’re up to 200 VOD-capable homes, over time. (If you start with your current digital penetration number — 20 to 30% — you run the risk of under-sizing the network’s future VOD needs, which will cost you later.)
Calculating how many simultaneous VOD streams you’ll need comes next. Currently, the thinking is to assume that at any given time, one in ten of those homes (10%) will opt to watch a movie at the same time. Using that math, you’ll need to organize equipment and bandwidth to serve 20 concurrent VOD video streams.
(It should be noted, at this point, that straight math doesn’t always correlate to actual human actions. Think of your neighborhood. Maybe you rent one or two videos each weekend. But chances are, everyone in your neighborhood doesn’t rent flicks with the same regularity. In this writer’s house, for example, about 2 films are rented every two or so months, then get returned well past the due date. That creates a bit of domestic debate over who goes to Hollywood Video the next time, because each subsequent trip necessitates stopping at the ATM to pick up enough cash to pay the late fees.)
We’re up to bandwidth sizing for 20 VOD streams. This is where digital modulation comes into play. Modulation, simply put, is the process of imprinting information onto a communications carrier, so that it can get from one place (the headend server) to another (the VOD-capable set-top). Cable currently uses a digital modulation technique known as 256-QAM, which equates to about 38 Mbps of useable bandwidth, per 6 MHz channel.
Handily, each digital video film that’s been compressed with MPEG-2 uses roughly 3.8 Mbps of bandwidth. Divide 256-QAM’s capacity (38 Mbps), by MPEG-2’s data rate (3.8 Mbps), and you get 10. This means you can stuff about 10 films into one, 6 MHz cable channel. If you need 20 simultaneous VOD streams, you need two 6 MHz channels, assuming 10% peak, simultaneous usage by VOD customers.
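The whole sizing model, from homes passed to 6 MHz channels, can be expressed as a short calculation. The default numbers below are the ones used in this column:

```python
import math

def vod_sizing(homes_per_node=500, basic_take=0.80, digital_take=0.50,
               concurrency=0.10, stream_mbps=3.8, qam_mbps=38.0):
    """Walk the column's VOD model from homes passed to 6 MHz channels."""
    digital_homes = homes_per_node * basic_take * digital_take   # 200 homes
    streams = round(digital_homes * concurrency)                 # 20 streams
    streams_per_channel = round(qam_mbps / stream_mbps)          # 10 films per channel
    channels = math.ceil(streams / streams_per_channel)          # 2 channels
    return streams, streams_per_channel, channels

print(vod_sizing())  # (20, 10, 2)
```

Swapping in your own node size or a different concurrency assumption changes only the inputs; the arithmetic stays the same.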
The process of placing digitized content into the HFC system for receipt by digital boxes varies, predictably, from manufacturer to manufacturer. Simply put, what needs to happen to get a VOD movie to a home is to:
1. Digitize and compress it (MPEG-2);
2. Store it (the work of companies like Concurrent Computer, Diva Systems, nCUBE and SeaChange International);
3. Upconvert the output of the server to a specific channel location;
4. Wrap it with conditional access and encryption safeguards against theft;
5. Multiplex (smoosh) it onto a carrier using 256-QAM; and
6. Send it to the digital set-top box.
This can happen in varying order. Both of the industry’s major suppliers, Motorola and Scientific-Atlanta, ultimately execute the same functions, but do so in different order.
As VOD becomes reality, instead of models and tests, the name of the game will be scale – the ability to augment the model with more serving capacity, and more 6 MHz channels dedicated to VOD (thus, more QAMs). Until then, the models and equipment lists cited here should yield at least an intellectual start to the process.
This column originally appeared in the Broadband Week section of Multichannel News.
by Leslie Ellis // December 01 2000
The Halloween candy isn’t gone yet, the Thanksgiving feast is showing up on the bathroom scale, and we’re headlong into the high-caloric holidays…which brings to mind the ongoing talk about “thick v. thin clients.”
First, some basics. When people say “thin client,” or “thick client,” they’re talking about the digital set-top box. This is “client” as in “client-server.”
Second, thick v. thin has less to do with physical size than it does with capabilities: How much processing power, graphics capability, and memory resources are available.
Third, “thin versus thick” is really a supplier-inspired way of delineating “now versus next.” Thin is now. Thick is next. Because time is relative, what’s thick today is thin tomorrow, and what’s thin today was thick yesterday.
In today’s environment, “thin” correlates to Motorola’s DCT-2000 line, and Scientific-Atlanta’s Explorer 2000 line (including all clones). “Thick” describes both manufacturers’ more advanced boxes: Motorola’s DCT-5000, and S-A’s 6000/8000 series.
Fourth, there will always be the thick and the thin. Six of cable’s top seven operators (Cablevision excepted) operate services on “thin” boxes to an aggregate 6.8 million U.S. households, as of September 30. They’re installing “thin” units at a combined rate of about 114,000/week.
Always seeking metaphor, I offer this: “Thick versus thin” is like Americans. Generation by generation, we get taller and heavier. Photographs of ancestors show smaller, shorter people; your grandmother’s china hardly seems big enough to hold today’s ample servings. You order a cup of coffee in another country, and embarrass yourself with your surprise at its thimble-like capacity (at which point you know for sure that you are an American).
Ditto for set-tops. Motorola’s DCT-1000 was gigantic compared to the various analog units that constituted the set-top landscape, in 1995. Those very boxes, grandfathers now to the beefier DCT-5000, seem tiny by functional comparison. And yet, some already view the DCT-5000 as emaciated.
Somehow, we’ve all gotten wrapped up in the debate, while avoiding the steeper slope. Thick versus thin is an issue, but it is not the issue. Client-server is the issue. That’s the new part, and the challenge. It is the sessions between the box and the server that matter, because “thin” is a given.
Think of the “now:” The 6.8 million digital boxes deployed by AT&T Broadband, Adelphia Communications, Charter Communications, Comcast Corp., Cox Communications, and Time Warner Cable. They present more channels, and a navigational aid. The channels are compressed with MPEG-2, imprinted into a Quadrature Amplitude Modulation (QAM) transmission path, broadcast to the box, decompressed, and presented to the TV. The guide, in most cases, is a permanent resident in the box. No client-server activities exist, for the most part.
People who say they run on the “now” usually mean they’ve devised a way to make the box do sessions with their software, which is located elsewhere. Some call this a “virtual channel” environment. In some cases (think WorldGate, or Ron Pitcock’s new venture, Integra5), this means they’ve made their application behave as though it’s an MPEG-2 video channel.
In other cases (think all middleware providers), a built-to-fit software module sits in the box, acting as both interactive storefront and traffic cop to various, remote interactive applications.
The “next” will be video on demand (VOD), cable’s first major push into session-based interactions. VOD will pave the way for other types of interactive sessions, many of which will be tested throughout the U.S. and Canada next year.
The name of the game is and will be establishing how many simultaneous interactive sessions a box can handle. Executives with Scientific-Atlanta visited recently, to describe a chart they’d developed using deployed digital set-tops numbers. A blue line shot straight up, starting in March (not coincidentally when Time Warner turned on the digital set-top spigot). Under it was a flatter green line, depicting the number of deployed boxes capable of doing multiple, simultaneous interactive applications. The implication: The green line is about to fork wildly up, in lockstep with the blue line.
Yet, a few months back, I spoke with a cable engineer who expressed great glee at getting both Wink and WorldGate to run on a single digital set-top box, of the “thin” variety. This is where we are: Two. If it’s true that you can never be too thin, then it’s a matter of adding more without exhausting the very resources that make you skinny.
Until the thick boxes start rolling in, it’s probably wise to examine what you can do with what you already have. Tally up the resources under the hood of the boxes you already have. Compare it to the services you want to launch. Think sessions. You already are thin, with thick on the way. And once you’re thick, you’re too thin for the next stuff. If only this worked anatomically…
This column originally appeared in the Broadband Week section of Multichannel News.
by Leslie Ellis // November 13 2000
Last week, while the rest of the nation endured the last of the election frenzy, AT&T Broadband quietly fired up its “broadband choice” tests in its Boulder, Colo. system. Time Warner Cable quietly continued testing what it calls “multiple ISP” (“MISP”) access in Columbus, Ohio. To the north, Canadian MSO Rogers Cablesystems continued monitoring a two-year-old government mandate for the same thing, which they call “third party residential internet access,” or “TPRIA.”
The four labels – open access, broadband choice, MISP, TPRIA – all mean the same thing: Letting outside Internet service providers (ISPs) offer a faster connection to their customers by riding on cable’s swift lines.
Meanwhile, federal antitrust watchdogs gave the issue another vigorous shake last week, saying they’ll block Time Warner’s merger with America Online if the two don’t formalize and expand their open access plans. In defense, Time Warner and AOL told the Federal Trade Commission that they’ll stunt a potential AOL head start by not letting the online giant ride Time Warner’s broadband plant until at least one other competing ISP is there, too. In Columbus, that entity is to be Juno.
Ironically, AOL may need the head start. As it turns out, the way AOL architected its 25 million-plus subscriber network isn’t exactly a natural for a cable environment.
Why: AOL uses a method called “tunneling” to connect each of its customers’ PCs to its web of servers. Technically, tunneling is a protocol. A protocol is a language spoken by electronic bits traveling along a network.
Here’s how it works: When a customer logs in to AOL, a secret tunnel is instantly built between that PC and the AOL network. Not only is each data packet encrypted (as is cable modem traffic); each packet is also encapsulated. Encapsulation is the electronic equivalent of the plain, brown wrapper. The only visible parts are the packet’s source (who am I?) and its destination (where am I going?). Everything else is tucked inside the tunnel.
In AOL, all traffic moves in tunnels: E-mail, instant messages, chat, Web page requests. Say you point your AOL browser to multichannel.com. The packets comprising your request are encrypted at your PC, encapsulated, and sent to an AOL server. That server says, “ah, I see a request for multichannel.com from so-and-so.” It fetches the page, encrypts it, encapsulates it, and tunnels it back to you. You never directly ping the multichannel server.
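For the technically inclined, the wrapper idea can be sketched in a few lines. This is a toy model, not AOL's actual protocol; the addresses are invented, and a JSON string stands in for the encrypted payload:

```python
import json

def encapsulate(inner_packet: dict, tunnel_src: str, tunnel_dst: str) -> dict:
    """Wrap a packet the way a tunnel does: only the outer source and
    destination stay visible; everything else rides inside as an
    opaque payload (a real tunnel would also encrypt it)."""
    payload = json.dumps(inner_packet)  # stand-in for encrypted bytes
    return {"src": tunnel_src, "dst": tunnel_dst, "payload": payload}

# Hypothetical addresses: the subscriber's PC and an AOL tunnel server.
request = {"src": "pc.example.home", "dst": "multichannel.com", "data": "GET /"}
wrapped = encapsulate(request, "pc.example.home", "tunnel.aol.example")

# From the cable plant's vantage point, only the wrapper is readable:
print(wrapped["dst"])  # tunnel.aol.example, not multichannel.com
```

The point of the sketch: any equipment between the PC and the tunnel server sees only the wrapper, which is exactly the problem described next.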
By contrast, the data packets that move to and from cable modems are encrypted, but not encapsulated.
This isn’t so much of a big deal in cable’s current ISP tests. Because there are so many other components that need to be created, and because the service goal is limited to Internet access, AOL’s tunnels aren’t the end of the world.
However, when “broadband choice” evolves to mean multiple ISPs providing multiple IP services over cable’s plant – think Internet, plus voice, plus streaming services, for example — AOL’s tunnels could become a problem.
DOCSIS 1.1-based cable modems, which enter the market next year, bring the ability to do more than just speedy Web surfing. MSOs can mark packets as higher priority if, for example, they comprise a phone call. Or, the packets can be nailed up to a consistent bit rate for a period of time, say, for a streaming event.
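The marking idea can be sketched simply: classify each packet, assign a priority, and serve high-priority traffic first. This is a toy model of the concept, not the DOCSIS 1.1 mechanism itself; the field names and priority values are invented:

```python
# Toy classifier in the spirit of DOCSIS 1.1 service flows: match each
# packet, mark its priority, and let the scheduler serve voice first.
PRIORITIES = {"voice": 5, "streaming": 3, "web": 0}

def classify(packet: dict) -> dict:
    """Stamp a priority onto a packet based on what kind of traffic it is."""
    packet["priority"] = PRIORITIES.get(packet.get("kind"), 0)
    return packet

queue = sorted(
    (classify(p) for p in [{"kind": "web"}, {"kind": "voice"}, {"kind": "streaming"}]),
    key=lambda p: -p["priority"],
)
print([p["kind"] for p in queue])  # ['voice', 'streaming', 'web']
```

The catch, as the next paragraph explains, is that the classifier has to be able to see the packets in the first place.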
But if AOL’s traffic rides in a secret tunnel from the home to its servers, how do cable providers see and pluck off the prioritized traffic? Think about what this means for things like local content, multicast events, or IP phone calls. In a tunneled environment, accessing local stuff flings the request for it back to AOL, which then fetches it. It’s traveling the shape of a hairpin, instead of a hyphen. Ditto for a local IP phone call. Phoning your neighbor on a future IP cable phone means sending the dialed digits through AOL’s tunnel, to AOL, in Virginia. There, AOL disassembles the tunnel, sees the IP voice bits, and presumably sets up the call. The words “complicated” and “inefficient” come to mind.
There are workarounds, of course. One option is to tear down the AOL tunnel at the headend, extract any prioritized traffic, and make it do what it’s intended to do. Requirement: More headend stuff. Another option is to give all AOL packets a high priority. Result: Inefficient bandwidth usage.
For now, this is more of a problem for AOL than it is for cable MSOs. The matter of AOL’s tunnels isn’t likely to land on the FTC’s to-do list, either: If anything, it hurts AOL more than it hurts outside ISPs. But it’s worth watching, and watching carefully, if for no other reason than AOL’s presumed spot at the helm of the No. 2 U.S. cable operator.
Consider: There are at least five things that don’t yet exist, but are critical for a multiple ISP environment. Necessity being the mother of invention, AT&T and Time Warner (and any other MSO wishing to test multiple ISPs) are building these five things themselves.
First is the screen that shows customers which ISPs they can pick. AT&T calls this a “service agent.” Second, there’s the traffic cop – the software that tracks which ISP is using how much bandwidth. Technologists call this a “mediation engine,” and view it as a way for MSOs and ISPs to color within the lines of the service agreements they forge with one another.
Third, there are those ever-snarly billing links. Look at it logistically. Ten ISPs could mean 10 different billing systems; 10 different billing systems means 10 different required interfaces. MSOs will need to take the information tracked by the mediation engine (the traffic cop), and mete it out in an electronic format that the various ISPs’ billing systems can use.
Fourth, there’s trouble-shooting and conflict resolution. Say an AT&T Broadband customer, using AOL’s service, can’t access mail. Is it AT&T’s network, or AOL’s mail server? Who fields the call? Who fixes it? Who lets that end customer know what’s wrong, and when it’ll be corrected?
And lastly, there’s a specific equipment need. It’s known interchangeably in technical circles as a “source-based router” and a “policy-based router.” It’s needed because today’s cable modem traffic only knows one thing: Where it’s going. The destination. But, in order for MSOs to know which customer is connected to which ISP, the source of the packet becomes a necessity. This router sits at the Internet-end of the CMTS (cable modem termination system, the headend piece of cable modem networks).
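The difference is easy to sketch: a source-based router keys on where the packet came from, not just where it's going. The addresses and ISP assignments below are invented for illustration:

```python
# Which subscriber belongs to which ISP; in practice this table would be
# provisioned from the MSO's subscriber-management system.
SOURCE_TO_ISP = {
    "10.1.0.7": "AOL",
    "10.1.0.8": "Juno",
}

def route_upstream(packet: dict) -> str:
    """Pick the ISP hand-off based on the packet's source address,
    ignoring the destination entirely."""
    return SOURCE_TO_ISP.get(packet["src"], "default-isp")

print(route_upstream({"src": "10.1.0.8", "dst": "multichannel.com"}))  # Juno
```

A destination-only router, handed the same two packets bound for the same Web site, would have no way to tell an AOL customer from a Juno customer.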
Cable’s work to pry open their own networks, without donning a “common carrier” label, is perhaps the year’s most significant bit of technical pioneering.
(The use of source-based routing may come into question if the AOL/Time Warner merger proceeds. AOL’s dial-up network uses the tunneling method described earlier, rather than source-based routing. Although Time Warner’s trial work will use source-based routers, it’s likely that AOL will have something to say in the matter. Source-based and tunneled routing methods aren’t mutually exclusive, but using both could require extra equipment, and could limit AOL’s ability to offer services other than high-speed data.)
This column originally appeared in the Broadband Week section of Multichannel News.
If this summer contributed nothing else to the technical lexicon, it contributed one pair of words — “integration issues” — to describe the tortuously difficult work of getting interactive TV running.
What happened with AT&T Broadband, Microsoft, Motorola, Sun and TVGuide? Integration issues. Why did PowerTV buy integrator Presara Technologies? For experience in solving integration issues.
Some vendors learned about integration issues the hard way last year. Diva is one example: It discovered that it wasn’t enough to get a nod from Motorola to run VOD on its DCT-2000 box. It also had to hitch up to TVGuide, the primary resident application on most Motorola set-tops.
William Strunk Jr. would take one look at this word pairing, grimace, and handcuff it as a “refuge of vagueness.” Strunk and E. B. White (Strunk’s student, and later, co-author of “The Elements of Style”) had little time for abstruse descriptions.
Yet how else does one describe intangible complexities spanning technology, politics, and scheduling? “Integration issues” is a phrase here to stay. So let’s get specific about it. What does “integration issues” mean?
Integration issues are obstacles, usually in software, and usually related to making different types of software work cooperatively. Because it’s mostly software-related, an “integration issue” is not something you can pick up with your hands, examine, diagnose, and fix.
An example: The guide must be a good neighbor to other services like e-mail, Web-browsing and anything clickable. None can hog available processing power or memory. All have to harmoniously co-exist. One bad neighbor, and the box locks up. Having to tell the cable customer to recycle power on the box — to reboot — is not good.
Integration issues are the reason interactive TV is so hard now. Think back to the mid’90s, when cable first started installing digital set-top boxes. Recall what those boxes did: Squeezed 10x or more channels into the space of one analog channel, and offered an electronic guide to help subscribers navigate. We didn’t hear much about integration issues then, because there wasn’t much software required.
Over the past five years, the digital boxes got beefier. Maybe this didn’t happen as quickly as some would like. I’m thinking of a keynote delivered to CTAM’s Broadband Opportunity Conference earlier this month by Jonathan Taplin, CEO of Intertainer. Taplin’s fervor was deliciously provocative: Evangelizing for faster digital set-top microprocessors that cost $45-$60 per box, he thumped incumbents Motorola and Scientific-Atlanta as “too cheap” to stay technologically current.
I played out Taplin’s request in my mind, and found myself feeling sorry for the poor chump who had to traipse in to corporate, hand outstretched, to request that silicon upgrade. For some MSOs, like AT&T Broadband and Time Warner, adding $45-$60 more per box sums to around $100 mil. in unanticipated, incremental cost — each — based on ’01 set-top order levels.
The ants in Taplin’s pants are understandable, and I commend his swashbuckling insistence on better chips. More is better, true. The problem is, more doesn’t happen overnight. The industry’s two biggest set-top makers, S-A and Motorola, are building about a million boxes each quarter. That’s a big ship to turn.
And it’s a ship that’s already turning. Look at the progression of what’s been under the hood of cable’s digital boxes. From the DCT-1000 and DCT-1200 in the mid’90s, to the DCT-2000, through to its advanced DCT-5000, Motorola increased processing power, memory, and graphics capabilities. Scientific-Atlanta did, too. Today, S-A’s Explorer 2010 boxes use a 130 MHz processor, and can be populated with double-digit megabytes of RAM, in some cases as much as 50 Megabytes.
Seems like a long time since the big issue in the industry was whether or not to add another megabyte of memory — for a total of 2 Megabytes — to decode the b-frame portion of a TV picture compressed with MPEG-2. (That, too, was a $45/box decision. One megabyte of memory cost about as much back then. Cable did it, but added about a year to the digital launch timetable.)
More muscle under the hood of the digital boxes means lots more software. There’s the operating system, like Microsoft’s WinCE and PowerTV, needed to tell the chips what to do. There’s middleware on top of that (translated in the October 2, 2000 edition of Multichannel News). And, there’s the applications: The guide, e-mail, web browsing, clickable ads, clickable content that correlates with the show that’s airing.
Part of “integration issues” is those individual software modules — the OS, the middleware, and the applications. Each has to run perfectly in isolation. Each also has to work perfectly with the others. That’s the technological to-do list of integration.
Integration also involves scheduling and organization. Take the case of AT&T Broadband, struggling to make an interactive TV business that runs on Motorola’s DCT-5000, loaded with Microsoft’s WinCE and Sun’s Java TV environment. Somebody had to know precisely where each partner was on the path to launch. Somebody had to mind the list of known bugs for each participant.
But what happens when that somebody is the customer – AT&T? Enter the politics portion of “integration issues.” It’s politically difficult (and I’m putting it mildly) to expose the number and location of your blemishes to your customer, no matter who you are, or what business you’re in.
Stitching together various pieces of software, under tight deadlines that involve competitors, is hard. We’ll be hearing about integration issues from now on, probably. That’s the bad news. The good news is, there are smart people working on it, and they learn more daily. This takes time. Try to respect the journey.
This column originally appeared in the Broadband Week section of Multichannel News.
With little fanfare, the first batch of DOCSIS 1.1-based cable modems entered CableLabs’ test queue late last month – the week after Diversity Week, to be exact. The testing mile marker seemed like a sensible-enough reason to translate what those boxes will and will not be able to do for cable.
There are three main areas where the new, 1.1-based modems differ from the millions of 1.0-based cable modems deployed in the U.S. as of June 30. Those three areas are Quality of Service [QoS], data fragmentation, and enhanced security.
Let’s look at quality of service first. It’s abbreviated “QoS,” and spoken “Q-oh-S.” [Some hard-core data people say it as a word: “kwoss.”] Really, QoS is the high-speed data equivalent of basic and premium TV differentiation. It lets you add “grades,” or “tiers,” of data services, characterized by speed or transit timeliness.
In today’s cable modems, MSOs generally cap the downstream (headend to home) and upstream (home to headend) speeds at the time of installation, or before. Technologists call this “rate limiting,” and use it to preserve bandwidth, which is especially precious in the upstream signal path.
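Rate limiting of this kind is commonly implemented with a token-bucket scheme. The sketch below is a generic illustration of that idea, not the actual DOCSIS mechanism; the rates and sizes are arbitrary:

```python
class TokenBucket:
    """Simple token-bucket rate limiter: tokens refill at `rate_bps`
    (bits per second, tracked internally as bytes), up to `burst`
    bytes; a packet is forwarded only if enough tokens remain."""

    def __init__(self, rate_bps: float, burst: float):
        self.rate = rate_bps / 8.0   # bytes per second
        self.burst = burst
        self.tokens = burst          # bucket starts full
        self.last = 0.0

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for the time elapsed since the last packet.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False
```

A bucket configured at 384 kbps will pass an initial burst, then throttle back-to-back packets until enough time passes to earn new tokens, which is roughly how an upstream cap feels in practice.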
As explained in the Sept. 18 edition of this column, bandwidth isn’t unlimited, and needs to be monitored carefully. Most MSOs set maximums of 1.5-2 Mbps downstream, and about 384 kbps upstream. It’s a best-effort thing: First come, first served. Last on, you get what you get. Usually, the capping works to ensure everyone gets speedy services.
With 1.1 and QoS, operators can vary maximum data rates, on the fly. Technologists call this “committed information rate,” meaning they can nail-up different data rates to different customers, depending on their needs (and willingness to pay).
Example: User Jane wants to watch a movie on her PC. She clicks to stream it. With QoS, the modem sees that Jane is streaming video, and pops her up to a sustained and higher bandwidth for a certain amount of time. If the equipment spoke, it’d say, “Oh, Jane’s watching a movie. I’ll fix her at 1.5 Mbps while she needs it.”
They can do this because DOCSIS 1.1 comes with 16 different “service identifiers,” or “SIDs” (pronounced “sihds”). SIDs are a way of striping packets that flow to and from a modem, to make each different and distinct from another.
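To make the idea concrete, here is a toy table mapping SIDs to service tiers. The SID numbers, tier names, and rates are invented for illustration and are not drawn from the DOCSIS specification:

```python
# Hypothetical service tiers keyed by service identifier (SID).
# Real SID assignment is negotiated between the modem and the CMTS;
# these numbers and rates are illustrative only.
SID_TIERS = {
    1: {"name": "best-effort", "max_down_bps": 1_500_000},
    2: {"name": "premium",     "max_down_bps": 3_000_000},
    3: {"name": "streaming",   "max_down_bps": 1_500_000, "committed": True},
}

def classify(sid: int) -> str:
    """Describe the service tier a packet's SID maps to."""
    tier = SID_TIERS.get(sid)
    if tier is None:
        return "unknown SID"
    kind = "committed" if tier.get("committed") else "best-effort cap"
    return f"{tier['name']}: {tier['max_down_bps'] // 1000} kbps ({kind})"
```

In the Jane-watches-a-movie example, her traffic would carry a “streaming” SID, and the network would hold her at a committed rate instead of the usual best-effort cap.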
There’s a second part to QoS, called “data fragmentation.” It helps services like voice, which are intrinsically isochronous. Isochronous is just a fancy way of saying “equally timed to and from destination, without delays.” Think of it this way: When you’re talking on the phone, it matters more that the packets get there without hiccups, than it does that they get there over a fat, dedicated pipe.
Think of it as the converse of e-mail. You write a note. You hit “send.” The note gets broken up into small pieces; oftentimes, the pieces are sent over different routes. It doesn’t matter, because everything gets reassembled at the other end. If one packet is a few milliseconds late, it’s not the end of the world. Your note still arrives.
Voice has to work differently. If a packet is a few seconds late, and you’re talking live, it’s the equivalent of you saying “This ridiculous is.”
What really matters in voice is that packets travel smoothly and without delays. That’s what the data fragmentation portion of DOCSIS 1.1 does.
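A minimal sketch of the idea: chop large data frames into small fragments, then interleave voice packets between them, so a voice packet never waits behind an entire big frame. The fragment size and the simple scheduler here are illustrative assumptions, not the DOCSIS 1.1 algorithm itself:

```python
def fragment(payload: bytes, max_frag: int) -> list[bytes]:
    """Split a large data frame into fragments no bigger than max_frag,
    so time-sensitive packets can be slipped in between them."""
    return [payload[i:i + max_frag] for i in range(0, len(payload), max_frag)]

def interleave(data_frags: list[bytes], voice_pkts: list[bytes]) -> list[bytes]:
    """Naive scheduler: send one voice packet (if any are waiting) after
    each data fragment, bounding voice delay to one fragment's air time."""
    out = []
    voice = iter(voice_pkts)
    for frag in data_frags:
        out.append(frag)
        nxt = next(voice, None)
        if nxt is not None:
            out.append(nxt)
    out.extend(voice)  # any leftover voice packets go last
    return out
```

Without fragmentation, a 2,500-byte web download would monopolize the wire until it finished; with it, the voice packets ride along every thousand bytes or so.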
The third thing DOCSIS 1.1 brings is better security — which is not to say that existing 1.0 gear is vulnerable. DOCSIS 1.0 uses what’s called “link layer” encryption to secure all communications between the cable modem and the headend. CableLabs licenses the technique from RSA, and it’s not yet been cracked in a cable modem environment.
DOCSIS 1.1 builds on that with a military-grade type of encryption known as “triple DES” (pronounced “triple dehz”) – generally acknowledged to be impenetrable. Spies use triple-DES.
In addition to QoS and privacy, DOCSIS 1.1 is notable for one other reason: It’s the foundation for PacketCable. PacketCable, the third leg of CableLabs’ major project trio, is building up lots of other packet-style services, starting with lifeline telephony and streaming media.
DOCSIS 1.1 is not a cure-all for the myriad technical and operational issues comprising open access. While 1.1-based equipment does help, it isn’t the total answer. It helps because MSOs can use some of the SIDs to identify outside ISPs — although this usage is debated in engineering circles.
It’ll likely be another couple of testing rounds before CableLabs certifies any 1.1-based cable modems, or qualifies any 1.1-based headend gear (known as “Cable Modem Termination Systems,” or “CMTS.”) After that, upgrades begin.
Silicon suppliers and equipment suppliers have promised operators that they can shift from 1.0 to 1.1 with software downloads. However, some MSOs are already starting to wonder if that remains true should they want to provide “carrier class” telephony services. Going to carrier-class means a lot of software and systems in the back office (the stuff of PacketCable), and resolution of plant issues, like how to juice a modem/phone when utility power is out. Which requires a whole other set of translations…
This column originally appeared in the Broadband Week section of Multichannel News.
Middleware. A term so stretched with multiple meanings, it’s as shapeless as a wet sock.
Let’s start with the word itself. “Middle” means between. “Ware” is a shortcut to “software.” In the case of advanced digital set-tops, middleware is software that sits between the operating system, and the interactive applications above it.
Middleware is a bridge. It connects the hardware and its operating system to the ITV goodies. It wasn’t needed in early digital boxes, because those boxes did a limited number of things. They took a digital signal in, translated it back to analog, and sent it to the TV. They tuned channels. The most interactive thing was the electronic program guide.
To add chat, e-mail, clickable ads, and the rest of ITV’s harvest, the boxes need more stuff. They still need an operating system, to tell the chips what to do: Tune this channel. Descramble that one. Get the guide. To do more than that, and in a non-proprietary way, they need that middleware bridge to the apps.
The term entered the cable lexicon around 1997. Remember when CableLabs escorted cable’s top CEOs around Seattle and Silicon Valley? That trip, organized to ascertain the Internet’s impact on set-tops, ultimately lightened Microsoft by $6 bil.: $1 bil. to Comcast, and $5 bil. to AT&T, a few years later. (And that’s just the domestic investments.)
Yet Microsoft’s zeal in those ground zero days of set-top software is what prompted cable to start pondering control. It didn’t hurt that Navio, now Liberate (and Network Computer Inc., in between), was on the scene pitching a “middleware” alternative to Microsoft’s operating system.
From the beginning, cable wanted middleware for two reasons: Portability, and control. Portability to assuage the FCC’s mandate for retail set-tops. That means set-tops you can buy at a store, no matter who makes them, what operating system they run, or who scrambles the premium channels. It also means that you can move to another city, served by another cable operator, take that box you bought with you, and it’ll still work.
Middleware control ensures that ITV apps writers come to cable for distribution, not to an outside software company — like whoever produced the operating system. Plus, middleware is intrinsically based on open standards, say its proponents, theoretically speeding deployment and shunting proprietary locks.
Here’s why control is an issue. With or without middleware, applications providers — makers of games, email, chat, clickable ads — need to know how to write software that runs on various set-tops, with various operating systems. They need to know how fast the processor is, how much memory is available, how to place items on the TV screen. They write applications using a specific set of guidelines, called “APIs” (Application Program Interfaces). Without middleware, the APIs are controlled by the operating system supplier.
Think of the operating system you use now, on your PC. Probably Windows. Think of what you do the most. Probably Word, Excel, Access. While scores of software writers can access Windows’ APIs, Microsoft had first access to them. Because it makes Windows, it knew best how to write applications for it.
As one MSO engineer said to me years ago: “If one software company were to provide the operating system and the interactive APIs, I’d be left with one piece of control: The keys to the headend.”
CableLabs stemmed this concern by placing middleware at the heart of its OpenCable specification. Last month, to sidestep (or intensify, depending on the point of view) the confusion around the word “middleware,” it renamed that effort “OpenCable Application Platform.”
CableLabs also split the project into two layers: 1) The execution environment, where interactive applications actually run, which will be written by Sun, and 2) The presentation environment, which stipulates where and how interactive icons are placed on the TV screen. Microsoft and Liberate will co-write this spec. Canal+, OpenTV and PowerTV will assist and act as watchdogs, a critically important role.
In the PC world, middleware is analogous to Netscape, or Internet Explorer, or any other browser. It runs on any operating system. The execution environment is the plug-ins you download to listen to music, or watch a video, or play a game. The presentation environment is HTML. Applications developers write code without having to know what kind of equipment people use.
In TV, middleware is Canal+, Liberate, OpenTV, and WorldGate (among many others), who created an industry around it; Microsoft and PowerTV do both operating system and middleware.
Middleware providers sell MSOs two things: A “client” that sits in the set-top, on top of the operating system, and a “server,” usually located at the headend, that doles everything out. Most middleware companies provide a suite of get-started ITV apps: A branded first screen that links to e-mail, community info, the guide, the weather. This is the stuff you’ve seen at trade shows.
To apps writers, middleware companies provide tools that enable the “write once, run anywhere” mantra we so often hear.
What makes this all so confusing is that suppliers approach middleware in different ways. Some bind it to their own operating system. Others don’t. But this is nothing new – almost all equipment used in cable is feature-differentiated.
Over time, through deployments and the OpenCable process, middleware will gain shape. MSOs will put boundaries around features. Apps will start rolling in. It’ll be hard, but the alternative is subscriber churn to the interactive features DBS offers.
This column originally appeared in the Broadband Week section of Multichannel News.
By now, you’ve probably seen or heard about PacBell’s TV ads for DSL, where cable’s shared bandwidth escalates into an absurdly amusing neighborhood war. Even cable people laugh at the spots: Friendly neighbors become sworn enemies, skulking around with spray paint to single out “bandwidth hogs.” (If you’ve not seen it, go to http://bit.ly/SKTKeC.)
It’s a clever attack on what DSL proponents perceive as cable’s Achilles’ heel: That hybrid-fiber coax (HFC) architectures, like cable’s, are configured to share bandwidth among 500 or more homes hanging off a node. The assumption: Sharing causes insufferable slowdowns for cable modem users. Consumers should pick DSL, and never have any slowdowns.
Sorry. It’s just not that easy. As is usually the case, the truth about sharing lies somewhere in the middle.
Before we even get into who’s sharing what with whom, an even simpler truth: Residential DSL penetration, while climbing quickly, is still very low. At the end of the second quarter, there were about 750,000 residential DSL customers, to cable’s 2.3 million cable modem subs. All I’m suggesting here is the old saw about throwing stones in glass houses.
Fact number one: DSL is a shared network. Not from the home to the central office, true. After that.
A quickie on DSL topology: The DSL modem at a house connects to a companion modem at the central office, about 3 miles away. There, any voice conversation on the line heads over to the phone switch, and the data traffic enters a DSLAM — a “Digital Subscriber Line Access Multiplexer.” Telcos use DSLAMs because the alternative — dedicating a router port to each individual DSL subscriber — would be outrageously expensive. Something was needed for router port sharing. Enter the DSL Access Multiplexer, or DSLAM.
Multiplex means smoosh. As in, cramming many inputs into one output. In this case, combining multiple DSL flows into a composite data stream, out to routed Internet pipelines.
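The smooshing itself can be sketched as a simple round-robin loop. Real DSLAMs do far more (buffering, prioritization, aggregation toward routed uplinks), so treat this as a cartoon of the basic job:

```python
from collections import deque

def multiplex(flows: list[deque]) -> list:
    """Round-robin multiplexer: interleave packets from many subscriber
    lines into one composite output stream; the DSLAM's basic job."""
    out = []
    while any(flows):
        for flow in flows:
            if flow:
                out.append(flow.popleft())
    return out
```

Each subscriber's queue drains one packet per pass; the single output carries everyone's traffic, which is exactly where the sharing happens.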
Smoosh means share. With low DSL penetration, it’s easy to dismiss potential DSLAM sharing bottlenecks. But two things can happen when penetrations rise. First, more traffic hurtles through the DSLAM. The Internet’s language, TCP/IP, juggles overloads by dropping packets that have to be re-sent. First come, first served. (True for cable, too.) This all happens transparently to the person surfing away. Symptom: Sluggishness.
The second thing that can happen when DSL penetrations rise is lesser known. It has nothing to do with sharing, but it’s notable. It’s called “crosstalk,” and is specific to twisted-pair wires, like telcos use to give us phone and DSL service.
Translated: When phone wires are bundled into one sheath to the central office, they’re physically close enough to one another that the DSL traffic, because of where it is in the RF spectrum, could radiate from one pair of wires to the next. That’s crosstalk. To the DSL equipment, it looks like noise. As DSL traffic increases, the noise floor rises. As the noise floor rises, data rates decrease. Symptom: Sluggishness.
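One way to see why a rising noise floor lowers data rates is Shannon’s capacity formula, the theoretical ceiling on any channel’s data rate. This is a general information-theory bound, not a DSL-specific model, and the bandwidth and SNR figures below are hypothetical:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon channel capacity: C = B * log2(1 + S/N).
    As crosstalk raises the noise floor, the signal-to-noise ratio
    (snr_db) drops, and the achievable data rate drops with it."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)
```

For a hypothetical 1 MHz slice of spectrum, dropping from 30 dB to 20 dB of SNR cuts the theoretical ceiling from roughly 10 Mbps to under 7 Mbps. Symptom, again: Sluggishness.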
What’s DSL’s fix? Adding DSLAMs is one solution, and it’s a pay-as-you-go capital expense. But fixing crosstalk means driving fiber deeper – an expensive fix that’s also time-consuming.
Cable is shared. But it’s incorrect to characterize it as 500 homes all sharing the bandwidth dedicated to cable modem traffic. That’d be 100% penetration for high-speed data services — unlikely anytime soon.
Translating cable’s sharing issues isn’t hard. Start with node size. Say it’s 500 homes. Then apply the penetration rate. Say it’s 20%. That’s 100 cable modem customers. Then, estimate how many of them are online at the exact same time. Say that’s 40%. You’re down to 40 people, sharing 27 Megabits per second. Evenly distributed, that’s 675 kilobits per second, each. That’s pretty fast.
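That arithmetic is easy to parameterize. The function below simply restates the column’s worked example, so you can plug in your own node size, penetration, and concurrency assumptions:

```python
def per_user_kbps(homes_per_node: int, penetration: float,
                  concurrency: float, channel_mbps: float) -> float:
    """Worked version of the column's arithmetic: downstream bandwidth
    per simultaneous user on a shared cable node, in kilobits/second."""
    active_users = homes_per_node * penetration * concurrency
    return channel_mbps * 1000 / active_users

# The column's numbers: 500 homes, 20% penetration, 40% online at once,
# sharing one 27 Mbps downstream channel, yields 675 kbps per user.
```

Halving the node size to 250 homes, everything else equal, doubles the per-user figure to 1,350 kbps, which is why node splits are cable’s lever of choice.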
The hardest part is anticipating what those 40 customers are doing. Some activities, like e-mail, chat, and surfing, are “bursty” – you don’t need 675 kbps to send an e-mail. But streaming video chews up bandwidth in a shared environment.
Cable’s fix is to split nodes, so that fewer homes share the same 27 Mbps data channel. It takes about half a day to split a node. In today’s systems, even nodes with penetrations in the high 30% range haven’t yet needed to be split. In reality, most of the well-publicized cable modem slowdowns so far had more to do with improper router configuration – human error – than bandwidth sharing.
For both cable and DSL, sharing is an issue that will require careful attention. But it’s not as bad as PacBell’s clever ads make it out to be. Nor is DSL immune.
This column originally appeared in the Platforms section of Multichannel News.
© 2000-2016 translation-please.com. All Rights Reserved.