What’s Managed about Managed Video Services?
by Leslie Ellis // September 08 2008
In case you missed it, Cablevision’s business arm, Optimum Lightpath, has launched a “managed video service” billed as a complete Ethernet solution for video. On the business services front, it’s a big deal.
Because the readers of this publication are necessarily steeped in the management of video, we had to ask: What’s “managed” about it? Because it’s “managed,” does that mean there’s un-managed video?
Here’s what’s going on. When big video originators — broadcasters, cable programmers — need to move video around a “wide area network,” meaning the fiber optic wires ringing a city, they traditionally buy fixed-size circuits from a local telco to get the video in and out of the building. To blend those circuits onto the fiber, telcos typically use a technique called SONET, which stands for Synchronous Optical Network.
The rub of it is, they have to buy a fixed amount of capacity, even if they’re only using a portion of it.
Incumbent telcos charge for these transport services — using a predictably inscrutable rate card. Port charges apply, to get on and off the optical network. Then, mileage charges. And, different rates apply for different types of native digital video protocols: If you’re sending in ASI, you pay differently than if you’re sending in SDI. Ditto for HD-SDI.
(“ASI,” Asynchronous Serial Interface; “SDI,” Serial Digital Interface; and “HD-SDI,” High Definition Serial Digital Interface, are all broadcaster-centric methods for moving uncompressed, streaming video.)
Optimum’s alternative moves the video of big video companies around on the fiber it owns, using Ethernet.
This, in and of itself, is a big change: It used to be that Ethernet and digital video didn’t get along, because Ethernet wasn’t designed to handle a continuous stream of information — like video.
Practically, what Optimum’s work means is that video-centric companies in New York can save around 30% on transport fees, by switching to Ethernet. The savings come from not having to pay for a fixed chunk of bandwidth, from not having to pay differently for different protocols, and from not having to pay for fiber optic mileage.
Ultimately, what’s managed about Optimum’s Ethernet-based approach is cost. It gives big video businesses better management over what it costs to haul their stuff.
Note: As often happens in the lingo of business services, the Optimum Lightpath release bulged with unfamiliar vocabulary. “Mileage neutral” seemed pretty straightforward, as did “flat-rate pricing.” But “dedicated Layer 2 point-to-point” and “Ethernet solution for video” exemplify a few areas where it’s easy to glaze over. More on that next time.
This column originally appeared in the Platforms section of Multichannel News.
DOCSIS 3.0 + PacketCable 2.0 = Cross Platform
by Leslie Ellis // September 01 2008
By now, the channel-bonding feature of the newest cable modem chapter, known in tech-speak as DOCSIS 3.0, should be pretty evident. Benefit: Ramming speed.
Channel-bonding is what lets you download a movie, or 40 pounds of encyclopedias, or anything else with bit bulk — in the time it takes to boil an egg.
The math of it goes like this: Each 6 MHz channel, slinging information using 256-QAM, can carry 38.8 Mbps of data. Bond two channels for 77.6 Mbps; three for 116.4 Mbps. A four-channel bond, at 256-QAM, yields a blistering 155.2 Mbps in downstream, toward-the-house speed.
And so on, all the way to the end. The end, in the case of a cable system built to 860 MHz, is 134 downstream 6 MHz channels. (The math: 860 minus the 54 MHz reserved below the downstream path, divided by the 6 MHz channel width.)
If you bonded all 134 channels, you’d have a downstream pipe capable of 5.2 Gigabits per second. (Note: The channels need to be empty before you can do any kind of willy-nilly bonding.)
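As a back-of-the-envelope check, the bonding arithmetic above can be sketched in a few lines of Python. The 38.8 Mbps per-channel figure and the 860 MHz plant with a 54 MHz split come straight from the column; real plants vary:

```python
# Sketch of the DOCSIS 3.0 downstream bonding math described above.
# Assumes 256-QAM at 38.8 Mbps per 6 MHz channel, per the column.

MBPS_PER_CHANNEL = 38.8  # usable payload of one 6 MHz, 256-QAM channel

def bonded_downstream_mbps(channels: int) -> float:
    """Aggregate downstream speed for a bond of `channels` channels."""
    return channels * MBPS_PER_CHANNEL

# Downstream channel count on an 860 MHz plant with a 54 MHz split:
downstream_channels = (860 - 54) // 6  # 134 channels

print(bonded_downstream_mbps(4))                    # four-channel bond, ~155 Mbps
print(bonded_downstream_mbps(downstream_channels))  # all 134 channels, ~5.2 Gbps
```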
Even the most tricked-out home can’t guzzle that much raw speed. Even if it contained 10 HDTVs, all on (and compressed with MPEG-4 to 8 Mbps per stream). And five cable modems, all streaming video at 6 Mbps. And 5 VoIP phones, all off-hook, as it were.
Even that bit-storm only consumes around 110 Mbps. (Only.)
Here’s the rundown of DOCSIS 3.0’s other major attributes: Channel bonding, sure. A side benefit of channel bonding is that the wider, bonded channels can reap the benefits of better statistical multiplexing. That means more people sharing the same bandwidth, more efficiently.
Then there’s the IPv6 support, which substantially expands the pool of IP addresses an operator can dispense — which substantially increases the number of Internet-hungry gizmos it can support. Better security is part of the spec, too, but it’s largely irrelevant — the existing cable modem security, known as BPI and BPI-plus, is unscathed.
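For a sense of scale on that address expansion, here is a quick sketch in straight powers of two (nothing cable-specific about it):

```python
# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
ipv4_addresses = 2 ** 32    # roughly 4.3 billion
ipv6_addresses = 2 ** 128   # roughly 3.4 x 10^38

print(ipv4_addresses)                     # the entire IPv4 pool
print(ipv6_addresses // ipv4_addresses)   # IPv6 holds 2^96 IPv4-sized pools
```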
And remember: Lots of cable modems are riding into people’s homes today — nested inside digital set-top boxes. That’s what tech-siders mean, in essence, when they say “DSG,” which stands for “DOCSIS Set-top Gateway.” No, they’re not based on DOCSIS 3 yet — trials are just now starting for standalone modems — but the trajectory is visible.
Product people: This is less dry than it seems. As a springboard into an IP smorgasbord, DOCSIS 3 is sensibly featured. It lets you walk into battle with a credible speed weapon, a sturdier defense, and reams of information and tools.
At some point, though, speed becomes overkill. Then what? To really juice the imagination, it’s probably wise to start brushing up on the basics of PacketCable 2.0. The two together — DOCSIS 3.0 and PC 2.0, as it’s often abbreviated — make for some striking product roadmap imaginings.
PC 2.0 is a CableLabs specification. It grew up on the voice-over-IP side of the house. It matters now because it’s a tech sandbox for IP-based approaches that look plenty handy for cross-platform services.
A big part of daydreaming in PC 2.0 is getting your head around how to morph today’s bundle constituents — voice, video, data — into applications, not just standalone services.
Example: Voice, as an application built into an online customer care portal. If you’ve ever used a chat room when you need help with a cable problem, you know that it’s fine — but, in some cases, it’s just easier to talk than type. Videoconferencing possibilities apply here, too.
And then there’s the whole notion of “FMC,” for “Fixed Mobile Convergence.” It also grew out of mobile. It’s what happens when you buy a new cell phone, and one of the many features it lists is “dual mode.” In this case, dual mode means it runs on the cellular network when that signal is best, and it flips over to (broadband-fed) WiFi when its signal is best.
The FMC application most discussed is the person who walks into the house, where the cell reception is not so good. The phone flips over to the WiFi network, which is fed by cable broadband, and bingo! The person she’s talking to doesn’t sound like the teacher on Peanuts anymore.
PC 2.0 is also big on IMS — the IP Multimedia Subsystem — which likewise grew up out of the cellular/telecom industry. Big vendors build big “cores” around it — companies like Alcatel, Ericsson, Siemens, Lucent. It’s all about treating the network as a holding pen for lots of different applications servers, capable of working across different types of networks. In a sense, it’s the cellular/telecom industry’s version of “cloud computing,” which is worthy of its own translation.
The point is this: DOCSIS 3.0 will start out as a weapon in the speed battles. With PacketCable piled on, its core technical attributes can serve as a solid springboard for cross-platform activities — voice stuff first, video stuff likely.
Video stuff? Already, most major MSOs are using their growing expertise in online video (think Fancast, as one example) to figure out what can be leveraged on the set-top — and especially on that built-in cable modem.
What services are potential applications, and what applications are good for blending? That’s what’s next. Just add creativity.
This column originally appeared in the Platforms section of Multichannel News.
SOA, Rhymes with Noah
by Leslie Ellis // August 25 2008
And now for a quick dip into the bubbling soup of acronyms within the software world, starting at the top. “SOA.” People say it as a word — rhymes with Noah.
“SOA” stands for “Service Oriented Architecture.” It’s the Big Picture for the software efforts of big companies. It’s especially enticing to companies wishing to untangle themselves from heavy, monolithic, single-vendor software systems.
Like the billing system, for instance. The historic grumble about cable billing systems goes like this: Ask for a change. Wait 18 months. Find a million dollars to pay for it.
That’s why you tend to hear of SOA when you’re with IT people. Here’s a usage example from a recent batch of notes: “We took a look at it and said, we need a SOA architecture, to let us to get time to market and productivity enhancements.”
Try this for fun: With a straight, calm face, suggest to anyone who works in cable IT that they’ll need to change out the billing system. Then try to find a way to share in the utter hilarity of the idea.
Here’s what SOA is: It’s tight, efficient little blocks of code, theoretically re-usable, with consistent passageways between them.
In practice, SOA is seeing that 60% of your care calls about digital video result in sending a refresh command to the box. Wouldn’t it be great if there was a way to let customers initiate the refresh themselves, by pressing seven on their phone, or asking online?
Pre-SOA, pinging a set-top required a care agent to initiate that activity, by accessing the headend components of what were then General Instrument and Scientific-Atlanta systems.
With SOA, pinging a set-top means abstracting that function into a chunk of code, then embedding that chunk of code into the other chunks of code that might need it — the IVR system for the phone; the self-care portal for the online query.
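A minimal sketch of that idea, with all names hypothetical: the refresh function is written once, as a “service,” and both the phone IVR and the web portal call it rather than re-implementing it:

```python
# Illustrative SOA sketch (all names invented): one reusable "service,"
# consumed by two different front ends.

def refresh_settop(account_id: str) -> str:
    """The reusable 'service': one chunk of code that pings the box."""
    # In real life this would talk to headend gear; here it just reports.
    return f"refresh command sent to set-top for account {account_id}"

def ivr_handler(account_id: str, keypress: str) -> str:
    """The phone system: 'press seven' to refresh your own box."""
    if keypress == "7":
        return refresh_settop(account_id)
    return "unrecognized option"

def web_portal_handler(account_id: str) -> str:
    """The self-care portal reuses the very same chunk of code."""
    return refresh_settop(account_id)

print(ivr_handler("ACCT-001", "7"))
print(web_portal_handler("ACCT-001"))
```

The point of the sketch: neither front end knows how the refresh works; each just calls the shared service through a consistent passageway.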
The catch: Those “theoretically re-usable” chunks of code. Say a “service,” as the chunks of code are called, moves into the domain of another “service” that needs it — the web care portal, in this example.
Oops. It only covers 80% of what the web portal needs. The other 20% either comes from an add-on, or, just as often, a total re-write.
Most of the larger cable MSOs are at least waist-deep in SOA, so it’s where your IT friends are headed. May they prosper.
This column originally appeared in the Platforms section of Multichannel News.
Advanced Advertising and SCTE 130
by Leslie Ellis // August 18 2008
Earlier this year, I tagged an advanced advertising standard, then known as SCTE DVS 629, as one of the “big things in cable tech” in 2008.
Since then, and to keep us all on our toes, the SCTE Digital Video Subcommittee changed the number of the standard, from DVS 629 to DVS 130. Not sure why. It reminds me of a funny friend’s funny child, who counts in this order: “One, two, skip a few, 99, 100.”
So: DVS 629 is now DVS 130. And DVS 130 is the sockets and sinew of addressable, targeted, interactive advertising, on cable.
On Aug. 4, the first four parts of the eight-part DVS 130 standard were approved by the SCTE’s Engineering Committee. That greenlights the vendor community to begin interpreting and building. (Many already have. Doing business in a standards-populated economy often means sprinting ahead, hoping your work becomes “the standard.”)
This week’s column attempts to untangle the context of SCTE 130. The actual nuts and bolts are a different translation. Consider: The four finished parts of the standard run across 429 pages.
If you’d rather fast-forward to the summary translation, it goes like this: DVS 130 creates the framework to pick, on the fly, which ad, of which length, to splice into a TV show — whether that show is linear, stored, or switched. It’s how to advertise the stackable washer/dryer to Condo Connie, and the lawnmower to Harry Homeowner.
For VOD, DVS 130 leapfrogs the “bookend” ads currently glued to the front and back of a video title. It adds things like replacement ads, pause ads, and telescoping capabilities.
Replacement ads do what the name implies: Splice a newer ad over the existing one. (It’s harder than it sounds, because it crosses industries. Not all program networks have the gear to put the necessary flag into the ad breaks of a TV title, to indicate when a cable ad substitution can happen.)
Pause ads pop something up when you decide it’s time to break for a ham sandwich.
Telescoping is the clickable thing that invites the interested viewer to see a longer, stored video.
As a framework, DVS 130 defines the language to be spoken between participating machinery, and what messages they’ll exchange. Likewise, it defines how to connect the machines doing the work of addressable and interactive advertising.
SCTE 130 doesn’t define how that targeting and campaign work should be done — that’s the job of innovation.
The Deeper Dive
If you haven’t observed a technical presentation on DVS 130, know going in that it’s pretty architectural. That means diagrams best absorbed by printing them out and staring at them. For a long time. With a clear head.
DVS 130 registers heavy on the jargon meter, too. Its official title: Digital Program Insertion — Advanced Advertising Interfaces. Its remaining four parts are expected to be approved this year. They’re juicier. They get at the raw materials of how to address ads to Condo Connie, based on what she approves you to know about her.
And yes, this whole thing will work best when all eight parts are done and in motion. In reality, that means this is a 2009-2010 thing, if everything goes well.
But until then, here’s the short version of the four approved parts of SCTE 130:
Part One, Advanced System Overview (16 pages) summarizes parts two through eight.
Part Two, Core Data Elements (77 pages), defines how to phrase XML (Extensible Markup Language) messages for addressability and interactivity.
Part Three, Ad Management Service (ADM) / Ad Decision Services (ADS, 246 pages), gets pretty dense. In essence, an ADM manages the placing of ads; an ADS decides which ads to place.
Part Three puts “real time” into the equation. It’s the “advanced” of advanced addressability. Today, it works like this: Ads are sold. Traffic schedules are built. At 4:00 p.m., those schedules get loaded into the ad insertion machines. If something needs to change after that, it better be important.
Part Four, Content Information Service (90 pages) is the keeper of the metadata about the ads and the video content they’ll run within.
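Since Part Two phrases the standard’s messages in XML, here is a toy illustration of what an ad-placement request might look like on the wire. To be clear: the element and attribute names below are invented for illustration — the actual schemas live in the 429-page spec, not here:

```python
# Toy sketch of an XML ad-placement message. Element names ("PlacementRequest",
# "Opportunity", "Audience") are hypothetical, NOT the real SCTE 130 schema.
import xml.etree.ElementTree as ET

request = ET.Element("PlacementRequest")
ET.SubElement(request, "Opportunity", {"type": "midroll", "duration": "PT30S"})
ET.SubElement(request, "Audience", {"segment": "condo-owner"})  # "Condo Connie"

print(ET.tostring(request, encoding="unicode"))
```

The shape is the point: one machine describes an ad opportunity and an audience; another machine answers with a decision about which ad to splice in.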
Modularity and Scale
The fact that DVS 130 is chunked into eight parts illustrates one of its intents: To be modular in design. In premise, modularity attracts a wider supplier community. Plus, it lessens the risk of ganging stuff together that grows at different rates — scale matters.
Remember the first days of VOD gear, when storage and streaming worked in the same box? Storage grew faster. Decoupling happened.
Another handy consequence of SCTE 130 is the data it gathers — house by house, system by system, region by region, operator by operator — all the way upriver to Canoe Ventures LLC, if so desired. (Yes, it’s an actual company now, with headquarters in the Chrysler Building, in Manhattan.)
When it all comes together, advanced advertising will send you the ads that are best for you, assuming you’re ok with it. Your viewing becomes collectable data, which gets ganged together with everybody else’s collectable data. Once the data is sufficiently smooshed and “anonymized,” cable advertisements become targetable and measurable.
That’s huge, for the people who live and work in cable advertising. It’s several giant steps towards being “more like Internet advertising.”
This column originally ran in the Platforms section of Multichannel News.
The Secret Bandwidth of Addressable Advertising
by Leslie Ellis // August 11 2008
Last summer, while talking with a small gathering of muckety-mucks from a cable program network about how cable operators use bandwidth, I was asked what I knew about the “secret bandwidth.”
Secret bandwidth. News to me.
I asked for particulars. Turns out that this executive had learned from a cable operator contact that in addition to the digital shelf-space dedicated to standard and high definition linear TV, a special reserve also existed for advanced advertising.
I resisted the urge to tell the guy that he’d been tricked into some kind of digital snipe-hunt.
A year passed. Then, last week, a muckety-muck of the advanced advertising persuasion took a pause from his huevos rancheros to ruminate over the bandwidth implications of addressable advertising, in high definition.
In order to send a 30-second spot that is, for whatever reason, more targeted than what’s already embedded in the linear video stream for that show, he said, operators will probably need to reserve some portion of their existing bandwidth.
Tactically, it goes like this: A typical, 6 MHz digital cable channel carries 10 to 12 linear video streams, in standard definition. To do addressable advertising, the substitute ads need some carriage room, too. Like three or four of those 10 to 12 streams.
Aha! The secret bandwidth.
Predictably, different addressable advertising vendors do this differently. Some borrow streams from within a mux, as described. Others ask for dedicated capacity — one to two 6 MHz channels, to carry the addressable ads.
(Proponents of the first method say that proponents of the second method introduce latency issues, because the set-top box has to physically re-tune to another multiplex, in order to grab and display the ads from the secondary or tertiary channel. Proponents of the second method say their way is a better use of bandwidth, because of statistical multiplexing gains.)
In HD, though, the matter gets more pronounced, because only two to three HD streams fit into that same 6 MHz digital cable channel — so where do the addressable ads go?
Three options, each with an increasing level of complexity. One: Dedicate additional digital channels to the needs of addressable ads. Two: Go faster on the move toward a unicast architecture — where each household gets its own video stream, and its own ads. Three: Find a way to use advanced compression — to mix MPEG-2 and MPEG-4 traffic in the same channel.
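The stream arithmetic behind that squeeze can be sketched with the column’s own round numbers (the per-channel stream counts are drawn from the text, not from a spec, and real systems vary):

```python
# Sketch of the carriage math: how many program streams survive in a
# 6 MHz channel once substitute-ad streams are reserved. Stream counts
# are the column's round numbers, not spec values.

def streams_left_for_programming(streams_per_channel: int,
                                 reserved_for_ads: int) -> int:
    """Program streams remaining after reserving room for substitute ads."""
    return streams_per_channel - reserved_for_ads

# SD: 10-12 streams fit; reserving 4 for addressable ads still leaves plenty.
print(streams_left_for_programming(12, 4))  # 8 program streams remain

# HD: only 2-3 streams fit, so reserving even one stream is painful.
print(streams_left_for_programming(3, 1))   # 2 program streams remain
```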
To those of you who, like me, were stumped chumps about the secret bandwidth — there it is. To those who already knew — my apologies for assuming a digital snipe.
POST-SCRIPT: Since this column ran, several readers, all of whom work for cable operators, wrote to let me know that switched digital video implementations can tuck in addressable ads without using tangible 6 MHz channels. If an advertisement is addressed in a forest, and no tangible bandwidth is used to carry it, is it still a secret? 😉
This column originally ran in the Platforms section of Multichannel News.
Upstream Bandwidth & Symmetry
by Leslie Ellis // August 04 2008
Recently, a childhood friend wrote to express his outrage at the U.S. cable industry, for “deliberately precluding people from getting symmetrical broadband speeds.” Several angry blogs bludgeon this topic, too. Salty opinions abound.
This week’s translation will attempt to explain (hopefully without damaging the friendship) why this particular accusation is incorrect — by reasons of regulation, physics, and present reality.
Let’s take it from the top. “Symmetric,” in a bandwidth sense, means the same amount of speed goes toward the computer as away from it. Most of today’s operators provide asymmetrical packages — so many Megabits per second toward the computer; fewer Megabits per second away from it.
This fits the pattern of most early and ensuing Web traffic. Your request for a web page, video stream, or a voice call is substantially smaller than the resultant page, stream, or conversation.
This changes, of course, with affordable HDTV cameras. That clip of the weekend at the beach is a massive file to move upstream, compared to typing in a web address and pressing “enter.” Peer-to-peer traffic also changes the scene. We’ll get to that.
The total available bandwidth of a contemporary cable system is likewise highly asymmetrical. It goes like this: Upstream traffic moves within a tiny gash of spectrum between 5 MHz and 42 MHz. Downstream stuff moves over a path that starts at 54 MHz, and goes as high as 1 GHz (or, 1,000 MHz).
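The raw asymmetry of that split is easy to put in numbers (assuming a plant built all the way to 1 GHz, per the column; many systems top out lower):

```python
# Spectrum split on a 1 GHz cable plant, using the boundaries above.
upstream_mhz = 42 - 5       # the 5-42 MHz reverse path
downstream_mhz = 1000 - 54  # 54 MHz up to the 1 GHz top of the plant

print(upstream_mhz)                          # 37 MHz of upstream spectrum
print(downstream_mhz)                        # 946 MHz of downstream spectrum
print(round(downstream_mhz / upstream_mhz))  # roughly 26x more downstream
```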
Why 54 MHz? Why not just move that boundary up higher, to make the upstream path wider?
Here’s what veteran cable engineers say, when asked that question: Because that’s where channel 2 starts.
The Regulations of Spectrum
What they mean is this: Long, long ago — as in 74 years ago, in 1934 — the Federal Communications Commission was empowered to develop, maintain, and enforce a table of radio frequency allocations for non-governmental use. (The National Telecommunications and Information Administration handles the spectrum used by the government.)
This industry’s monthly engineering trades actually print a “Frequency Chart,” produced annually and in coordination with the FCC. Walk into any headend, and you’ll probably see one tacked to the wall somewhere. They’re a colorful bit of “tech art.”
As a direct result of the FCC’s frequency allocations, the U.S. cable industry is required to contain its upstream (or “reverse”) path traffic within that little gap, located between 5 and 42 MHz.
(The 12 MHz separating the top of the upstream band, at 42 MHz, and the bottom of broadcast channel 2, at 54 MHz, is called “guard band.” It prevents the two segments from colliding and making a mess.)
It’s true that cable’s spectrum is contained within shielded wires, and doesn’t free-wheel through the air, bumping willy-nilly into broadcast channels. Still, the FCC decided that cable operators should display off-air channel 2 at the identical 54 MHz spot as the broadcasters — so that ordinary people could find it, when they tuned in. (Remember — this was 60 years ago.)
So that’s the regulation part of why cable’s broadband plant isn’t symmetrical.
The Physics of Spectrum
Let’s back up even farther — to the 1860s. That’s when a Scottish physicist named James Clerk Maxwell found a way to combine the properties of electricity with the properties of magnetism. His discovery was foundational to the “electromagnetic spectrum” that is the basis of all modern telecommunications. (To put this in people-context, Albert Einstein had two hero shots on his wall — Newton, and Maxwell.)
The electromagnetic spectrum is invisible, and vast. It’s not just about radio waves. Microwave is there, and infrared. So are ultra-violet rays, x-rays, and gamma rays. In varying degrees, and if you could see them, they all look like the letter S, on its side. The sine wave.
The RF (radio frequency) portion of the electromagnetic spectrum can be made to do stuff — like carry radio and TV — by manipulating its frequency (the number of times the sideways S recurs) and its amplitude (how big the sideways S is). This manipulation is called modulation — the imprinting of a signal, onto a wave.
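A toy illustration of the amplitude half of that idea: scale a sine-wave carrier by a signal’s strength. This is a pure-math sketch of amplitude modulation, not a model of anything cable-specific:

```python
# Toy amplitude modulation: the "sideways S" (a sine carrier), with its
# height set by a signal amplitude and its repetition set by a frequency.
import math

def am_sample(t: float, carrier_hz: float, signal_amplitude: float) -> float:
    """One sample of a carrier whose amplitude carries the signal."""
    return signal_amplitude * math.sin(2 * math.pi * carrier_hz * t)

# A stronger signal makes a taller S; a higher frequency makes it recur
# more often. At t=0.25s, a 1 Hz carrier is at its peak:
print(am_sample(0.25, 1.0, 2.0))  # the full signal amplitude, 2.0
```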
In the cable upstream path, the type of modulation commonly used is deliberately and necessarily sturdier than what’s used in the downstream path. That tiny slash of upstream spectrum is a tough environment, bristling with noise and impairments. It’s the road with potholes big enough to swallow a Mini Cooper: You just have to slow down.
So that’s the physics of it.
Then there’s the empirical evidence, which says that people, on average, receive four times more information than they transmit. Maybe this will change, with user-generated video. Peer-to-peer traffic will, by its very nature, occupy all unused space. It behaves like a gas, that way.
The bottom line is this: Nobody is deliberately precluding symmetrical bandwidth. In the beginning, nobody even knew what to do with the upstream spectrum provided to them by the FCC. TVs were still black and white and analog. Two-way plant wouldn’t emerge for another 40 years.
But, you say: If the broadcasters are going all-digital in February, doesn’t the channel 2 thing go away, or at least present a way to expand the upstream boundaries?
Ah, wouldn’t that be grand. Alas: Going digital is one thing. Outfitting hundreds of thousands of amplifiers, taps, and in-home TV tuners to know that the upstream got wider is another.
This column originally appeared in the Platforms section of Multichannel News.
A Wireless Decoder for Wired People
by Leslie Ellis // July 28 2008
By now it’s clear that the Next Big Thing is getting your high-speed connection wherever you are — even if you’re outdoors; even if you’re in a moving vehicle; even if you’re an ocean away.
The jargon of wireless technology is thick. WiFi, WiMAX. 3GPP, LTE, GSM. Keeping it all straight, while keeping everything else straight, takes concentrated effort.
For that reason, this week’s translation seeks to serve as a tip sheet for us, the “wi-curious” — raised wired, open to alternatives.
Before we go in, remind yourself: Behind this clutter of acronyms are radios. It’s all about what protocols they speak, over what stripes of spectrum, using how much power, and how honkin’ a processor.
At an industrial level, wireless technologies identify like this: Who’s using it, with what spectrum. Will it work overseas. How fast can it send and receive data. How soon will gear be ready, relative to competing options. How small is it.
Because it is the intended direction of Comcast and Time Warner Cable, per their deal with Clearwire and Sprint, let’s start with WiMAX. As a tech spec, its name is IEEE 802.16e. For cable, it’ll move across the 2.5 GHz spectrum. International adoption is not yet a strong suit, beyond Korea and Pakistan.
Speed-wise, WiMAX runs in the 1.5-5 Mbps range, to the handheld, and around 1 Mbps out from the handheld. Gear can be gotten, “but at a rather glacial pace,” grumbled one observer.
Stripped way down, WiMAX is WiFi at vehicular speed, with quality of service (QoS) support, meaning, more possible services.
WiMAX is up against global steamroller “LTE,” for “Long Term Evolution.” LTE is the brainchild of the GSM (Global System for Mobile Communications) cellular community — which, it bears noting, enjoys an 85% worldwide market share. In that sense, wireless broadband is to cellular what telco landlines were to cable: A sexy new cash spigot.
In the U.S., T-Mobile plans to go LTE. So does Verizon. AT&T says LTE. In Europe, pretty much everybody is LTE. The two primary groups driving it are 3GPP (Third Generation Partnership Project, www.3gpp.org) and GSM (Global System for Mobile, www.gsmworld.com).
Speedwise, LTE peaks at 100 Mbps toward devices sharing a 20 MHz swath of spectrum. Almost all major cellular gear manufacturers are roadmapping products for LTE.
But LTE is to its professed implementers what DOCSIS 3.0 is to cable: A hardened spec, in the hands of the vendor community, waiting to become product. Trials this year, gear next year.
What about an uber-phone, with as many radios and protocols as necessary to work everywhere? I tried this out on a wireless aficionado pal. Her response: Yeah. And the battery will last like 10 minutes.
This column originally appeared in the Platforms section of Multichannel News.
A Reader’s Guide to the Two-Way MOU
by Leslie Ellis // July 21 2008
For the past five years, while the rest of us plowed through digital simulcasts, the “7/07” deadline, the broadband speed wars, the voice launches, the onslaught of HD programming, and the bandwidth crunch, a small but enduring squadron of cable soldiers on the policy side of life relinquished dozens of nights at home to work toward an alliance — or at least to prevent further war — with the consumer electronics (CE) industry.
Representation was vast. Battle stations arose well beyond the known cable and CE bunkers. Small armies of strategists and lawyers emerged from the computing side of the world (Microsoft, Intel), and the movie studios. And from chip makers, and box makers.
The combined travel and expenditure receipts for those 60 months of face-to-face meetings, between 2003 and 2008, could likely buy a good-sized mansion. On a lake. Staffed.
At the end of it, though, were handshakes, not fisticuffs: An April 25 Memorandum of Understanding, or “MOU,” submitted to the Federal Communications Commission by a group of six “founders” (the six biggest U.S. cable operators), plus the CE “adopters,” led by Sony Electronics.
This week’s translation aims to unpack the highlights of the landmark deal.
Quick refresher: This is all culled from the story called “two-way plug-and-play.” That’s the one where the government tells cable operators and CE makers to play nice with each other. Start by building set-tops into digital TVs, the FCC admonished.
Oops. Except the cable box is part of a complex, two-way network, and the cable operator in charge of the set-top and the complex network is contractually and competitively obligated to keep its contents safe from theft and harm. Oh, and the CE side runs on razor-thin margins. Any whiff of extra cost is calamitous.
So the cable industry and the CE industry agreed to start at a walk, not a sprint. They launched with “one-way” devices — so that people could buy TVs and descramble premium content (hence the CableCard).
Meanwhile, they’d keep working on how to do the two-way part — so that consumers could buy a digital TV that descrambles premium content and accesses services that need to reach up the plant in order to work properly — like the guide, and the VOD ordering system, to name two.
Which brings us to now.
Here’s one big thing that’s different between the one-way MOU, and the two-way MOU: The former was a set of suggestions to the FCC about how to go about making rules. It had no teeth. The new deal does have teeth. It’s a signed and binding contract, with enforceable obligations and liability clauses.
So that’s big.
Getting there involved concessions. (Duh.) The CE side had to lighten up on its desire to build TVs that plunk idly at the far end of this complex, two-way cable plant — and instead participate, using software tools that ultimately allow everyone to continue to innovate. The cable side had to back off on wanting to test and certify every last device with a built-in set-top — and instead agree to allow some degree of self-certification.
Then there’s “the guide issue.” Boiled way down, it was about who gets to gather guide data and present it, as an on-screen program guide. The CE side wanted that right — and got it, through the cable side’s nod to carry guide data embedded within CBS’s signal. That’s huge.
Side note: In the language of two-way, cable-ready devices, there’s a difference between a “menu” button, and a “guide.” One brings up things you can do with your device. The other brings up things you can see or hear through that device. It mattered, in the MOU.
Think about it this way: On your digital TV, at home, there’s probably a way to select between inputs. That’s the menu. You want to watch a DVD, you select that input. Ditto for a game player, or a media gateway. Often, this menu selection shows as a crossbar on the actual TV screen, where you pick the item you want.
This mattered because the cable side didn’t want a CE menu to display with ads on it — doing so would violate existing carriage agreements between cable operators and content providers. The MOU addresses the issue by prohibiting the display of ads when consumers are using menu features in digital cable-ready CE devices.
A third biggie: The CE partakers agreed to a series of CableLabs licenses. Among them: OCAP, which remains the technical name for the underpinnings of what is now called “tru2way.” By saying okay to OCAP, participating CE manufacturers took a huge step toward making gear that can live and breathe on cable plant, instead of sitting, brick-like, at the end.
Last biggie: The creation of a Founders Advisory Board for conflict resolution. In essence, each stakeholder — cable, CE, content and computing/IT — has a clear path if something goes awry. Each got a failsafe vote, which it can take to the FCC, if it wants to fight. It’s big because it removes the fear that one entity (cable) is the master controller. The FAB gives each constituent a voice.
Again: These are the highlights. Blessedly, the MOU is a svelte six pages. It’s worth a look. For more, see the detailed write-up by attorney Paul Glist, of Davis Wright Tremaine, at http://www.dwt.com/practc/communications/bulletins/06-08_Cable_DigitalTV.htm.
The good news for industry: The work that produced the two-way MOU will almost certainly blunt the alternative outcome — further regulatory action. The good news for customers: One TV, one remote, no box, and a healthy environment for more two-way services.
This column originally appeared in the Platforms section of Multichannel News.
“DTA” Means 100 More Linear HD Channels on (Comcast) Cable
by Leslie Ellis // July 14 2008
Here’s a new one from the Department of Three-Letter Acronyms: “DTA.”
DTA stands for Digital Terminal Adapter. A digital terminal adapter is a cable-specific thing. It’s a small, low-cost gizmo — not unlike the small, low-cost gizmos being built for the upcoming broadcast digital transition.
It attaches to the back of every analog TV, or any TV not already connected to a cable set-top box — not unlike those built for the broadcast digital transition.
Because the average U.S. household contains at least a couple of analog TVs, DTAs need to be inexpensive. One is needed for every TV that wants to keep being a TV. The DTA price tag to cable operators is in the $30-$50 range — not unlike the price tag for the adapters made for the broadcast industry’s digital transition.
Sufficiently confused? Here’s what’s going on: an unfortunately timed intersection of one industry’s necessity and another’s requirement. The competitive necessity is cable shelf-space, by way of analog spectrum reclamation. The regulatory requirement is the broadcast transition.
The DTA work is primarily emanating from Comcast. It’s a way to free up bandwidth for more HD channels, ethnic programming, and the channel-bonding magic that comes out of DOCSIS 3.0. At the SCTE Cable-Tec Expo last month in Philadelphia, Comcast President Steve Burke underscored the urgency of analog reclamation this way: “We’ll get started in earnest this fall, and get it done over the next 18 months.”
How much more space does analog reclamation yield, in HD terms? Say they take half of it back. That’s roughly 250 MHz, located between 50 MHz (the top of the reverse band) and 550 MHz (the bottom of the digital band). That’s enough room for 82 to 125 more linear HD channels (depending on whether you stuff two or three streams per channel), or 400 linear SD channels. It’s a roomy approach.
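The arithmetic above can be sketched in a few lines. This is a back-of-the-envelope illustration only: the 6 MHz slot width and the streams-per-slot packing ratios are standard assumptions, and the function name is invented.

```python
# Back-of-the-envelope math for analog reclamation, assuming 6 MHz QAM
# channel slots and the stream-per-slot packing the column describes.

def reclaimed_capacity(reclaimed_mhz, slot_mhz=6):
    """Whole 6 MHz slots in a reclaimed band, and what they could carry."""
    slots = reclaimed_mhz // slot_mhz
    return {
        "qam_slots": slots,
        "hd_at_2_per_slot": slots * 2,    # conservative HD packing
        "hd_at_3_per_slot": slots * 3,    # aggressive HD packing
        "sd_at_10_per_slot": slots * 10,  # typical SD packing (assumed)
    }

capacity = reclaimed_capacity(250)
print(capacity["hd_at_2_per_slot"], "to", capacity["hd_at_3_per_slot"], "HD channels")
# prints: 82 to 123 HD channels
```

Floor division of 250 MHz yields 41 whole slots, so the low end matches the figure of 82; counting the fractional 41.7th slot gets you to the 125 upper bound, and 41 slots at roughly ten SD streams apiece lands near the 400-channel SD figure.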
For Comcast, the DTA rollout is phase two of a three-phase plan, which started with the digital simulcast. That’s when operators made all analog channels available on the digital tier, as well as the analog tier. These days, some channels are available in analog, standard definition digital, and high definition digital. The shelf is crowded. Something needs to go, Comcast reasons — and the world isn’t going analog.
Meanwhile, DirecTV flaunts its 100 channels of HD. Competitively, “this isn’t a place we want to be,” Burke noted. “The country’s digital transition is this February. It will be on people’s minds. People will be assuming the world goes digital, and now’s the time.”
And so enters the three-letter acronym known as “DTA” into the big doings of the next year or so.
This column originally appeared in the Platforms section of Multichannel News.
Inside Comcast Downingtown
by Leslie Ellis // July 07 2008
Downingtown, Pa. – About an hour from the gleaming, 58-story Comcast Center in the heart of downtown Philadelphia, in this suburban borough, sits a far less spectacular warehouse building owned by the same company.
The nondescript building, designed to serve as Comcast’s new “integration epicenter,” stretches out over an area larger than a football field and houses a labyrinth of laboratories, test rooms and troubleshooting areas. Despite its plain outside appearance, it represents nothing less than the future of the largest cable operator in the United States and, by extension, the entire cable industry.
As the tech world becomes more splintered, it’s become increasingly difficult for the vast array of equipment needed to run a cable operation to “talk” together. Downingtown represents something akin to a 21st-century Rosetta stone, through which Comcast can untangle software knots, allowing seamless communication between all of its disparate equipment. It is the last place new Comcast products and services go before they go into subscribing homes. It’s the final dragnet to catch and purge software bugs.
“We have [additional] product-engineering labs that develop and integrate and work out bugs,” said Comcast senior vice president of testing and operations Charlotte Field. “When [those products] get to Downingtown, we put them on this end-to-end network, to see how they work on our total network — our converged network.”
Its official name is “the Comcast end-to-end test and integration center,” but most people seem to call it by its location. Downingtown. It’s an exact replica of the company’s national network, with links to companion labs in Denver, Bishop’s Gate, N.J., and Moorestown, N.J. All of the largest cable operators have similar operations in some form or another.
“BEES IN THE DARK”
The lab, which began its first tests last fall, helps the cable giant avoid minor and major technical glitches. So, for example, engineers here can test if a software update for a set-top box actually fixes the problem — without corrupting an earlier software release.
Currently, the lab is clearing the bugs out of three applications: Caller ID on TV, which interrupts a program to show the ID and number of incoming calls; a feature that helps consumers switch to the HD version of an SD video stream; and Tru2way TVs and set-top boxes, which will allow interactive ads and are to be available to consumers this fall. Several other tests are also underway, Comcast executives said.
Ultimately, the lab will be the final stop for potentially dozens of services and applications riding shotgun with each other on Comcast’s video, broadband data and voice plant.
“It’s all about testing to make sure anything new can be provisioned and billed for, and to make sure we have the right tools to understand what kind of problems can arise,” Field said.
Comcast wouldn’t say how much it spent to build the Downingtown dragnet, but the price tag is estimated to be more, and perhaps substantially more, than $25 million, according to one person familiar with the costs.
More than anything, cable’s need for such facilities reflects how critical software has become for the information technology systems needed to support new services. Finding problems in a world of software, as one cable technologist likes to quip, is like getting stung by bees in the dark: You know they’re there, but you can’t see them.
The old rule of thumb that cable’s capital spending is 80% hardware and 20% software is starting to invert. That’s largely because cable technology was once primarily physically tangible: an amplifier, a roll of coaxial cable, an F-fitting for the end of a piece of cable.
Those physical artifacts are still around, but the far more complicated part is software. The set-top box, now designed to be inside the new Tru2way HDTVs coming to retail later this year, goes away, from a tangible perspective. Implementing it is a complicated twist of firmware, software stacks, operating systems, middleware, and applications. And it’s all invisible.
And that’s the reason for Downingtown — to be the secret decoder ring that brings software problems into the visible domain.
SOFTENING THE NETWORK?
As recently as two years ago, if you asked a cable CTO what was the hardest challenge faced by a system, the phrase “hardening the network” was high on the list. Hardening the network meant getting serious about best practices on craftsmanship — from splicing individual strands of fiber together, to crimping on F-connectors. It meant developing consistency around tests and measurements, to make sure the right signal levels existed for the best possible pictures, fastest data speeds, and best sound quality. Much of it was driven by the addition of voice services, which necessarily must support 911 emergency calls.
To “harden the network” was to develop policies for spare equipment and redundancy, so that if a link went down on the west end of town, a mechanism was there to quickly open up another lane to subscribing households. It meant paying closer attention to “telemetry,” which also goes by “network monitoring.” (For years, network monitoring was among the first things to get sliced during budgeting negotiations. No longer.)
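The redundancy idea above, where a downed link on one side of town triggers another lane to open, can be sketched in miniature. Everything here (the segment names, the routing function, the failure logic) is an invented illustration, not actual cable plant software.

```python
# A minimal sketch of ring redundancy: if any segment on the primary path
# fails, traffic is rerouted the other way around the ring.

class Segment:
    def __init__(self, name, up=True):
        self.name = name
        self.up = up

def pick_path(clockwise, counterclockwise):
    """Choose whichever direction around the ring is fully healthy."""
    if all(seg.up for seg in clockwise):
        return "clockwise"
    if all(seg.up for seg in counterclockwise):
        return "counterclockwise"
    raise RuntimeError("ring severed in both directions; roll a truck")

cw = [Segment("hub-to-west-end"), Segment("west-end-to-node")]
ccw = [Segment("hub-to-east-end"), Segment("east-end-to-node")]

print(pick_path(cw, ccw))   # clockwise, while everything is up
cw[0].up = False            # a link goes down on the west end of town...
print(pick_path(cw, ccw))   # counterclockwise: the other lane opens
```

Real ring protection (SONET APS, Resilient Packet Ring, and the like) does this in hardware within tens of milliseconds; the sketch only captures the decision logic.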
These “plant-hardening” techniques became more critical as the cable industry, once made up of literally hundreds of separate operators going back 60 years, has consolidated into a handful of giants. The way Continental Cable did things was different than the way TeleCable did things, which was similar to the way Cox Communications did things, but different than how Adelphia Communications or Tele-Communications Inc. did them.
Reality, in cable technology, is this: Every system is at least a little different than the next. From amplifier spacing to bandwidth maximums to optical layouts to headend components to conditional access and encryption, it’s entirely plausible that no one cable system is exactly like another.
Even the “500-home node” doesn’t necessarily serve precisely 125 homes to the north, south, east and west of its location, because neighborhoods and towns just didn’t evolve that way. One side of town grows faster than the other, or uses more on-demand services than the other.
Because these software and application differences matter so much, today’s cable technologists are now talking about “softening the network.”
At the recent SCTE Cable-Tec Expo in Philadelphia, Comcast executive vice president of national engineering and technical operations John Schanz used the term on an early morning breakfast panel. A few hours later, his colleague, chief technical officer Tony Werner, echoed the idea in a different panel discussion.
“Software has always been an important part of the business — but it becomes much more relevant now, in terms of testing, uniformity on requirements, openness,” Schanz said at the breakfast. “The network is softening as part of the evolution toward merging multiple products and experiences onto a single network.”
Comcast is not alone in the pursuit of an end-to-end integration lab. Time Warner Cable operates one, in Charlotte, N.C. In Atlanta, Cox links its interoperability tests with a gating system — suppliers must get through each gate before advancing to the next. Not all of the gates are technical. The earlier Cox gates determine whether a product should even be on its plant, by way of business models and product viability.
CABLE ANATOMY 101
The physical anatomy of a cable system goes something like this: A national fiber optic network links into regional fiber rings, which encircle cities and towns. The rings connect to headends and headends to distribution hubs. Hubs connect over fiber to nodes, nodes connect over coaxial cable to homes.
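The anatomy above can be written down as a simple containment hierarchy. The element names are invented for illustration; real systems hang many hubs off each headend and many nodes off each hub.

```python
# Backbone -> regional ring -> headend -> hub -> node -> homes,
# as a nested structure. Names are hypothetical.

topology = {
    "national-backbone": {
        "regional-ring": {
            "headend": {
                "distribution-hub": {
                    "node-204": ["home-1", "home-2"],  # coax runs node-to-home
                }
            }
        }
    }
}

def path_to_home(tree, home, trail=()):
    """Walk the hierarchy and return the chain of elements down to a home."""
    for name, children in tree.items():
        if isinstance(children, list):  # a node: its children are homes
            if home in children:
                return trail + (name, home)
        else:
            found = path_to_home(children, home, trail + (name,))
            if found:
                return found
    return None

print(path_to_home(topology, "home-2"))
# ('national-backbone', 'regional-ring', 'headend', 'distribution-hub', 'node-204', 'home-2')
```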
The nervous system of a contemporary cable system, traditionally called “the back office” or “billing system,” is what’s different now. Nowadays, it’s called “IT” (information technology), and it comprises all the software necessary to sell various services to each individual customer: Say a customer wants caller ID on the video service, and an 8 Megabit-per-second package for her data service. All of that requires “provisioning” of the customer’s devices and services, and linking into the systems that send the bill at the end of the month.
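The provisioning workflow just described, where a service order has to touch both the customer’s devices and the billing system, can be sketched as a toy. The service names and prices (in cents) are invented illustrations, not a real rate card or back office.

```python
# A toy sketch of provisioning: activating a service configures the
# device side and links the charge into billing, in one step.

from dataclasses import dataclass, field

CATALOG_CENTS = {            # hypothetical monthly prices, in cents
    "caller-id-on-tv": 295,
    "data-8mbps": 4295,
}

@dataclass
class Customer:
    name: str
    services: list = field(default_factory=list)
    monthly_bill_cents: int = 0

def provision(customer, service):
    """Activate a service for the customer and link it into billing."""
    if service not in CATALOG_CENTS:
        raise ValueError(f"unknown service: {service}")
    customer.services.append(service)                      # device side
    customer.monthly_bill_cents += CATALOG_CENTS[service]  # billing side

c = Customer("subscriber-123")
provision(c, "caller-id-on-tv")
provision(c, "data-8mbps")
print(c.monthly_bill_cents)  # 4590
```

Keeping money in integer cents, rather than floats, is the standard way to avoid rounding surprises in billing code.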
In some ways, Downingtown pales in comparison to its companion lab in Denver, nestled near the Rockies with an array of 10-meter dishes on the outside, and an extensive video and production orientation inside. Comcast absorbed the facility as part of its purchase of TCI.
As the former “Headend in the Sky,” or HITS facility, the Comcast Media Center remains the breeding ground and physical launching pad for new video products — like its recently announced “Axis” program, to assist software developers wanting to write applications that will run on Comcast’s Tru2way platforms. The CMC will continue to provide mission-critical uplink services of broadcast and on-demand video, and will pave the company’s way toward advanced video compression, like MPEG-4.
But Downingtown is as different from the Comcast Media Center as suburban Philadelphia is from Denver. The Downingtown center is more about the general, industrial shift to software and applications that run on a “converged” network — meaning not within the traditional and isolated “silos” of voice gear, video gear and data gear. (Staffers have already shortened how they talk about the multiplatform, silo-busting approach: “cross-plat.”)
Inside, the end-to-end Downingtown lab is a combination of office space, used as applications labs, and a 15,000-square-foot data center. The application labs are used by 50 Comcast employees now, as well as any technology suppliers wanting to make sure their gear will work on Comcast’s converged plant.
“We built it so that vendors can come in to do early interoperability testing, to isolate problems they may not see in their test facilities — but would in ours,” said Field.
Rack upon rack of gear lines the vast building. There’s a training room, and a legacy testing lab, important so that new applications don’t crash devices that are already installed in people’s homes. There’s also a troubleshooting area, to fix problems as they occur, but preferably before they occur.
Disaster recovery methodologies are studied here, so that redundant routes can be instantly activated to move information and communications traffic. Likewise, automatic testing lets an operator test multiple devices with multiple applications without having a person sitting there loading each application. Essentially, more testing, faster. Comcast engineers can also test remotely, which means they can do this testing from somewhere other than where the equipment is.
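The automated-testing idea above, sweeping every application across every device model without a person loading each one, amounts to walking a test matrix. The device and application names below are made up for illustration, and the pass/fail check is a stand-in for a real harness.

```python
# A minimal sketch of matrix testing: every app on every device model,
# with failures collected for the troubleshooting area.

from itertools import product

devices = ["legacy-settop", "hd-dvr", "tru2way-tv"]
apps = ["caller-id-on-tv", "sd-to-hd-switch", "interactive-guide"]

def load_and_check(device, app):
    """Stand-in for a real harness: load the app, report pass/fail."""
    # Pretend, for illustration, the legacy box can't run the guide.
    return not (device == "legacy-settop" and app == "interactive-guide")

results = {(d, a): load_and_check(d, a) for d, a in product(devices, apps)}
failures = [combo for combo, ok in results.items() if not ok]
print(len(results), "combinations tested;", len(failures), "failure(s)")
# prints: 9 combinations tested; 1 failure(s)
```

The payoff is exactly what the lab is after: more testing, faster, with the failing combinations isolated automatically.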
For power, Downingtown features substantial generator backup. Battery backup, too — enough for 15 minutes of clean, uninterrupted power. That translates into a room full of stacked, car-battery-sized batteries.
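The 15-minute ride-through above implies a sizing calculation: load times duration, divided by per-battery capacity. The data-center load and the per-battery energy figure below are assumed numbers for illustration; the column gives neither.

```python
# Rough battery-sizing arithmetic for a 15-minute ride-through.
# Load and per-battery capacity are assumptions, not reported figures.

import math

def batteries_needed(load_kw, minutes, battery_kwh=1.2):
    """Count of car-battery-sized units (assumed ~1.2 kWh each) required."""
    energy_kwh = load_kw * minutes / 60
    return math.ceil(energy_kwh / battery_kwh)

print(batteries_needed(load_kw=400, minutes=15))  # 84, for an assumed 400 kW load
```

Fifteen minutes is typical for battery backup in this role: long enough to bridge the gap until the generators spin up and stabilize, not long enough to run the building on its own.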
Where the racks run out, expansion space exists — an adjacent and unfinished room on one end of the building with nearly 20,000 square feet of unused space — for now.
Said Schanz: “What we’re doing right now is preparing the infrastructure. The beginnings of the software ecosystem are coming together. It’s a journey, but we’re definitely on it.”
This piece originally appeared as a cover story in Multichannel News.