by Leslie Ellis // December 31 2007
Three big topics permeated the engineering lingo scene this year, and will assuredly gibberish themselves well into 2008.
Big topic Number One: Advanced advertising, and the furious, industry-wide paddling, via the so-called “Project Canoe,” to “interconnect the interconnects” around the cabled U.S.
Prior to the summer news of the Canoe, the state-of-the-state in advanced advertising was the notion of “dynamic VOD.” That’s the one about splicing a newer, fresher ad into a stored video-on-demand title. Up until then, ads were baked in as each title was formatted for storage.
Here’s what we’ll probably hear a lot about, tech-wise, in the ’08 advanced advertising scene: SCTE DVS 629. Spelled out, it’s the Society of Cable Telecommunications Engineers’ Digital Video Subcommittee, which is working on a standard, numbered 629.
DVS 629 adds a different flavor of dynamic into dynamic VOD. If the 2007 dynamic VOD (known in tech circles as SCTE DVS 30 and 35) was about how to splice a new ad into an old title, then the 2008 dynamic VOD, via SCTE DVS 629, is about how to pick which ad gets spliced.
Among the gibberish that 629 introduces: “ADM,” for “Ad Management Service;” “ADS,” for “Ad Decision Service;” “CIS,” for “Content Information Service;” and “POIS,” for “Placement Opportunity Information Service.” More on these in a future translation. Promise.
Big Topic Number Two: The scuffle-pocked work to find a reasonable way to do “two-way plug and play” connectivity between CE devices and cable services. This one already gave us CableCards, plus the shifting nomenclature for set-top middleware. Now, we call it “OpenCable Platform.” We used to call it “OCAP.” A new name is reportedly imminent.
On the CE side, ’07 gave us “DCR+,” where the “DCR” stands for “Digital Cable Ready,” which raises a wearying list of questions about protocols and potentially bifurcated workloads.
Big Topic Number Three: Anything that moves or lives within the swiftly-growing world of Internet Protocol (IP). This includes strategic things, like interconnecting individual cable backbones for handing off voice calls more economically (read: avoid termination fees paid to telcos).
It includes arcane-but-important things, like ENUM (pronounced as the letter E, plus numb), which stands for “Telephone Number Mapping.” Think of it as the white pages for routers. It’s what works in the background, converting phone numbers into IP addresses.
And then there’s that giant whooshing sound that is the movement of video to the Web, which pairs nicely with IP-side inventions like DOCSIS 3.0. That’s the one where you bond together four or so 6 MHz channels, sum the throughput, and wind up with burst downstream speeds of 150+ Mbps. (Mercy!)
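The channel-bonding arithmetic falls out quickly, sketched here in Python. The 38.8 Mbps figure is the usual 256-QAM payload of one 6 MHz channel; the “four or so” bonded channels is the column’s number:

```python
# Back-of-the-envelope DOCSIS 3.0 bonding math: each bonded 6 MHz
# downstream channel contributes about 38.8 Mbps, and the throughputs sum.
PER_CHANNEL_MBPS = 38.8  # 256-QAM payload of one 6 MHz channel

def bonded_downstream_mbps(channels: int) -> float:
    """Aggregate burst throughput for a bonded group of QAM channels."""
    return channels * PER_CHANNEL_MBPS

print(bonded_downstream_mbps(4))  # about 155 Mbps -- the column's "150+"
```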
That’s the short list. I wish you a thriving and translatable 2008.
This column originally appeared in the Platforms section of Multichannel News.
Bob Zitter, Chief Technology Officer for HBO, spoke with me (with Times Square in the background, for real!) about how the premium network views the market for mobile video. This segment aired at the 2007 Consumer Electronics Show.
Video courtesy The Cable Channel.
by Leslie Ellis // December 10 2007
Nothing like a few days amongst the “OpenCable-interested” to turn up a few new viewpoints. The forum: An OpenCable Community Conference, held in parallel with the Nov. 28-30 CableNext conference, in Santa Clara, Calif.
Probably the most important takeaway: That “Open” in “OpenCable” is no longer an oxymoron. In case you’re behind on your two-way plug-and-play reading, here’s the Cliff’s Notes version: Open means open. Inclusive of all “multichannel video program distributors,” or MVPDs. MVPDs are cable, satellite, and telco video providers.
That’s what the National Cable & Telecommunications Association proposed to the Federal Communications Commission about how to solve the complex problem of stitching a set-top into a two-way TV.
And yes, that version of “open” does unbolt a new realm of competition. Nowhere was this more clear than in a Nov. 29 keynote address by Dan Brenner, senior VP of law and regulatory policy for the NCTA: “This gets lost, and it’s an important fact in a complex debate — there is nothing about OpenCable that prevents a TV set manufacturer from having an IP video access point on the set, that completely bypasses the cable product.”
Which leads directly to a second important takeaway: From here on out, it’s an apps, apps, apps world. Hardware cedes to software. The network is ready. Now it’s about what applications go where. It’s time for the next wave of Ted Turners.
Other Takeaways
The OpenCable community, for what it’s worth, breaks down into three groups: Content owners, video distributors, and application developers.
If you’re a program network, you’re probably most interested in “bound” applications, which tie a clickable thing into your shows. Maybe it’s to vote, or to tunnel into stored episodes.
In that case, your specific area of OpenCable interest is the “Enhanced TV,” or “ETV,” subset of the OpenCable platform. (You’ll also hear it called “EBIF,” pronounced ee-biff, which is the name of the technical specification behind ETV. It stands for Enhanced TV Binary Interchange Format.)
If you’re an operator, you’re probably more interested in the “unbound” aspects of the platform, for unity and scale. You probably already support hundreds of combinations of set-tops, operating systems, and underlying chip sets. A big plus of the OpenCable platform is that it reduces the need to support different software for different set-tops.
And if you’re an applications developer, you’re sitting somewhere near the sweet spot. (See “next Ted Turner,” above.)
But a Dan Brenner keynote isn’t a Dan Brenner keynote without a bit of re-usable mirth. This time, it was about the readiness of OpenCable — a “tweener” in its own right, as an 11-year-old.
“I’d say you could stick a fork in it, but, it’s never a good idea to stick a fork into the back of a TV set.”
Good point.
This column originally appeared in the Platforms section of Multichannel News.
by Leslie Ellis // November 27 2007
The quintessentially techie “QAM,” for quadrature amplitude modulation, is back in the engineering limelight, this time with a new prefix: Edge.
When you hear people talking “edge QAMs,” chances are high that you’ll hear the term “switched digital video” within a few sentences. That’s because digital video switches created the need for edge QAMs.
It goes like this: All digital cable services carried over cable plant use QAM modulation. It’s the conveyor belt. It’s what moves video, voice and data services from headends to homes.
(Brush-up basics: People tend to pronounce it as a word — “kwahm,” as in, rhymes with “Guam.” As for physical location, it’s a headend/hub thing: An unassuming, rack-mounted metal box, enclosing a series of slide-in cards. Carrying capacity goes like this: One QAM equals 38.8 Mbps of downstream data, or two to three HD streams, or 10 to 12 regular digital video streams. It is the equivalent of one analog channel.)
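That parenthetical’s capacity math can be sketched in a few lines; the per-stream bit rates below are the round MPEG-2 figures these columns use (about 15 Mbps per HD stream, 3.75 Mbps per SD stream), not hard specifications:

```python
# How many streams fit in one QAM channel: divide the 38.8 Mbps payload
# by a per-stream rate and keep the whole number.
QAM_MBPS = 38.8  # downstream payload of one 256-QAM, 6 MHz channel

def streams_per_qam(stream_mbps: float) -> int:
    return int(QAM_MBPS // stream_mbps)

print(streams_per_qam(15))    # 2  -- HD streams at ~15 Mbps apiece
print(streams_per_qam(3.75))  # 10 -- SD streams at ~3.75 Mbps apiece
```

Tighter compression or statmuxing is what nudges those counts up to the “two to three HD” and “10 to 12 SD” ranges quoted above.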
Up until recently, cable providers purchased QAMs on a service-specific basis. The QAM assigned to move that Web page into your cable modem, for instance, couldn’t also be used to move those digital video channels into your high definition TV.
Likewise, a QAM dedicated to video on demand (VOD) couldn’t also be used for switched digital video, even though their architectural constructs are identical.
Enter the edge QAM. It’s built to carry both VOD and switched digital video streams. Ultimately, the edge QAM will move three categories of service (those two plus the Internet Protocol side of the house, meaning data and voice.)
When people say “edge QAM,” then, they mean multi-purpose. They aren’t talking about where it is, physically — at the “edge” of the network, wherever that is. (For directions, see “A Pocket Map to the Edge of the Network,” in the 03/15/05 edition.)
Innovation Abundance
In the big picture of cable technology development, the edge QAM is largely unnoticed — but bubbling with supplier activity. By my count, seven companies are building the gear, up from two a few years ago. Among the providers: Arris, BigBand Networks, Harmonic Inc., Motorola, RGB Networks, Scientific-Atlanta/Cisco, and at least one skunk-works outfit.
Cable operators look for three things when evaluating edge QAM innovation: Price, density, and, for lack of a better term, “open-ness.”
Price is predictable. Right now, QAMs run in the $250 range. The goal, however ambitious, is to lop off the zero.
One rack-mounted unit typically contains cards that carry between eight and 24 QAM channels. That’s the “density.” One pointed area of innovation, then, is upping the density. Higher density, lower price.
And then there’s the “open-ness” piece. Switching is a big move. All major cable operators are or will be doing it as a way to preserve bandwidth. None of them want to get painted into a corner with a single, monolithic supplier, seeking to control the economics of what happens beyond the switch. They’ve seen that movie before, and they don’t like the ending.
An edge QAM is considered a network resource. It talks to at least two other devices: The “session manager,” which sets up the linkage between you and the VOD server (or the switch), when you choose to watch something, and the “resource manager,” which figures out which QAMs are supposed to be moving what stuff to where.
An industrial movement is well underway to open up the conversations (the “protocols”) between those linkages, so that operators can buy things modularly. A switch from one guy, a session manager from another, a resource manager from a third, and edge QAMs from whoever is the most dense, open, and affordable.
Mix and match is the name of the game.
From an overall cost perspective, edge QAMs matter because they currently represent something like 70% of the capital spend for deploying switched digital video, according to the MSO-side technologists who track such matters.
By opening things up and making them more modular, operators reason, the actual QAM costs come down, which brings that percentage down, which makes switching more affordable, which makes Wall Street happy. Lather, rinse, repeat.
This column originally appeared in the Platforms section of Multichannel News.
by Leslie Ellis // November 12 2007
If your company moves high definition TV to its distributors by way of satellite, there’s yet another bandwidth saving superstar to help you get 10 more pounds of HD into that five-pound bag.
What it is: A new way of saving room on those spendy satellite transponders, which can cost a content owner upwards of $125,000/month.
How big of a bandwidth savings? About 30%, say tech people at program networks and aggregators. One content-side engineer explained it this way: “Our transponders right now can carry 47 Mbps of video, and the S2s will carry somewhere between 65 and 72 Mbps.”
That extra 18 to 25 Mbps makes room for another two or three HDTV streams per transponder.
In other words, the satellite transponder that currently carries two or three HD streams (using existing MPEG-2 compression and existing modulation/coding techniques) could now carry five or six (using advanced compression and new modulation/coding techniques).
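Worked out in Python, with one assumption flagged: the ~9 Mbps rate for an advanced-compression (MPEG-4) HD stream is my round number, chosen only to line up with the “two or three more streams” arithmetic above:

```python
# DVB-S vs. DVB-S2 transponder payloads, per the engineer quoted above.
DVB_S_MBPS = 47.0           # today's transponder payload
DVB_S2_MBPS = (65.0, 72.0)  # the expected DVB-S2 range
HD_MPEG4_MBPS = 9.0         # assumed MPEG-4 HD stream rate (my estimate)

for s2 in DVB_S2_MBPS:
    extra = s2 - DVB_S_MBPS
    more_hd = round(extra / HD_MPEG4_MBPS)
    print(f"{extra:.0f} Mbps freed -> roughly {more_hd} more HD streams")
```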
The “S2” in that mention, by the way, is verbal shorthand for “DVB-S2,” which is the bandwidth saving superstar of this week’s translation.
“DVB-S2” is the name of a technical standard. It breaks down like this: “DVB” stands for “Digital Video Broadcasting,” and is the major standards-setting body in Europe. “S” is for satellite. The “2” is the version number of the standard.
(Version one of the DVB-S standard, as a point of reference, blueprinted more than 100 million digital satellite receivers, worldwide. That’s since 1993.)
The underlying context of DVB-S (versions 1 and 2) is all about how to anticipate (and thus prevent) the errors that inevitably occur when blasting bits up and down from space.
From here, the DVB-S2 detail can corkscrew way into the tech-funk. (This is generally true of anything involving satellite technologies. If you find cable tech-talk daunting, spend an afternoon with a satellite engineer.)
Here’s an example from an email last week: “The LDPC codes replace the Viterbi Forward Error Correction (FEC) of DVB-S, whilst the Reed-Solomon code is replaced with a different BCH (Bose-Chaudhuri-Hocquenghem).”
Uh-huh.
How DVB-S2 Works
How it works requires a brief refresher on how satellite transmission works. For starters, the modulation has to be sturdier, because sending stuff up into space is much harsher than sending stuff over a wire. Satellites traditionally use a form of modulation called “QPSK,” for “quadrature phase shift keying.”
Don’t get hung up on the language. To put it in context, the QPSK modulation used for space transmissions is also used by cable systems to transmit in the upstream (home to headend) direction. The upstream path of cable is theoretically as hostile (in different ways) as space.
When you’re sending stuff into a hostile environment, even if you’ve taken extra precautions when imprinting it onto the carrier (modulation), you need to carve out room for reconstructive surgery — just in case something gets really mutilated during the ride.
The safety mechanisms are known as “forward error correction,” or FEC. Techniques vary. Understanding them involves heavy math. Short version: Send extra bits that know where they’re supposed to go, if they get called upon on the ground to stand in for missing data.
DVB-S2 is an improvement in forward error correction, which harnesses improvements in modulation. It’s already in use by some content owners and aggregators. It’s not something that can be easily adopted by DirecTV and EchoStar, for legacy reasons: On the receive end, it requires gear that can demodulate and decode in the new way.
Conversationally, DVB-S2 tends to move in step with MPEG-4 compression. The reasoning: You’ll need new gear to do MPEG-4, and you’ll need new gear to do DVB-S2. Might as well make both changes at once.
And so ends this unintended mini-series on HDTV and bandwidth. To sum it all up: People who pay to move big HD streams over satellite will soon get a trifecta of bandwidth savers.
One is advanced compression, like MPEG-4. Two is better modulation, to move stuff up and down from space. Three is better error correction, to reconstruct streams on the ground.
This column originally ran in the Technology section of Multichannel News.
by Leslie Ellis // October 29 2007
By now you’ve probably heard the one about the big cable operator with the big plan to boost digital capacity by 30% next year.
That’s good for the HDTV channel explosion, obviously. Happiness accelerator: No ditching of the 14 million or so digital boxes already working in people’s homes.
The big cable operator is Comcast. The capacity gain involves a compression improvement, disclosed partially at a recent investors’ conference.
The compression technique launches widely in January, so details will assuredly follow soon. What’s likely to be involved is a largely overlooked member of the bandwidth preservation family: The digital video encoder.
For that reason, the subject of this week’s translation is a brush-up on the language of video encoding. Chances are high that this topic will nudge its way into your conversational life very soon, especially if you follow cable’s shelf space situation with any fervor.
Know going in that this batch of improvements necessarily centers partly on the existing type of video compression, known as MPEG-2. (The “MPEG” stands for “Moving Picture Experts Group.”) It also probably involves some new video processing techniques that are heavily focused on measurable video quality.
Know also that there’s really no way to apply newer types of compression, like MPEG-4, to a bandwidth problem without installing set-top boxes that know what to do with an incoming MPEG-4 stream.
Three Terms
Three terms tend to pop up repeatedly when talking about how to squeeze — encode — a digital video signal: “Dual pass,” “open loop vs. closed loop,” and “lossy vs. lossless.”
Let’s start with dual pass. Not surprisingly, it’s a way of compressing video in two swipes. Swipe one is the encoder’s best shot at using the components of the MPEG-2 standard to squish down a video.
Swipe two is almost always the secret sauce of the encoder manufacturer. It’s a full second look at the compressed stream, to find ways to squeeze the bit rate down even more.
Right now, with HDTV mostly a volume game, bit rate reduction often leads any compression discussion. The next HDTV chapter, though, will be about picture quality. The ideal encoder accomplishes a good squish without noticeably degrading the quality of the picture.
Fact: Most professional-grade encoders use “dual pass” techniques. The real action is in what they do within that second pass.
Soon after “dual pass,” you’ll run into “closed loop” and “open loop.” The loop is the linkage (or not) between a video encoder and a statistical multiplexer.
Refresher: If the work of a video encoder is to squish one digital video stream, the work of the statmux is organizing lots of those squished streams for the ride toward homes. (Lingo translation: “Statmux” is tech-talk shorthand for “statistical multiplexer.” “Mux” is also acceptable. Both remove more than five syllables.)
A good statistical multiplexer is like my friend Diana, who can take one long look at the overhead bin on a small airplane, and at the pile of stuff needing stowage — then magically fit the hat box, guitar, duffels, crutches, rolling bags, ficus tree, parkas and backpacks into the bin.
The loop that’s being closed in a “closed loop” scenario is the one that’s created between the encoder and the statmux. If it’s closed, those two machines are working together to organize bits for the ride. If it’s open, they work independently. There are pros and cons to both.
At some point, you may hear mention of “lossy.” (If you do, you’re hanging with the advanced class.) What’s lost in “lossy” compression is the ability to reconstruct the original material exactly. Lossy algorithms tend to make files smaller.
Lossless compression squeezes out redundancy rather than tossing information, so that the underlying content can always be perfectly reconstructed. It’s sort of like when you use a “zip” program to shrink an important file you need to send to Harry, because otherwise Harry’s email server keeps kicking it back to you with a note that it’s too big.
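The zip analogy is easy to demonstrate. Python’s standard zlib module uses the same DEFLATE-family math as zip tools: a redundant input shrinks a lot, and decompression hands back the original, byte for byte:

```python
import zlib

# Highly repetitive data compresses very well.
original = b"bandwidth is precious " * 100
squished = zlib.compress(original)

print(len(original), len(squished))           # e.g. 2200 vs. a few dozen bytes
assert zlib.decompress(squished) == original  # perfect reconstruction: lossless
```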
Encoder-speak is coming back into the high-tech vogue because necessity is the mother of invention: HDTV signals are huge. Bandwidth is precious. The installed base of digital boxes using MPEG-2 compression dwarfs the number of boxes that can “see” MPEG-4 signals.
Something had to be done.
This column originally appeared in the Technology section of Multichannel News.
by Leslie Ellis // October 15 2007
Three things are indisputable about high definition television: More channels are coming, more homes can afford the HD sets to display them, and consumers harbor big opinions about which service gives the best — and worst — HD pictures.
Right now, those opinions are all over the place. Depending on the blog, the “hands down best” HD pictures come from AT&T. And DirecTV. And cable, and EchoStar, and Verizon.
Ditto for the “hands down worst.”
DirecTV is on the receive end of most of the recent blog angst, mostly because it’s moving at ramming speed toward its year-end promise of 100 channels.
It’s risky, though, to take comfort in DirecTV’s woes.
Why? Because there is no official technical benchmark for what constitutes “an HDTV picture.” Ditch any daydreams about a button on the remote that lets you know what resolution you’re getting, like you can do on the PC to find out what broadband speed you’re really getting. So far, it doesn’t exist.
What we’re left with is … people’s opinions. Even the experts in picture resolution are quick to point out that quality is highly subjective. My eyes see differently than yours, and your eyes see differently than the person nearest to you right now. It’s a byproduct of being a human.
Plus, glitches in picture quality can’t easily be pinned to how a program is distributed — over satellite, over cable, over fiber, over copper. From the time a program is created to the time it shows up on your snazzy new flat-panel HD set, it’s probably been “touched,” meaning manipulated, at least four times.
Then there’s the simple fact that today’s larger, higher quality TV sets show glitches larger and more distinctly. (Aside: At the Consumer Electronics Show, in January, a chief technologist from a major program network said that his big “aha” was that in 2008, TV displays will outperform distribution networks, in terms of how much picture information they can display.)
But back to the blog buzz, which seems to center on what programs are “true HD,” versus an “up-rezzed” version. (“Up-rez” is HD shorthand for “up-resolution,” also known as “up conversion.”)
Here’s what that means: At any program network, right now, some fraction of its content library was mastered in an HD format. The rest was not. The latter category will need more bits, in order to “look good” on those big, beautiful HDTV displays. That’s the up-rez.
The extent to which a program or movie can be “up-rezzed” also depends on how it was stored. If it’s on film, you’re good. If it’s on videotape, not so good.
The process for creating master film reels in a digital, high definition format is known as “telecine” (pronounced “tele-sinny”). It usually starts with a clean-up, to remove any dirt, scratches, hair, or other visible glitches. It’s expensive and time consuming, but at the end, it’s true HD.
The process for up-rezzing videotape content is less accurate, and is part (part!) of the reason why some pictures look better than others on HD screens.
Up-resolution, as a technique, has two main components: Line-doubling, and interpolation.
Line-doubling is a method used on the vertical part of the picture, as it is “drawn” on the screen — more lines, more bits, more picture. Interpolation is the addition of bits within the horizontal lines, in a way that is hopefully creative enough to estimate what’s really happening in the picture. If the interpolator sees a line of red dots, maybe it adds another red dot, for instance.
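To make the guessing concrete, here’s a toy interpolator that assumes the simplest possible scheme (averaging neighboring pixel values); real up-converters use far more sophisticated filters:

```python
def interpolate_row(row):
    """Widen a row of pixel values by guessing a new pixel between each pair."""
    out = []
    for left, right in zip(row, row[1:]):
        out.append(left)
        out.append((left + right) / 2)  # the "clever guess" between known pixels
    out.append(row[-1])
    return out

print(interpolate_row([10, 20, 30]))  # [10, 15.0, 20, 25.0, 30]
```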
The point is, up-rezzing is based on clever guessing, but it’s a guess. It is not “true HD.” From there, that up-rezzed show is compressed and sent along — where it might be compressed, uncompressed, and re-compressed a few times before it gets to the HDTV in front of Consumer Jane’s couch.
Right now, the name of the HD game is volume: Who has the most channels. The next chapter will probably be about quality.
Quality, in picture resolution, is about how much picture information there is, which depends on how many bits are used, which depends on how much bandwidth is available.
This column originally appeared in the Technology section of Multichannel News.
by Leslie Ellis // October 01 2007
It used to be that three blockers predictably conspired to gum up innovation and slow down new product introductions on the video side of the cable house.
One was the billing system. The bindweed of the back office, its tendency was to wind its way into all other mission-critical realities, like new service activation, customer care, even dispatch.
Two was “the guide.” Like a gas, it seeped into all available space within those early, already-constrained digital boxes. Plus it seemed to want to grow up to be the foundational platform for anything that showed up on the screen. It wanted to be the guide and the middleware.
Three was the conditional access and encryption system. Because the supply side for keeping content safe was split down the middle — between Motorola and Scientific-Atlanta — making changes meant constantly nudging two companies with mutually exclusive technologies.
The three blockers had three things in common: Making changes took too long, cost too much, and yielded less control than operators wanted.
That’s when the game changed. Maybe the time had come to take over for real, various operators said. Maybe it’s time to build our own (fill in the blank).
Fast forward to now: Billing still tilts toward “buy” in build v. buy. It’s painful enough just to modify the billing system. Changing it out? You might as well remove your own veins.
The guide and the conditional access mechanisms, however, did tip toward “build.”
Two things happened to the guide. One was the OpenCable Application Platform — what we used to call “OCAP.” It solved how the guide wasn’t to be the middleware. Plus, OCAP solved another problem: How to give cable services a national footprint.
The guide also became a “build” item for some operators — notably Comcast and Time Warner Cable. Both made substantial investments in the “build your own” alternative, either by absorbing guide-oriented companies outright (Comcast), or by assembling brainpower in-house (Time Warner).
From the outside looking in, it’s difficult to quantify whether these “build” moves made costs drop and innovation accelerate. Here’s one way to look at it: Inventions like “Start Over,” “Look Back,” subscription VOD, and the ability to do a self-upgrade from the TV screen are all reasonably new. They came, in part, from the “build your own” guide camp.
A couple of things happened on the conditional access side, too. The biggest chapter, since the “good old days” of embedded security, was the CableCard. It achieved the goal of a national cable footprint, for those consumers who purchase a TV set with a CableCard slot, take it home, and decide to get a scrambled service.
Somewhere in all of that, an effort called “downloadable conditional access” (“DCAS”) was born, in the form of an operator-owned company, PolyCipher. PolyCipher is based in Denver, but incorporated in Delaware, as part of “NGNA” — the “Next Generation Network Architecture” joint venture between Comcast, Time Warner, and Cox.
To put this in perspective, NGNA is known within technical circles as a place where strategic necessities are turned into a plan for products. The channel bonding aspects of DOCSIS 3.0, for instance, were born within NGNA. So was the notion of modular headend gear for broadband data. Ditto for the set-top line Comcast calls “RNG” — initially short for “Real Next Generation,” then changed to “Residential Network Gateway,” depending on whose version of the story you hear.
The difference this time is that the “build” was done as a separate company — understandable, given that this is crypto stuff. But still, it doesn’t take that much of a logic leap to wonder if the work of PolyCipher fits into the “get it done, move on” model of the prior NGNA efforts.
In other words, design the DCAS chip, find someone to make it, participate in the tape-out (initial layout), get it into production — then retreat. Move on to the next strategic priority.
What’s better, build v. buy? It’s a controversial philosophy question that varies, depending on what’s being built or bought. At the very least, though, it probably means that those operators entrenched in the “build your own (fill in the blank)” are a bit more empathetic to the timing and cost issues once shouldered solely by their suppliers.
This column originally ran in the Technology Section of Multichannel News.
by Leslie Ellis // September 17 2007
If your company’s growth depends on the cable industry’s available bandwidth, then you’re probably wondering what all this crazy double-talk about the “digital TV transition” means. Especially the part about “dual must-carry.”
You’re not alone. The “DTV transition” is a dense and trippy subject. For that reason, this week’s translation shows how to do the bandwidth math of a cable system’s carrying capacity.
Let’s put a finer point on it. Say you’re a program network. You’re pitching three new HD channels to the person at the cable company who decides what goes on, and what doesn’t.
For the past few months, you’ve been hearing variations of “love the idea, Bob, but for each of your new HD channels, I need to remove four standard definition (SD) channels.”
You cross your arms. Nod. Blink slowly. And silently wonder: How come?
The math goes like this: Cable systems built to 750 MHz have about 33 digital “channels,” each of which is 6 MHz wide and has a total carrying capacity of 38.8 Mbps. (Cable systems built to 860 MHz have 51 digital channels.)
One digital, SD stream uses 3.75 Mbps of bandwidth. About 10 can fit comfortably into a 6 MHz channel. The math: 38.8 divided by 3.75.
Likewise, one HD stream, using conventional compression, uses 15-ish Mbps of bandwidth. About two can fit into a 6 MHz channel, with some wiggle room. The math: 38.8 divided by 15.
So, for an operator whose shelves are full, adding one HD stream (15 Mbps) could well mean removing four SD streams (3.75 x 4).
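The swap ratio falls straight out of the per-stream rates above:

```python
SD_MBPS = 3.75  # one standard-definition stream
HD_MBPS = 15.0  # one conventionally compressed (MPEG-2) HD stream

print(HD_MBPS / SD_MBPS)  # 4.0 -- one HD stream in, four SD streams out
```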
In both cases, SD and HD, operators often apply a method interchangeably known as “rate shaping,” “grooming,” and “statistical multiplexing” to squeeze, say, two more SD streams, or one more HD stream, into that 6 MHz “container.”
Refresher on statmuxing: It’s like driving at rush hour when you’re in a hurry. You seek the blank spaces between the cars in the other lanes as your way to dart ahead. Same with rate shaping — it’s a way of organizing the bits more efficiently for the ride.
But all of this was before the Federal Communications Commission decided that cable must carry broadcasters in digital, and in analog, until 2012. Take a market like LA, or New York, each of which supports a couple dozen over-the-air networks. What’s the worst that can happen?
In a dual must-carry environment, a market with, say, 20 broadcasters seeking that treatment, all transmitting in HD, would force cable operators to clear off ten 6 MHz channels (assuming those networks aren’t already carried).
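The worst-case arithmetic, sketched out (two HD streams per 6 MHz channel is the conventional-compression figure used in the math above):

```python
import math

HD_PER_CHANNEL = 2  # ~15 Mbps HD streams per 38.8 Mbps channel

def channels_cleared(hd_broadcasters: int) -> int:
    """6 MHz channels needed to carry this many HD must-carry broadcasters."""
    return math.ceil(hd_broadcasters / HD_PER_CHANNEL)

print(channels_cleared(20))  # 10 -- nearly a third of a 750 MHz plant's 33 digital channels
```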
The good news is, the FCC didn’t mandate must-carry of “all content bits.” Had that passed, operators would’ve been barred from statmuxing, re-compressing, or otherwise handling the incoming broadcast signal for the sake of transmission efficiency.
But back to you. At this point, you’re probably wondering: If dual must-carry is such a squeeze on bandwidth that’s already pinched, then why, oh why, would the National Cable & Telecommunications Association be glad that its constituents are now “allowed” to dual-carry broadcasters for three years after the transition? Isn’t the point to take back that analog bandwidth (to make more room for your HD stuff)?
It’s not that operators planned to yank the broadcast networks as soon as their analog signal went dark. Most realize the power of being the only guy in town who can serve all the TVs in the house — even that junky one in the back bedroom — with the wire that comes out of the wall. Meaning, without having to put a box on every set.
More, the good news of the Sept. 11 FCC decision is that its dual-carry obligation lasts three years, and not “perpetually,” as had been proposed. Plus, it is any trade association’s preference that matters like this be handled by the businesses involved, not by the government.
And, you wonder: What happens, then, in 2012, after the sunset? A bandwidth glut? Answer: Probably not. Probably, operators will continue to reclaim analog channels (yours included, as negotiated), at a measured and steady rate — which is hopefully fast enough to accommodate all the “more HD” that’s coming.
This column originally appeared in the Technology section of Multichannel News.
by Leslie Ellis // September 03 2007
In case you’re out of late summer reading, or were wondering whatever happened with those “two-way negotiations” between cable and the consumer electronics industry, there’s always those 500+ pages of collective snark sent to the Federal Communications Commission on August 24.
The comments — submitted by the lawyers of AT&T, Comcast, DirecTV, EchoStar, Intel, Microsoft, Samsung, Sony, TiVo, and Verizon (among others), as well as by trade groups like the Consumer Electronics Association and the National Cable Television Association — were filed in response to the FCC’s Third Further Notice of Proposed Rulemaking, issued in late June.
That’s the one that seeks the next chapter of the one-way plug-and-play rules, established in 2002. It was the one-way rules that gave us those instantly obsolete TVs with built-in CableCard slots.
(To find the reply comment mother lode, go to www.fcc.gov, click on the “search” button at the top right of the screen, scroll down to “Search for Filed Comments — ECFS” and enter proceeding number 97-80. Otherwise, read on. This week’s translation begins to summarize 16 of the 100+ filings.)
It really isn’t as dull as it sounds. This is five years of pent-up frustration (and God knows how many aggregate air miles), trapped inside a gag order. If you’ve been following it, it’s one of those scuffles that forever elicits the same tense retort: “I just can’t talk about it.”
After a while, we all quit asking. That’s why it feels nearly voyeuristic to dip into the pages and pages of comments.
If your time is limited, or if you’re just not into reading that many FCC filings, start with the 80-page seethe from the National Cable Television Association. It’s deliciously pointed. The gloves are off. Even the footnotes are juicy.
Cable’s irritation is directed at the collective bunch of respondents who believe, with casual fervor, that cable services — specifically, the guide, on-demand, pay-per-view, and switched digital video — should be physically decoupled from the cable plant, and handed over to the devices that would play them.
At the helm of that idea is the Consumer Electronics Association (filing: 163 pages), with its “DCR+” proposal, where DCR stands for Digital Cable Ready, with that plus sign.
NCTA calls DCR+ “consumer minus,” and rails that “the CEA proposal would be the most intrusive regulatory regime ever established.” (Ever.)
To do what the CE side wants, NCTA argues, would require a massive do-over of the protocols currently used by cable to do VOD, the guide, pay-per-view, and switched video. It would require new multi-stream cards, and new versions of leased set-tops. But, as NCTA puts it, “CEA is indifferent to engineering realities in cable.”
Beyond CEA, though, it’s nearly impossible to sort out who’s on which side. Most of the respondents support parts of the cable proposal, parts of the CEA’s proposal, and parts of their own proposals.
Industrial discord, even within the constituent camps, isn’t a big surprise. Everybody wants to protect their babies. Finding useful agreement amongst the multichannel video programming distributors (“MVPDs”) who filed, however, is somewhat surprising.
The point of accord, loosely paraphrased: It’s a bad idea for government to specify technologies in a rule. Different platforms (cable, satellite, telephone) use different technologies to reach consumers, and that’s as it should be. Let networks be networks. Encourage some kind of common interface, to let consumer devices attach to network services.
Naturally, the non-cable MVPDs argued loudly that they don’t want that common interface to come from their cable competitors. AT&T played the “we’re not a cable service” card; DirecTV and EchoStar, exempt so far from any navigation device rules, asked to keep that status.
Bogus, said cable (repeatedly). The existing “negotiations” are bound to an FCC ruling that’s more than 10 years old. Back then, satellite providers hadn’t made much of a dent in subscription television. Video wasn’t yet a twinkle in telco budgets. DSL was the name of their game.
The NCTA’s filing should be required reading for everyone who reads this newspaper — especially if you’ve ever wondered, but didn’t want to ask, what the heck this “OpenCable Platform” thing is really all about.
This column originally appeared in the Technology section of Multichannel News.