by Leslie Ellis // January 26 2009
No shortage of questions on the matter of three-dimensional television, or “3D-TV,” the big sizzler at this year’s Consumer Electronics Show.
Here’s a roundup of the questions in last week’s mail: If cable providers wanted to go 3D, are there bandwidth implications? What about set-tops?
Content owners wanted to know about production and cost implications for making 3D shows.
The consumer-facing questions: Is there such a thing as a “3D-ready” TV? Why are there so many different types of glasses? Where do people get them, if this is happening in homes as well as movie theaters?
The Third D is Depth
Let’s start at the beginning. The third dimension in 3D-TV is depth. It joins the other two dimensions: Height and width (of the screen). By adding depth to the viewing experience, some objects seem closer to you. Like the football, coming right at you, or the pointing finger that seems to be aimed right at your nostril. (Ew.)
Today’s 3D-TV environment is a bubbling brew of different techniques, ideas, and excitements. There are more questions than answers at this point, but let’s look first at set-tops and bandwidth.
The pre-requisite for any kind of 3D content — for cinema screens or TVs — is digital. If this strikes you as a “duh!,” consider: As of last spring, only about 5,000 of 38,000 movie screens in the U.S. had “gone digital,” and of those, about 1,000 could do 3D. (And the economics work: People are willing to plunk down an extra $5 to see, say, Hannah Montana in three dimensions.)
On set-tops: So far, it’s sounding like the minimum resolution required to do 3D is 1080p, at 60 frames per second. Most of what you see on HD now, with the possible exception of some EchoStar-delivered content, is 1080i.
Consider: An uncompressed 1080i stream chews up 1.5 Gigabits per second. The same uncompressed content in 1080p requires about 3 Gbps, engineers say.
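Those figures pencil out. Here's a minimal back-of-the-envelope check, assuming 4:2:2 chroma sampling at 10 bits per sample (20 bits per pixel); the column's 1.5 and 3 Gbps figures run a bit higher because SDI transport includes blanking-interval overhead:

```python
def uncompressed_gbps(width, height, frames_per_sec, bits_per_pixel=20):
    """Approximate active-picture bitrate, in gigabits per second."""
    return width * height * frames_per_sec * bits_per_pixel / 1e9

# 1080i60 delivers 30 full frames per second (60 interlaced fields);
# 1080p60 delivers 60 full frames per second -- twice the pixels per second.
rate_1080i = uncompressed_gbps(1920, 1080, 30)   # ~1.24 Gbps active picture
rate_1080p = uncompressed_gbps(1920, 1080, 60)   # ~2.49 Gbps active picture

print(f"1080i: {rate_1080i:.2f} Gbps, 1080p: {rate_1080p:.2f} Gbps")
```

Either way, the point stands: progressive at 60 frames doubles the raw payload.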
This means one thing for sure: Get out the advanced compression, variously known as MPEG-4 and H.264. While operators are beginning to install set-tops with dual MPEG-2/MPEG-4 decode chips, it’s a trickle, not a gusher. This means there’s an installed-base issue. [Before despairing, see “people are willing to pay a premium.”]
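To see why the advanced codecs matter, here's an illustrative sketch. The bitrates are assumptions for round numbers, not operator specs: roughly 38.8 Mbps of usable payload in a 256-QAM channel, ~15 Mbps for an MPEG-2 HD stream, and roughly half that for H.264:

```python
QAM256_PAYLOAD_MBPS = 38.8   # assumed usable payload per 6 MHz, 256-QAM channel
MPEG2_HD_MBPS = 15.0         # assumed MPEG-2 HD stream bitrate
H264_HD_MBPS = 8.0           # assumed H.264 (MPEG-4 AVC) HD stream bitrate

def streams_per_channel(stream_mbps, payload_mbps=QAM256_PAYLOAD_MBPS):
    """How many whole streams of a given bitrate fit in one QAM channel."""
    return int(payload_mbps // stream_mbps)

print("MPEG-2 HD streams per channel:", streams_per_channel(MPEG2_HD_MBPS))
print("H.264 HD streams per channel:", streams_per_channel(H264_HD_MBPS))
```

Under those assumptions, H.264 doubles the HD carriage per channel — exactly the headroom a fatter 3D stream would need.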
Original content produced in 3D requires two cameras — one for each eye, essentially. And, as might be expected from something this new, it isn’t cheap. Estimates at a Digital Cinema conference hosted by SMPTE last spring put complex 3D production at $75,000 per minute. Adding 3D to existing HDTV material rang in at around $55,000 per minute.
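Just to put those per-minute estimates in program-length terms, a quick (and purely arithmetic) sketch:

```python
COMPLEX_3D_PER_MIN = 75_000   # complex 3D production, dollars per minute
CONVERT_HD_PER_MIN = 55_000   # adding 3D to existing HD, dollars per minute

def production_cost(minutes, per_minute_rate):
    """Total production cost, in dollars, for a program of a given length."""
    return minutes * per_minute_rate

# A 30-minute program at each rate:
print(production_cost(30, COMPLEX_3D_PER_MIN))  # 2250000
print(production_cost(30, CONVERT_HD_PER_MIN))  # 1650000
```

Call it $2.25 million for a half-hour of complex 3D, or $1.65 million to retrofit existing HD. Not cheap.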
Subtitling is also an issue. When the football is coming right at you, where do your eyes best interpret the words and numbers on the scoreboard? The incorrect placement of text into a variable-depth-of-field environment can be, well, nausea-inducing.
About 3D-TV-ready HDTVs: According to the demo-people at Dolby, something like 1.5 million HDTVs have already been sold in the U.S. that are “3D-TV-ready.”
In the home, there are the glasses. Three types, now, by my count. There’s the original paper kind, circa 1953, with the red/blue lenses. There’s a polarized version. And, there’s the kind with shutters in each lens that flutter open and shut, super-fast, one eye to the next, synced to pulses transmitted from inside or near the HDTV.
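The math on that shuttered, frame-sequential approach is simple enough to sketch. Because the display alternates left-eye and right-eye frames, each eye sees half the panel's refresh rate (the 120 Hz figure below is illustrative):

```python
def per_eye_hz(display_hz: float) -> float:
    """Effective refresh rate each eye sees with alternating shutter glasses."""
    # Left and right frames alternate, so each eye gets every other frame.
    return display_hz / 2

print(per_eye_hz(120))  # a 120 Hz panel gives each eye 60 frames per second
```

That's why the shutter approach pairs with fast panels: anything slower, halved, starts to flicker.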
“Cross-talk” could also be an issue. In 3D terms, cross-talk is what happens when left-eye images leak into right-eye images, and vice versa. (This can also induce a feeling of dizziness.)
The 3D TVs at this year’s CES were a bit more “gee-whiz” than truly a mechanism to advance or enhance a story. Most people say reality for 3D is in the five year timeframe. Between now and then, watch for 3D webcams, 3D without glasses, and who knows, maybe even a resurgence of holographic TV.
So: Other than challenges with production, delivery, viewing, technology, standards, and business models, 3D is on its way.
And if nothing else, it sure gives new meaning to feeling a little flat.
This column originally appeared in the Platform section of Multichannel News.
by Leslie Ellis // January 19 2009
Here’s a new one in the surging body of interactive TV work known as EBIF: The possible use of the spec to do both bound and unbound applications.
That’s good for applications needing to remain in the TV foreground, should Viewer Jane change the channel. “Unbound,” in this sense, means it stays on the screen regardless of what Jane’s watching.
Like, say, a “TV widget” — one of the recurring themes at the recent Consumer Electronics Show.
It’s also good for unbound applications that want a nice big service footprint (name one that doesn’t), and not just a ride on the smaller-but-growing deployed base of advanced digital boxes.
So far, though — meaning since the Enhanced TV Binary Interchange Format emerged from CableLabs as a technical specification in 2005 — EBIF exists as a way to add interactivity within a specific show, or ad. It’s “bound” to the content it travels with.
As for EBIF’s potential service footprint: Still big. The “binary” in EBIF is a hint at how small the stuff is. Small is good, when you need a big deployed base.
The Cool Factor
This all started with an email, right before the holidays: “A while ago, you wrote a column about EBIF, which touched on the ‘cool factor’ of bound apps versus unbound apps.”
That column in question was written in May of ’08. It questioned whether enough unbound applications existed to attract developers. It likened bound apps to Ginger, and unbound apps to Marianne. (“Necessary. Sensible. Pretty in their own way. Not Ginger.”)
At the time, the majority of the unbound apps were MSO-specific: The guide. The VOD ordering system. When people talked about OCAP apps, that’s what usually came up.
Conversely, voting someone off a reality show, or “telescoping” in to find additional episodes, are the things that come up when people discuss EBIF.
So, a tacit generalization grew: EBIF is bound, OCAP is unbound; EBIF is sexy, OCAP is necessary.
The question came from a representative of Integra5, which got its start doing caller ID on TV, an enormously popular unbound application. Conversations ensued. What about recommendations, fantasy sports teams, social networking? (Note how widget-y these are.)
The list of apps that work best if they persist over channel changes grew quickly. Now it’s a matter of making EBIF capable of listening to two doorbells, so to speak: The existing in-band path, and the out-of-band. More on that as clarity occurs.
This column originally appeared in the Platforms section of Multichannel News.
by Leslie Ellis // January 12 2009
Nothing like a new year to tackle an intricate set-top/cable modem combo topic: A-DSG.
“A-DSG” stands for “Advanced DOCSIS Set-top Gateway.” It travels with terms like “MAC multicast addressing,” “digital channel descriptors,” and my personal favorite, “the whole hub straddling thing.”
Before we even get into what’s advanced about it, know that DSG alone matters for lots of reasons. It’s the first major technical juncture between traditional “MPEG” video transport [people and gear] and the cable modem/broadband realm [ditto].
That’s because it defines how set-tops talk not only to traditional headend controllers, but also to the CMTS [Cable Modem Termination System] controllers of the broadband domain.
Plus, DSG is a big part of how OCAP stuff moves to boxes.
The original DSG spec [sans the “A”] was released by CableLabs in 2003 as an alternative “out-of-band” signal path to and from digital set-tops.
Refresher: “Out of band” means data sent to and from a box, without any correlation to a specific channel or TV show. Guide data, conditional access markers, and software patches are examples of data types that travel best in a tunnel not correlated to a particular show.
Think of it this way: If you change the channel while next week’s guide data is flowing [invisibly, in the background] into your box, that flow stops until the next time you tune that channel.
The “in-band” transit passageway sends data within a channel or TV show. EBIF triggers are an example. Using the remote to vote someone off a show, or to be pointed to more, stored VOD content about a show, works best if that trigger is correlated with the show you’re watching.
“A Well-Known MAC Multicast Address”
In “regular” DSG, each set-top is pre-configured with a “well-known MAC multicast address” for out-of-band signaling.
Let’s unpack that.
“Well known,” in this sense, means mutually acknowledged by both MSO recipient and set-top manufacturer. Technically, it means each box is hard-coded at the factory to know where to “join” a particular multicast stream of out-of-band traffic.
“MAC” stands for “Media Access Control.” It’s an identifier.
“Multicast” is a way of sending a “one-to-many” stream of packets over an IP network. It’s not as wide as broadcast, reach-wise, and not as narrow as unicast.
Note: “MAC Multicast address” is conversationally synonymous with the term “DSG tunnel.”
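To make the “MAC multicast address” part concrete, here's a small sketch of the standard IPv4-to-Ethernet multicast mapping (per RFC 1112): a fixed 01:00:5e prefix, plus the low 23 bits of the IP group address. DSG tunnels are identified by Ethernet multicast MAC addresses of exactly this shape; the example address below is illustrative:

```python
def ipv4_to_mac_multicast(group_ip: str) -> str:
    """Map an IPv4 multicast group to its Ethernet multicast MAC address."""
    octets = [int(o) for o in group_ip.split(".")]
    if not 224 <= octets[0] <= 239:
        raise ValueError("not an IPv4 multicast address")
    # Keep only the low 23 bits of the group address:
    # clear the top bit of the second octet, keep the last two whole.
    return "01:00:5e:{:02x}:{:02x}:{:02x}".format(
        octets[1] & 0x7F, octets[2], octets[3])

print(ipv4_to_mac_multicast("224.0.0.1"))  # 01:00:5e:00:00:01
```

The point of “well known” is simply that both ends are hard-coded to agree on one such address before the box ever leaves the factory.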
But why on earth would a set-top need a MAC multicast address in the first place? Turns out it was more a desire for calamity avoidance than for IP video. If a set-top’s upstream path went kaflooey, and couldn’t talk to the CMTS controller, there had to be a way to still send it critical out-of-band messages that could be understood and enacted.
Engineers ultimately realized they needed more elbow room in how DSG addresses are handled. They needed tunnels they could change on their own, on the fly, without having to go back to manufacturers: One for conditional access stuff, another for OCAP stuff, another for guide stuff, and so on.
Enter the “A” in “A-DSG.”
Example: You’re a cable operator, wanting to deploy an unbound or OCAP application. It requires an out-of-band signal path. Before A-DSG, your option was to contact your supplier to change that well-known address.
In “advanced” DSG, that well-known DSG tunnel can be changed as needed, by modifying a thing called the “downstream channel descriptor,” or DCD.
Now let’s say that application, delivered over the DSG path, is streaming out to boxes that are geographically scattered. Technically, let’s say it’s a plant with hubs that straddle county borders. [This happens.]
The app, in this example, is the guide data, flowing over existing hubs to Counties A and B. But! Each county happens to have channel differences. So, two types of guide data are needed. Welcome to “the whole hub straddling issue.”
With A-DSG, the set-tops in County A can be told [via that downstream channel descriptor] to watch for a specific tunnel that contains multicast data, including tuning information. Likewise for County B, but on a different tunnel.
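A toy sketch of that fix, for the logic of it. The rule table below is invented for illustration — the real downstream channel descriptor is a DOCSIS MAC-management message, not a Python dictionary, and the tunnel addresses are made up:

```python
# Hypothetical per-hub tunnel assignments a DCD might advertise,
# steering set-tops on each hub to county-specific guide data.
DCD_RULES = {
    "county_a_hub": "01:00:5e:00:01:0a",  # tunnel with County A's lineup
    "county_b_hub": "01:00:5e:00:01:0b",  # tunnel with County B's lineup
}

def tunnel_for(hub: str) -> str:
    """Which DSG tunnel a set-top on a given hub should join for guide data."""
    return DCD_RULES[hub]

print(tunnel_for("county_a_hub"))  # 01:00:5e:00:01:0a
print(tunnel_for("county_b_hub"))  # 01:00:5e:00:01:0b
```

Same plant, same hubs, two lineups — each county's boxes simply listen on a different tunnel.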
Voila: The hub straddling issue is … no longer an issue.
That’s a quick overview of A-DSG. Chances are high that this will serve as a big background topic in cable’s engine rooms this year, for two reasons: One, it’s tightly linked to OCAP. Two, it’s the first real blending of old (MPEG) and new (IP) transport.
This column originally appeared in the Platforms section of Multichannel News.