by Leslie Ellis // May 30 2011
Last week, we looked at the ways operators are considering extending the work that is EBIF – the embedding of a clickable thing into an MPEG video stream – into the technology tsunami that is IP.
(Acronym soup descrambler: EBIF stands for Enhanced TV Binary Interchange Format; IP stands for Internet Protocol.)
This week, we’ll look at what can be done to make TV shows and ads more interactive, when it’s not necessary to lug along the legacy base of set-tops.
Aside: Throughout the storied interactive TV timeline — every chapter, every decade — there’s that chasm, carved out by the question of where to draw the line between old and new. On the one side are those who learn how anemic those older set-tops are, in terms of processing power, memory footprint, and graphics engine, then throw up their hands, mutter “what’s the point?” and walk away. Enough with the legacy albatross, they say.
On the other side are the ITV stalwarts – in today’s chapter, the people whose work includes EBIF and tru2way. Either they or their bosses made the decision to not strand a fielded base of 25 million boxes and growing, sitting in America’s living rooms, bedrooms and kitchens.
At the recent TV of Tomorrow event in San Francisco, during a session titled “Broadcast-Synchronized Companion Apps: Lessons From the Field,” that legacy albatross was weighing heavy.
Interactivity synchronized with a broadcast TV show. Sounds very EBIF-ish, right? In this case, though, no triggers, no user agents. Instead, The Weather Channel’s “From the Edge” show on one screen, and a whole lot of companion interactivity on an Apple iPad.
The show follows the adventures of nature photographer Peter Lik; interactive enhancements on the second screen allowed viewers to capture more details about each location (Jurassic Falls, in this case), map his path, and so on. It was all very Apple-sexy.
How does it work? Hello, audio watermarks. In this case, provided by Nielsen as part of its “Media Sync” platform – meaning a 32-kilobit stamp tucked into Nielsen’s encoders. Download the app, and it syncs with the show by listening to the audio coming from the TV.
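For the technically curious: the decoder itself is Nielsen’s proprietary piece, but the listening loop around it is ordinary Web Audio plumbing. Here’s a rough TypeScript sketch of the shape of the idea – decodeWatermark is a hypothetical stand-in, not the actual Media Sync SDK:

```typescript
// Sketch only: capture the room's audio, hand chunks of samples to a
// watermark decoder, and jump the companion app to the decoded position.
// decodeWatermark() is a hypothetical stand-in for Nielsen's proprietary
// Media Sync decoder.
function decodeWatermark(samples: Float32Array): number | null {
  return null; // a real decoder would return the show's timestamp, in seconds
}

async function listenAndSync(onPosition: (seconds: number) => void) {
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  ctx.createMediaStreamSource(mic).connect(analyser);

  const samples = new Float32Array(analyser.fftSize);
  setInterval(() => {
    analyser.getFloatTimeDomainData(samples); // most recent chunk of audio
    const position = decodeWatermark(samples);
    if (position !== null) onPosition(position); // sync the second screen
  }, 1000);
}
```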
On the panel, content producers raved about the creative freedom, cool factor, and production efficiency (TWC’s John Hashimoto finished coding the interactive features from his hotel room earlier that day).
Advertising? Yes. Watch for a new category – “run of app” – that lets advertisers buy presence within the app, for the duration of the show to which it’s synchronized.
That ITV chasm between old and new just got a whole chapter wider…
This column originally appeared in the Platforms section of Multichannel News.
by Leslie Ellis // May 23 2011
One of the recurring topics at last week’s TV of Tomorrow conference, put on by interactive TV beacon Tracy Swedlow, was the fate of EBIF — the method of choice for embedding a clickable thing into a TV show or advertisement.
As this column has noted before, EBIF’s power is in its reach: It was invented as a way to add more oomph to the fielded base of digital cable set-top boxes, which obsolesced almost before they were installed. Ten years ago.
EBIF stands for “Enhanced TV Binary Interchange Format.” As use cases go, it began as a way to let viewers click to request more information about a product, or to view more episodes of a show (also called “VOD telescoping”), or to participate in voting/polling activities.
Last year, EBIF flourished anew as a way to make a TV remote control out of an iPad, smart phone, laptop, or similar gizmos in your digital garden that tend to hang out near you and your TV.
But what happens with an EBIF trigger nestled inside a video stream that doesn’t travel through a set-top – like the connected side of a connected TV? “Connected,” in the 2011 sense, means “to the Internet.” Input one, cable; input two, Internet. Let’s say you’re on the Internet side, watching a show. The EBIF trigger is baked into the stream. How do you see it?
A couple of options, noted panelists at last week’s TVOT. Option 1: Convince consumer electronics manufacturers to build an EBIF “user agent,” or UA, into their gadgets, so as to see and render the clickable thing. Consensus: Good luck with that. Twenty-five million EBIF-enabled boxes sounds like a big number, but it’s not 100 million TV households.
Option 2: Transcode the trigger into the equivalent of a Web bookmark, for the end device to go retrieve the clickable thing. Consensus: Better, but timing issues need work to make sure the triggers fire in sync with the underlying content. Seeing an option to “click here” for an item that left the screen 10 seconds ago doesn’t seem like a recipe for success.
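To make that timing problem concrete, here’s a hypothetical TypeScript sketch of where Option 2 lands on the device: the trigger arrives as a bookmark with a fire time, and the device shows it only while the moment is still on screen. The field names and the 10-second window are illustrative, not anything from the EBIF spec:

```typescript
// Hypothetical shape for a transcoded trigger: a URL plus the moment
// in the program when it should fire.
interface TimedBookmark {
  fireAtSec: number; // offset into the program, in seconds
  url: string;       // where to fetch the clickable thing
}

function scheduleBookmark(
  video: HTMLVideoElement,
  bookmark: TimedBookmark,
  show: (url: string) => void,
): void {
  const WINDOW_SEC = 10; // don't fire for a moment that's already gone
  const handler = () => {
    const t = video.currentTime;
    if (t >= bookmark.fireAtSec && t < bookmark.fireAtSec + WINDOW_SEC) {
      show(bookmark.url);
      video.removeEventListener("timeupdate", handler);
    }
  };
  video.addEventListener("timeupdate", handler);
}
```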
Another option is to put the user agent somewhere else in the network. In the cloud. Maybe the trigger comes down as an instruction to the CE device to pop it up to the cloud, which knows what clickable thing to send back, quickly, and in sync with the show or ad that’s on the screen.
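In rough, hypothetical TypeScript terms – the endpoint URL is made up – that round trip might look like this:

```typescript
// Sketch only: the cloud-hosted user agent does the EBIF interpretation;
// the device just POSTs the raw trigger bytes and renders what comes back.
// The endpoint URL is illustrative.
async function resolveTriggerInCloud(rawTrigger: Uint8Array): Promise<string> {
  const response = await fetch("https://ua.example.com/resolve", {
    method: "POST",
    headers: { "Content-Type": "application/octet-stream" },
    body: rawTrigger,
  });
  return response.text(); // e.g., an HTML fragment for the clickable overlay
}
```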
Whither EBIF in an IP world? Short answer: Yes.
This column originally appeared in the Platforms section of Multichannel News.
by Leslie Ellis // May 16 2011
Is it just me, or is “HTML5” rat-a-tatting into video lingo way more often than, say, this time last year?
For most of us, HTML in general and HTML5 in specific are terms on the periphery, cleaving around the business of the broader Web community – Apple, Adobe Flash, Microsoft’s Silverlight.
Until now. As more of the stuff of the web intersects with the stuff of professional video, HTML5 is less of an “over there” term.
In short, HTML5 matters to multichannel video services in general, and cable in specific, as a way to move in step with the “connected device” landscape of screens that want to play video.
But let’s back way up. HTML stands for “Hypertext Markup Language”; “hypertext” means you click on something and get linked to a related bit of text. Generally speaking, “HTML” started out as a way to mark up “pages” for presentation on the World Wide Web. The word “markup,” in fact, is a throwback to the (sadly ancient) world of print typesetting. So, in that sense, HTML began as the electronic setting of type.
Each version of HTML brought advances in what we see and do, when we go to a web page. In the earliest days of the web, we had “flat,” text-only web pages. Then came still images. Then animation and things being refreshed without having to be reloaded (a function of AJAX, or “Asynchronous JavaScript and XML,” which came along in the HTML 4 era).
HTML5 introduces ways to tag web pages for video, so that future HTML5-based browsers can stream video without having to download a player. (Think Adobe Flash, as one frequently cited example.) That’s gotten the attention of lots of people in the video food chain. More and more cable engineers, for instance, are actively participating in the W3C – the World Wide Web Consortium, which governs HTML5 activities – to have a voice in what happens.
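The tag in question is <video>. Here’s a minimal, hypothetical TypeScript/DOM sketch – made-up stream URL and all – of playback with no plugin in sight:

```typescript
// Build and start an HTML5 video element; no Flash, no Silverlight,
// no player download. The stream URL below is hypothetical.
const video = document.createElement("video");
video.src = "https://example.com/show.mp4";
video.controls = true; // the browser draws its own play/pause/seek controls
document.body.appendChild(video);
void video.play(); // returns a Promise in modern browsers; ignored here
```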
Likewise for consumer electronics manufacturers. It started publicly at this year’s Consumer Electronics Show, when Sony and Samsung demonstrated live streaming of cable-delivered video from Comcast and Time Warner. Those demonstrations relied on HTML5 to render the “clickable thing,” or remote user interface (RUI), which looked like an Xfinity logo and a Time Warner Cable logo.
The thinking is that HTML5 becomes the method of choice, by multichannel video providers and CE manufacturers, for RUIs. “Remote” meaning that the rendering of the clickable thing comes from elsewhere in the network. From the cloud.
Perhaps not surprisingly, HTML5 isn’t a slam dunk. While more and more devices and services are being coded for HTML5, it’s not expected to reach W3C “Recommendation” status until 2014. Until then, watch for degrees of compatibility – a feature that runs here, but not there. (Oh joy.)
Or, as itaas CTO and founder Jatin Desai put it last week: HTML is a journey, not a destination. As is life. And marathons, and most kitchen remodels. Best limber up.
This column originally appeared in the Platforms section of Multichannel News.
by Leslie Ellis // May 02 2011
Last week, I posted a Facebook solicitation for glaze-over tech terms worthy of translation. One slice of jargon came in repeatedly: Adaptive streaming.
So, Ron, Dawn, Ed, Jeff and John, this is for you. And everyone else who keeps bumping into the term.
Let’s start with the basics: Adaptive streaming is the younger sibling of the “progressive download.” Both terms are specific to moving online video to, and displaying it on, screens attached to the Internet. (As opposed to dedicated machinery for displaying video on TVs, like set-top boxes.)
What’s “progressive” about the progressive download is the buffering that happens along the way to your screen, and in the background. Remember the early, early days of online video streaming, and the buffer indicator that drew itself in circles, over and over, while the bits tried to get to your screen? By progressively loading the video in the background, we (gladly) see less of that these days.
Adaptive streaming takes it a step further. What’s “adaptive” about it is its ability to throttle down, or up, depending on available bandwidth. Instead of one stream, of one size, loading into a background buffer, the adaptive stream exists in various sizes – small, medium and large, let’s say, although some techniques slice it into 10 or more sizes. It’s a mixture of traditional streaming and file-based delivery.
(This is why adaptive streaming tends to be spoken synonymously with “fragmented MPEG-4,” which also works by treating a video stream as a series of small files.)
With adaptive streaming, the end screen—the client—can sense the bandwidth on its connection, and send up a request for a “right-sized” stream for what’s available. This way, the client screen can switch between files on the fly, at varying bit rates, depending on available bandwidth.
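Stripped to its essence, that client-side decision is a few lines of logic. A hypothetical TypeScript sketch – the three renditions, the bit rates and the 20 percent headroom are illustrative, not any particular player’s algorithm:

```typescript
// The "adaptive" part: before each fragment, pick the largest rendition
// that fits the bandwidth the client just measured.
interface Rendition {
  label: string;
  bitrateKbps: number;
  url: string; // base URL for this rendition's fragments (illustrative)
}

// Sorted small to large; real services may slice ten or more sizes.
const renditions: Rendition[] = [
  { label: "small",  bitrateKbps: 400,  url: "https://cdn.example.com/low/"  },
  { label: "medium", bitrateKbps: 1200, url: "https://cdn.example.com/mid/"  },
  { label: "large",  bitrateKbps: 3000, url: "https://cdn.example.com/high/" },
];

function pickRendition(measuredKbps: number): Rendition {
  const budget = measuredKbps * 0.8; // ~20% headroom against jitter
  let choice = renditions[0];        // floor: the smallest rendition
  for (const r of renditions) {
    if (r.bitrateKbps <= budget) choice = r; // keep the largest that fits
  }
  return choice;
}
```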
This is all great, unless of course you’re the guy (hello, cable) who provisioned millions of customers for, let’s say, 10 Mbps of downstream throughput, on the assumption that not everyone would use it all at once. Adaptive streams of video tend to behave like a gas, filling all available space.
With Mother’s Day coming up, it’s useful to point out that the telephone network was originally architected around that particular holiday, because that’s when the most phone traffic occurs. The intent was to minimize the occurrence of “all circuits busy,” on that peak calling day.
Think about it: On this Mother’s Day, will you pick up the phone, or will you find a way to call Mom so that you can see each other? Video is bigger – way, way, way bigger – than voice. Adaptive streaming helps that, for the end device. But it still places a heavy load on available bandwidth.
Anyway. Call your mother. She’ll be glad to hear from you either way.
This column originally appeared in the Platforms section of Multichannel News.