Monthly Archives: March 2014
A Wider Upstream Path: 2018?
DENVER–A surefire way to fire up cable technologists used to involve smiling broadly while asking: “When will you need to widen the upstream path?”
For decades, the answer, usually harrumphed, was this: “Never!”
Why: It’s a pretty big hassle. Very plant-intensive, possibly to the point of having to revisit or replace gear at the tap level. (Taps are expressed in number of ports — 4-port, 8-port — and exist to connect fatter feeder cables to the thinner coaxial drops that run into homes. So there’s tons of them.)
Later, queries about a wider upstream softened into variations of “not in my lifetime.” Why: The upstream is a very skinny portion of the total available capacity of a cable system. “Very skinny” meaning five percent or less, occupying a slender spectral spot from 5 MHz to 42 MHz.
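For the arithmetic-minded, the five-percent figure checks out on a typical 750 MHz plant (a rough illustration; 860 MHz systems make the fraction smaller still):

$$\frac{(42-5)\ \text{MHz}}{750\ \text{MHz}} = \frac{37}{750} \approx 4.9\%$$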
Then broadband happened. Right now, the growth of downstream (home-facing) broadband consumption still far outpaces the growth of upstream (network-facing) bandwidth usage. But! Think about how many things come with a built-in video camera. Your phone, for instance, or any of the webcams monitoring any of the things in your life.
Video is big. Sending it upstream, live, chews up bandwidth.
Think, too, about the fact that more Wi-Fi traffic is happening right now than mobile and wired traffic combined. Offloading some of that onto the wired network in the house is a plausible reality.
Which brings us to the latest round of responses to the age-old question of when the industry might consider a wider upstream. Last week, specifically, during a panel of technologists at Light Reading’s annual “Cable Next-Gen Technologies & Strategies” event. Answer, extrapolated from the guts of the panel and not expressed directly: 2018-ish.
“We’re all exploring it,” said Jorge Salinger, VP/Access Architectures for Comcast, to the point of an organized weekly call among involved technologists at several MSOs.
Here’s where the 2018-ish prediction comes from: DOCSIS 3.1 includes language supporting a “mid-split,” which is tech talk for widening the upstream.
The silicon for DOCSIS 3.1-based gear is expected this year. The cable modems and gateways that use it will follow in 2015. Then interops, then trials — which makes 2016 plausible as “the golden year” for widespread DOCSIS 3.1 deployments.
After that, 3.1-based headend gear (known industrially as “CMTS,” for “Cable Modem Termination System”) catches up. Let’s say that happens in a big way in 2017.
After all of that, and should we continue to see gadgetry in our homes that streams video constantly, it will probably make sense to move the upper boundary of the upstream spectrum from 42 MHz to 65 MHz, or higher.
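Back-of-the-envelope, that move is a meaningful bump. A rough sketch, assuming the full 5-42 MHz and 5-65 MHz ranges were usable:

$$\frac{(65-5)\ \text{MHz}}{(42-5)\ \text{MHz}} = \frac{60}{37} \approx 1.6\times$$

Call it roughly 60 percent more raw upstream spectrum, before counting the efficiency gains of DOCSIS 3.1’s modulation.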
That’s why we’re putting a 2018-ish stamp on it. (Heavy on the -ish.)
This column originally appeared in the Platforms section of Multichannel News.
Imagine Park: RDK in Action – Espial
“The TV UX – Reimagined with RDK,” Jaison Dolvane, President & Co-founder, Espial. Filmed June 10-12, 2013. Video courtesy The Cable Show.
Imagine Park: RDK in Action – Tata Elxsi
“RDK and Making Sure Your Kids Get Where They’re Going,” Glee Abraham, Solutions Engineering Manager at Tata Elxsi. Filmed June 10-12, 2013. Video courtesy The Cable Show.
Imagine Park: Tech Talk with Tony Werner (The Cable Show 2013)
Hosts: Leslie Ellis, Ellis Edits, Inc., and Tony Werner, EVP & CTO, Comcast Cable.
“Show Don’t Tell: Giving Customers A Taste Of Awesome,” Eric Schrag, Senior Android Engineer, Comcast Cable.
“Be There from Anywhere with PanaCast,” Aurangzeb Khan, Co-Founder, President & CEO, Altia Systems, and Lars Herlitz, Co-Founder & CMO, Altia Systems.
“Movie Night – A New Way to Pick a Flick, Together,” Preston Smalley, Product Manager & Entrepreneur, Comcast Silicon Valley.
Filmed June 10-12, 2013. Video courtesy The Cable Show.
Not long ago, I bid farewell to the flood-damaged farmhouse in Longmont, Colo., and moved on to greener, less swampy pastures. Despite the stress of moving and the fact that there are still boxes everywhere, there’s a lot to love about the new digs – a neat old Victorian surrounded by gardening space and fruit trees.
And the best part? I’m back on the cord!
One of the first orders of business at the new house, even before the moving truck pulled in the driveway, was to get Comcast service up and running. After the ultra-slow (<5 Mbps) DSL service at the farm, I was beside myself with joy when I ran the first speed test and saw downloads clocking in around 50 Mbps.
So how does the “cord-cutting” experience change now that I’m back on the cord?
For starters, I can watch streaming video and download software simultaneously – at the farm, this same challenge caused everything to grind to a halt for 5 or 10 minutes.
I also don’t see nearly as much buffering — there’s some, of course, but it’s generally limited to when I first start playing a piece of content. For example: Slingplayer, whether on my iPad or another device, will now keep playing without dropping the connection for hours on end (at the farm, Slingplayer would lose sight of the Slingbox at the lab at least once an hour, and every 5 minutes if I was watching something particularly interesting).
I expected to see some improvements in terms of video quality, but found it to be about the same as at the farm. Slingplayer works without interruption, but only in the SD or Auto settings – if I change the picture quality to HD, it’s full of skips and starts just like at the farm.
And the same can be said for Netflix, Amazon, and Hulu Plus, regardless of whether I’m streaming to a Roku, Apple TV, or Chromecast. I can’t say that the picture quality is noticeably sharper than it was on an ultra-slow DSL connection. What I do notice is that videos play smoothly at the new house, with virtually no “buffer breaks” (which, like commercial breaks, were a good time to grab a snack. Now I have to pause the video).
This underscores the fact that our OTT devices are really good at handling streaming video, even when the connection is less than optimal. At the farm, even at <5 Mbps, the video generally looked pretty sharp and the buffer breaks were manageable when using a streaming device connected to my TV. The main difference now that I’m on a 50 Mbps connection is that videos load much faster, and I very rarely see buffering in the middle of a piece of content.
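Why are OTT devices so resilient? Mostly adaptive bitrate streaming: the player keeps measuring its own throughput and requests the best rendition the connection can sustain, stepping down before the buffer runs dry. Here’s a minimal sketch of that selection logic in Python; the bitrate ladder and safety margin are hypothetical, not any particular player’s values:

```python
# Illustrative adaptive-bitrate (ABR) rendition selection.
# The ladder and safety margin below are made-up examples.

LADDER_KBPS = [400, 800, 1500, 3000, 6000]  # renditions, lowest to highest

def pick_rendition(measured_kbps, safety=0.8):
    """Pick the highest rendition that fits within ~80% of measured throughput."""
    budget = measured_kbps * safety
    fits = [r for r in LADDER_KBPS if r <= budget]
    return fits[-1] if fits else LADDER_KBPS[0]  # fall back to the lowest rung

print(pick_rendition(4_700))   # farm DSL: picks the 3000 kbps rendition
print(pick_rendition(50_000))  # cable: picks the top 6000 kbps rendition
```

Which is why the picture looked about as sharp at the farm as it does here: in both cases the player settled on a rendition the pipe could carry, and the pipe mostly determines how often it has to stop and rebuffer.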
Aside from the faster connection, the biggest difference with my new setup is that I can easily get local channels with an antenna. Finally!
You may recall that I spent hours moving a huge high-powered antenna all over the farmhouse, and tripping over coax in the hallways, only to find I STILL couldn’t get all the major over-the-air networks. When I connected the dinky little Boxee antenna to a TV at the new house, it immediately picked up ~35 channels, including ABC and NBC, two that I tried in vain to pick up at the farm. Of course, I can get those channels (and more) through my cable service, but the rarely used upstairs TV doesn’t warrant its own cable box. And now that Aereo has shut down its service in Denver for the time being, the timing couldn’t be better.
It’s good to be on the cord again. The fast Internet and cable TV feel downright luxurious after doing without for years, and I’m excited to finally be able to explore some of the other technologies that are making their way into homes. Now that we’re in the time of home automation and connected bike helmets, I’m glad to be back on the cable loop.
What’s Unified About “Unified Storage”?
“Unified storage.” Another example of a tech-side term stuffed with descriptive confidence. It’s storage, and it’s unified, silly! Nobody wants to be the dummy who doesn’t know what’s so unified about it. (Right?!)
So off we go, starting with a reminder that we’re still in the middle of the gigantic transition to IP video. Service providers are scattered along a continuum of “now” and “next”; anything described here in the past tense is, for many operators, still happening.
Quick refresher: IP video is that fertile catalyst to “cloud,” TV Everywhere, multi-platform, cross-platform, and however else we’re describing the transit of subscription television signals into homes, through a box that’s more cable modem than set-top. And from that broadband “gateway,” out to connected screens — tablets, laptops, phones.
In the old days of digital cable (meaning a few years ago), the only things the network really needed to store were video on demand (VOD) assets. Recall, too, that those early offerings of cable VOD were mostly digital movies.
Shipping VOD content to cable systems traditionally involved a “pitcher,” to blast the assets up into geosynchronous orbit, and “catchers,” at recipient headends. Storage resources were widely distributed across an operator’s footprint.
Transport vs. Storage
The economics of Big Networks involve (ceaseless) evaluations of the cost of transport, vs. the cost of storage. Now, storage is cheap. (Think about how many Gigs you can stuff in your pocket right now.)
It follows that the first unification of storage is architectural: Centralize storage. Big “origin” servers in the “middle” of the network. Closer to consumers, and holding the most popular stuff, smaller “caching” servers. Everything linked up over fiber — from national backbones, to regional rings, to last mile.
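In code terms, the tiered lookup is simple. A minimal sketch in Python, with hypothetical class and asset names (real CDN caches add eviction policies, byte ranges, and far more):

```python
# Illustrative tiered storage: a big central origin plus a small edge cache.
# All names and numbers here are hypothetical.

class OriginServer:
    """Central library: holds every asset, including long-tail titles."""
    def __init__(self, library):
        self.library = library  # asset_id -> asset payload

    def fetch(self, asset_id):
        return self.library[asset_id]

class EdgeCache:
    """Smaller server near consumers: keeps only the most-requested assets."""
    def __init__(self, origin, capacity=100):
        self.origin = origin
        self.capacity = capacity
        self.cache = {}

    def get(self, asset_id):
        if asset_id in self.cache:           # hit: serve from the edge
            return self.cache[asset_id]
        asset = self.origin.fetch(asset_id)  # miss: haul it from the origin
        if len(self.cache) < self.capacity:  # keep a copy if there's room
            self.cache[asset_id] = asset
        return asset

origin = OriginServer({"movie-123": b"...video bytes..."})
edge = EdgeCache(origin)
edge.get("movie-123")  # first request travels all the way to the origin
edge.get("movie-123")  # second request never leaves the edge
```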
Meanwhile, along the continuum, most operators built out a different on-demand pipeline for their broadband footprint. That way, their customers could stream video titles onto their other screens: PCs, laptops, tablets, connected TVs, phones.
Supporting duplicative paths is inefficient, particularly in centralized architectures. Especially the ingest.
It follows that the second thing that gets “unified” in “unified storage” is the work of ingesting both traditional and IP-based on-demand assets.
A third element being unified, in unified storage: Metadata. Establishing and manipulating it is faster and more comprehensive in IP than in “traditional” VOD. Why: Because video assets in the online world come with dedicated metadata editors, whose job is to increase the chances that an asset will show up in a web search.
So: Unified storage is part architectural, part ingest, part metadata. In all cases, the momentum, tools, and spotlight are on the web-styled way of doing things. Be there or be … un-unified?
CableNet 2000 Tour, Part 1: CLASSIC IN HINDSIGHT! Stream video to the PC? Seriously? You could do that?
I’m posting this interview for you to see 14 years after it happened, while watching the video of it stream through a cable modem to a PC screen … so, watching myself rapturously wonder whether such a feat is even possible. Trippy, maaaan…
Please enjoy this CableLabs CableNet interview with Jeff Huppertz, then (2000) with Clearband — about this wacky new idea: stream TV through the cable modem to the PC screen. Crazy!
Video courtesy The Cable Channel. Originally posted May 9, 2000.
The Great Lab Purge of 2014
Landscapes are changing, both inside the lab and out. We’ve seen the “hardware streamer” category flare up and settle back down; the major players have been established, and the lab shelves are cluttered with “televestigial” devices and piles of remote controls. And so, the purge begins.
A few of the favorite televestigials get an honorary HDMI port — namely the Boxee Box and a 2nd-generation Sony Google TV. The 1st-generation Sony Google TV gets to stay too, because it’s another screen (though its dusty 91-button remote control will probably live in a drawer).
The multiple outdated devices from Netgear and Sony (not to mention the associated tangles of cords running behind the lab shelves) are getting the axe.
Fortunately for those of us dealing with cord-clutter, 2014 is shaping up to be Year of the Dongle. We’ll have offerings from Roku and (so we hear) Amazon joining the lab next month, and we’re looking forward to covering the next phase of OTT technology and branching out to some new areas as the traditional hardware streamer market dies down.
Meanwhile, I recently moved from the connection-challenged farm and am officially back on the cord. Happiness! My new house gets Comcast service, so I now have access to cable TV and 50 Mbps Internet – a big upgrade from the farm, where I’d get 4.7 Mbps downstream on a good day. As soon as I find the boxes labeled “OTT,” I’ll be back with an update on how my streaming experience at home changes with a much faster connection. Stay tuned!
TV and Beyond – The Whole Thing (37 minutes)
In this decade-by-decade chronicle of the origins and evolution of cable television, Leslie Ellis and filmmakers David G. Knappe & Joe Bondulich take viewers through 60 years of innovation. This documentary is presented in chapters elsewhere on this web site. Originally posted October 22, 2008.
Video courtesy Multichannel News.
On Clouds & Chokepoints
In the halls of cable and broadband technology, the talk last week was all about the performance glitches during the first-ever live stream of the 86th Academy Awards.
A few weeks earlier, the talk was about the travel-is-so-glamorous accommodations in Sochi, for the Winter Olympic Games.
In other words, what people weren’t talking about were performance glitches during an Olympic Games treatment that put every event (every event) online.
This week’s translation examines the subtle and shifting nature of choke points, bottlenecks, and other things that now — in the age of cloud — can constipate the user experience of television.
Let’s start with Oscar, and his first experience as a live stream. Something went wrong. That it soured the experience for viewers is bad, of course. But why it happened is illustrative of the expanding nature of potential choke points, when it comes to moving a stream of video from where it’s happening, to your screen.
Until the Oscar hiccup, the perceptions associated with performance breakdowns on live streaming events were typically pinned on the physical infrastructure. Something bottlenecked, either in the last-mile plant, or north of the headend, or between regional rings and national fiber backbones. Bad ISP. Bad, bad, bad.
These days, we live and work in a digital world increasingly plumbed to optimize three things: Connectivity, compute, and storage.
The “connectivity” part came first, and thus bore the brunt of the early glitches.
The thing we now call “cloud” brings the two other dimensions — compute, and storage — into the video pipeline. Both are now core parts of video distribution, as we know it. It follows that both can become accidental choke points, especially if they aren’t built for “elasticity” — to expand and shrink as needed, based on demand.
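What does “elastic” look like, concretely? A toy sketch in Python; the per-server stream count and viewer figures are invented for illustration:

```python
# Illustrative elastic scaling: grow and shrink streaming capacity with demand.
# STREAMS_PER_SERVER and the viewer counts are hypothetical.

import math

STREAMS_PER_SERVER = 500  # assumed concurrent streams one server can spin

def servers_needed(concurrent_viewers, headroom=1.2):
    """Servers to run for this many viewers, with 20% headroom for spikes."""
    return max(1, math.ceil(concurrent_viewers * headroom / STREAMS_PER_SERVER))

print(servers_needed(2_000))    # a quiet Tuesday night: 5 servers
print(servers_needed(900_000))  # Oscar night: 2,160 servers
```

An inelastic system provisions for the Tuesday and chokes on the Oscars; an elastic one spins capacity up for the event and releases it afterward.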
Storage, in a cloud sense, is proximity-based: Popular stuff streams from the edge closest to consumers; long-tail fare sits elsewhere. (In the cloud, storage is priced by size, as well as “how soon would you need it.”)
“Compute,” as a dimension, handles things like input-output — or, in the case of the Oscars, how fast a stream could be spun from the source, when lots of people wanted it.
That the streaming treatments of the Winter Olympic Games weren’t plagued with connectivity issues is a nod to the ongoing modernization of video distribution systems. Sure, connectivity matters, all day long — but at the same time, the care and feeding of “compute” and “storage” matters as much, if not more.
This column originally appeared in the Platforms section of Multichannel News.