by Leslie Ellis // August 22 2011
Few tech terms can instantly generate so many groaner-twists on clichés: The cache cow. Cache in your chips. His mouth is writing checks that his body can’t cache. (Ok, ok, I’ll stop.)
Nonetheless, in the language set of the content delivery network – the CDN – “caches” are a big part of the scene. In a hierarchical sense, they’re at the end, closest to viewers – you’ve probably heard about “edge caches.”
The purpose of a cache, in a CDN, is to temporarily store the most popularly viewed content, so that it’s readily available to lots of people using lots of different devices (not just the television). It’s all part of this shift toward distributed storage – gigantic servers in the center, linked to regional servers, linked to caching servers at the edge (headend or hub).
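(For the technically inclined, here’s a rough sketch, in Python, of what an edge cache’s job boils down to: serve the hot chunks locally, fetch everything else from the origin. The names, like `fetch_from_origin`, and the capacity number are made up for illustration – this isn’t anybody’s production code.)

```python
from collections import OrderedDict

class EdgeCache:
    """Toy edge cache: hold the most recently requested chunks locally,
    and fall back to the origin server for anything not on hand."""

    def __init__(self, fetch_from_origin, capacity=1000):
        self.fetch_from_origin = fetch_from_origin  # callable: chunk_id -> bytes
        self.capacity = capacity                    # max chunks held at the edge (assumed)
        self.chunks = OrderedDict()                 # chunk_id -> bytes, in least-recently-used order

    def get(self, chunk_id):
        if chunk_id in self.chunks:
            self.chunks.move_to_end(chunk_id)       # hot chunk: note it was just used
            return self.chunks[chunk_id]
        data = self.fetch_from_origin(chunk_id)     # miss: go deeper into the network
        self.chunks[chunk_id] = data
        if len(self.chunks) > self.capacity:
            self.chunks.popitem(last=False)         # evict the least recently used chunk
        return data
```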
Caching is close kin to buffering. You’ve seen buffering dozens of times, especially when video streaming first began: it’s the little animated circle that spins on the screen while you’re watching something over the Internet. Caching keeps the popular bits nearby, so they reload to your screen faster.
As cable operators continue architecting their CDNs this summer, caches matter for local content – stuff that can’t easily be encoded nationally, the way cable channels like HGTV, ESPN and Discovery can be. Big cities can host more than 100 local TV stations. The MSO encoding 500 national channels at a centralized facility (the “origin server,” in CDN-speak), and 100 channels locally, may need to cache a million or more file chunks, switched out several times per hour.
An HDTV title, for instance, may get compressed and encoded into eight different stream sizes, ranging from 1 Megabit per second up to 10 Megabits per second, to be adaptively streamed to suit the different screen sizes at the end points – the more bandwidth, the bigger the chunk.
As a direct result, caches can fill up real fast, especially if you’re a service provider tasked with serving up movies and TV at scale, to millions of viewers on dozens of different screen sizes.
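(To see why, here’s some rough back-of-the-envelope math in Python. The channel and rendition counts come from the column; the segment length and caching window are assumptions, purely for illustration.)

```python
channels = 500 + 100           # national plus local channels, per the column
renditions = 8                 # stream sizes per channel, 1-10 Mbps
segment_seconds = 10           # assumed length of each file chunk
window_hours = 1               # assumed: keep the most recent hour of linear TV cached

segments_per_hour = 3600 // segment_seconds               # 360 chunks per rendition, per hour
total_chunks = channels * renditions * segments_per_hour * window_hours
print(total_chunks)            # 1,728,000 -- "a million or more" adds up fast
```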
Part of the caching equation is figuring out what can and can’t be cached. Some stuff doesn’t lend itself well to caching – file segments that are “stateful,” for instance, meaning they’re tied to something you’re doing, like pausing to resume in another room, on a different screen.
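(Here’s a rough sketch of the sort of rule an edge cache might apply. The field names – a session token, a resume position – are hypothetical stand-ins for “stateful,” not anybody’s real API.)

```python
def is_cacheable(request):
    """Rule of thumb: chunks that look the same to every viewer can be cached;
    anything tied to one viewer's session cannot."""
    if request.get("session_token"):      # tied to a particular viewer's session
        return False
    if request.get("resume_position"):    # pause-here, resume-there state
        return False
    return True                           # a plain content chunk: safe to cache

# is_cacheable({"chunk": "ep01_seg042_3mbps"})                          -> True
# is_cacheable({"chunk": "ep01_seg042_3mbps", "resume_position": 1312}) -> False
```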
As a European cable technologist reminded me last week, all operators delivering VOD already have a CDN, or the beginnings of it. The difference is that when VOD began, it was cheaper to store the same title once, at hundreds of edge points, than it was to store everything centrally, then ship it out over satellite or fiber.
These days, both storage and transport are comparatively cheap. So why not store the hot stuff at the edge, as needed, and keep everything else deeper in the network, to stream as requested?
If you’re a cable operator planning your CDN, chances are high that a big part of the discussion is edge caching: How many servers, at what size, where, with what local encoding to handle off-airs, encryption and ad insertion. It’s the perennial tradeoff between the economics of storage and transport.
Cache out.
This column originally appeared in the Platforms section of Multichannel News.
by Leslie Ellis // August 08 2011
One of the big-three reasons commonly cited as cause for going “more IP,” and eventually “all-IP,” is the notion of “service velocity” – new-tech talk for getting more stuff to market more quickly. As in, ditching the time-tested “one new thing every 18 months” plan, long an unfortunate shackle of the set-top-based digital video world.
Rather than trudge yet again through the depressing realities about why things take so long in today’s world, let’s look at what puts the “velocity” into new service rollouts.
This probably doesn’t come as a big surprise: Turns out it has a lot to do with the tsunami of software-based everything that’s making the workplace, and people, more efficient.
Talk to the people whose work it is to bridge between now and next – and specifically to deliver service velocity. They’re almost always IT/information technology people. Chances are high that you’ll hear two terms pop up again and again: agile programming and waterfall programming. “Waterfall” is old world; “agile” is new world. Proponents on each side tend to snark on the other.
Here are some examples, from recent notes: “Waterfall is a disaster … you get these designs that aren’t influenced by reality.” And (puffed out with pride): “We run an agile development shop.”
My personal favorites: “Agile is the only way you can keep track of all the sh#t that’s going on in the network and at the end points,” counterpointed by “Agile, tiger-teams, it’s all a bunch of crap.”
“Waterfall” coding goes like this: You need to roll out a new video feature. After it’s designed, by the design team, it goes to the quality assurance team. Then to the solutions, integration and test team. If that’s all good, it gets released. Waterfall time is measured in double-digit months – and heaven forbid something changes along the way.
(Things that take a long time always remind me of a favorite joke. MSO to vendor: “Great! When can I have it?” Vendor: “In six months.” MSO: “Six months from when?” Vendor: “From every time you ask.”)
Agile programming is different. It splits the coding workload into chunks, which are constantly shipping, written by small teams that work in two-week “sprints.” Changes are assumed, meaning that time is reserved to add stuff in, if requested.
None of this is new, by the way. In the world of computer science, it’s an old saga. A Google search on “agile v. waterfall programming” returned 540,000 results. Books are written on it; seminars are taught about it. It’s new to us because software is eating the world — and to survive and thrive, we need to know what and how software eats.
This column originally ran in the Platforms section of Multichannel News.
by Leslie Ellis // August 01 2011
Lots of follow-on questions in the mail about content-delivery networks, starting with this plum from reader Dan: “I’ve seen my company put in fiber rings, regional fiber rings and, more recently, a fiber backbone. How is that different than a CDN?”
Also this, from reader Chris: “Ingest, multicast, transcoding, adaptive streaming, MPEG DASH – the CDN jargon is intense. Help.”
Welcome to the trove of terms that describe what it takes to get video content to Internet-connected screens, beyond and including the television. Laptops, tablets, game consoles, PCs, smart phones.
Let’s start with you, Dan. If fiber is the physical conduit over which IP (Internet protocol) video packets flow, CDN is everything else: How those packets are collected, stored and packaged for receipt by all the screens we’re watching.
A brief history of CDNs: In one sense, they’re the older sibling of the technologies of video-on-demand. Remember back when VOD meant a few thousand hours of storage, mostly movies? These days, operators are gearing up for 20,000 or more hours of storage, for episodic TV and movies.
That storage happens hierarchically, in CDNs – one or two big library servers (think “long tail”), with caching servers closer to consumers for popularly viewed titles.
Remember “pitchers and catchers” as the way to move video assets from source to destination? CDNs change that. Instead of pitching up to satellite and catching down on the ground, CDNs use fiber backbones, linked to regional fiber rings, which link to hybrid-fiber coax, to move content.
CDNs are in vogue right now because of the desire to use them for live and linear content, too, especially for channels that are nationally available (as in, not local broadcasters).
Which brings us to your laundry list of CDN curiosities, Chris. “Ingest,” as the name implies, is the process of feeding titles into the hierarchical storage. “Multicast” optimizes bandwidth for delivery – you want to see something, you put the flag up on your mailbox, so to speak. So does everyone else who wants to see it. The show moves down the CDN once, then you join the stream.
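(Here’s what “putting the flag up” looks like in code – a minimal Python sketch of a receiver joining a multicast group. The group address and port are made up; the point is that the join tells the network “send that stream my way, too,” while the stream itself moves down the CDN only once.)

```python
import socket
import struct

GROUP = "239.1.1.1"   # example multicast group address, not a real channel
PORT = 5004           # example port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Put the flag up on the mailbox: ask the network to add us to the group.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, sender = sock.recvfrom(2048)   # video packets now arrive, same as for every other viewer
```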
“Transcoding” formats video streams for receipt by the varying screen sizes and resolutions available – what goes to tablet doesn’t need to be as large as what goes to an HDTV, for instance.
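(In practice, transcoding means running the same source through an encoder several times, once per output size. Here’s a hedged sketch that shells out to the ffmpeg tool from Python – assuming ffmpeg is installed, and with a made-up ladder of three renditions rather than any operator’s real encode profile.)

```python
import subprocess

# (resolution, video bitrate) pairs -- illustrative only, not a real encode ladder
RENDITIONS = [("1920x1080", "8M"), ("1280x720", "3M"), ("640x360", "1M")]

for size, bitrate in RENDITIONS:
    subprocess.run([
        "ffmpeg", "-i", "source.ts",   # the source (mezzanine) file
        "-c:v", "libx264",             # encode video as H.264
        "-b:v", bitrate,               # target video bitrate for this rendition
        "-s", size,                    # output frame size
        "-c:a", "aac",                 # encode audio as AAC
        f"out_{size}_{bitrate}.mp4",   # one output file per screen size
    ], check=True)
```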
Adaptive streaming, or “fragmented MPEG-4,” is the slicing of a piece of content into different sizes. It’s a way to suit what’s best for the end screen, as a function of available bandwidth – if there’s not enough, then downshift to a smaller slice.
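(The “downshift” decision itself is simple enough to sketch. Assuming a ladder of eight renditions, 1 to 10 Mbps – the numbers below are illustrative – a player might pick the biggest slice that fits comfortably inside the bandwidth it’s actually measuring.)

```python
# Bitrate ladder in megabits per second -- eight renditions, illustrative numbers
LADDER_MBPS = [1.0, 1.5, 2.5, 3.5, 5.0, 6.5, 8.0, 10.0]

def pick_rendition(measured_mbps, headroom=0.8):
    """Pick the largest rendition that fits inside the measured bandwidth,
    leaving a little headroom; downshift automatically when throughput drops."""
    budget = measured_mbps * headroom
    fitting = [r for r in LADDER_MBPS if r <= budget]
    return max(fitting) if fitting else min(LADDER_MBPS)

# pick_rendition(12.0) -> 8.0   plenty of bandwidth: big slices
# pick_rendition(2.0)  -> 1.5   congested: downshift to a smaller slice
# pick_rendition(0.9)  -> 1.0   barely any: fall back to the smallest rendition
```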
The “DASH” part of “MPEG DASH” stands for “Dynamic Adaptive Streaming over HTTP,” and is a hopefully harmonized way for content owners and service providers to stripe content for display on different screens.
That’s a quick look at CDN lingo. It’s a big part of the whole transition to IP (Internet protocol) video, and likely a big topic of engine-room talk for the foreseeable future.
This column originally appeared in the Platforms section of Multichannel News.