What’s Rich About the Rich Edge?
Long ago, in March of 2005, this column took on a then-popular term in tech-talk: “the edge.” Which one? Where is it?
And here we are, almost a decade later, still talking edges. Except something changed: The edge picked up some serious semantic bling, especially in the Prefix Department.
It’s not just “the edge” anymore. It’s “the rich edge.” “The intelligent edge.”
As a word that routinely crisscrosses between everyday talk and shoptalk, the “edge” can befuddle. There’s the edge of the counter, and then there’s the edge of the network.
Back then, we polled engineers: Where’s the edge? Responses: “It’s where RF goes to IP, or vice versa.” “After the headend, before the eyeballs.” “At the output of the set-top box.” (Still my personal favorite: “It’s where the bits fall off.”)
Our conclusion, back then, was that “the edge” is in the eye of the beholder, because different work disciplines see edges differently.
And now, those edges are rich and smart. What happened?
First of all, this is “rich” as in “having or supplying a large amount of something that is wanted” more so than “sacks full of cash.” In a connectivity sense, edges are places where stuff gets handed off: Backbone traffic to regional fiber rings; fiber rings to nodes; nodes to homes and the connected stuff within them.
The “large amount of something” is where the intelligence comes in. It’s the addition of compute and storage resources — those building blocks of “cloud.”
The quest for rich, intelligent edges is the reason why traditional cable headends are becoming headend/data centers, with racks and racks of servers adjoining the traditional functions of signal demodulation, encryption, processing, re-modulation, and combining.
It’s most evident right now in video services. Remember when VOD began? Storage was distributed, per market. Titles were “pitched” (via satellite) to hundreds (thousands?) of recipient “catchers.”
Then “CDNs” (Content Delivery Networks) happened, with big “origin” servers in the middle, and video zipping to markets over terrestrial fiber.
“Rich edges” are morphing VOD yet again: Small, nimble storage, buttressing the big servers in the middle, and designed to both anticipate and locate the most popular content closest to viewers.
VOD is but an early example of a “rich edge” transformation. It’s what happens when “connectivity” (broadband) gets gussied up with the building blocks of cloud, so that our “connected” things work better — faster, and more intuitively.
Nonetheless, our advice remains the same, when it comes to the edge: Always ask. Asking “which edge?” and now, “what’s rich about it?” does two things. It shows the speaker’s knowledge precincts, and it spares you envisioning a different edge than the one being discussed.
This column originally appeared in the Platforms section of Multichannel News.
What’s Unified About “Unified Storage”?
“Unified storage.” Another example of a tech-side term stuffed with descriptive confidence. It’s storage, and it’s unified, silly! Nobody wants to be the dummy who doesn’t know what’s so unified about it. (Right?!)
So off we go. Starting with a reminder that we’re still in the middle of the gigantic transition to IP video. Service providers are scattered along a continuum of “now” and “next.” Anything expressed in past tense is still happening.
Quick refresher: IP video is that fertile catalyst to “cloud,” TV Everywhere, multi-platform, cross-platform, and however else we’re describing the transit of subscription television signals into homes, through a box that’s more cable modem than set-top. And from that broadband “gateway,” out to connected screens — tablets, laptops, phones.
In the old days of digital cable (meaning a few years ago), the only thing the network needed to store, really, were the assets of video on demand (VOD). Recall, too, that those early offerings of cable VOD were mostly digital movies.
Shipping VOD content to cable systems traditionally involved a “pitcher,” to blast the assets up into geosynchronous orbit, and “catchers,” at recipient headends. Storage resources were vastly distributed, across an operator’s footprint.
Transport vs. Storage
The economics of Big Networks involve (ceaseless) evaluations of the cost of transport, vs. the cost of storage. Now, storage is cheap. (Think about how many Gigs you can stuff in your pocket right now.)
It follows that the first unification of storage is architectural: Centralize storage. Big “origin” servers in the “middle” of the network. Closer to consumers, and holding the most popular stuff, smaller “caching” servers. Everything linked up over fiber — from national backbones, to regional rings, to last mile.
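That “most popular stuff closest to consumers” behavior is, at heart, a cache. As a minimal sketch only (the `EdgeCache` class and the in-memory `origin` dictionary are hypothetical stand-ins, not any vendor’s actual system), a least-recently-used edge cache backed by a central origin looks like this:

```python
from collections import OrderedDict

class EdgeCache:
    """Toy edge cache: keeps recently requested titles locally,
    falling back to a (simulated) origin server on a miss."""

    def __init__(self, capacity, origin):
        self.capacity = capacity    # how many titles this edge can hold
        self.origin = origin        # dict standing in for the big origin server
        self.store = OrderedDict()  # title -> asset, kept in recency order

    def get(self, title):
        if title in self.store:
            self.store.move_to_end(title)    # hit: mark as recently watched
            return self.store[title], "edge"
        asset = self.origin[title]           # miss: pull from the middle
        self.store[title] = asset
        if len(self.store) > self.capacity:  # over capacity: drop the least
            self.store.popitem(last=False)   # recently watched title
        return asset, "origin"

origin = {"movie_a": "bits_a", "movie_b": "bits_b", "movie_c": "bits_c"}
cache = EdgeCache(capacity=2, origin=origin)
print(cache.get("movie_a"))  # first request travels to the origin
print(cache.get("movie_a"))  # second request is served from the edge
```

The design choice the sketch illustrates is the transport-versus-storage trade: every edge hit is fiber transport you didn’t have to pay for, at the price of a little cheap storage per headend.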
Meanwhile, along the continuum, most operators built out a different on-demand pipeline for their broadband footprint. That way, their customers could stream video titles onto their other screens: PCs, laptops, tablets, connected TVs, phones.
Supporting duplicative paths is inefficient, particularly in centralized architectures. Especially the ingest.
It follows that the second thing that gets “unified” in “unified storage” is the work of ingesting both traditional and IP-based on-demand assets.
A third element being unified, in unified storage: Metadata. Establishing and manipulating it is faster and more comprehensive in IP than in “traditional” VOD. Why? Because video assets in the online world come with editors, whose job is to increase the chances that an asset will show up in a web search.
So: Unified storage is part architectural, part ingest, part metadata. In all cases, the momentum, tools, and spotlight are on the web-styled way of doing things. Be there or be … un-unified?