Once you’ve encountered (yet another) cause to become conversant in the language of software, and particularly “agile” development, soon enough you’ll bump into this companion term: “DevOps.”
“DevOps” is an industrial sniglet — a pairing of “Development,” as in product development, and “Operations,” as in keeping the whole operation up and running, at all times.
People who work in product development get rewarded when existing products are improved, and when new products get to market swiftly.
People who work in operations get rewarded when everything just keeps working.
No wonder the two camps clashed: Prior to “DevOps,” product development people viewed operations people as barriers to progress, and operations people viewed development people as pesky rogues, always armed with a swell new way to brick the network.
“DevOps” is part cultural spasm, part management renaissance. It’s a movement, usually growing out of an earlier decision to “go agile.” The point of it is to make sure the people who are building the machine actually use the machine — and to give the people who maintain the machine a say in how it’s designed. Continuously.
When people explain DevOps, they say “over the wall” a lot. A product spec gets tossed “over the wall” to operations, which vets it for deployability, then throws it back over the wall to its developers, to fix this or that.
DevOps removes the wall. It does so by building intangibles into product design: Is it deployable? Does it scale? Can operations do it? It’s all about empowering people to get things done, by removing what can be layers of processes — permissions, approvals, and “adminis-trivia.”
Companies like Comcast and Time Warner Cable have already begun shifting to a DevOps model, tearing apart and re-assembling leads in both categories to (literally) work together.
And because DevOps works in tandem with “agile” software development, it follows that in cable conversations, the RDK (Reference Design Kit) is usually somewhere nearby.
RDK, now a company and a thing, aims to make it faster and easier for cable providers to launch the kinds of IP-based, in-home hardware that, in turn, makes it faster and easier to launch new, cloud-based services. It’s what’s inside of Comcast’s X1 platform.
Ultimately, DevOps recognizes that the opposing forces of change and stability are both vital to a company’s success. Its reach is wide, and it’s probably headed your way. Best get limbered up.
This column originally appeared in the Platforms section of Multichannel News.
LOS ANGELES–This year’s Imagine Park program — a live TV “show within The Cable Show,” now in its fourth year, and designed to shine a light on the hot tickets in cable and broadband technology — served up plenty of sizzle, but a few items rose to the top of the list.
Starting with FanTV, a new entrant in hardware-based video streamers, and by far the most attractive and uniquely designed in the category. (Especially now that the original and very funky Boxee Box is officially “tele-vestigial.” Meaning no longer on the market. Sigh. A nod to its out-of-the-box box design!)
FanTV is hands-down gorgeous — swoopy and elegant, with a buttonless remote that fits in the palm of the hand like a smooth rock, and perches magnetically on top of the player like some kind of electronic cairn.
Its intent, market-tested with Cox last year and now scheduled to enter Time Warner Cable’s footprint, is to provide subscription and over-the-top video to broadband-only consumers. If you live in a Time Warner Cable market, run-don’t-walk to get one when it hits the (retail) market this summer.
Also a gift to the category of television: “Dolby Vision,” an effort by the stalwarts in sound to make high definition video brighter, for lack of a visual term.
The set-up: When we think of HD, we think of higher resolution — more pixels. Dolby’s position is that two other dimensions can be manipulated to enhance television, beyond additional pixels: Better pixels, and faster pixels. The “better” part is about improved color gamut (blacker blacks, greener greens) and higher dynamic range; the “faster” part is about higher frame rates. Watch for it to enter the market next year, as it gets licensed by CE and screen manufacturers.
My favorite Imagine Park session this year, even though it hadn’t happened yet at press time: A showcase of innovation coming out of the developmental labs inside Comcast, Liberty Global and Time Warner Cable.
For starters, there’s the fact that these “lab weeks” even exist. All involved MSOs sponsor the activities as a way to let their developers stretch their wings, design-wise, then “pitch” their ideas, internally, throughout the year.
It’s one part of a broader body of work, known as “DevOps,” which blends people from product development and operations. It’s happening as a way to get new services out more quickly, by removing the friction that traditionally hamstrings those two groups.
Here’s a partial list of what was scheduled to happen in the Lab Week session: A tablet mosaic that links related, web-sourced content to subscription video; cloud-based services on legacy boxes; and a way to take your home phone service with you, internationally, on your mobile.
One other bit of extraordinarily good news coming out of this year’s Cable Show: The NCTA’s annual compilation of tech papers will be available online. (Go here: www.nctatechnicalpapers.com) Not just this year’s batch — all of them. And they go back for decades. Hallelujah!
This column originally appeared in the Platforms section of Multichannel News.
Long ago, in March of 2005, this column took on a popular term in tech-talk, at the time: “The edge.” Which one? Where is it?
And here we are, almost a decade later, still talking edges. Except something changed: The edge picked up some serious semantic bling, especially in the Prefix Department.
It’s not just “the edge” anymore. It’s “the rich edge.” “The intelligent edge.”
As a word that routinely crisscrosses between everyday talk and shoptalk, the “edge” can befuddle. There’s the edge of the counter, and then there’s the edge of the network.
Back then, we polled engineers: Where’s the edge? Responses: “It’s where RF goes to IP, or vice versa.” “After the headend, before the eyeballs.” “At the output of the set-top box.” (Still my personal favorite: “It’s where the bits fall off.”)
Our conclusion, back then, was that “the edge” is in the eye of the beholder, because different work disciplines see edges differently.
And now, those edges are rich and smart. What happened?
First of all, this is “rich” as in “having or supplying a large amount of something that is wanted” more so than “sacks full of cash.” In a connectivity sense, edges are places where stuff gets handed off: Backbone traffic to regional fiber rings; fiber rings to nodes; nodes to homes and the connected stuff within them.
The “large amount of something” is where the intelligence comes in. It’s the addition of compute and storage resources — those building blocks of “cloud.”
The quest for rich, intelligent edges is the reason why traditional cable headends are becoming headend/data centers, with racks and racks of servers adjoining the traditional functions of signal demodulation, encryption, processing, re-modulation, and combining.
It’s most evident right now in video services. Remember when VOD began? Storage was distributed, per market. Titles were “pitched” (via satellite) to hundreds (thousands?) of recipient “catchers.”
Then “CDNs” (Content Delivery Networks) happened, with big “origin” servers in the middle, and video zipping to markets over terrestrial fiber.
“Rich edges” are morphing VOD yet again: Small, nimble storage, buttressing the big servers in the middle, and designed to both anticipate and locate the most popular content closest to viewers.
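For the code-curious, here’s a minimal sketch of the idea in Python: popularity-based caching at the edge. Every name and number in it is hypothetical; real CDN placement logic weighs far more signals than a simple view counter.

```python
# A minimal sketch of popularity-based edge caching, the idea behind
# "rich edge" VOD. All names here are hypothetical and for illustration;
# real CDN placement logic weighs far more than a view counter.
from collections import Counter

class EdgeCache:
    def __init__(self, capacity, origin):
        self.capacity = capacity  # how many titles this small edge node can hold
        self.origin = origin      # the big server "in the middle" holds everything
        self.titles = {}          # title -> asset, cached close to viewers
        self.views = Counter()    # the demand signal, per title

    def fetch(self, title):
        self.views[title] += 1
        if title in self.titles:          # popular stuff streams from close by
            return self.titles[title]
        asset = self.origin[title]        # long-tail fare comes from the middle
        self._maybe_cache(title, asset)
        return asset

    def _maybe_cache(self, title, asset):
        if len(self.titles) < self.capacity:
            self.titles[title] = asset
            return
        # Evict the least-watched cached title if the newcomer out-draws it.
        coldest = min(self.titles, key=lambda t: self.views[t])
        if self.views[title] > self.views[coldest]:
            del self.titles[coldest]
            self.titles[title] = asset

# Usage: the first request travels to the origin; repeat requests stay local.
edge = EdgeCache(capacity=1, origin={"blockbuster": "bits", "obscure_doc": "bits"})
edge.fetch("blockbuster")  # served from origin, then cached at the edge
edge.fetch("blockbuster")  # served from the edge, closest to the viewer
```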
VOD is but an early example of a “rich edge” transformation. It’s what happens when “connectivity” (broadband) gets gussied up with the building blocks of cloud, so that our “connected” things work better — faster, and more intuitively.
Nonetheless, our advice remains the same, when it comes to the edge: Always ask. Asking “which edge?” and now, “what’s rich about it?” does two things. It shows the speaker’s knowledge precincts, and it spares you envisioning a different edge than the one being discussed.
This column originally appeared in the Platforms section of Multichannel News.
This week, the people of broadcast television make their way to Las Vegas, for the annual gathering of the National Association of Broadcasters.
For broadcasters in particular, it’s a weird time to be in television. The word itself — television — is equal parts strongly nostalgic, and tele-vestigial. Say “television” to a millennial, you’re a relic. Say it to any of us who grew up with that one screen as the central viewing device, it’s home.
The identity crisis facing traditional television is evident even in the show’s tagline this year: “Where Content Comes to Life.”
We took a quick poll of our favorite go-to broadcast-side technologists over the last few weeks, to find out what’s on their shopping lists for this year’s show. Not surprisingly, 4K video, and its consumer-facing brand, UltraHD, will be the main event — but not all technologists are convinced it’s a go.
“I want to see if live TV production gear, like big production switchers, has made any progress — we’re building a big new production facility, but so far it’s only being outfitted for HD,” said one network-side technologist.
Refresher: 4K video, and its consumer-facing brand, UltraHD, is the next big thing coming from the consumer electronics side of the television ecosystem — but the rest of that ecosystem is still catching up. From the HDMI connectors into 4K TVs, to the physical media (Blu-ray is arguably still “not big enough” to hold 4K video), to the bandwidth requirements, to the cameras, and whatever else we’re missing, there’s work to be done.
But! The challenges facing the rollout of 4K are nearly identical to those facing HD, when it first hit the market. And if the NAB show floor is any indication, and to use a medical analogy — there are plenty of white blood cells flooding all the problem areas, seeking to make each juncture healthy and well.
And then there’s the other stuff that typically lines the floor of a convention for broadcast engineers.
Or not.
“Betcha I don’t see any transmitters or towers,” said another, who wondered when the “B” in “NAB” switches from “Broadcasters” to “Broadband.”
And, like everywhere else, “cloud” and the transition to “Internet Protocol everything,” from image capture to production to post-production to screen, will crowd the exhibit hall. “It will be interesting to see how many possible functions can be stuffed into the cloud, or say that they can,” noted a content-side technologist.
Added another: “Wait a minute: If a broadcast tower is high enough, does that count as being in the cloud?”
Ah, the existential engineers in our tele-vestigial worlds. What would we do without them?
This column originally appeared in the Platforms section of Multichannel News.
DENVER–A surefire way to fire up cable technologists used to involve smiling broadly while asking: “When will you need to widen the upstream path?”
For decades, the answer, usually harrumphed, was this: “Never!”
Why: It’s a pretty big hassle. Very plant-intensive, possibly to the point of having to revisit or replace gear at the tap level. (Taps are expressed in number of ports — 4-port, 8-port — and exist to connect fatter cables, like feeder cables, to the thinner coaxial cable that drops into homes. So there are tons of them.)
Later, queries about a wider upstream softened into variations of “not in my lifetime.” Why: The upstream is a very skinny portion of the total available capacity of a cable system. “Very skinny” meaning five percent or less, occupying a slender spectral spot from 5-42 MHz.
Then broadband happened. Right now, the growth of downstream (home-facing) broadband consumption still far outpaces the growth of upstream (network-facing) bandwidth usage. But! Think about how many things come with a built-in video camera. Your phone, for instance, or any of the webcams monitoring any of the things in your life.
Video is big. Sending it upstream, live, chews up bandwidth.
Think, too, about the fact that more Wi-Fi traffic is happening right now than mobile or wired, combined. Offloading some of that onto the wired network in the house is a plausible reality.
Which brings us to the latest round of responses to the age-old question of when the industry might consider a wider upstream. Last week, specifically, during a panel of technologists at Light Reading’s annual “Cable Next-Gen Technologies & Strategies” event. Answer, extrapolated from the guts of the panel and not expressed directly: 2018-ish.
“We’re all exploring it,” said Jorge Salinger, VP/Access Architectures for Comcast, to the point of an organized, weekly call amongst involved technologists at several MSOs.
Here’s where the 2018-ish prediction comes from: DOCSIS 3.1 includes language supporting a “mid-split,” which is tech talk for widening the upstream.
The silicon for DOCSIS 3.1-based gear is expected this year. The cable modems and gateways that use it will follow in 2015. Then interops, then trials — which makes 2016 plausible as “the golden year” for widespread DOCSIS 3.1 deployments.
After that, 3.1-based headend gear (known industrially as “CMTS,” for “Cable Modem Termination System”) catches up. Let’s say that happens in a big way in 2017.
After all of that, and should we continue to see gadgetry in our homes that streams video constantly, it will probably make sense to move the upper boundary of the upstream spectrum, from 42 MHz, to 65 MHz, or higher.
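For the skeptics, the arithmetic behind “very skinny” and the mid-split is simple enough to sketch. The 750 MHz plant size below is an assumption for illustration; actual plant capacity varies by system.

```python
# Back-of-the-envelope math on the upstream, assuming a 750 MHz plant.
# (An assumption for illustration; actual plant capacity varies by system.)
plant_mhz = 750
upstream_today = 42 - 5       # 37 MHz of upstream spectrum, from 5-42 MHz
upstream_midsplit = 65 - 5    # 60 MHz, if the boundary moves to 65 MHz

print(upstream_today / plant_mhz)          # ~0.05, the "five percent or less"
print(upstream_midsplit / upstream_today)  # ~1.6, or about 60% more upstream room
```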
That’s why we’re putting a 2018-ish stamp on it. (Heavy on the -ish.)
This column originally appeared in the Platforms section of Multichannel News.
“Unified storage.” Another example of a tech-side term stuffed with descriptive confidence. It’s storage, and it’s unified, silly! Nobody wants to be the dummy who doesn’t know what’s so unified about it. (Right?!)
So off we go. Starting with a reminder that we’re still in the middle of the gigantic transition to IP video. Service providers are scattered along a continuum of “now” and “next.” Anything expressed in past tense is still happening.
Quick refresher: IP video is that fertile catalyst to “cloud,” TV Everywhere, multi-platform, cross-platform, and however else we’re describing the transit of subscription television signals into homes, through a box that’s more cable modem than set-top. And from that broadband “gateway,” out to connected screens — tablets, laptops, phones.
In the old days of digital cable (meaning a few years ago), the only things the network really needed to store were the assets of video on demand (VOD). Recall, too, that those early offerings of cable VOD were mostly digital movies.
Shipping VOD content to cable systems traditionally involved a “pitcher,” to blast the assets up into geosynchronous orbit, and “catchers,” at recipient headends. Storage resources were vastly distributed, across an operator’s footprint.
Transport vs. Storage
The economics of Big Networks involve (ceaseless) evaluations of the cost of transport, vs. the cost of storage. Now, storage is cheap. (Think about how many Gigs you can stuff in your pocket right now.)
It follows that the first unification of storage is architectural: Centralize storage. Big “origin” servers in the “middle” of the network. Closer to consumers, and holding the most popular stuff, smaller “caching” servers. Everything linked up over fiber — from national backbones, to regional rings, to last mile.
Meanwhile, along the continuum, most operators built out a different on-demand pipeline for their broadband footprint. That way, their customers could stream video titles onto their other screens: PCs, laptops, tablets, connected TVs, phones.
Supporting duplicative paths is inefficient, particularly in centralized architectures. Especially the ingest.
It follows that the second thing that gets “unified” in “unified storage” is the work of ingesting both traditional and IP-based on-demand assets.
A third element being unified, in unified storage: Metadata. Establishing and manipulating it is faster and more comprehensive in IP than in “traditional” VOD. Why: Because video assets in the online world are resourced with editors. Their job is to increase the chances that an asset will show up in a web search.
So: Unified storage is part architectural, part ingest, part metadata. In all cases, the momentum, tools, and spotlight is on the web-styled way of doing things. Be there or be … un-unified?
In the halls of cable and broadband technologies, what people were talking about last week were the performance glitches during the first-ever live stream of the 86th Academy Awards.
A few weeks earlier, what people were talking about were the travel-is-so-glamorous accommodations in Sochi, for the Winter Olympic Games.
In other words, what people weren’t talking about were performance glitches during an Olympic Games treatment that put every event (every event) online.
This week’s translation examines the subtle and shifting nature of choke points, bottlenecks, and other things that now — in the age of cloud — can constipate the user experience of television.
Let’s start with Oscar, and his first experience as a live stream. Something went wrong. That it frustrated viewers is bad, of course. But why it happened is illustrative of the expanding nature of potential choke points, when it comes to moving a stream of video from where it’s happening, to your screen.
Until the Oscar hiccup, the perceptions associated with performance breakdowns on live streaming events were typically pinned on the physical infrastructure. Something bottlenecked, either in the last-mile plant, or north of the headend, or between regional rings and national fiber backbones. Bad ISP. Bad, bad, bad.
These days, we live and work in a digital world increasingly plumbed to optimize three things: Connectivity, compute, and storage.
The “connectivity” part came first, and thus bore the brunt of the early glitches.
The thing we now call “cloud” brings the two other dimensions — compute, and storage — into the video pipeline. Both are now core parts of video distribution, as we know it. It follows that both can become accidental choke points, especially if they aren’t built for “elasticity” — to expand and shrink as needed, based on demand.
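What “elasticity” looks like, in miniature: a rule that grows and shrinks compute with demand. The numbers and names below are invented for illustration; real autoscalers watch many signals (CPU, latency, error rates), not just viewer counts.

```python
# A minimal sketch of "elasticity": expand and shrink compute with demand.
# STREAMS_PER_SERVER and HEADROOM are hypothetical numbers, for illustration.
import math

STREAMS_PER_SERVER = 5_000   # what one packaging server can handle, say
HEADROOM = 1.25              # keep 25% spare, so a spike doesn't choke the stream

def servers_needed(concurrent_viewers: int) -> int:
    return max(1, math.ceil(concurrent_viewers * HEADROOM / STREAMS_PER_SERVER))

print(servers_needed(40_000))     # a quiet Tuesday: 10 servers
print(servers_needed(3_000_000))  # Oscars night: 750 servers
```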
Storage, in a cloud sense, is proximity-based: Popular stuff streams from the edge closest to consumers; long-tail fare sits elsewhere. (In the cloud, storage is priced by size, as well as “how soon would you need it.”)
“Compute,” as a dimension, handles things like input-output — or, in the case of the Oscars, how fast a stream could be spun from the source, when lots of people wanted it.
That the streaming treatments of the Winter Olympic Games weren’t plagued with connectivity issues is a nod to the ongoing modernization of video distribution systems. Sure, connectivity matters, all day long — but at the same time, the care and feeding of “compute” and “storage” matters as much, if not more.
This column originally appeared in the Platforms section of Multichannel News.
There are few times in life when leaks are a good thing. When faucets, hoses, noses, plumbing, roofs, or secrets leak, for instance, it’s cause for immediate corrective action.
The same is true in software — sometimes. There’s the “memory leak,” for instance. Here’s how it manifests in everyday tech talk: “Within a week, they found something like 100 memory leaks in our browser.”
Memory leaks in software generally get pinned on bad code-writing. Like not emptying the trash before leaving on a trip, a memory leak happens when the memory that code claims while processing isn’t cleared out when that particular module completes what it was written to do.
The result: The software equivalent of something smelling bad. Technically, those memory resources appear unavailable for the next batch of code needing them. Symptom? Software that slows down or glitches.
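In code, the bad smell looks something like this. A minimal sketch, in Python, of a module that stashes items while it works and never empties the trash; in C, the same bug would be a malloc() without a matching free().

```python
# A minimal sketch of a memory leak: items pile up during processing,
# and nothing clears them out when the module finishes its work.
_scratch = []   # module-level trash can that nothing ever empties

def process(batch):
    for item in batch:
        _scratch.append(item)   # the leak: grows forever, run after run
        do_work(item)
    # Bug: the run is done, but _scratch is never cleared. The fix is one
    # line at the end of each run: _scratch.clear()

def do_work(item):
    pass  # stand-in for the real processing
```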
Recently, however, we happened upon evidence of leakage in software imbued with a whiff of goodness: The “intentionally leaky abstraction.”
The term arose in a discussion about the Reference Design Kit (RDK) — an effort spearheaded by Comcast, and now under its own roof, with flanking support from Time Warner Cable, Liberty Global, Kabel Deutschland, and other unnamed global cable providers.
RDK aims to make the primary TV screen more accessible to innovation, and the “second screens” in our lives accept cable video applications more easily. It’s a list of open and shared-source software components (like Blink, Qt, GStreamer, and HTML5, among others) that can be used, in tight combination, to get to market more quickly with cable-specific hardware.
Let us now break down the “intentionally leaky abstraction.” Abstractions, in general, exist to occlude the underlying resource details. When you save a file to your hard drive, you hit “save.” The step-by-step minutiae of how that happens are abstracted from you (thank the heavens and stars).
The “leaky” part of the “intentionally leaky” abstraction is kind of a stretch, because nothing actually leaks. Rather, “leaky” implies that the layers of the stack (most software discussions happen in the context of stacks) aren’t sealed off. Coders have visibility “all the way down to the metal” — the silicon chip itself.
This fits, albeit awkwardly, with the definition of open that goes like this: Closed things make you wait in line. Someone (Apple, Google, etc.) must change the code and re-release it before you can proceed. Open is about being able to “see” into the stack, to do things yourself. Transparently. Self-serve. With tools that enable the drill-down.
That way, entire communities can continue coding, to refine and advance whatever the effort. It’s an “intentionally leaky abstraction” in that there are ways to see and manipulate the code in each layer.
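A concrete, everyday example: Python’s own file objects are an intentionally leaky abstraction. The everyday work is abstracted away, but the raw operating-system file descriptor is deliberately exposed, for anyone who needs to reach a layer down.

```python
# An intentionally leaky abstraction, in miniature: Python file objects
# hide the step-by-step minutiae of I/O, but deliberately expose the
# layer underneath via fileno().
with open("example.txt", "w") as f:
    f.write("hello")   # the abstraction: no thinking about descriptors
    fd = f.fileno()    # the intentional leak: the raw OS file descriptor,
                       # for anyone who needs to go "down to the metal"
    print(fd)
```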
So. May your memory never leak, and your abstractions leak in ways that help you make better products!
This column originally appeared in the Platforms section of Multichannel News.
This week’s translation delves into yet another vat of software-side activity intersecting with cable and broadband: Virtualization.
It and its sidekick, the “virtual machine,” are not new concepts. Back in the 1960s, IBM Corp. “virtualized” the resources within its mainframe computers, so that different applications could share them.
Everything we call “cloud” today began as “virtualization” — defined as creating simulated versions of things, from compute to connectivity to storage.
These days, it’s hard to identify things that aren’t being virtualized. Including the network itself. Not the physical wires and amplifiers, of course. But slowly and surely, functions that used to be in one physical place get re-done in code, and moved “into the cloud.” They become “virtual machines.”
The list of cable-plant functions slated to get “virtualized” includes real-time encoding, trick-play (fast-forward, rewind) television, even headend video controllers.
Say you have a proprietary video controller (which advanced-class readers know as “DNCS” and “DAC”), one per headend. Say one particular market comprises 40 headends. Making changes — to add features, or fix bugs — means hitting those racks of gear one by one, 40 times over. Virtualization enables an instantaneous upgrade of all of them.
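In miniature, the difference looks like this. A toy sketch, with hypothetical class names: the physical fleet gets touched 40 times; the virtual fleet follows a single image change.

```python
# A toy sketch of why virtualization eases upgrades. Names are hypothetical.
class PhysicalController:
    def __init__(self):
        self.version = "1.0"   # each physical box carries its own software

class VirtualController:
    image_version = "1.0"      # every instance runs from one shared image

physical_fleet = [PhysicalController() for _ in range(40)]
virtual_fleet = [VirtualController() for _ in range(40)]

# Upgrading the physical fleet: hit the racks one by one, 40 times over.
for box in physical_fleet:
    box.version = "2.0"

# Upgrading the virtual fleet: change the image once; all 40 follow instantly.
VirtualController.image_version = "2.0"
print(all(v.image_version == "2.0" for v in virtual_fleet))  # True
```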
This type of “infrastructure virtualization” is happening for two reasons: Pervasive broadband connectivity, and entities like Amazon, which rent out general compute resources as needed.
In cable, the first real example of virtualization and cloud is the digital video recorder. If you’re in a Comcast market, and have experimented with its recent shift to X2 navigation, you’ve already experienced the shift of your recorded stuff from the box under the TV, to the network itself. Is it still your stuff? Yes. Is it sitting as a copy on the box in your living room? No. Does it work the same? Yes.
That’s a quickie on “virtualization,” as the training wheels to “cloud.”
This column originally appeared in the Platforms section of Multichannel News.
After thumbing through every 2014 issue of this magazine, we saw five tech trends rise to the top:
1. We’re now squarely in the middle of the transition to “all-IP” (Internet Protocol), as the umbrella covering cloud-delivered services, bandwidth (wired and wireless), connected devices, TV everywhere, and all else in the technological vogue. It began with the cable modem, in the late ‘90s. Nobody really knows when the “all” part of “all-IP” will happen — but “not in my lifetime” is a seldom-heard response.
2. This year, the term “OTT” — Over-the-Top — became less a categorical description of Netflix, Amazon, and the rest of the new ilk of video competition, and more a common technological ingredient, used by all. In short, with every step toward cloud, operators are “over-the-topping themselves.”
3. The recognition that “the competition” now extends beyond satellite and telco-delivered services, to the OTT camp, brought with it a new “tech culture” reality. Vendors, operators and programmers alike spent a sizeable chunk of 2014 retooling to work at “web speed,” which means adopting agile software development and “DevOps” strategies.
4. RDK, the Reference Design Kit, rose in strategic importance this year, again, and big time. Evidence: In October, Liberty Global CEO Mike Fries off-handedly called RDK “a DOCSIS moment,” referencing the cable modem specification that changed the economics of what became the broadband industry.
5. “Speed vs. capacity” will sustain as one of the more important tech subtleties. It’s the “gig” that can gum things up: Gigahertz is a unit of capacity (spectrum), Gigabyte a unit of storage, and Gigabits per second a measure of speed. But! As important is throughput, or, the amount of stuff we’re moving to and from our various screens. Knowing the distinctions matters; see the quick sketch below.
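Here’s the quick sketch promised in item 5, with a made-up file size for illustration: a one Gigabit-per-second connection does not move a Gigabyte per second, because a byte is eight bits.

```python
# Why the "gig" distinctions matter: Gigabits measure speed (per second),
# Gigabytes measure storage, and a byte is eight bits.
speed_gbps = 1.0        # a "gig" connection: one Gigabit per second
file_gigabytes = 4.0    # a hypothetical 4 GB file to move

seconds = file_gigabytes * 8 / speed_gbps
print(seconds)  # 32.0, so about half a minute, not four seconds
```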
That’s the short list! Merry merry, and may your 2015 technologies be kind and useful.
This column originally appeared in the Platforms section of Multichannel News.