There are few times in life when leaks are a good thing. When faucets, hoses, noses, plumbing, roofs, or secrets leak, for instance, it’s cause for immediate corrective action.
The same is true in software — sometimes. There’s the “memory leak,” for instance. Here’s how it manifests in everyday tech talk: “Within a week, they found something like 100 memory leaks in our browser.”
Memory leaks in software generally get pinned on bad code-writing. Like not emptying the trash before leaving on a trip, memory leaks happen when the working memory a piece of code claims during processing isn’t released once that particular module completes what it was written to do.
The result: the software equivalent of something smelling bad. Technically, those memory resources remain unavailable to the next batch of code that needs them. Symptom? Software that slows down or glitches.
Recently, however, we happened upon evidence of leakage in software imbued with a whiff of goodness: The “intentionally leaky abstraction.”
The term arose in a discussion about the Reference Design Kit (RDK) — an effort spearheaded by Comcast, and now under its own roof, with flanking support from Time Warner Cable, Liberty Global, Kabel Deutschland, and other unnamed global cable providers.
RDK aims to make the primary TV screen more accessible to innovation, and to help the “second screens” in our lives accept cable video applications more easily. It’s a list of open and shared-source software components (Blink, Qt, GStreamer, and HTML5, among others) that can be used, in tight combination, to get cable-specific hardware to market more quickly.
Let us now break down the “intentionally leaky abstraction.” Abstractions, in general, exist to occlude the underlying resource details. When you save a file to your hard drive, you hit “save.” The step-by-step minutiae of how that happens are abstracted from you (thank the heavens and stars).
The “leaky” part of the “intentionally leaky” abstraction is kind of a stretch, because nothing actually leaks. Rather, “leaky” implies that the layers of the stack (most software discussions happen in the context of stacks) aren’t sealed off. Coders have visibility “all the way down to the metal” — the silicon chip itself.
This fits, albeit awkwardly, with the definition of open that goes like this: Closed things make you wait in line; someone (Apple, Google, etc.) must change the code and re-release it before you can proceed. Open is about being able to “see” into the stack, to do things yourself. Transparently. Self-serve. With tools that enable the drill-down.
That way, entire communities can continue coding, to refine and advance whatever the effort at hand. It’s an “intentionally leaky abstraction” in that there are ways to see and manipulate the code in each layer.
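The idea above can be sketched in a few lines of Python (hypothetical names throughout): an abstraction that hides the step-by-step details of “save,” but deliberately leaves the layer underneath visible for anyone who wants to drill down.

```python
# Hypothetical sketch of an "intentionally leaky" abstraction: the caller
# just says save(); encoding, paths, and flushing are occluded -- but the
# underlying location is deliberately exposed for drill-down.
import os
import tempfile

def save(name, text):
    """The 'save button.' The how is hidden from the caller."""
    path = os.path.join(tempfile.gettempdir(), name)
    with open(path, "w", encoding="utf-8") as f:  # open, encode, write,
        f.write(text)                             # flush, close -- all hidden
    # The intentional "leak": hand back the real location on disk, so a
    # coder can inspect the layer underneath instead of waiting in line
    # for someone else to open it up.
    return path

p = save("column.txt", "intentionally leaky")
print(open(p, encoding="utf-8").read())
```

A fully sealed abstraction would return nothing about where or how the bytes landed; exposing that seam on purpose is what makes the leak a feature, not a bug.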
So. May your memory never leak, and your abstractions leak in ways that help you make better products!
This column originally appeared in the Platforms section of Multichannel News.
It wasn’t that long ago – two years, maybe three – that industries like cable, which operate giant, two-way networks, dismissed the term “open source” as too risky. Guaranteed to introduce malware and other kinds of security hazards.
It just wasn’t a wise idea, the thinking went, to usher the techniques of The Big Internet into professionally managed networks.
That’s all changed. It changed quickly and pervasively, such that even those of us who make it a habit to track the technologies of this industry find ourselves thinking, “I saw the whole thing. What happened?”
Open source. Open stack. Open flow. Open this, open that. Open is good; proprietary is bad. That’s the trajectory.
Before we start breaking this down: Our apologies to those readers who’ve navigated the software jargon jungle long before the rest of us. You know who you are. Hunch: You’re a smaller percentage of the readership of this magazine than the rest of us.
What happened? It’s all part of the unstoppable flow of technologies, networks, services and people toward “all IP,” where the “IP” stands for “Internet Protocol.” As it is, the industry’s broadband networks are becoming “virtualized” – broken apart into individual chunks, or modules, of activity. At the same time, competition from all sides forces the need to do everything faster.
That’s where the open source community comes in. You need a module to get your network to do something? What if that something already exists, in the open source community – why reinvent that wheel?
Also: Open components are generally more transparent, which matters a lot in times of trouble. In today’s (proprietary) world, when something conks out, step one is to call the supplier. Here’s an actual example, from a recent batch of notes:
“With every (supplier) release, you get one large executable file. And if something doesn’t work, you don’t know what it is that doesn’t work. You test it. You go back to the vendor. They take a look at it – they don’t know where the problem is. They send you another large executable file. You go through that cycle four, five times.”
By contrast, with code that’s based on open source techniques, you’re able to see into the code, to fix problems on the fly. That load balancer that’s giving you fits? Move the logic out into an application layer for inspection. Find the bug, fix it, six hours.
Perhaps not surprisingly, the “open” world is thick with activity, participants, “solutions,” and jargon. We’ll tease out the parts that matter, and bring them to you here.
This column originally appeared in the Platforms section of Multichannel News.