DENVER–Capacity. Always a hot ticket at tech fests, like the Society of Cable Telecommunications Engineers’ annual Cable-Tec Expo, during a week of Colorado gorgeousness. (The last time Expo graced Denver, we were Blizzard City.)
Here’s a weave of notable trends about capacity, gleaned from four jam-packed days of impressively nerdy tech-talk.
The next brink of capacity expansion maneuvers is at hand, and like the last time, engineers characterize their options as “tools in the toolbox.” Usually there are three. Last time, they were: Switched digital video; building out to 1 GHz, spectrally; and analog spectrum reclamation, to make room for all-digital.
Three is the number this time around, too. The front-runner: DOCSIS 3.1, the next grand slam in broadband capacity expansions, which doubles capacity in the forward/downstream and reverse/upstream signal directions. According to panelists at an all-day DOCSIS 3.1 Symposium preceding Expo, we’ll start seeing those modems and gateways sometime next year.
Second, and harder to swallow because it involves labor costs, is any of the many flavors of “fiber-deeper.” While it’s never fun to be the guy digging through the petunias to attach a new wire to the house, sometimes it just makes sense: New builds. After a catastrophic event.
It is in this category that you hear talk of “remote PHY,” “RFoG,” and “distributed CCAP,” among others.
Option three goes higher again, spectrally — to 1.2 GHz, and even 1.7 GHz; the DOCSIS 3.1 spec mentions both. These days, some operators have built to 1 GHz; most sit at either 750 MHz or 860 MHz.
Going to 1.2 GHz tastes delicious, at first. Depending on the starting point — which involves how amplifiers are spaced on the wires — a move to 1.2 GHz bumps overall downstream capacity by as much as 60 percent. (What!)
Let’s do the math. Say the current spectral top is 750 MHz. If the new goal is 1.2 GHz, which is the same as 1200 MHz, the difference is 450 MHz. And 450 is 60 percent of 750. There’s the 60 percent.
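If you’d rather let Python check that arithmetic, here’s a minimal sketch. (Treating capacity as scaling linearly with spectrum is a simplification; real gains depend on modulation and amplifier spacing, as noted above.)

```python
# Back-of-envelope: percentage capacity gain from raising the spectral ceiling.
# Assumes capacity scales roughly with usable spectrum -- a simplification;
# real gains depend on modulation, guard bands, and amplifier spacing.
def spectrum_gain_pct(current_top_mhz: float, new_top_mhz: float) -> float:
    """Added spectrum, expressed as a percentage of the current top."""
    return (new_top_mhz - current_top_mhz) / current_top_mhz * 100

print(spectrum_gain_pct(750, 1200))   # 60.0 -- there's the 60 percent
print(spectrum_gain_pct(860, 1200))   # ~39.5 -- a smaller bump from 860 MHz plant
```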
Hang on! Turns out a power predicament accompanies a move to 1.2 GHz: a doubling of the power required to push amplifiers that high.
This all came to light at the tail end of an Arris-hosted breakfast on the last day of Expo, when a man in the audience, during the closing Q&A, asked about it.
It’s why we should all be glad for another Big Thing that happened during SCTE Expo: An effort, called Energy 2020, to reduce power consumption “per unit” (for every component in a system, from “cloud to ground”) by 20%, by 2020. It’s an enormously ambitious goal, especially in the face of multiple “power hog” examples, like powering 1.2 GHz plant.
That’s the trajectory of capacity, if the trend lines of the SCTE Expo are true. Which they usually are.
This column originally appeared in the Platforms section of Multichannel News.
AMSTERDAM–Nothing like back-to-back trade shows to kick in the jargon engines! First was IBC, in Amsterdam last week; SCTE Cable-Tec Expo hits Denver this week.
Three terms popped up with amusing regularity at IBC: “Workflow,” “cloud,” and “virtualization.” Translations follow.
Examples, from piles of notes: Workflows can be 4K, file-based, and complex. Workflows in production and distribution are changing because the video content lifecycle is changing, one IBC session aspired to explain; “build a future-proof media production workflow,” another hawked.
Translation: A “workflow” is telco- and IT-speak for a business policy that needs to be teased out (with an API!) of its legacy (old fart) bindings, then recombined, as a spew of data recognizable by other spews of data (sister API!), to carry out its intent. Turn it on, turn it off. Encrypt it, decrypt it. Code it, transcode it, decode it.
“Cloud,” as it relates to the non-atmospheric, needs an immediate and explicitly silent sabbatical of an indeterminate length. We will happily contribute to that here.
Which brings us to “virtualization.” Always just like that, wrapped in quote marks on the page, and spoken with accompanying air squiggles.
Here’s what’s going on with “virtualization.” Everything in our digital lives that was purpose-built is at a brink. Depending on the point of view, the camera that is just a camera, nothing else, or the phone that is just a phone, nothing else, might be on the endangered species list. Why? Because it becomes a feature, and not just in your phone or tablet. It gets virtualized.
But! If your digital life is like mine, your phone’s camera is already better than your regular camera (which you think might be in the garage somewhere), and your phone’s phone is a pretty crappy experience.
So, in one sense, “virtualization” unleashes a potential software renaissance for the core workings of our digital stuff, to trick them out with a continuously improving webbing of software-created accouterment.
In another sense, “virtualization” casts some of the stuff in our gadget gardens into the digital doldrums. A digital doldrum device is anything you still own, but don’t use because you can’t find the charger, Bluetooth-to-USB dongle, or other mission-critical thingie.
Ultimately, the answer depends on the degree of usefulness of the potential accouterments of the renaissance.
Either way, it’s coming. Because to virtualize is to gear up to work at “web speed,” like all companies “born on broadband” (name any over-the-top provider of anything) already do.
This column originally appeared in the Platforms section of Multichannel News.
Over Labor Day weekend, an email exchange unfolded with a former cable guy, Dave Archer, who now heads Nevada’s Center for Entrepreneurship and Technology, in Reno.
The gist: He’d been contacted by a reporter, wondering if Reno was less attractive to high-tech companies because it doesn’t have Gigabit broadband services.
“At some point in the conversation, I told him that wanting fiber to your home was like wanting a 747 — to go to the grocery store,” he wrote. He added: “Then I realized I’d used that same analogy — in 1980.”
He asked for some links to prior columns on the topic, so I sent him five or so, plus some basics on “how to do the math” of estimating household bandwidth usage. It seems useful to pass along.
Know going in that there’s a big caveat in any discussion involving the relative “weight,” when shipping video over broadband: Compression engines. They keep getting better, which makes the math change. Regularly.
Let’s say, for purposes of this discussion, that HD video compressed with MPEG-4 weighs about 3 Megabits per second (Mbps), and that the same video compressed with MPEG-2 weighs 5 Mbps. (Expect violent disagreement on these numbers, should you choose to use them. Consider them a starting point. You’ll still win!)
Tablets, smart phones, laptops, PCs, and their ilk use MPEG-4 compression. It’s newer than MPEG-2. There’s another one coming, “HEVC,” for High Efficiency Video Coding. It’s the capacity antidote to 4K / UltraHD video.
TVs connected via a set-top box use MPEG-2 compression, as well as a completely different distribution path — but let’s throw them into the mix anyway. Because we still won’t get anywhere near a Gig!
Picture a big, broadband-slurping house. In it, five HDTVs, all streaming live video, via set-top boxes. That’s 25 Mbps.
Add five laptops, also streaming HD video, for another 15 Mbps.
Lots of people in this house! Add 10 tablets, all streaming video, for 30 Mbps.
What the heck. Let’s pile on more laptops. Ten more, all streaming video. Add another 30 Mbps.
We’re up to 100 Mbps. A Gigabit is 1000 Mbps. That’s an order of magnitude difference, literally, by this math.
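If you’d rather let Python do the tallying, here’s the same math as a sketch, using the hedged per-stream weights from above (3 Mbps for MPEG-4, 5 Mbps for MPEG-2):

```python
# The big-house tally, using the starting-point weights from above:
# set-top HD (MPEG-2) ~5 Mbps per stream; laptop/tablet HD (MPEG-4) ~3 Mbps.
MPEG2_HD_MBPS = 5
MPEG4_HD_MBPS = 3

streams = [
    ("HDTVs via set-top",  5, MPEG2_HD_MBPS),   # 25 Mbps
    ("laptops, wave one",  5, MPEG4_HD_MBPS),   # 15 Mbps
    ("tablets",           10, MPEG4_HD_MBPS),   # 30 Mbps
    ("laptops, wave two", 10, MPEG4_HD_MBPS),   # 30 Mbps
]

total_mbps = sum(count * rate for _name, count, rate in streams)
print(f"Total: {total_mbps} Mbps, versus a 1,000 Mbps Gig")   # Total: 100 Mbps ...
```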
Does Reno need a Gig to be sexy enough for high tech? Probably, but not for any reasons of logic. Perception is reality, and the reason the reporter called in the first place is proof of that.
But still: Wanting a Gigabit per second is like wanting a jet ski. For the kiddy pool.
This column originally appeared in the Platforms section of Multichannel News.
PHILADELPHIA–It was a full-on Wi-Fi binge at the Philly Tech It Out program here on 8/21, with one common refrain: When it comes to Wi-Fi, we’re still in the very, very early stages.
“We know it’s new and nifty, and know it adds value, but where it’s going to go is anybody’s bet,” said morning keynoter Ken Falkenstein, VP/Wireless Technology for Comcast. He added, for the benefit of the appreciable student presence: “You will have a marvelous career trying to get rid of the wires.”
Other highlights of the “Wi-Fi Everywhere” day, put on by the Philadelphia chapter of Women In Cable & Telecommunications:
Greyhound’s decision, on its 100th anniversary, to put Wi-Fi spigots throughout its short-haul rides reversed the fortunes of what had been the company’s smallest earner — and it can thank the millennial generation for that. “They gave us something we think we should have,” said Blaire Ballin, a senior at Ramapo College and Comcast summer intern.
Speaking of millennials: They’re a demanding bunch. Earlier this summer, Ballin accidentally overran her data plan. Yes, she could’ve paid for more. But then again: “I have a hard time understanding that I have to pay for anything. Luxuries should just be there.”
(Just to bring your eyebrows back down: This same young woman also led a project that enabled a community of Guatemalan women to sell their woven goods over Wi-Fi.)
Sexy Wi-Fi numbers: Comcast expects to light up 8 million Wi-Fi “homespots” by year-end, calling the decision to install boxes that combine cable modem and Wi-Fi radio “the hockey stick moment.”
Time Warner Cable’s Wi-Fi footprint supports 17 million sessions per month; about a fifth of them come in from roaming partners, like Boingo. (Last summer, Time Warner was the first U.S. operator to partner with Boingo on Wi-Fi roaming — industrially known as Hotspot 2.0, with a consumer brand of Passpoint.)
The city’s regional rail line supports about 270,000 Wi-Fi sessions per month, with a load of 2.5 Terabytes of data transfer, said Bill Zebrowski, Senior Director of Information Technology for SEPTA, who quipped: “That’s a lot of Walking Dead.”
At the 2014 World Cup, in Brazil, 30% of the people sitting in the 241,033-seat Maracanã Stadium got a connectivity fix over Wi-Fi, moving 5.6 terabytes of data over 217 access points, noted executives from Ruckus Wireless.
Crazy stuff that’s coming: Wi-Fi that recharges your batteries. (What!?) Well, sort of. It’s called “wireless backscatter,” and is in the academic stages now, as a way to make the sensors of the Internet of Things battery-less.
In closing: Focusing on one tech subject for an entire day takes guts! It worked. Kudos, WICT Philadelphia, for an outstanding event.
This column originally appeared in the Platforms section of Multichannel News.
If there’s one thing that stands out as a technology darling of Summer 2014, it’s the bombardment of gadgetry designed to keep the stuff of the digital garden charged and ready.
Two such things showed up on the doorstep last week as evidence, amongst tons of other affordable (meaning sub-$100) options.
One: The Mophie “Juice Pack,” which clamps onto your phone, acting as both protective case and power source. Charge the bottom part of the two-piece case, slide it onto the phone. When the phone gets low on juice, push the button on the back of the case, to pop it into charging mode. Voila! Suddenly the iPhone 5 works all day and into the evening.
The second: A fold-up solar panel, made by Anker, and sent over for evaluation by the Society of Cable Telecommunications Engineers, presumably to point out what can happen as the industry continues to focus its attention on sustainable energy.
How it works: Unfold it. Find a wide, sunny place (it’s a lengthy bugger, unfolding to about yardstick length.) Plug your gadget into one of two USB ports; watch it charge.
Obviously, a sunny day matters with this one, and the documentation contains all kinds of near apologies for the weather. (Sorry, Seattle!)
The Anker panel folds up to about the size of a large Kindle; it doesn’t hold a charge, just dispenses energy to gadgets as available by the sun.
Advancements in battery life aren’t new — and the volume of R&D around the category continues to run at a sprint pace. That’s because of all our stuff, of course, which needs and drains energy. More drain if you’re using your phone as a wireless access point, or if you forget to turn off the Bluetooth transmitter.
(Or, in my case, if you plugged your stuff into an unprotected outlet in another country, promptly frying the charger and elbowing the battery into fast drain.)
My strong preference, between the case/charger and the solar panel, is the panel. It’s sunny 333 days per year in Colorado, and charging from the sun seems to go much faster than charging from the wall.
Of course, always an option is to plug the charging case into the solar panel, thus using the sun to fill up the battery.
In mulling the “power summer” that is 2014, one thing seems pretty clear: The industry could assuage the embarrassingly huge and widely observed electricity draw of things like set-top boxes by including some form of solar alternative. Most people want to make a difference, and doing the “right thing” by plugging into the sun seems a pretty easy way to deliver a “feel good” experience.
On the other hand, there’s only a few weeks left of summer. Unplugging is also an option.
This column originally appeared in the Platforms section of Multichannel News.
UPDATE: Less than a week after this ran in Multichannel, both devices stopped working. Well, technically, my iPhone 5 stopped liking them. “Accessory not supported.” Charging activity instantly stopped. Boo. I subsequently got a “Boostcase,” which so far keeps on working….
Last week, while wandering through yet another “cloud” conversation, flagging unfamiliar terms to tackle, one came up over, and over, and over.
“Instance.” Another everyday word that takes on a completely different meaning, when speaking with Software People. (Not unlike “edge,” to Distribution Network People.)
Here’s an example, from that set of notes: “What that means in terms of scaling is that you can deploy instances much more on demand. So we were able to get instances scaled and deployed really fast, within a few seconds, and, as a result, all that new content, too.”
If you were to look up what “instance” means in software terms, you’d immediately bump into an “object.” Not an object like your keys, or anything you see near you right now, because the bitch of software is that it’s all pretty much invisible, unless you can see in code.
An “instance,” in this instance, is a glob of code, typically a software application, that no longer runs on its own special piece of hardware. Special hardware, by the way, that can’t necessarily see or talk to other apps on other special pieces of hardware, all of them mission-critical to whatever the purpose is.
In this case, the purpose is the sending of video — linear, on-demand, over the top, under the bottom, whatever — to the screen that wants to display it. And the “instance,” or “object,” is software speak for making that app run on general purpose servers, instead of vendor-specific gear.
The verb of the instance is “to clone.” Let’s say the app’s purpose is ingesting and storing content. Instead of the “old way” of ingesting — either pulling it down from a satellite, or off of a backbone fiber, then pouring it into a purpose-built, probably proprietary storage server — the “cloud” way is to treat assets as “instances,” which can be cloned, nearly instantaneously.
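To make the invisible a little more visible, here’s a toy sketch in Python. It is not any vendor’s actual API, just the instance-and-clone idea in miniature; the class and asset names are made up.

```python
# Toy illustration (not any vendor's actual API): an "instance" as a clone-able
# unit of software, spun up on generic servers instead of purpose-built gear.
import copy

class IngestApp:
    """A hypothetical ingest-and-store application."""
    def __init__(self, source: str):
        self.source = source            # e.g., a satellite feed or backbone fiber
        self.assets: list[str] = []     # what's been ingested so far

    def ingest(self, asset: str) -> None:
        self.assets.append(asset)

primary = IngestApp(source="backbone-fiber-1")
primary.ingest("episode-042.ts")

# Demand spikes? Clone the instance -- near-instantaneous, no requisition form,
# no four-week wait for new storage hardware.
surge_clone = copy.deepcopy(primary)
print(surge_clone.assets)   # ['episode-042.ts'] -- same state, new instance
```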
Benefit: Finding and using storage space, quickly and efficiently. One of the good things about cloud — video or otherwise — is that it upends the (prior) need to submit a requisition for more storage, then wait four weeks, then order it, install it, and activate it.
That way is particularly snarly for those unexpected “surge moments.” In the world of IP, the peak metric item isn’t the Super Bowl. It’s the FIFA World Cup. When sudden millions of people want something, in IP, it’s critical to be able to spin up compute, storage and connectivity resources, really fast. For instance.
This column originally appeared in the Platforms section of Multichannel News.
Say you’re mingling in a room full of people, enjoying a tasty beverage. It’s a polite room of people who listen, responding during pauses. (So you’re in Canada!)
Out of nowhere, a mass of large, loud people enters the room, shouting instructions to each other. It’s like they’re oblivious to anyone who isn’t them.
In wireless protocols, the Canadians are WiFi. The Large Louds are LTE.
Here’s what happens next: The Canadians still want to converse. Their only option? Talk louder. The volume in the room goes up, and up, and up. The loud people keep piling in the door, with no signs of leaving. Suddenly, it’s not such a good time anymore.
This is one way to think about a red-hot topic touching WiFi people, known as LTE-U. The “LTE” stands for Long Term Evolution, a term mobile carriers use for fast, wireless broadband. The “U” stands for “unlicensed.”
Consider: About 200 MHz of spectrum exists for WiFi transmissions, including the extra 100 MHz the FCC granted in March, in the 5 GHz band. Right now, that spectral slice is carrying 50 to 60 percent of the Internet’s traffic.
Mobile carriers, by contrast, maneuver their traffic over some 600 MHz of spectrum — licensed spectrum, meaning they paid for it. (Dearly.) Some two to three percent of the Internet’s traffic moves within it.
So, right off the bat, WiFi is moving as much as 30x the load, in one-third the space.
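For the calculator-averse, here’s that ratio check in Python. (The traffic-share percentages above are estimates, so the multiples move with them.)

```python
# Rough ratio check, using the estimates above: WiFi carries ~60% of the
# Internet's traffic in ~200 MHz; licensed mobile carries ~2% in ~600 MHz.
wifi_pct, wifi_mhz = 60, 200
lte_pct, lte_mhz = 2, 600

print(wifi_pct / lte_pct)   # 30.0 -- "30x the load" (17x at 50% vs. 3%)
print(wifi_mhz / lte_mhz)   # 0.333... -- "one-third the space"
```

Which brings us to how WiFi works, and the fact that just because its spectral zone is unlicensed, doesn’t mean it’s unregulated.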
WiFi is built for spectrum sharing. It waits to talk, and it adjusts its transmit power as part of a design goal that purposefully wants to be a good neighbor, all the time — partly because of regulations that govern things like transmit power and sharing.
LTE is different. For starters, it uses “tunneling protocols.” That means that when a device connects, a secret tunnel is instantly established between it, and the carrier’s LTE network. Each data packet is both encrypted and encapsulated; the only visible parts are the packet’s source (who am I?) and destination (where am I going?)
Meanwhile, the LTE “control plane” — the servers and software that handle signaling and routing — is ceaselessly talking, back and forth, making sure everything’s doing what it’s supposed to be doing.
Here’s the concern: That LTE traffic will deliberately dump into the unlicensed territories, offloading giant blobs of traffic that can’t see or hear what’s already there. Such as anything moving over WiFi.
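To see the shape of the worry, here’s a deliberately crude sketch in Python. It’s a toy, not a model of the actual LTE-U spec, and the 70 percent channel occupancy is an assumption for illustration only.

```python
# Toy contention model: the "polite" station (WiFi-ish) defers whenever the
# channel sounds busy; the "loud" one (LTE-U-ish) transmits regardless.
# Not a real MAC simulation -- just the shape of the problem.
import random

random.seed(1)
LOUD_OCCUPANCY = 0.7            # assumption: loud talker grabs 70% of slots

polite_sent = loud_sent = 0
for _slot in range(1000):
    if random.random() < LOUD_OCCUPANCY:
        loud_sent += 1          # loud transmits without listening first
    else:
        polite_sent += 1        # polite only talks when it hears quiet

print(f"loud: {loud_sent} slots, polite: {polite_sent} slots")
# loud: ~700, polite: ~300 -- the polite crowd gets the leftovers
```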
Is this a real problem? Not yet. Could it be? Definitely. (O, Canada! We stand on guard for thee.)
This column originally appeared in the Platforms section of Multichannel News.
DENVER–Nothing like a fresh batch of data about broadband usage, topped off with the start of the FIFA World Cup Games — always a streaming video gauntlet — to check in on the Hype Central category that is Gigabit services.
The fresh data comes from Cisco Systems’ annual Visual Networking Index (VNI), released last week, which slices trends in broadband every which way — and serves as a perennial reminder to learn the nomenclature of big numbers: Petabyte, Yottabyte, Exabyte.
(Refresher: A Gigabyte (GB) is a thousand Megabytes (MB); a Terabyte (TB) is a thousand Gigabytes; a Petabyte (PB) is a thousand Terabytes; an Exabyte (EB) is a thousand Petabytes; and a Zettabyte (ZB) is a thousand Exabytes. Woof.)
Note: Those are measures of volume. Gigabit services, popularized by Google Fiber and AT&T, are measures of speed. Which makes this Cisco VNI nugget all the more notable: “Global broadband speeds will reach 42 Mbps (Megabits per second) by 2018, up from 16 Mbps at the end of 2013.”
One Gbps is the same as 1,000 Mbps, in other words. Globally, we’re somewhere between 16 and 42 Mbps over the next few years. (That’s one to two orders of magnitude shy of 1,000 Mbps.)
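Here’s that gap in code, where “orders of magnitude” means the base-10 logarithm of the ratio. (The 16 and 42 Mbps figures are Cisco’s, from above.)

```python
# How far is average broadband from a Gig? "Orders of magnitude" = log10
# of the ratio. (The 16 and 42 Mbps figures are Cisco VNI's, cited above.)
from math import log10

GIG_MBPS = 1000
for avg_mbps in (16, 42):
    gap = log10(GIG_MBPS / avg_mbps)
    print(f"{avg_mbps} Mbps is {gap:.1f} orders of magnitude shy of a Gig")
# 16 Mbps is 1.8 orders of magnitude shy of a Gig
# 42 Mbps is 1.4 orders of magnitude shy of a Gig
```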
The point: There comes a time, and we’re pretty much there, that things can’t load or behave noticeably faster. Which isn’t necessarily cause to do nothing, but neither is it a looming competitive catastrophe.
The topic of “Gigs” was a centerpiece discussion during last week’s 20th annual Rocky Mountain SCTE Symposium, where lead technologists from Charter, Comcast, Liberty Global and Time Warner Cable dove into the options for “getting to a Gig.”
Refresher: The entire carrying capacity of a modern (860 MHz) cable system, if every channel were empty and available (which they aren’t), is just north of 5 Gigabits per second. (That’ll double with DOCSIS 3.1’s new modulation and error correction techniques, known respectively as Orthogonal Frequency Division Multiplexing and Low Density Parity Check.)
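Roughly where that “north of 5 Gigabits” comes from, as a hedged back-of-envelope in Python. (The 54 MHz lower boundary and the ~38.8 Mbps of payload per 256-QAM channel are typical U.S. plant assumptions, not numbers from the panel.)

```python
# Back-of-envelope ceiling of an 860 MHz plant: usable downstream spectrum,
# carved into 6 MHz channels, times ~38.8 Mbps of 256-QAM payload per channel.
# The 54 MHz lower edge and per-channel payload are assumed, typical values.
DOWNSTREAM_MHZ = 860 - 54       # 54-860 MHz downstream
CHANNEL_MHZ = 6
MBPS_PER_CHANNEL = 38.8         # 256-QAM payload, after overhead

channels = DOWNSTREAM_MHZ // CHANNEL_MHZ
total_gbps = channels * MBPS_PER_CHANNEL / 1000
print(f"{channels} channels, ~{total_gbps:.1f} Gbps")   # 134 channels, ~5.2 Gbps
```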
Getting there, technologically and operationally, is rife with options. There’s the next chapter of DOCSIS, 3.1, and there’s a vendor community bursting with ways to take fiber deeper towards homes. (The vendor displays this year were “a lot more about glass” than in years prior, panelists noted.)
Has the time come that the cost comparison between DOCSIS 3.1 and fiber-deep strategies is close enough to parity for serious examination? No, panelists said (emphatically). Taking fiber deeper may make sense in greenfield (new build) situations, but not yet in brownfield (existing plant) conditions.
Nor is the Super Bowl the harbinger of peak traffic loads in IP, even though it’s the most watched television show (108 million-ish viewers). This year’s “March Madness” NCAA men’s basketball tournament set Time Warner Cable’s new capacity peak for streamed video (exact numbers weren’t disclosed; it was “more than 10s of Gigs,” said TWC Engineering Fellow Louis Williamson).
Comcast’s highest peaks come from its “Watchathon weeks,” when all programming is made available over IP. “They generate at least four times normal volume,” noted Allen Broom, VP/IP Video Engineering for Comcast.
Do Gigabit services matter? Sure. Should operators drop other technology priorities to build it? Google “red herring.”
This column originally appeared in the Platforms section of Multichannel News.
Amid the lingo of open source software is a new-ish entrant: Linaro, which dropped an intersection with cable in the May 29 formation of the “Linaro Digital Home Group,” abbreviated “LHG.”
What’s it all about? On the surface, it’s a way for chip makers, set-top/gateway manufacturers and service providers to manage the complexities involved with moving from one type of silicon chip architecture (“MIPS”), to another (“ARM”).
In at the get-go are Cisco Systems, Comcast, and STMicroelectronics (all active members of the RDK [Reference Design Kit]), as well as Allwinner, ARM, Fujitsu, and HiSilicon.
Lots of moving parts here, starting with why the shift to ARM-based silicon in the first place. Answer: Lower power, higher speeds, smaller form factors. Mobile devices use ARM-based chips, for instance; in-home devices like set-tops and gateways are likely next.
And yes, MIPS v. ARM is a religious architectural debate — not unlike Microsoft v. Apple, in the operating system battles of yore, and Apple v. Android, in today’s techno-landscape. “Going ARM,” for companies accustomed to building MIPS-based silicon (like Broadcom Corp., as one example) usually starts with at least one outburst of “over my dead body!”
What Linaro brings, in general, is “the rest of the story,” from a Linux perspective. Building software products isn’t just writing code — there are points in time where an actual build is required. A “compile.” Important in the lingo of software builds are “active users” — how many people are throwing code into how many “build slaves” in which “build farms.”
Part of every software build involves the best way to ingest what is usually a torrent of code chunks, coming from all over the place. Thousands of drops, daily. Linaro, in general, manages the Linux distribution of software components for ARM; LHG will extend that into cable-oriented devices.
But wait, there’s more! There’s also the Yocto Project, which generally comes up in Linaro conversations as the set of open source tools software developers use to participate.
In a nutshell: LHG aims to steer the industry further into open source software, and specifically the software related to ARM-based chips, so that the industry can build in-home gear that runs cooler, faster and smaller. Yocto provides the development tools to get there. Off we go…
This column originally appeared in the Platforms section of Multichannel News.
One of the greater developments following this year’s Cable Show, if you’re into immersion learning via tech-talk, is the placement online of the 2014 Spring Technical Forum papers. For free!
Up until now, it was a $50 DVD. Earlier, and for years, the papers came out as thick, bound editions. (A weary shelf at the office sags with Tech Papers dating back to the late ‘80s.)
If this is of interest, and you’d rather read them all yourself, go here: www.nctatechnicalpapers.com.
If you’d rather this (very abbreviated and likely to be continued!) summary, read on.
As titles go, few say “read me now!” more than “Predictions on the Evolution of Access Networks to the Year 2030 and Beyond,” written by five technologists at Arris (among them Tom Cloonan, CTO, who wins this year’s Mister Prolific Award, had we one, for writing or contributing to six papers.)
Shortcut advice on “Predictions:” If rushed, or impatient, skip to page 25. There, three pages characterize scenarios — some that impact all MSOs, others for MSOs planning to extend the life of existing plant, still others for MSOs going to new ways of bandwidth expansion, like Passive Optical Networks (PONs), which is tech talk for fiber-to-the-home.
Favorite line from “Predictions,” as an avid observer of cable’s upstream (home to headend) signal path: “Some of these MSOs will change the split on their upstream spectrum … in an attempt to provide more upstream bandwidth capacity.” Both 85 MHz and 204 MHz were mentioned as candidate upper boundaries for that terrifically thin spectral slice. (The very mention of a “widened upstream” was akin to operational anathema — as recently as two years ago.)
Trend-wise, the notion of “virtualization,” expressed as “SDN” (Software Defined Networks) and “NFV” (Network Function Virtualization), blitzed this year’s papers. It’s all about doing in software what’s done in hardware, now. Example: “Using SDN and NFV for Increasing Feature Velocity in a Multi-Vendor World,” by Cox’s Jeff Finkelstein and Cisco’s Aron Bernstein.
Also: “An SDN-Based Approach to Measuring and Optimizing ABR Video Quality of Experience,” by the also-prolific Sangeeta Ramakrishnan (three papers) and Xiaoqing Zhu, both with Cisco.
Another tech trendline from the 2014 stash: Wi-Fi and wireless. Need a deep dive on why the batteries in your digital life behave the way they do? Go directly to “Wireless Shootout: Matching Form Factor, Application, Battery Requirement, Data Rates & Range to Wireless Standards,” by Comcast’s David John Urban. (Warning: It’s a deep-deep dive.)
If you’ve been wondering whether Wi-Fi has what it takes to stream multiple HD signals around a place, go to “Study of Wi-Fi for In-Home Streaming,” by Alireza Babaei, Neeharika Allanki and Vikas Sarawat, all with CableLabs.
There’s so much more. Check them out for yourself, and be sure to thank Andy Scott, Mark Bell and their team at NCTA for doing the work of putting it all “on the line.”
This column originally appeared in the Platforms section of Multichannel News.