Dead reckoning. Unless you’re a pilot, you probably haven’t heard the term in a while. Refresher: It’s a navigational term, used to establish where you are, and where you’re going, using the last known (“deduced,” which is the “ded” of “dead reckoning”) information about your location.
Charles Lindbergh dead reckoned his way over the Atlantic to Paris, in 1927, using its basic formula — distance equals speed, multiplied by time.
And now, it gains a new, kind of odd, prefix: Pedestrian dead reckoning. It’s a way of using Wi-Fi and the sensor-enabled stuff in our gadgets to find other stuff, indoors — like how your Garmin used to navigate you to physical addresses, outdoors. (Before your phone’s map app did.)
In short, pedestrian dead reckoning — abbreviated “PDR” — is a little bit GPS (global positioning system), a little bit Wi-Fi, a little bit accelerometer, and a little bit magnetometer. (No country. No rock-n-roll.)
Refresher: GPS works over satellite, with predictable results once you drive into the parking garage. Wi-Fi is Wi-Fi. Accelerometers measure, well, acceleration. They’re what’s inside your Fitbit, Fuelband, or other digital pedometer. Magnetometers inform your phone’s compass app.
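For the code-inclined, the whole idea fits in a few lines. Here’s a toy Python sketch of a single PDR position update — the function name, the stride length, and the heading convention are all our own assumptions, not anyone’s actual implementation:

```python
import math

def pdr_step(x, y, steps, stride_m, heading_deg):
    # steps: from the accelerometer's step detector
    # heading_deg: from the magnetometer (0 = north, 90 = east, in this sketch)
    # stride_m: an assumed average stride length
    distance = steps * stride_m  # dead reckoning's formula, recast: distance = steps x stride
    heading = math.radians(heading_deg)
    return (x + distance * math.sin(heading),
            y + distance * math.cos(heading))

# Ten steps due east at a 0.7 meter stride, starting from the entrance at (0, 0):
x, y = pdr_step(0.0, 0.0, steps=10, stride_m=0.7, heading_deg=90.0)
```

String enough of those updates together, and the app knows roughly where you are — no satellite required.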
Put it all together, with an app on top, and suddenly Costco could offer a “mobile butler,” that senses when you’ve entered the store, and when you’ve stayed still for a time. It could ask: Can I help you find something? Paper towels? Follow me — I’ll show you the way. Then your sensor-equipped gadget (meaning your phone) and app show you the way.
That’s but one example in what has to be dozens of use cases that blend Wi-Fi, pedometer and compass. Pedestrian Dead Reckoning: It’s coming, and it’ll either save us time, or drive us nuts. Maybe both!
This column originally appeared in the Platforms section of Multichannel News.
Say you’re mingling in a room full of people, enjoying a tasty beverage. It’s a polite room of people who listen, responding during pauses. (So you’re in Canada!)
Out of nowhere, a mass of large, loud people enters the room, shouting instructions to each other. It’s like they’re oblivious to anyone who isn’t them.
In wireless protocols, the Canadians are WiFi. The Large Louds are LTE.
Here’s what happens next: The Canadians still want to converse. Their only option? Talk louder. The volume in the room goes up, and up, and up. The loud people keep piling in the door, with no signs of leaving. Suddenly, it’s not such a good time anymore.
This is one way to think about a red-hot topic touching WiFi people, known as LTE-U. The “LTE” stands for Long Term Evolution, a term mobile carriers use for fast, wireless broadband. The “U” stands for “unlicensed.”
Consider: About 200 MHz of spectrum exists for WiFi transmissions, including the extra 100 MHz the FCC granted in March, in the 5 GHz band. Right now, that spectral slice is carrying 50 to 60 percent of the Internet’s traffic.
Mobile carriers, by contrast, maneuver their traffic over some 600 MHz of spectrum — licensed spectrum, meaning they paid for it. (Dearly.) Some two to three percent of the Internet’s traffic moves within it.
So, right off the bat, WiFi is moving 30x the load, in one-third the space. Which brings us to how WiFi works, and the fact that just because its spectral zone is unlicensed, doesn’t mean it’s unregulated.
WiFi is built for spectrum sharing. It waits to talk, and it adjusts its transmit power as part of a design goal that purposefully wants to be a good neighbor, all the time — partly because of regulations that govern things like transmit power and sharing.
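That “waits to talk” behavior (carrier sensing, plus a random backoff) can be sketched in a few lines of toy Python. The slot timing and backoff window below are simplified assumptions, and the function names are ours — not the real 802.11 machinery:

```python
import random

def polite_transmit(channel_busy, max_backoff_slots=15, slot_us=9):
    # Listen-before-talk, heavily simplified: defer while the channel is
    # sensed busy, then wait a random backoff before transmitting.
    waited_us = 0
    while channel_busy():
        waited_us += slot_us  # someone else is talking; be Canadian about it
    waited_us += random.randint(0, max_backoff_slots) * slot_us
    return waited_us  # microseconds spent being a good neighbor

# A channel that is sensed busy three times, then goes idle:
senses = iter([True, True, True, False])
wait = polite_transmit(lambda: next(senses))
```

The random backoff is the key design choice: if every waiting station jumped in the instant the room went quiet, they’d all collide.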
LTE is different. For starters, it uses “tunneling protocols.” That means that when a device connects, a secret tunnel is instantly established between it and the carrier’s LTE network. Each data packet is both encrypted and encapsulated; the only visible parts are the packet’s source (who am I?) and destination (where am I going?).
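Here’s a toy Python sketch of that encapsulation idea. The “encryption” below is a stand-in XOR scramble, purely to show that the envelope stays readable while the payload is opaque — real LTE tunnels use actual ciphers, and the names here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class TunneledPacket:
    src: str        # visible to outside observers: who am I?
    dst: str        # visible: where am I going?
    payload: bytes  # encapsulated and "encrypted": opaque to everyone else

def encapsulate(src, dst, data, key=0x5A):
    # Stand-in scramble; applying it twice restores the original bytes.
    return TunneledPacket(src, dst, bytes(b ^ key for b in data))

pkt = encapsulate("device-123", "carrier-core", b"cat video request")
```

Anyone eavesdropping sees the source and destination, and nothing else — which is exactly why WiFi gear can’t coordinate with it.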
Meanwhile, the LTE “control plane” — the servers and software that handle signaling and routing — is ceaselessly talking, back and forth, making sure everything’s doing what it’s supposed to be doing.
Here’s the concern: That LTE traffic will deliberately dump into the unlicensed territories, offloading giant blobs of traffic that can’t see or hear what’s already there. Such as anything moving over WiFi.
Is this a real problem? Not yet. Could it be? Definitely. (O, Canada! We stand on guard for thee.)
This column originally appeared in the Platforms section of Multichannel News.
Back when I lived at the farm, with terrible antenna reception and no cable service, Aereo solved my problem of not having a reliable way to watch TV. That is, until service was cut off pending the Supreme Court’s final ruling on whether Aereo violated copyright law by retransmitting broadcast signals captured on dime-sized antennas.
We’ve had an ear to the ground all year, waiting to find out if Aereo would upend the media industry or go dark forever. That decision finally arrived on Wednesday, June 25, with the Justices ruling 6 to 3 against Aereo.
Aereo’s cloud DVR service worked using massive rooftop arrays of dime-sized antennas, each assigned to an individual subscriber. Because the antennas weren’t shared, Aereo argued that its retransmission of broadcast signals did not constitute a “public performance” and as such should not be subject to licensing fees.
Broadcasters, not surprisingly, had a different opinion.
One of the attorneys representing Aereo, David Frederick, is often quoted comparing Aereo’s technology to that of 1980s-era video recorders. The Supreme Court ruled in 1984 that recording programs at home for later viewing did not violate copyright law, so Aereo’s remote DVR service shouldn’t raise any red flags. Right?
Actually, Aereo’s service was a far cry from the VCR experience of the 80s – both in terms of the monthly fee and the ease of enforcement. While broadcasters couldn’t do much to stop a guy with a set of rabbit ears and a VCR from recording episodes of Dallas back in the 80s, it’s a very different story when you’ve got a company repackaging free over-the-air content for a profit. And it’s much easier to make an example of that company.
So what’s next for Aereo, now that the Supreme Court ruled that they have to pay retransmission fees to broadcasters? Aereo founder Chet Kanojia (a longtime cable industry guy, who founded interactive TV advertising company Navic Networks, selling it to Microsoft in June of 2008) previously said that Aereo had “no Plan B” if the court battle didn’t go in their favor.
However, in an email to Aereo subscribers on Wednesday, Mr. Kanojia changed the message by saying “our work is not done” and vowed to “continue to create innovative technologies that have a meaningful and positive impact on our world.”
The ruling also calls into question anyone delivering cloud-based video services, especially live and linear content. For now, it appears, the path is still clear, so long as providers keep paying broadcasters for the content they distribute.
So will Aereo be reinvented as something new, or is it destined to gather dust on our shelf of televestigials? Only time will tell… and we’ll be watching.
Happily, for lots of reasons, I’m off the farm now and back “on the cord.” If nothing else, I’m glad we got to be part of what was a very good television experience … until it wasn’t. Thanks, Aereo. (Can we have our dime-sized antenna, just for nostalgic posterity?)
Wearable buzz is hitting a frenzied pitch in the consumer marketplace. Here in the lab, we’re early adopters, and not just of over-the-top video options. Leslie’s in year six of walking 10,000 steps a day, for instance, starting with a Fitbit in 2008, and has walked several different fitness bands into the ground (including the recently sampled Polar Loop, returned within a week); I’ve been wearing sensors for longer than iPhones have been on the market.
A bit of disclosure: I’ve had Type 1 (autoimmune) diabetes since childhood, and back in 2007 I got my first CGM (Continuous Glucose Monitor) — a system that tracks the glucose levels under my skin. There are two companies making CGMs for the US market currently – Medtronic and Dexcom – and both systems work essentially the same way:
A disposable sensor, changed out every week, has a small wire that sits below the skin and measures glucose in the interstitial fluid. This sensor connects to a reusable transmitter, which sends raw data from the sensor to a receiver, which in turn uses an algorithm to generate a graph of estimated blood glucose levels.
CGMs don’t replace blood glucose testing — they require fingersticks for calibration, and there’s a bit of a lag between the sensor and actual blood glucose levels – but the trend information is incredibly useful.
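For the curious, here’s a minimal Python sketch of what that calibration step amounts to — a plain least-squares line mapping raw sensor signal to estimated blood glucose, fit from paired fingersticks. It’s illustrative only; the real CGM algorithms are proprietary, and also compensate for sensor drift and that lag:

```python
def make_calibration(raw_readings, fingersticks):
    # Least-squares slope and intercept over (raw signal, fingerstick) pairs.
    n = len(raw_readings)
    mx = sum(raw_readings) / n
    my = sum(fingersticks) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(raw_readings, fingersticks))
             / sum((x - mx) ** 2 for x in raw_readings))
    intercept = my - slope * mx
    return lambda raw: slope * raw + intercept  # raw signal -> estimated mg/dL

# Three fingerstick calibrations against the sensor's raw signal:
estimate = make_calibration([10, 20, 30], [80, 160, 240])
```

Feed the fitted line a stream of raw readings, and out comes the trend graph on the receiver.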
Imagine you’re driving a car that has no windows or mirrors, only a sunroof – and you have to keep popping your head out to get a brief glimpse of the curves in the road and the hazards in your way. When I got my first CGM, I suddenly found myself in a car with windows for the first time in over 10 years, able to spot trends in my glucose levels and head off potentially dangerous lows and highs.
But all of this comes at a price – one sensor, good for about a week, runs about $75-100. Transmitters are reusable but need to be replaced every 6 to 12 months, to the tune of about $1400 per year. Then there’s the receiver, which is an insulin pump in the case of Medtronic (roughly $6,000) or a standalone device in the case of Dexcom (about $1,000). Fortunately more insurance companies are starting to see the value of covering this technology, but the out-of-pocket burden is still incredible.
And several expensive CGM systems later, I’m still using pretty much the same technology I had 7 years ago. Back then, I was using a Palm Treo. The word “app” was not a part of the mainstream vocabulary.
Medical technology moves at a snail’s pace because there’s a lot of red tape in place to ensure that things actually work before they’re put on the market. This is why the glucose sensors and insulin pump that I wear 24/7 are still pretty much unchanged – every little feature addition is something that needs to be tested and retested to ensure it doesn’t introduce some unforeseen risk for the end user. That’s understandable, but depressing, especially compared to the pace of wearable innovation.
That’s why it’s quite a jarring contrast to follow this new explosion of wearable health devices, because the production cycle moves much quicker without the whole FDA clearance bit – but there’s also a big risk that these new devices won’t actually work as advertised.
Leslie’s experience with several Fitbits, Nike Fuelbands, the Polar Loop and a gamut of digital pedometers confirms this, at least on a “steps” level. Most bracelet-styled pedometers, for example, don’t count correctly when the “wearable arm” is connected to ground in any way — pulling a suitcase (she does a lot of that), walking dogs on leashes (that too), or holding on when on a treadmill.
She reports that there are invariably “sync issues,” which highlight another unanticipated ogre of tracking your active life: The botched streak. The Nike Fuelband design team brought this to the foreground with its quirky little app-side dude, named “Fuelie,” which bounces and squeals on every new accomplishment — like the number of consecutive days of hitting “goal.”
Then, the Fuelband breaks (usually within eight months, and always the same way: It shows as charged when plugged in, then displays the “charge me” icon immediately upon removing power.)
Suddenly, you’ve lost your “streak,” but not because you didn’t reach your steps goal. As Leslie puts it: “And at that moment, you realize that your life is freakishly controlled by a little dancing digital icon” — in her case, a 249-day streak — because the only way to correct the streak is to actually pick up the phone and call Nike. (Which has the best customer service of all of them, she adds. But still.)
On the consumer-grade medical wearable end, there’s the GoBe Wristband – a glorified pedometer that claims to be able to calculate calories consumed by unobtrusively tracking glucose levels under the skin.
I’m skeptical about this one for a number of reasons, but mainly this: If you don’t have diabetes, your blood glucose levels won’t fluctuate much at all, even if you have 5 gallons of ice cream and a barrel of root beer for lunch — so the whole premise of tracking calorie consumption this way doesn’t make a whole lot of sense. Yet, over 4,000 people signed up for a GoBe Wristband on Indiegogo, pouring about $200 each into technology that probably doesn’t work as advertised.
As someone who’s lived with diabetes for most of my life, I can say my data is for the most part accurate, but I need it to be seamless. Right now, I have to connect all my devices (CGM, glucose meter, insulin pump) to my computer and download the data, then compare a bunch of different reports in order to make adjustments to my treatment regimen.
Instead, I want everything – my CGM, my insulin pump, my glucose meter, my bike computer, my pedometer, and my desk chair – to send data automatically and wirelessly to a single source, where it can be analyzed for larger trends without taking up my whole day and making my brain hurt.
These devices should all work together to keep track of the larger patterns and the smaller victories, to simplify living with a chronic illness and keep burnout at a minimum.
My phone could alert me to the fact that I’ve been running high in the evenings and may need to tweak my insulin dosage, and then congratulate me when I keep my glucose levels in range for a full 24 hours (known among CGM users as the elusive “no hitter”).
And when I start stepping up the intensity of my workouts, and my blood glucose levels are likely to end up in the trenches overnight, maybe my phone could offer to set an alarm?
With big players like Samsung and Apple now building frameworks to combine data from 3rd party apps, we’re hopeful that some of the major hurdles with respect to security can be cleared. Maybe then medical devices can start talking to our other gadgets, and we’ll finally be on the way to having wearables that simplify our lives, instead of just adding angst.
Just as our focus in the lab is expanding from OTT-only to include gadgets outside the living room, so are many of the majors in the OTT world busily branching out into the Internet of Things (IOT). Let’s have a look.
Apple
For the past few years, the leadup to every Apple announcement has included plenty of hype about Apple TV – a hardware update to the streaming player is always predicted, but never shows up.
That held true at Apple’s recent World Wide Developers Conference (handily abbreviated “the WWDC”), where there was once again no new TV-related hardware. Instead, a number of new developments on the IOT front:
Along with iOS 8, Apple is releasing HomeKit, software that runs on an iPhone or iPad and controls lights, security cameras, thermostats, garage doors – pretty standard connected home stuff. Apple has a certification program for hardware partners, and is already working with a bunch of companies, including TI, Honeywell, and Marvell.
HomeKit will be controlled by Siri, so you can say something like “Siri, get ready for bed” and it will dim the lights for you. I don’t have much hope for that at this point, but maybe Siri will get a lot better with iOS 8. Speaking as someone who spent several minutes this morning trying in vain to get Siri to understand an address and give me directions, I sure hope so.
Perhaps more exciting: Apple is developing a framework called HealthKit in partnership with the Mayo Clinic and Nike, which pulls in data from 3rd-party apps to keep tabs on health metrics, over time, and allows clinicians to easily access information from your health apps. We don’t have much information yet, and clearly there are a lot of questions to be answered about security, but it’s exciting to see big companies getting involved in modernizing healthcare (more on that in a future post.)
Samsung
In April, Samsung released the “Gear Fit,” a smartwatch with a pedometer baked in, to lukewarm reviews – apparently Samsung’s custom software leaves quite a bit to be desired.
Then, on May 28, Samsung announced the Simband — a wearable prototype that measures key vital signs like heart rate, heart rate regularity, skin temperature, oxygen levels and carbon dioxide levels – impressive, but not an actual product, yet.
Samsung also introduced SAMI (Samsung Architecture for Multimodal Applications), an open software platform for wearables and sensor technology. We like the potential of an open platform, and the health applications are potentially exciting, but we’re not sure Samsung will be the one to ultimately succeed (our own experiences with their devices could be a post all on their own.)
Google

Back in March, Google announced its Android Wear initiative, extending its Android operating system to cover wearables (early arrivals to the market include smart watches from Motorola and LG; Samsung’s early Gear smartwatches used Android Wear as well). The Android Wear SDK is currently in Developer Preview, to be officially launched later this year.
And in other areas of the home, there are persistent rumors of Google subsidiary Nest (the gorgeous, automated thermostat) buying Dropcam, makers of the $150 WiFi security camera. What, you don’t want Google recording the goings-on in your home? They’re already reading our email, after all…
With all these gadgets and sensors in our homes and on our bodies, security is obviously a big concern — and there are currently some gaping holes that need to be filled. We’ll keep a close eye on what each of these massive companies does (or doesn’t do) to protect our data, in addition to how well the products actually work.
DENVER–Nothing like a fresh batch of data about broadband usage, topped off with the start of the FIFA World Cup Games — always a streaming video gauntlet — to check in on the Hype Central category that is Gigabit services.
The fresh data comes from Cisco Systems’ annual Visual Networking Index (VNI), released last week, which slices trends in broadband every which way — and serves as a perennial reminder to learn the nomenclature of big numbers: Petabyte, Yottabyte, Exabyte.
(Refresher: A Gigabyte (GB) is a thousand Megabytes (MB); a Terabyte (TB) is a thousand Gigabytes; a Petabyte (PB) is a thousand Terabytes; an Exabyte (EB) is a thousand Petabytes, and a Zettabyte (ZB) is a thousand Exabytes. Woof.)
Note: Those are measures of volume. Gigabit services, popularized by Google Fiber and AT&T, are measures of speed. Which makes this Cisco VNI nugget all the more notable: “Global broadband speeds will reach 42 Mbps (Megabits per second) by 2018, up from 16 Mbps at the end of 2013.”
One Gbps is the same as 1,000 Mbps, in other words. Globally, we’re somewhere between 16 and 42 Mbps over the next few years. (That’s one to two orders of magnitude shy of 1,000 Mbps.)
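A quick Python sketch puts the gap in numbers (the function name is ours):

```python
import math

def orders_short(speed_mbps, target_mbps=1000):
    # How many powers of ten a connection is from Gigabit speed.
    return math.log10(target_mbps / speed_mbps)

low = orders_short(16)    # 2013's global average: roughly 1.8 orders short
high = orders_short(42)   # the 2018 projection: roughly 1.4 orders short
```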
The point: There comes a time, and we’re pretty much there, that things can’t load or behave noticeably faster. Which isn’t necessarily cause to do nothing, but neither is it a looming competitive catastrophe.
The topic of “Gigs” was a centerpiece discussion during last week’s 20th annual Rocky Mountain SCTE Symposium, where lead technologists from Charter, Comcast, Liberty Global and Time Warner Cable dove into the options for “getting to a Gig.”
Refresher: The entire carrying capacity of a modern (860 MHz) cable system, if every channel were empty and available (which they aren’t), is change north of 5 Gigabits per second. (That’ll double with DOCSIS 3.1’s new modulation and error correction techniques, known respectively as Orthogonal Frequency Division Multiplexing and Low Density Parity Check.)
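That “north of 5 Gbps” figure is easy to reproduce with back-of-envelope Python. The usable-spectrum and per-channel bitrate figures below are typical North American assumptions, not numbers from the panel:

```python
# An 860 MHz plant, assuming usable downstream spectrum from 54 to 860 MHz,
# carved into 6 MHz channels at ~38.8 Mbps each (256-QAM).
channels = (860 - 54) // 6            # about 134 channels
capacity_gbps = channels * 38.8 / 1000  # about 5.2 Gbps, if every channel were empty
```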
Getting there, technologically and operationally, is rife with options. There’s the next chapter of DOCSIS, 3.1, and there’s a vendor community bursting with ways to take fiber deeper towards homes. (The vendor displays this year were “a lot more about glass” than in years prior, panelists noted.)
Has the time come that the cost comparison between DOCSIS 3.1 and fiber-deep strategies is close enough to parity for serious examination? No, panelists said (emphatically.) Taking fiber deeper may make sense in greenfield (new build) situations, but not yet in “brown field” (existing plant) conditions.
Nor is the Super Bowl the harbinger of peak traffic loads in IP, even though it’s the most watched television show (108 million-ish viewers.) This year’s “March Madness” NCAA men’s basketball tournament set Time Warner Cable’s new capacity peak for streamed video (exact numbers weren’t disclosed; it was “more than 10s of Gigs,” said TWC Engineering Fellow Louis Williamson.)
Comcast’s highest peaks come from its “Watchathon weeks,” when all programming is made available over IP. “They generate at least four times normal volume,” noted Allen Broom, VP/IP Video Engineering for Comcast.
Do Gigabit services matter? Sure. Should operators drop other technology priorities to build it? Google “red herring.”
This column originally appeared in the Platforms section of Multichannel News.
Amid the lingo of open source software is a new-ish entrant: The Linaro Foundation, which dropped an intersection with cable in the May 29 formation of the “Linaro Digital Home Group,” abbreviated “LHG.”
What’s it all about? On the surface, it’s a way for chip makers, set-top/gateway manufacturers and service providers to manage the complexities involved with moving from one type of silicon chip architecture (“MIPS”), to another (“ARM”).
In at the get-go are Cisco Systems, Comcast, and STMicroelectronics (all active members of the RDK [Reference Design Kit]), as well as Allwinner, ARM, Fujitsu, and HiSilicon.
Lots of moving parts here, starting with why the shift to ARM-based silicon in the first place. Answer: Lower power, higher speeds, smaller form factors. Mobile devices use ARM-based chips, for instance; in-home devices like set-tops and gateways are likely next.
And yes, MIPS v. ARM is a religious architectural debate — not unlike Microsoft v. Apple, in the operating system battles of yore, and Apple v. Android, in today’s techno-landscape. “Going ARM,” for companies accustomed to building MIPS-based silicon (like Broadcom Corp., as one example) usually starts with at least one outburst of “over my dead body!”
What Linaro brings, in general, is “the rest of the story,” from a Linux perspective. Building software products isn’t just writing code — there are points in time where an actual build is required. A “compile.” Important in the lingo of software builds are “active users” — how many people are throwing code into how many “build slaves” in which “build farms.”
Part of every software build involves the best way to ingest what is a usually a torrent of code chunks, coming from all over the place. Thousands of drops, daily. Linaro, in general, manages the Linux distribution of software components for ARM; LHG will extend that into cable-oriented devices.
But wait, there’s more! The Yocto Project generally comes up in Linaro conversations, too; it supplies the open source tools software developers use to participate.
In a nutshell: LHG aims to steer the industry further into open source software, and specifically the software related to ARM-based chips, so that the industry can build in-home gear that runs cooler, faster and smaller. Yocto provides the development tools to get there. Off we go…
This column originally appeared in the Platforms section of Multichannel News.
One of the greater developments following this year’s Cable Show, if you’re into immersion learning via tech-talk, is the placement online of the 2014 Spring Technical Forum papers. For free!
Up until now, it was a $50 DVD. Earlier, and for years, the papers came out as thick, bound editions. (A weary shelf at the office sags with Tech Papers dating back to the late ‘80s.)
If this is of interest, and you’d rather read them all yourself, go here: www.nctatechnicalpapers.com.
If you’d rather this (very abbreviated and likely to be continued!) summary, read on.
As titles go, few say “read me now!” more than “Predictions on the Evolution of Access Networks to the Year 2030 and Beyond,” written by five technologists at Arris (among them Tom Cloonan, CTO, who wins this year’s Mister Prolific Award, had we one, for writing or contributing to six papers.)
Shortcut advice on “Predictions:” If rushed, or impatient, skip to page 25. There, three pages characterize scenarios — some that impact all MSOs, others for MSOs planning to extend the life of existing plant, still others for MSOs going to new ways of bandwidth expansion, like Passive Optical Networks (PONs), which is tech talk for fiber-to-the-home.
Favorite line from “Predictions,” as an avid observer of cable’s upstream (home to headend) signal path: “Some of these MSOs will change the split on their upstream spectrum … in an attempt to provide more upstream bandwidth capacity.” Both 85 MHz and 204 MHz were mentioned as candidate upper boundaries for that terrifically thin spectral slice. (The very mention of a “widened upstream” was akin to operational anathema — as recently as two years ago.)
Trend-wise, the notion of “virtualization,” expressed as “SDN” (Software Defined Networks) and “NFV” (Network Function Virtualization) blitzed this year’s papers. It’s all about doing in software what’s done in hardware, now. Example: “Using SDN and NFV for Increasing Feature Velocity in a Multi-Vendor World,” by Cox’s Jeff Finklestein and Cisco’s Aron Bernstein.
Also: “An SDN-Based Approach to Measuring and Optimizing ABR Video Quality of Experience,” by the also-prolific Sangeeta Ramakrishnan (three papers) and Xiaoqing Zhu, both with Cisco.
Another tech trendline from the 2014 stash: Wi-Fi and wireless. Need a deep dive on why the batteries in your digital life behave the way they do? Go directly to “Wireless Shootout: Matching Form Factor, Application, Battery Requirement, Data Rates & Range to Wireless Standards,” by Comcast’s David John Urban. (Warning: It’s a deep-deep dive.)
If you’ve been wondering whether Wi-Fi has what it takes to stream multiple HD signals around a place, go to “Study of Wi-Fi for In-Home Streaming,” by Alireza Babaei, Neeharika Allanki and Vikas Sarawat, all with CableLabs.
There’s so much more. Check them out for yourself, and be sure to thank Andy Scott, Mark Bell and their team at NCTA for doing the work of putting it all “on the line.”
This column originally appeared in the Platforms section of Multichannel News.