P2P Part 2: The Good and the Bad About Byte Caps
by Leslie Ellis // September 22 2003
Say you’re in the business of moving electronic things from one place to another. Like from a headend to a home, or vice versa. (This should sound familiar.)
One day, you’re told that your transit pipelines are filling with silt — at an alarming rate.
As tools go, you probably need either a good mesh, or a roto-rooter.
Welcome back to the sticky business of what to do about peer-to-peer (P2P) traffic. Last time, we examined why and how this P2P phenomenon is happening. This time, we’ll examine what can be done about it.
P2P, which happens when people apportion parts of their PCs to share files with other PCs, over broadband, is clogging up cable’s broadband pipes in a rather big way.
How big? Chances are high that right now, as you read this, 50 percent of the traffic moving through broadband (DSL is not exempt) is P2P. That’s especially true in the upstream signal path, which is cable’s scarcest bandwidth resource.
The good news: There are two things you can do about it now. The better news: An increasing number of suppliers are developing “data forensic” tools, which delve deeper into the situation.
The first option is to implement a “byte cap.” (Sadly, this is not something you put on your head.) Rather, it’s a counting mechanism: Bytes received and bytes transmitted, per customer.
The field varies on byte caps. Cox’s is a generously sized 30 Gigabytes down, and 7.5 Gigabytes up, per month. To put that in context, your laptop’s hard drive is probably in the 30 Gigabyte range.
If you’re going to do byte caps, technologists caution, be sure to begin by putting counters on individual cable modems, and on the spigots of the CMTS (Cable Modem Termination System), at the headend. Counting packets is relatively new. Mis-counting: Not good.
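The counting mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual system: the class and method names are invented, and the cap figures are simply the Cox numbers cited earlier. The point is the shape of the thing, counters on both directions, checked against a per-customer, per-month limit.

```python
GB = 10**9  # bytes per Gigabyte

class ByteCapCounter:
    """Tracks bytes received and transmitted per cable modem per month."""

    def __init__(self, down_cap_gb=30, up_cap_gb=7.5):
        self.down_cap = down_cap_gb * GB
        self.up_cap = up_cap_gb * GB
        self.down_bytes = 0
        self.up_bytes = 0

    def record(self, down=0, up=0):
        # In practice, counters would run both at the modem and at the
        # CMTS, so the two tallies can be cross-checked for mis-counts.
        self.down_bytes += down
        self.up_bytes += up

    def over_cap(self):
        return self.down_bytes > self.down_cap or self.up_bytes > self.up_cap

counter = ByteCapCounter()
counter.record(down=29 * GB, up=7 * GB)
print(counter.over_cap())   # under both caps so far
counter.record(up=1 * GB)   # pushes upstream past 7.5 GB
print(counter.over_cap())   # now over the upstream cap
```

Note that the upstream cap trips first here, which tracks with the column’s point: upstream is cable’s scarcest resource, so it gets the tighter limit.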
The second option is a direct outgrowth of byte caps: Up-selling customers to a higher data tier. This is the stuff of the DOCSIS 1.1 cable modem upgrade, which tiptoes ever closer to the marketplace.
What happens when a broadband Internet customer exceeds the maximum number of allowed bytes per month? Not much, now. The byte cap and tiering phenomenon is still fairly young. But the thinking is, on first offense, customers get a warning: You’re over. Want to upgrade?
(If the lawsuits doled out by the Recording Industry Association of America three weeks ago are any indication, it is precisely at this point that many broadband grandparents start to get wise about what their grandchildren are doing on Grampa’s PC.)
On the second overage, maybe customers get a harsher warning: You’re way over. Upgrade or else.
Maybe the third time is the charm (ahem), not unlike your cell phone bill when you exceed your allocated minutes: Just as you never realized how expensive extra minutes could be, broadband customers will discover that providers aren’t kidding about bandwidth overages.
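The escalation described above (warn, warn harder, then bill like overage minutes) is simple enough to sketch. Everything here is hypothetical: the message wording and the per-Gigabyte fee are illustrative placeholders, not any MSO’s actual policy.

```python
def overage_response(offense_count, overage_gb, fee_per_gb=5.00):
    """Return the response to a customer's Nth byte-cap overage.

    fee_per_gb is an assumed, illustrative figure."""
    if offense_count == 1:
        return "Warning: you're over your monthly cap. Want to upgrade?"
    if offense_count == 2:
        return "You're way over. Upgrade to a higher tier, or else."
    # Third offense and beyond: charge for the overage, like extra minutes.
    return f"Overage charge: ${overage_gb * fee_per_gb:.2f}"

print(overage_response(1, 4))
print(overage_response(2, 4))
print(overage_response(3, 4))  # 4 GB over, at the assumed $5/GB
```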
Part of the trickiness about P2P is how to notify customers of overages. In some cases, customers are honest-to-God, no-kidding, completely unaware of their “offenses.”
As with data forensics, there is vendor help out there for customer messaging. Texas-based PerfTech is one; it accomplishes its work without e-mail, instant messages, or telephone contact. Better still, it acts sort of like a browser redirect, customizable by the MSO to say whatever it needs to say.
Brute Force v. Forensics
Tiering and byte caps are viewed in the data community as “brute force mechanisms.”
That’s because a big part of the problem is that P2P bits often look exactly like Web browsing bits. Using the pipeline analogy, it all looks like water, even though half of it behaves like silt.
And, because P2P bits don’t crust on the walls of the big pipes – they keep on moving – the remedy isn’t like angioplasty. What’s needed is more like a magic mesh: Something that sees color in clear; patterns in uniformity.
Again, the vendor community is responding. Companies like Ellacoya Networks, P-Cube, and Sandvine are all building CSI-like tools for P2P and broadband.
That’s good, because that P2P traffic percentage – half the pipe – is huge. Even the engineering community, a predictably unflappable bunch, is alarmed. “It completely blew me away,” said one MSO data technologist recently, when he learned that half his traffic is P2P.
Maybe it’s just me, but, when I hear the seriously smart and sensible of our industry’s technologists say things like “it completely blew me away,” I listen more closely. In case you’ve never tried it, know that it’s not easy to “completely blow away” an engineer.
If the adage is true — that “anybody can do the job with the wrong tool” — then it’s probably time to start gathering the right tools for P2P.
This column originally appeared in the Broadband Week section of Multichannel News.
No Peer-To-Peer (“P2P”) Relief This Back-to-School Season
by Leslie Ellis // September 08 2003
The annual return of our nation’s children to school used to bring a transitory calm to the people who track bandwidth consumption to and from cable modems.
As the yellow buses rumbled off, broadband networks settled down. Bandwidth spikes shrank; congestion eased. Life was good again — at least for seven hours a weekday.
Alas, the autumn of 2003 offers no such relief to the industry’s data engineers. As is necessarily the case with better mousetraps, the tools available for “peer-to-peer” networking, abbreviated as “P2P,” are advancing.
It no longer matters, for example, whether there’s someone at the PC, to skipper the tugging of more music, images or video files. The spikes of P2P can keep spiking, and the congestion can keep congesting, unattended.
And, the files shared among P2P participants are getting bigger: It’s not just little audio files anymore. Video comprises about 15% of the P2P traffic ripping through today’s Internet routers, according to the companies who monitor this stuff.
That percentage is being feverishly stoked for growth.
It’s a fairly safe bet, for example, that DVD burners with file-sharing capabilities will be available by year-end. That means enormously fat files, with spigots tooled for broadband.
One movie weighs hundreds of Megabytes – sometimes Gigabytes — not including any DVD extras. That’s a lot of bits to move. Naturally, they move fastest over broadband’s roomy avenues.
The actual and anecdotal evidence about P2P’s insatiable nature is alarming, even to data stalwarts. Six percent of broadband Internet customers consume 60% of bandwidth. One guy uploaded 300 Gigabytes in a month. (That’s roughly equivalent to 1.2 million Web pages, or about 5 movies a day.)
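The parenthetical arithmetic above checks out under some assumed sizes. Here the Web page (roughly 250 Kilobytes) and movie (roughly 2 Gigabytes) figures are assumptions for illustration; only the 300 Gigabyte monthly upload comes from the column.

```python
month_gb = 300     # the one customer's monthly upload, from the column
page_kb = 250      # assumed average Web page size, circa 2003
movie_gb = 2       # assumed size of one movie file

pages = month_gb * 10**9 / (page_kb * 10**3)
movies_per_day = month_gb / movie_gb / 30   # over a 30-day month

print(round(pages))        # about 1.2 million Web pages
print(movies_per_day)      # about 5 movies a day
```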
Adding more bandwidth to fix the problem is about as effective as adding acreage to a forest fire. Like a gas, P2P traffic always seems to fill all available space.
So, shut it off, you say. Don’t let P2P traffic in. Or only let some of it in.
That method did work, for a while. Understanding why it doesn’t work now requires a bit of detail about how P2P traffic moves.
Say you’ve downloaded a file sharing program, like Kazaa, onto your PC. First, the program needs to identify itself to the PC’s operating system. Because it needs to communicate over the Internet, it gets what’s called a “port number,” so that the machines the application encounters along the way know where to listen for it, and talk to it.
Massively used applications, like Web browsing, use an agreed-upon port number, assigned by the Internet Assigned Numbers Authority (IANA). The identification number for Web browsing happens to be port 80.
In the early days of P2P, downloaded “client” applications used a static port number. Kazaa used port 1214, for example. Static numbers can be filtered out at headend routers, and the traffic disallowed.
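Static-port filtering is the simplest rule there is. A toy sketch of the idea, dropping any packet bound for a known P2P port, follows; real filtering lives in router access-control lists, and the packet-as-dictionary representation here is purely illustrative.

```python
BLOCKED_PORTS = {1214}  # Kazaa's static port, in the early days

def allow(packet):
    """packet is a dict with a 'dst_port' key; return False to drop it."""
    return packet["dst_port"] not in BLOCKED_PORTS

print(allow({"dst_port": 80}))    # Web browsing passes
print(allow({"dst_port": 1214}))  # early Kazaa traffic is dropped
```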
(If this sounds unfriendly, consider the operator who throttled P2P traffic way back, without getting a single angry customer call. Or the operator who stunted P2P traffic and saw a 50% decline in those cranky “tell me again why I’m paying you extra for your slow service” calls. Harnessing P2P traffic gives the majority of customers more room for their transmissions.)
These days, though, P2P applications are more skillful. Static port numbers are out. Port hopping is in.
It’s not unusual for P2P to identify its packets not with a fixed port number, but with the “port 80” of Web browsing applications — which makes P2P packets look like any other Web page request.
Other times, P2P clients do what data technicians call “port hopping,” which is the use of random port numbers that are privately agreed upon between P2P users. Envision this as two P2P applications stepping off the network, and saying “meet me over there.” The “over there” is the random port number.
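Once P2P hides on port 80 or hops to random ports, the port number alone stops being useful, which is why the forensic tools look deeper into the packets. A rough sketch of one such heuristic: check whether traffic claiming to be Web browsing actually starts like an HTTP request. The signature list is illustrative, not any vendor’s actual detection logic.

```python
HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ")  # common HTTP request starts

def looks_like_p2p(dst_port, payload):
    """Flag port-80 flows whose payload doesn't speak HTTP."""
    if dst_port == 80 and not payload.startswith(HTTP_METHODS):
        return True   # claims to be Web traffic, but doesn't look like it
    return False

print(looks_like_p2p(80, b"GET /index.html HTTP/1.1"))  # real browsing
print(looks_like_p2p(80, b"\x27\x00\x00\x00GIVE"))      # masquerading flow
```

Port hopping defeats even this once the payload is disguised or encrypted, which is the column’s point: the next mousetrap is always harder to catch.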
The next mousetrap is encrypted P2P traffic, which will happen, and will be all the harder to detect and manage.
The good news is, there are tools to deal with P2P now. They’re brute force tools, but they’re tools: Cable modems and their headend controllers, can be set to impose speed and consumption limits.
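Speed limits of this sort are classically implemented as token buckets: a modem may burst up to a limit, then is throttled to a steady rate. A bare-bones sketch follows; the rates and bucket sizes are illustrative numbers, not DOCSIS configuration parameters.

```python
class TokenBucket:
    """Simple token-bucket rate limiter; tokens are measured in bytes."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8       # bytes of credit added per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes      # start with a full burst allowance

    def tick(self, seconds):
        # Refill tokens as time passes, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def try_send(self, nbytes):
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # over the limit: the packet waits, or is dropped

bucket = TokenBucket(rate_bps=256_000, burst_bytes=16_000)  # 256 kbps up
print(bucket.try_send(16_000))  # full burst goes through
print(bucket.try_send(1_000))   # bucket is empty; blocked
bucket.tick(1)                  # one second refills 32,000 bytes of credit
print(bucket.try_send(1_000))   # allowed again
```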
And, a growing number of companies are building forensic tools that can look harder into packets, to see how they relate to each other. This helps to “shape” cable modem traffic, either to cap “bad” traffic, or to apply business policies. Maybe this means making authorized P2P applications work better on broadband.
For now, as the days shorten and the air cools, there are three things to know about P2P on broadband networks. One, it isn’t going away. Two, it’s growing fast. Three, it won’t stop seeking ways to ride broadband.
That’s the current state of P2P traffic. Next time: More on the methods to make P2P more harmonious with broadband providers.
This column originally appeared in the Broadband Week section of Multichannel News.