A protocol that arbitrates how and when information is sent on a broadband network, and particularly the portions of that network that are shared amongst many users.
In cable modem parlance, the term “MAC” seems always to move in sync with the term “PHY.” Both are spoken acronyms — MAC as in the first part of the famous hamburger provider; PHY like the “fi” in “hi-fi.”
PHY is a shortcut for “physical layer,” which is a set of functions that defines how data moves from one place to another along a specified transmission medium — say, hybrid fiber/coax (HFC). PHY includes things such as modulation format and synchronization, so that sent data is timed correctly for receipt on the other end.
MAC comes into play in cable networks because the network itself is inherently shared. For this reason, there needs to be a bandwidth arbitrator, which decides who gets access to the upstream leg of the HFC network at any given time. Thus, MAC is an upstream thing, and not a downstream thing, for cable modem transmissions.
MAC, in the modem sense, handles who gets to transmit upstream, and when.
The MAC is important because, in cable networks, the device transmitting data upstream (home to headend) can’t hear what other upstream transmissions are taking place in the same node. (This is inherently different than Ethernet, where each transmitter can hear the others.)
That’s why upstream time slots in cable modem systems are allocated using Time Division Multiple Access (TDMA), a protocol that essentially generates a reservation for each cable modem that has indicated a need to transmit. When the reserved time slot occurs, that modem transmits; any others on the network fall silent until it is their turn.
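For the code-inclined, here’s a deliberately oversimplified sketch of that reservation idea. The modem names and the round-robin grant order are invented for illustration; the real DOCSIS request/grant machinery is considerably more involved.

```python
# A toy sketch of reservation-based TDMA. Modem IDs and the simple
# first-come, first-served grant order are illustrative only.

from collections import deque

class ToyCMTS:
    """Headend-side arbiter: hands out upstream time slots one at a time."""
    def __init__(self):
        self.requests = deque()          # modems waiting for a turn

    def request_slot(self, modem_id):
        """A modem signals that it has data to send upstream."""
        if modem_id not in self.requests:
            self.requests.append(modem_id)

    def next_grant(self):
        """Grant the next reserved slot; everyone else stays silent."""
        return self.requests.popleft() if self.requests else None

cmts = ToyCMTS()
for modem in ("modem_A", "modem_B", "modem_C"):
    cmts.request_slot(modem)

slot = 0
while True:
    granted = cmts.next_grant()
    if granted is None:
        break
    print(f"slot {slot}: {granted} transmits; all other modems hold off")
    slot += 1
```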
At the time, in 1995, the key MCNS constituents were Tele-Communications Inc. (now owned by Comcast Corp.), Time Warner Cable, Cox Communications and Comcast. Later, Continental Cablevision (which subsequently became MediaOne, then AT&T Broadband, then Comcast) and Rogers Cablesystems joined, although the latter two didn’t contribute as much funding as the original four.
The reasoning behind MCNS — which began its life under the cone of silence, heavily shrouded from public view — was to sidestep a pervasive set of proprietary ties that had been created by the MSOs’ larger equipment suppliers.
The MSO constituents of MCNS sent proposal requests to the vendor community, seeking commonality in modulation and other aspects of broadband Internet access. The reasoning: If they could break proprietary shackles, they could drive equipment costs down, while providing gear to subscribers that worked across cable franchise lines.
Mediation engines are increasingly used in billing systems, too, to assist in the trend toward transactional behavior.
Shifting toward a transaction-oriented marketplace is way more intricate than, say, tacking $3.99 onto a monthly bill for a VOD movie order. Maybe it’s deciding on a Monday to offer a free weekend of SVOD on Friday, perhaps for everyone who watched more than 2 movies that month. Or, firing up a broadband connection in the guest room, because Uncle Bob, always with the electronic tool belt and latest-model laptop, is headed in for the weekend.
When it comes time to make a bill for that household, there’s a ton more information that needs to be extracted, from lots more equipment. The process of extracting the data necessary to compile a transaction is known as “mediation.” Tactically, mediation culls the data inside the headend controllers, for broadcast video services, or from video servers, for on demand services, or from the CMTS (Cable Modem Termination System) for broadband and telephony services.
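As a rough illustration (not any vendor’s actual format), here’s the culling-and-rolling-up step in miniature. The record fields, source names and prices are all made up.

```python
# A simplified sketch of mediation: pulling usage records from several
# (hypothetical) sources and rolling them up per household for billing.

from collections import defaultdict

usage_records = [
    {"household": "acct-1001", "source": "video_server", "item": "VOD movie", "charge": 3.99},
    {"household": "acct-1001", "source": "cmts", "item": "guest-room broadband, 3 days", "charge": 4.50},
    {"household": "acct-1002", "source": "headend_controller", "item": "PPV event", "charge": 9.99},
]

bills = defaultdict(list)
for record in usage_records:
    bills[record["household"]].append(record)

for household, items in bills.items():
    total = sum(item["charge"] for item in items)
    print(household, f"${total:.2f}", [i["item"] for i in items])
```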
Machine memory, mercifully, is slightly more manageable than the woolly inner workings of the human brain, which desperately tries to retain some memories, and shed others. Machines can be ordered to always remember, never forget.
An obvious example is the digital set-top box. What one hears most vociferously about them is how much memory they don’t have. But, as is usually the case in technological matters, there’s a lot more to be said about machine memory and how it works than the shrill refrain of “not enough.”
For starters, electronic memory is generally stored on chips. It is measured in kilobytes (abbreviated “kB”) and megabytes (abbreviated “MB.”) Some types of memory are more expensive than others, but in general, memory prices are falling predictably.
The digital set-tops shipping today — models on or after the “2000” series of both Motorola and Cisco/Scientific-Atlanta — contain at least three different types of memory. In techno-speak, these three memory types go by “NV-RAM,” for “non-volatile, random access memory;” “flash;” and “DRAM” (pronounced “dee-ram.”)
Flash and DRAM memory generally get the most talk-time among digital box aficionados. Maybe you’ve heard a technologist refer to set-top memory as “four by eight,” or “one by two.” The first number is the number of megabytes of flash memory. The second number is the number of DRAM megabytes. A four-by-eight configuration, then, is a box that contains 4 Megabytes of flash memory, and 8 Megabytes of DRAM.
In general, the difference between the three types of set-top memory involves what stays in the storage cells when the power goes out. Non-volatile, in this sense, means “must stay.” Both NV-RAM and flash memory chips are configured to keep stored information when the power fails. DRAM memory cells, by contrast, are volatile: They blank out when the power goes, and need to be refreshed.
NV-RAM is the tiniest of the three, capacity-wise. It is generally sized in kilobytes per unit. NV-RAM holds super-critical stuff: The identification number of the box. Customer-generated preferences, including any parental locks. Information about pay-per-view purchases, for inclusion in the next monthly bill.
NV-RAM is much like the first data you learned as a child, and the first data you teach your children to remember: Name, address, telephone number.
Flash is the most expensive of the three. A sort of re-writeable NV-RAM, flash memory used in contemporary digital set-tops is sized in the low Megabytes — from 1 MB to 4 MB, in the baseline cases of most set-top makers.
Flash memory holds the software code that makes various applications work — the “applications code,” or “executables,” in technical parlance. This means the operating system, the program that invokes the electronic program guide, and any other “resident applications” present in the box.
Flash, in human terms, holds the things you tend to remember no matter what: Your birthday. The Pledge of Allegiance. Multiplication tables. How to tie your shoe. The words to your favorite songs. Things you hold dear, either by repetition or vigilance.
DRAM holds the information used by the applications held in flash — the “apps data,” in tech-speak. DRAM is volatile, meaning that if the power fails, all its storage cells are emptied. Guide data is a good example: When the power returns to the box, all of the titles and descriptions of TV shows must be re-loaded. In some cases, applications code, like the software files that make VOD work, is loaded into DRAM, and initiated when a digital customer invokes the application.
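Pulling the three together, here’s a toy model of what survives a power cycle, using the “four by eight” sizing from earlier. The NV-RAM size and the stored items are illustrative, not a real box’s inventory.

```python
# A toy model of the "what survives a power cycle" distinction.
# Sizes follow the "four by eight" example (4 MB flash, 8 MB DRAM);
# the 64 kB NV-RAM figure and the contents are illustrative.

settop_memory = {
    "nvram": {"volatile": False, "size_kb": 64,
              "holds": ["box ID", "parental locks", "PPV purchases"]},
    "flash": {"volatile": False, "size_kb": 4 * 1024,
              "holds": ["operating system", "guide application", "resident apps"]},
    "dram":  {"volatile": True,  "size_kb": 8 * 1024,
              "holds": ["guide data", "apps data"]},
}

def power_cycle(memory):
    """Volatile cells blank out; non-volatile cells keep their contents."""
    for bank in memory.values():
        if bank["volatile"]:
            bank["holds"] = []   # must be re-loaded from the network
    return memory

power_cycle(settop_memory)
print(settop_memory["dram"]["holds"])   # [] (guide data must be re-sent)
print(settop_memory["flash"]["holds"])  # still intact
```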
The constant in machine memory is this: No matter how cheap it gets, and no matter how much of it snaps into electronics devices like the digital set-top, there will never, ever be enough. Such is the nature of software: Like a gas, it tends to fill all available space.
Meta data is, at heart, data about data: descriptive information that rides along with content, so machines know what they’re handling. That’s why HTML, the language used to write Web pages, includes “meta tags” that list, in part, the contents of a particular Web page.
Meta data is also applicable to digital video and audio. In VOD implementations, for instance, meta data is the information that would otherwise be on the sticker of the tape container for the film, in pre-digital times: Title, run time, actors, writers, rating, summary, availability dates, expiration dates. Meta data for VOD titles generally gets rolled up into a “digital package,” and conveyed alongside the title itself, for later extraction by the VOD system.
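Here’s a sketch of what that rolled-up description might look like, using the fields named above. The values are invented, and real VOD packages follow industry metadata specs (CableLabs’ ADI format, for instance) that this sketch doesn’t attempt.

```python
# A sketch of the meta data that rides along with a VOD title.
# Values are invented for illustration.

vod_metadata = {
    "title": "Example Movie",
    "run_time_minutes": 104,
    "actors": ["A. Actor", "B. Actor"],
    "writers": ["C. Writer"],
    "rating": "PG-13",
    "summary": "A placeholder synopsis for illustration.",
    "availability_date": "2005-06-01",
    "expiration_date": "2005-08-31",
}

# The "digital package" pairs this description with the asset itself,
# so the VOD system can extract it later.
digital_package = {"metadata": vod_metadata, "asset_file": "example_movie.mpg"}
print(digital_package["metadata"]["title"], digital_package["metadata"]["rating"])
```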
In cable television, MIBs are historically associated with network monitoring, which is the science of automatically checking on the health of various devices in the network. Nowadays, MIBs figure heavily into many network-controlled in-home devices, from cable modems and VoIP adapters to set-top boxes. MIBs are an outgrowth of SNMP, or Simple Network Management Protocol, which is a network management language.
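A small sketch of the idea: from a monitoring system’s point of view, a MIB is a named tree of object identifiers (OIDs) it knows how to ask devices about. The two OIDs below are standard MIB-2 objects; the poll_device() function is a hypothetical stand-in for a real SNMP library call.

```python
# A sketch of how a network monitor treats a MIB: names mapped to OIDs
# it can query. poll_device() is hypothetical; a real system would call
# an SNMP library here instead of returning canned answers.

MIB_SLICE = {
    "sysDescr":  "1.3.6.1.2.1.1.1.0",   # what the device says it is
    "sysUpTime": "1.3.6.1.2.1.1.3.0",   # how long it has been up
}

def poll_device(ip_address, oid):
    """Hypothetical SNMP GET, standing in for a real library call."""
    canned = {"1.3.6.1.2.1.1.1.0": "Example DOCSIS cable modem",
              "1.3.6.1.2.1.1.3.0": 123456}
    return canned.get(oid)

for name, oid in MIB_SLICE.items():
    print(name, "=", poll_device("192.0.2.10", oid))
```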
A “mid-split” would slide that 42 MHz upper boundary to 108 MHz. What’s being “split” in “sub-split” and “mid-split” is the ratio of downstream (headend to home) bandwidth to upstream (the other direction) bandwidth. In the early days of cable amplifiers, which topped out at 220 MHz, the middle — the mid-split — was at about 110 MHz. In today’s world of 750 and 860 MHz amplifiers, a strict mid-split would occur at 375 MHz and 430 MHz, respectively — which makes the term “mid-split” a misnomer, in a vestigial sense.
Given that today’s upstream path represents a scant 4% of total available bandwidth, it seems obvious that a little more elbow room — a shift to a mid-split — makes a lot of sense. But it isn’t an easy migration. There’s a lot of stuff already riding in that area between 42 and 108 MHz, like off-air broadcasts of channel 2, at 54 MHz, and on and on. Thus mid-split conversations tend to morph immediately into spectrum dialect. All of it references the FCC’s rules on spectrum allocation, and who gets to use what, without interfering with each other.
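The back-of-the-envelope math behind that “scant 4%,” assuming the customary 5 MHz lower edge for the upstream path:

```python
# Upstream's share of total plant bandwidth, before and after a mid-split.
# The 5 MHz lower edge is assumed; plant totals follow the text.

def upstream_share(upstream_top_mhz, plant_top_mhz, upstream_bottom_mhz=5):
    """Fraction of total plant bandwidth devoted to the upstream path."""
    return (upstream_top_mhz - upstream_bottom_mhz) / plant_top_mhz

print(f"sub-split, 860 MHz plant: {upstream_share(42, 860):.1%}")   # ~4.3%
print(f"mid-split, 860 MHz plant: {upstream_share(108, 860):.1%}")  # ~12.0%
```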
Still, the mid-split remains a vibrant area of technological discussion, for two reasons: One, the upstream path could desperately use some more room. Two, as things progress toward “all digital,” including the reclamation of analog spectrum, the idea of a mid-split becomes more plausible. Once everything is truly “all digital,” and analog “goes away,” there are fewer and fewer reasons to worry about messing with what’s plunked in the way of a mid-split environment.
The earliest modem is the telephone modem, used to connect a personal computer to a data network, such as the Internet.
Like knowing where you were when John F. Kennedy was killed, or when you saw the video clips of O.J. Simpson’s white Bronco tooling down a California highway, most people carry bits of nostalgia about the first modems they ever used. When this writer began life in communications, the top speed of a phone modem was a blistering 1200 baud, or roughly 1200 bits per second. It was a happy day when that modem was replaced with a 2400 baud device.
In those times, in the mid-1980s, the Internet wasn’t happening. Then came the cable modem, in the early ’90s, which leapfrogged the carrying capacity of traditional phone modems. Cable modems operate at (shared) top speeds of 38 Mbps downstream (headend to home), and 1.5 Mbps or faster, upstream (home to headend).
Modems are necessary because personal computers and electronic devices spit out information digitally, in a series of ones and zeros. The modem’s job is to imprint the digits onto a carrier, so that they can get to the destination. That means varying the amplitude, frequency, or phase of the carrier wave, so that bits can ride on it.
Modulation types vary with the resultant speed, required quality, and immunity to noise. A rule of thumb is this: Modulation is a tradeoff between speed and noise immunity. In general, the faster the modulation, the more susceptible it is to noise. Example: QPSK (quaternary phase shift keying) modulation, a digital technique used in upstream cable and downstream satellite transmission, is slow, but plows through noise. By contrast, 256-QAM, another digital modulation technique, used in downstream cable applications, is way faster, but much more susceptible to noise.
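Some rough numbers on the speed side of that tradeoff. The 5 Msym/s symbol rate below is illustrative; cable’s actual downstream symbol rate sits a bit above that, which is how a 256-QAM channel lands near the 38 Mbps figure cited earlier, once error-correction overhead is removed.

```python
# Bits carried per symbol for the two modulation formats named above,
# at an illustrative symbol rate.

import math

def bits_per_symbol(constellation_points):
    return int(math.log2(constellation_points))

symbol_rate = 5_000_000  # 5 Msym/s, an illustrative figure

for name, points in [("QPSK", 4), ("256-QAM", 256)]:
    bps = bits_per_symbol(points)
    print(f"{name}: {bps} bits/symbol -> ~{symbol_rate * bps / 1e6:.0f} Mbps raw "
          "(before error-correction overhead)")
```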
Because analog television cameras and microphones emit signals at baseband, only one could be carried per wire — because a second one, at the same frequency, would interfere with the first. That drove a major design emphasis in early cable television systems: To assign several baseband signals to separate frequencies. The assigning and impressing of those multiple, baseband signals onto cable is modulation.
When the individual frequencies, carrying baseband information, are smooshed together to ride together over the cable system, that’s called frequency division multiplexing, or “FDM,” a form of modulation. It is FDM that yielded the concept of “channels” — each channel is the individual baseband signal, carrying audio and video specific to it.
Amplitude modulation modifies the strength of the carrier signal (its amplitude) so as to imprint a channel onto it. Frequency modulation modifies the number of carrier cycles per second (its frequency, measured in hertz), to correspond with and carry the channel. Frequency modulation, or FM, is generally acknowledged as more sturdy and noise-resistant than AM, although it uses more bandwidth to behave better.
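For the numerically inclined, here’s a small sketch of the two flavors using NumPy, with an invented 1 kHz “channel” and a 20 kHz carrier chosen just to keep the numbers manageable.

```python
# A numeric sketch of AM and FM: the same baseband "channel" rides on the
# carrier's amplitude in one case, and nudges its frequency in the other.

import numpy as np

fs = 200_000                        # samples per second
t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal
message = np.sin(2 * np.pi * 1_000 * t)   # the baseband "channel"
carrier_hz = 20_000

# AM: the message rides on the carrier's strength (amplitude).
am = (1 + 0.5 * message) * np.cos(2 * np.pi * carrier_hz * t)

# FM: the message pushes the carrier's frequency up and down.
freq_deviation_hz = 5_000
phase = 2 * np.pi * np.cumsum(carrier_hz + freq_deviation_hz * message) / fs
fm = np.cos(phase)

print(am[:3], fm[:3])
```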
Given the mixed meanings of the word “monitor,” a distinction: This is monitor like hall supervisor, not monitor like Harriet the Spy.
The monitor app (short for “application”) grew out of the ongoing work to make OCAP (OpenCable Applications Platform) a deployable piece of middleware. It necessarily swirls around conversations of future consumer devices that come with a built-in digital cable set-top. Because the early application of such combinations is the digital TV, this definition will describe the monitor app in that context.
Think of all the things a person can do with a TV, in terms of basic functions: Control the volume, change the channel, manipulate a built-in DVD or DVR player, invoke a program guide. The monitor app is the shepherd of the cable-specific functions.
The monitor app has a precursor, in standalone digital set-top boxes. It’s known as “the resident app,” meaning it is always there, inside the box. It does behind-the-scenes things, like fetching and displaying the volume banner, or fetching the guide, or implementing any mechanisms listed as “settings.”
By contrast, the monitor app is more like a “co-resident app” in the digital TV/set-top combo. It gets downloaded from the cable operator when a customer brings the new combo device home, plugs it in, and wants to summon whatever cable offerings are of interest.
That means the monitor app isn’t a consumer-facing thing. It does its work in the background. It arranges what to do in situations where an operator’s guidance is required — like when security is involved (premium apps), or when a rogue app tries to bring the network to its knees. It minds application lifecycles, removing software when it expires, and managing new software when downloaded. If two different applications are elbowing for the same resource, at the same time — memory, processing power — the monitor app mediates.

Monitor applications specific to OCAP are not yet mainstream. At this writing (mid-2005), consumer electronics manufacturers are developing prototypes; MSO technologists are working up their versions. Timing aside, the monitor app is an inevitable and necessary intersection between consumer electronics devices, and digital cable services.
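Since OCAP monitor apps aren’t fielded yet, here is only a conceptual sketch (decidedly not the OCAP API) of two of those refereeing jobs: arbitrating a contended resource, and reclaiming what an expired application held. The class and method names are invented.

```python
# A conceptual sketch of monitor-app refereeing: lifecycle cleanup and
# arbitration of a contended resource (memory, in this toy case).

class ToyMonitorApp:
    def __init__(self, total_memory_mb):
        self.free_memory_mb = total_memory_mb
        self.running = {}                       # app name -> memory held

    def launch(self, app_name, needed_mb):
        """Grant the resource if it's free; otherwise make the app wait."""
        if needed_mb > self.free_memory_mb:
            return f"{app_name}: denied for now (only {self.free_memory_mb} MB free)"
        self.free_memory_mb -= needed_mb
        self.running[app_name] = needed_mb
        return f"{app_name}: launched with {needed_mb} MB"

    def expire(self, app_name):
        """Remove software when it expires, reclaiming what it held."""
        self.free_memory_mb += self.running.pop(app_name, 0)

monitor = ToyMonitorApp(total_memory_mb=8)
print(monitor.launch("program_guide", 5))
print(monitor.launch("vod_client", 5))      # contended: must wait
monitor.expire("program_guide")
print(monitor.launch("vod_client", 5))      # now it fits
```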
Moore’s Law, Intel co-founder Gordon Moore’s observation that the number of transistors that fit on a chip doubles roughly every couple of years, gets declared dead with some regularity. The reasoning: There is a point at which nothing more fits. Nonetheless, Moore’s predictions have remained stable since he first made them, in 1965, and have become the catalyst for increasingly inexpensive personal computers and consumer electronics devices.
In general, without MPEG compression, video information, when digitized, carries about the same characteristics (in terms of size) as it did in analog. But when digitized, video is ripe for compression, by removing the picture elements that are the same from one frame of video to the next. A common example given when describing video compression is a scene in which the only thing moving is an airplane, flying by. With compression, the first frame of background scenery is kept. Each frame of video that follows is marked — “same as the last one.” The airplane’s movement thus becomes the only element that needs attention.
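Here’s the “same as the last one” idea boiled down to a few lines. Frames are tiny lists of pixel values; none of the block transforms, motion vectors, or other machinery of a real MPEG encoder appear in this sketch.

```python
# A stripped-down illustration of frame differencing: only the picture
# elements that change between frames get encoded.

def encode_frame(previous, current):
    """Return only the (position, new_value) pairs that changed."""
    return [(i, cur) for i, (prev, cur) in enumerate(zip(previous, current)) if prev != cur]

def decode_frame(previous, changes):
    frame = list(previous)
    for position, value in changes:
        frame[position] = value
    return frame

with_plane  = [7, 7, 9, 9, 7, 7, 7, 7]   # static scenery, airplane at left
plane_moved = [7, 7, 7, 7, 9, 9, 7, 7]   # same scenery, airplane drifted right

changes = encode_frame(with_plane, plane_moved)
print(changes)                                          # only the airplane's pixels
print(decode_frame(with_plane, changes) == plane_moved) # True
```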
The far-and-away leader in deployed video compression is MPEG-2, which scrunches (standard definition) video to a rate of 3.75 Mbps. Coming up fast, however, is MPEG-4, which compresses (SD) video at 1 Mbps (or less), with the same video quality.
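The arithmetic behind why that matters, using the roughly 38 Mbps that one downstream channel carries (and ignoring multiplexing overhead):

```python
# How many standard-definition streams fit in one ~38 Mbps downstream
# channel, at each compression rate named above.

channel_mbps = 38
for codec, stream_mbps in [("MPEG-2", 3.75), ("MPEG-4", 1.0)]:
    print(f"{codec}: about {int(channel_mbps // stream_mbps)} SD streams per channel")
```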
MPEG transport hovers around the edges of technical conversations, usually in a tangle of technology terms. Here’s a random example of an actual spoken sentence: “They’d still need a set-top with MPEG-4 decompression on an MPEG 2/4 chip, and then it’s a matter of putting a new PID in the MPEG transport stream.”
Uh-huh.
In practicality, MPEG transport is more bit organizer than conveyor belt. It groups packets of varying flavors — digital video, plus those from cable modems and other Internet Protocol (IP) devices — for the ride to homes. It’s the bill of lading for a stream of digital video bits, detailing which are audio, video, and “business” bits, such as timing information and security.
MPEG transport is relevant because every fielded digital set-top box, cable modem, and VoIP adapter uses it. You read that correctly: Even IP devices, like cable modems and VoIP devices, send their bits over MPEG transport. What happens is, when IP traffic enters a headend CMTS, it is tagged with identifiers, then slipped into the outgoing MPEG transport stream.
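For the curious, here’s what that bill of lading looks like at the packet level: every MPEG transport packet is 188 bytes long, begins with the sync byte 0x47, and carries a 13-bit PID that says which stream its bits belong to. The packet below is fabricated for illustration.

```python
# Parse the 4-byte header of an MPEG transport stream packet.

def parse_ts_header(packet: bytes):
    assert len(packet) == 188 and packet[0] == 0x47, "not a transport packet"
    pid = ((packet[1] & 0x1F) << 8) | packet[2]    # 13-bit packet identifier
    payload_start = bool(packet[1] & 0x40)
    continuity = packet[3] & 0x0F
    return {"pid": pid, "payload_start": payload_start, "continuity": continuity}

# A made-up packet: sync byte, PID of 0x0101, payload-start flag set.
fake_packet = bytes([0x47, 0x41, 0x01, 0x10]) + bytes(184)
print(parse_ts_header(fake_packet))   # {'pid': 257, 'payload_start': True, 'continuity': 0}
```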
The new-ish counter to “MPEG transport” is “IP transport.” Extremely pervasive, IP transport is increasingly viewed as a passageway that will only shed expense and gain innovation as research and development continue.
There are two main things to remember: MPEG-2 isn’t just about compressing. It’s also about how those compressed bits are organized for transit. And, the “MPEG or IP” question doesn’t have to be “either/or.” Just as television didn’t displace radio, advances in signal transport usually don’t dislodge earlier versions.
In an Internet sense, “multicast” is a one-to-many technique that puts routers, instead of the video source, to the task of replicating packets designed for different recipients.
When cable technologists talk multicast, they’re usually explaining one of two things. One is how to move video over a new path — the IP (Internet Protocol) path, to digital cable ready devices, and any connected storage or display devices. Two is how to best link broadband-equipped PCs to broadcast quality video.
One receives a multicast by joining a “multicast group.” Content is “pulled,” not “pushed,” to the person wanting it. That makes multicast more analogous to switched digital than to traditional, everybody-gets-it-even-if-they’re-not-watching broadcasts.
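A minimal sketch of what “joining a multicast group” looks like on an IP network, using Python’s standard socket module; the group address and port are illustrative.

```python
# Join an IP multicast group and wait for packets sent to it.
# 239.0.0.0/8 is the administratively-scoped multicast range.

import socket
import struct

GROUP, PORT = "239.1.1.1", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Tell the nearest router we want this group's packets (an IGMP join).
membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

# From here, packets sent to 239.1.1.1:5004 are replicated toward us by
# the routers in between, not by the video source itself.
# data, sender = sock.recvfrom(2048)
```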
Telco video providers are expected to be big multicasters, in part because they don’t have a legacy base of devices that can’t do IP video.
Usage: “To multiplex is to mux; to mux is to multiplex.”