A software mechanism that summarizes and simplifies a long string of code so that a “higher” layer of code can use it, without having to know every last little detail.
Abstraction layers crop up in any industry that uses software. They usually travel with a prefix: Hardware abstraction layer, software abstraction layer, database abstraction layer, network abstraction layer. In each case, the intent is to simplify, for the layer “above” it, how to proceed with a software activity.
Software is largely invisible. You can’t put it in the palm of your hand and have a look at it. Still, it needs to be understood — and that’s why software people are so prone to drawing stacks of rectangles when describing their world.
In a traditional digital video sense, a typical stack starts with a rectangle at the bottom, marked “set-top hardware.” That’s the chips, and the software inside them. Above it, one marked “operating system.” Above that, a “middleware” rectangle. On top are those marked “applications” — maybe the electronic program guide, or on-demand ordering mechanism, or interactive trigger.
Abstraction layers do their interpretive work at the north-south intersections of those four rectangles. A hardware abstraction layer summarizes the “set-top hardware” box for the “operating system” box above it, telling it how to proceed with a particular activity. This work continues on up the stack.
Let’s use as an example the navigational part of a video-on-demand system. Navigators are applications, so they qualify for inhabitancy in that top stack. Say the navigator wants to tune a program ordered up by Customer Jane. Its abstraction layer says to the one below it, “Fetch me the stream for Pulp Fiction, please.” It doesn’t say: “Tune the following 6 MHz carrier, find me the program identifier for the MPEG-2 stream related to Pulp Fiction, isolate the index frames, begin filling the MPEG buffer, decompress the video, and get it on the screen, please.”
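The division of labor just described can be sketched in a few lines of code. Everything in this sketch is hypothetical: the class names, the catalog, and the 651 MHz carrier and packet-identifier values are invented for illustration. The point is simply that one high-level request fans out into several low-level steps the caller never sees.

```python
class HardwareLayer:
    """Stands in for the set-top chips and their low-level controls."""
    def tune_carrier(self, freq_mhz):
        return f"tuned {freq_mhz} MHz"

    def select_pid(self, pid):
        return f"selected PID {pid}"

    def decode(self):
        return "decoding MPEG-2 video"

class AbstractionLayer:
    """Summarizes the hardware for the layer above it."""
    # Hypothetical lookup table mapping titles to (carrier, PID) pairs.
    CATALOG = {"Pulp Fiction": (651.0, 0x1FF)}

    def __init__(self, hw):
        self.hw = hw

    def fetch_stream(self, title):
        # One simple request from the application above...
        freq, pid = self.CATALOG[title]
        # ...becomes the detailed steps below.
        return [self.hw.tune_carrier(freq),
                self.hw.select_pid(pid),
                self.hw.decode()]

navigator = AbstractionLayer(HardwareLayer())
print(navigator.fetch_stream("Pulp Fiction"))
```

The navigator never touches `tune_carrier` or `select_pid` directly; that is the abstraction doing its job.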
Usage: Having that hardware abstraction layer means they’ll be able to get OCAP-based set-tops from multiple suppliers.
Shorthand for Access Broadband over Power Lines, which is synonymous with Broadband over Power Lines, or BPL. In terms of who says it which way, the Federal Communications Commission appears to prefer BPL with the “Access” prefix.
Access BPL refers to the ability to get faster-than-dialup connectivity to the Internet by hooking into an everyday power outlet. What makes it potentially powerful (no pun intended) is its sheer potential reach: Electricity is a necessity, so its service footprint covers essentially 100 percent of homes.
Consumers of BPL get an adapter box no larger than the adapter that powers your portable boom-box. The wire coming out of it ends in an Ethernet connector.
Until early 2005, data throughput levels for BPL lingered in the 300 kbps range. At the 2005 Consumer Electronics Show in Las Vegas, rates for BPL headlined above 200 Mbps. That’s plenty enough for a house with multiple HDTVs, a couple of broadband Internet links, and a voice service. It makes the electric companies candidates to deploy modems to join cable and DSL in the battle for residential broadband customers.
The real competitor to access BPL, however, isn’t the incumbent broadband providers. It’s the HomePlug camp, which is in the market (pervasively) with adapters that do everything access BPL promises — for cable and DSL implementations. (See HomePlug.)
How Access BPL Began
In the 1990s, many of the nation’s 3,000-plus electric companies started running fiber links between their substations. Substations are those fenced-off facilities where voltage generated from power stations is distributed to nearby homes and businesses. The purpose of these fiber links between substations was to shuttle telemetry information, so as to monitor network performance and prevent trouble from arising.
After the widespread power grid outages in the northeastern U.S. during August 2003, electric companies got even more proactive about linking substations with fiber. That means the Internet-facing links are there, albeit serendipitously.
The trick with BPL is finding a way to pass broadband signals off the fiber, around the high-voltage (HV) plant, through the medium voltage (MV) plant, and across transformers, to homes. Not surprisingly, the vendor community responded. Also not surprisingly, their approaches are vastly different.
Technologists familiar with the approaches in play say the vendor environment for access BPL is as rich with differences as were cable modems, prior to the DOCSIS standard. Some use spread spectrum, others use multiple carrier techniques. There are and will be more.
BPL Caveats
Despite its potential ubiquity, access BPL isn’t necessarily a shoo-in. Like telcos entering the video market, power companies are stepping into a zone already served by two industries — three, if you count wireless hot spots.
And there’s the corporate culture issue. Until now, the mindset of the stereotypical electric company has been one of “keep the lights on, don’t get hurt, safety first.”
Plus, there’s the age-old matter of interference. Power cables aren’t shielded. RF leakage is bad, particularly for sensitive government agencies, like the Federal Aviation Administration, and for amateur radio operators. Even the telcos don’t like BPL much. They say interference is likely because telephone lines run parallel to electric wires.
In its October 2004 Report & Order on BPL, the FCC set forth a plan to protect against interference — sort of. To quote directly from the Order: “The benefits of Access BPL for bringing broadband services to the public are sufficiently important … as to outweigh the limited potential for increased harmful interference that may arise.”
The twisted-pair telephone line that connects a phone to a telco central office, so someone can access the public switched telephone network (PSTN) to make a call, or gain broadband admission to the Internet, over a DSL modem.
Cable plant uses coaxial cable as the line that connects set tops, cable modems, voice devices, and any other “cable-ready” gadgetry to the nearest node, so that consumers of those devices can access the services they deliver.
The Ultimate Truth about access lines: In general, people who own them, cherish them. People who don’t own them, covet them — and will do just about anything to get them. Consider AT&T (pre-SBC). It spent over $100 billion between 1998 and 2000 to buy big cable operators: Tele-Communications Inc. and MediaOne. Why? It wanted a way to get those millions of access lines all suited up for AT&T phone service.
Short version is it didn’t work out. AT&T sold its cable properties to Comcast in 2002 for around $30 billion. Gulp. If only hindsight were available retroactively! In 2004, AT&T found a way to nudge a phone service into homes, by slipping it in on top of cable and DSL modems using voice over Internet Protocol technology — with nary a dime spent to access cable or phone wires. Oh, the irony.
(Then, in 2005, SBC bought AT&T, and changed its name to AT&T. Talk about an identity crisis.)
Lines in Decline
Retail access lines are diminishing among traditional Bell telephone companies as customers turn to alternative technologies — like wireless phones — and embrace competing phone providers.
Access Node
Also called just “node.”
An access node is a physically secured place in a network where signals are manipulated one last time before they move into homes or businesses. In cable plant, the access node is generally the place where signals stop moving over light (fiber), and start moving over radio frequency (RF), via coaxial cables.
The types of manipulation that can happen to a signal in a node are many. The signal can be modulated, demodulated, multiplexed, protected, or re-organized, depending on its transit direction (downstream, toward homes; or upstream, toward headends).
Usage: “If they run into bandwidth scarcity, one option, at least for on-demand services, is to split the access nodes.”
A fitting, connector, board or similar device used to make two dissimilar things suitable to work together. A timely example of an adapter is the gizmo that’ll be needed before cable companies get to the all-digital cable system.
As adapters go, the all-digital gizmo needs to be inexpensive, and unobtrusive. Its task is to convert incoming digital signals back to analog, so that the 400 million or so analog TVs and VCRs out there in the U.S. know what to do. Without a digital-to-analog adapter, everybody’s analog stuff becomes extinct in February of 2009. With an adapter, there’s a lifeline.
The coordinates of just about anything, as necessary to find it.
Two decades ago, an “address” meant one thing: Your house number, street, town, state, and ZIP code. Today, you and your stuff have handfuls of addresses. They’re the secret ID numbers inside your PC, cable modem, digital set-top box, voice-over-IP adapter, PDA, cell phone, and anything else you own that contains software that communicates with other software, in order to do stuff. Thankfully, you don’t usually need to remember any of them. They do their addressing in the background.
Usage: “The unique identity of a cable modem is its MAC (media access control) address.”
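As a concrete illustration: a MAC address is 48 bits long, conventionally written as six hexadecimal pairs, and its first three octets identify the device’s manufacturer (the OUI, or organizationally unique identifier). The address in this sketch is made up.

```python
def parse_mac(mac):
    """Split a colon-separated MAC address and return its OUI portion."""
    octets = mac.split(":")
    if len(octets) != 6:
        raise ValueError("a MAC address has exactly six octets")
    # The first three octets are the manufacturer's OUI.
    return ":".join(octets[:3]).upper()

print(parse_mac("00:1a:2b:3c:4d:5e"))
```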
As the designator implies, an “adjacent channel” is one that is directly “above or below,” or “left or right,” of another channel. Thus the adjacent channel to channel 10 is either channel 9, or channel 11.
Why care? In engineering parlance, “adjacent channel” usually has a suffix: interference. It goes like this: Channels that are next to each other, in the frequency domain, may have some overlap, spectrally. That overlap can cause two adjacent channels to step on each other — which can have a bad effect on picture quality and sound.
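A minimal sketch of that adjacency, using a simplified 6 MHz channel plan (real broadcast and cable channel maps contain gaps and offsets this toy plan ignores):

```python
CH_WIDTH_MHZ = 6  # NTSC-style channel width

def band_edges(channel, plan_start_mhz=54, first_channel=2):
    """Return (low, high) edge frequencies for a channel in a toy plan."""
    low = plan_start_mhz + (channel - first_channel) * CH_WIDTH_MHZ
    return (low, low + CH_WIDTH_MHZ)

# Channels 10 and 11 butt up against each other at 108 MHz,
# which is exactly where spectral overlap can occur.
print(band_edges(10), band_edges(11))
```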
Usage: “Adjacent channel interference looks like a shimmering, flickering distraction that usually corresponds to a neighboring channel’s sound.”
A version of ADSL that roams higher in the frequency domain of a telephone wire — to the 2.2 MHz range, from 1.1 MHz. The extra bandwidth gives it breathing room for faster speeds. How much speed, however, still depends on distance. Best case, meaning shortest loop length, puts the theoretical max for ADSL 2+ at a healthy 24 Mbps, downstream. As of 2004, however, roughly half of the homes in the U.S. were attached to telco loops of about 12,000 feet. At that distance, ADSL 2+ can move bits to homes at about 6 Mbps.
That’s plenty enough for video, even without advanced compression, but it isn’t quite 24 Mbps. And it’s not just about Web-surfing, or a faster bullet in the broadband wars. It’s about transforming telephone networks to deliver credible television service. Digital video set tops, with embedded ADSL 2+ chip sets, started to emerge in 2004, priced at around $185. The early models used MPEG-2 compression, with advanced compression on the way. The boxes make telephone providers capable of offering high-definition TV, video-on-demand and interactivity.
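The distance-versus-speed tradeoff can be roughed out from the two figures above: roughly 24 Mbps on a very short loop, and roughly 6 Mbps at about 12,000 feet. The straight-line fall-off between those two anchor points is purely an illustrative assumption; real rates depend on wire gauge, noise, and crosstalk.

```python
def adsl2plus_rate_mbps(loop_feet):
    """Illustrative linear interpolation between the text's two data points."""
    best_rate, worst_rate = 24.0, 6.0   # Mbps, from the text
    worst_len = 12000.0                 # feet, from the text
    rate = best_rate + (worst_rate - best_rate) * (loop_feet / worst_len)
    return max(rate, 0.0)               # never report a negative rate

print(adsl2plus_rate_mbps(0))       # best case
print(adsl2plus_rate_mbps(6000))    # halfway out
print(adsl2plus_rate_mbps(12000))   # the typical U.S. loop cited above
```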
Usage: “The 2.2 MHz frequency range used by ADSL 2+ is considerably easier to manage than, say, VDSL, which ventures as high as 19 MHz.”
Asymmetrical Digital Subscriber Line, or, in its most common parlance, DSL.
The prevailing technology for connecting people to telco-delivered broadband services. It works by availing itself of unused frequencies on telephone wires to transmit data, generally at multi-megabit per second speeds. The amount of speed depends on the length of the line between the telco central office and the customer’s home. Shorter line, faster speed. Longer line, slower speed. “Asymmetrical” means more data flows downstream (toward homes) than upstream. See ADSL 2+.
There are three “flavors” of ADSL:
· G.lite, also known as “DSL Lite” — a medium-speed (up to 1.5 Mbps downstream) offering
· RADSL — where the “RA” stands for “rate adaptive,” meaning the line can orient itself for varying speeds
· VDSL — where the “V” stands for “Very high bit rate” of 26 Mbps or so, on short (50 meter or less) lines — like those that drop off of fiber-deep architectures.
A squisher of digitized video, the advanced video codec variously shows up under the following names: MPEG-4 Part 10, JVT, H.264, MPEG-AVC or simply AVC. It was jointly created by ISO-MPEG and ITU-VCEG. A competing standard, VC-1, from Microsoft, is currently undergoing standardization at SMPTE.
All are newer ways to squeeze video than is possible within the predominantly deployed version, known globally as “MPEG-2” (where “MPEG” stands for Moving Pictures Experts Group). What’s “advanced” about advanced video codecs, perhaps obviously, is their compression rate: They squeeze video further than MPEG-2. That means more digital video content can be sent, and stored (think DVRs here), than now.
A “codec” is an engineering coupling of the words “coder” and “decoder.” In the case of advanced video codecs, a piece of video encoded with MPEG-4 and its counterparts, at a rate of 1 Mbps (and dropping), looks essentially the same as a piece of video encoded with the existing MPEG-2 stuff, at 3.75 Mbps.
In essence, advanced codecs produce thinner streams that work as well as thicker streams, to do the same thing.
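That thin-versus-thick point reduces to simple division. Assuming a single 6 MHz QAM-256 channel carries about 38.8 Mbps of payload (a commonly cited figure, stated here as an assumption), the stream counts compare like this:

```python
SLOT_MBPS = 38.8     # assumed payload of one 6 MHz QAM-256 channel
MPEG2_MBPS = 3.75    # per-stream rate from the text
AVC_MBPS = 1.0       # per-stream rate from the text

mpeg2_streams = int(SLOT_MBPS // MPEG2_MBPS)
avc_streams = int(SLOT_MBPS // AVC_MBPS)
print(f"{mpeg2_streams} MPEG-2 streams vs. {avc_streams} AVC streams per channel")
```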
About All the Names
Why so many names for the new compressors? Without going into the lifecycle of a technical standard (which can outlive even healthy dogs), it goes like this: Two different standards-setting groups (MPEG and the International Telecommunications Union, or ITU) were both working on an advanced video codec. Naturally, both went by different names — MPEG-4 and H.264, respectively.
The two groups decided to combine their efforts, calling themselves “JVT,” for Joint Video Team. But old names die harder than old habits. The MPEG people started calling the work of the combined group “MPEG-4 Part 10,” because they’d had nine parts before the merged codec came along. The ITU people kept on calling it H.264. (Most people pronounce it “H dot 264.”)
That was too confusing. Ultimately, the JVT opted to call its codec “AVC,” for “Advanced Video Codec.”
What’s Advanced About It
At a structural level, AVC isn’t much different from MPEG-2, experts submit. It’s still all about removing the parts that are the same, from one frame of digitized video to the next. It turns out that lots of things are the same, one frame to the next. But we don’t notice them, because human eyes like movement better.
To compress by removing repetition requires solid reference points. In MPEG-2, there are two: I (“intra” or “initialization”) frames and P (“predictive”) frames. A third, the “B (bi-directional) frame,” can be predicted using the prior two reference frames.
In AVC, two or more frames (including B-frames) can be used as reference frames. AVC also introduced a new tool known as “intra-prediction,” which MPEG-2 video coding does not have.
Intra (or initialization) frames — I-frames — are regularly occurring reference points that initiate a compression sequence. P-frames, true to their name, are predicted from the preceding I- or P-frame, carrying only what has changed since it. B-frames look forward and backward, to anticipate and build a forthcoming frame.
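The frame relationships can be modeled in miniature. The IBBP pattern below is a common textbook example, not something the standard mandates; the function simply records which neighboring frames each frame leans on.

```python
def gop_references(pattern):
    """For each frame in a GOP pattern, list the frames it is predicted from."""
    refs = {}
    for i, kind in enumerate(pattern):
        if kind == "I":
            refs[i] = []  # self-contained reference point
        elif kind == "P":
            # Predicted from the most recent I- or P-frame.
            prev = max(j for j in range(i) if pattern[j] in "IP")
            refs[i] = [prev]
        else:  # "B"
            prev = max(j for j in range(i) if pattern[j] in "IP")
            nxt = [j for j in range(i + 1, len(pattern)) if pattern[j] in "IP"]
            refs[i] = [prev] + nxt[:1]  # one backward and one forward reference
    return refs

print(gop_references("IBBP"))
```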
That methodology remains the same in JVT/AVC. What’s different is how the squeezing is accomplished: more efficient entropy coding (including context-adaptive arithmetic coding) and a refined way of computing transform coefficients, among other tools. Suffice it to say that JVT/AVC brings further efficiencies to motion compensation, artifact filtering, and about a dozen other compression-related processes.
Tactically, up-shifting to an advanced video codec, which is not backward compatible with MPEG-2 video coding standard, almost certainly means new equipment in the home — like new set-top boxes, or digital TVs. Early units will likely contain both MPEG-2 and advanced video codecs.
Mathematic formulas, often secret, usually in software, that are optimized to perform a specific, usually very complicated task. Algorithms are generally the secret sauce of software and equipment providers, touted as the reason their equipment is different or better. There are algorithms to compress digital video, to encrypt it and to place it in video servers. Algorithms schedule data through routers, perform statistical multiplexing, and accomplish a wide range of data instructions.
Usage: “Vendors are forever at work on algorithms to help make upstream data pass more quickly and with more resilience.”
The big, slow process of transforming television content, distribution networks and in-home equipment (such as TVs and VCRs) away from traditional analog, to digital.
“All digital” affects practically every aspect of television — from creating it, to transmitting it, to watching it.
Background: In 1996 the Federal Communications Commission (FCC) adopted a standard for the transmission of digital television. The intent, in part, was to reclaim the analog spectrum occupied by broadcast TV transmissions. Initially, the “digital transition” was to occur in 2006 — an aggressive timeline, given the amount of change necessary. More recently, the cutover date was fixed at February 17, 2009.
AM – Amplitude Modulation
One method for attaching (modulating) a video, audio or data signal onto a radio frequency carrier, by altering the power (amplitude) of the desired signal. Modulation, in general, is a pre-requisite for moving TV pictures or other information from one place to another, and is simply a series of techniques that bind the desired signals onto an electromagnetic carrier, to get to the destination. Amplitude modulation differs from frequency modulation (FM), which joins a signal onto a carrier by varying the frequency of the desired signal.
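In formula form, an AM signal is s(t) = (1 + m*x(t)) * cos(2*pi*fc*t): the message x(t) varies the carrier’s amplitude, scaled by the modulation index m. The carrier frequency, tone frequency, and index below are arbitrary illustrative values.

```python
import math

def am_sample(t, message, carrier_hz=1000.0, mod_index=0.5):
    """One sample of an amplitude-modulated carrier at time t (seconds)."""
    return (1 + mod_index * message(t)) * math.cos(2 * math.pi * carrier_hz * t)

# A 10 Hz test tone as the message signal:
tone = lambda t: math.sin(2 * math.pi * 10 * t)

# At t=0 the tone is zero, so only the unmodulated carrier remains.
print(am_sample(0.0, tone))
```

With mod_index = 0.5, the envelope swings between 0.5 and 1.5 times the carrier amplitude — the power variation that carries the information.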
Usage: Quadrature amplitude modulation (QAM) is to digital what amplitude modulation is to analog.
The use of amplitude modulation to launch video, data and voice information into a stretch of fiber optic cable.
AM fiber refers more to the opto-electronics at either end of a piece of fiber than to the optical cable itself. Advancements in linear light source lasers in the early 1980s produced a form of light that could propel signals by adjusting signal power (amplitude). The use of analog transmissions over fiber optic cable wasn’t possible without these light source breakthroughs.
Usage: Historically, the development of AM fiber is a critical milestone in the evolution of today’s hybrid fiber-coax (HFC) systems.
A device that accepts a signal at its input, and presents the same signal at its output — but at a higher amplitude, and without marked distortion. Amplifiers generally reside in a metal housing, about the size of a pole-mounted mailbox. They contain booster circuits to extend a signal’s reach.
Typically, five amplifiers per square mile are used in the “last mile” of contemporary HFC (hybrid fiber-coax) cable systems — one every 1,000 or so feet. Physically, amplifiers hang directly from coaxial trunk cables, or are housed in environmentally matching pedestals.
Amplifiers pick up in the signal delivery chain after the optical-to-RF point — the node — where the “F” (fiber) in “HFC” passes off to the “C” (coaxial cable). Network power, typically 60 or 90 volts AC, is required. The amplifier draws the power from the coaxial cable.
One nagging engineering characteristic of the amplifier is its utter indiscrimination between “desired” signals, and noise. The amplifier lifts the level of all that passes through it: the desired signal, as well as noise or distortion. Today, few cable amplifiers are shipped without bi-directional/two-way capabilities, to ensure adequate upstream passage of interactive requests, like a PC click through the cable modem, or an on-demand “trick mode,” like pausing or rewinding.
Historical tidbit: One of the longest amplifier cascades on record was in Manitoba, Canada, where a small telephone company, Manitoba Telephone, strung together 122 amplifiers to shuttle 12 channels to its far-flung customers, during the late 1970s. That’s a far cry even from the most startling recollections of U.S. cable engineers, who marvel about earlier-generation cable systems in which 80 amplifiers cascaded from a microwave hub.
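The arithmetic behind those cascade numbers: for N identical amplifiers, carrier-to-noise ratio (CNR) degrades by 10*log10(N), a standard rule of thumb for identical units in cascade. The 62 dB single-amplifier CNR below is an invented illustrative figure.

```python
import math

def cascade_cnr_db(single_amp_cnr_db, n_amps):
    """CNR after a cascade of n identical amplifiers (rule-of-thumb model)."""
    return single_amp_cnr_db - 10 * math.log10(n_amps)

print(round(cascade_cnr_db(62.0, 5), 1))    # a short, fiber-deep cascade
print(round(cascade_cnr_db(62.0, 122), 1))  # the Manitoba marathon above
```

The 122-amp cascade gives up more than 20 dB of CNR relative to a single amplifier, which is why fiber-deep designs keep cascades short.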
Usage: Contemporary, fiber-deep cable systems generally gang five or fewer amplifiers into cascade.
A signal that can be manipulated to deliver video, voice or data information by altering its power or frequency.
Visually, an analog signal resembles the letter “S” on its side — several of them, connected in a series — and contains an infinite number of cycles that can be altered. (A cycle is a full traverse of the tilted “S.”) By contrast, a digital signal has two allowable positions, or states: On or off.
In 2004, the inferiority complex for “analog” began, as the world started moving toward “all-digital.” Suddenly, good old analog started to sound old fashioned, worn out, not new. Direct broadcast satellite (DBS) providers DirecTV and EchoStar Corp. promulgated the notion in television ads, calling out cable for providing only partially digital services.
Here’s the big “on the other hand:” Half of all cable customers don’t take services that require a set-top box. In that sense, analog could be described as an asset, because, like a spigot, analog pours out of the wall socket and into any analog cable-ready set or VCR, without the need for converters.
Usage: Back in the days when you recorded Grand Funk Railroad songs onto an audiocassette, you were dabbling strictly in analog. In other words, you copied a sound wave to a tape in its original (analogous) form. Digital signals, in contrast, are numeric representations of the original wave.
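That contrast can be shown in miniature: sample one cycle of a sine wave (the “analog” original) and map each sample to one of a small number of integer codes. The sample and level counts here are arbitrary illustrative choices.

```python
import math

def digitize(samples_per_cycle=8, levels=16):
    """Sample one sine cycle and quantize it to integer codes."""
    codes = []
    for n in range(samples_per_cycle):
        x = math.sin(2 * math.pi * n / samples_per_cycle)  # the analog wave
        code = round((x + 1) / 2 * (levels - 1))           # map [-1, 1] to 0..levels-1
        codes.append(code)
    return codes

print(digitize())   # the wave, reduced to a short list of numbers
```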
Analog video is regularly converted to digital, in a high-quality manner. When that conversion has occurred, concerns quickly mount about copyright protection, Internet re-distribution, and the “Napsterization” of video.
Some content producers, including Hollywood movie studios, support watermarking as a way to combat piracy stemming from exploitation of the analog hole. They want new digital devices to come with watermark detectors that would disable illicit recording once they sense the presence of a watermark.
Usage: No matter how much copy protection technology is integrated into new-age digital devices, it’s still possible for people to make old-fashioned copies of analog output. A crude example: using a digital video camera to record a movie from your TV screen.
Because 10 or more “standard definition” (not hi-def) digital channels can occupy the space of one analog channel, “analog spectrum recapture” is increasingly viewed as a plausible way to maximize available bandwidth.
Usage: In contemporary, 750 MHz cable systems, analog channels occupy more than two-thirds of the available “shelf space,” from 52-550 MHz. Digital services typically reside between 550-750 MHz. By slowly turning analog channels into digital channels, bandwidth is maximized.
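The figures in that usage note reduce to quick arithmetic:

```python
ANALOG_START_MHZ, ANALOG_END_MHZ = 52, 550  # analog tier, per the text
SLOT_MHZ = 6                                # one analog channel slot
SD_PER_SLOT = 10                            # digital SD channels per slot, per the text

analog_slots = (ANALOG_END_MHZ - ANALOG_START_MHZ) // SLOT_MHZ
recaptured_sd_channels = analog_slots * SD_PER_SLOT
print(analog_slots, "analog slots could carry", recaptured_sd_channels, "SD digital channels")
```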
ANSI is the official U.S. representative to the International Accreditation Forum (IAF), the International Organization for Standardization (ISO) and, via the U.S. National Committee, the International Electrotechnical Commission (IEC). ANSI is also the U.S. member of the Pacific Area Standards Congress (PASC) and the Pan American Standards Commission (COPANT).
Armed with a budget that runs in the double-digit millions, ANSI traffics in everything from the number of threads in a household light bulb to the way automated teller machines spit out cash to — in cable’s case — the way cable modems process high-speed data flows. The theory is that standards render benevolent economic results, like volume manufacturing and highly competitive markets.
The term “standard” gets plenty of focus in cable technology circles. The ANSI definition of a standard is “a documented agreement, established by a consensus of subject matter experts and approved by a recognized body that provides rules, guidelines or characteristics to ensure that materials, products, processes and services are fit for their purpose.”
Cable industry standards go through a series of painstaking evaluations and commentary by committees established by SCTE and others. Some standards submissions originate from vendor companies; others come from work conducted by Cable Television Laboratories Inc. Through 2004, more than 140 standards had been advanced by the SCTE, resulting in published documents that are approved by ANSI and available freely from the SCTE’s Web site at www.scte.org.
Usage: The physical dimensions of the common “F” connector were described in the first-ever standard submitted to ANSI by the Society of Cable Telecommunications Engineers.
Antenna design depends on the signal involved — CB radio, satellite, cable, over-the-air television, cellular — and each variation has a specific technical description. Antenna types include dipole, phased array, parabolic, yagi, vertical/horizontal loop, and others, depending on the application.
Low power satellites, such as C-band spacecraft, require a large receiving dish — about six meters, or around 19 feet, in diameter. Higher-power satellites, such as those used by the satellite TV industry, require smaller aperture antennas: Because the incoming signal is sent at high power to begin with, less antenna gain (amplification) is needed to recover it — so a smaller dish can be used.
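The dish-size tradeoff has a well-known formula behind it: a parabolic antenna’s gain is roughly G = 10*log10(eta * (pi*D/lambda)^2), where D is the dish diameter, lambda the wavelength, and eta the aperture efficiency. The 0.6 efficiency and the downlink frequencies below are typical values, stated here as assumptions.

```python
import math

def dish_gain_dbi(diameter_m, freq_ghz, efficiency=0.6):
    """Approximate parabolic dish gain in dBi."""
    wavelength_m = 0.3 / freq_ghz  # c / f, in meters
    return 10 * math.log10(efficiency * (math.pi * diameter_m / wavelength_m) ** 2)

print(round(dish_gain_dbi(6.0, 4.0), 1))    # large C-band dish
print(round(dish_gain_dbi(0.45, 12.0), 1))  # small DBS-style dish
```

The big C-band dish delivers roughly 13 dB more gain than the small DBS dish — gain the high-power DBS signal doesn’t need.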
In cable, a primary terrain for APIs is the invisible software innards of the digital set-top box, where APIs allow developers to write applications that render interactive program guides, customer-care message systems, on-screen trivia games and anything else they can dream up.
Visibly, artifacts can produce oddities of motion in a digital TV picture. Imagine a field of tall grass in the wind, exhibiting a strange jerking movement, instead of gentle waving.
Usage: In digital video, there exists a small outcropping of “expert viewers” who are uncannily adept at noticing artifacts in new advanced compression techniques.
Both cable and DSL versions of broadband Internet delivery are architected for asymmetry, for a fairly logical reason: Most consumer Internet behavior is asymmetrical. Upstream bandwidth is generally lightly used — a click to request a web page. What comes back down is generally larger — the page itself, with hulking graphics or a streaming video.
One unwavering trend is challenging all asymmetrical architectures, however: User-generated content. Cell phones that double as cameras are commonplace now. Ditto for cell phones that record video, and the inverse. To get the stuff out of those devices, through a network, to someone else to see, requires bandwidth — upstream bandwidth. Thus bandwidth is trending toward symmetry.
Founders include the National Cable Television Association (NCTA), the Joint Committee on InterSociety Coordination (JCIC), the Electronic Industries Association (EIA), the Institute of Electrical and Electronic Engineers (IEEE), the National Association of Broadcasters (NAB) and the Society of Motion Picture and Television Engineers (SMPTE).
Most people know of the ATSC because of a momentous event in late 1996. That’s when the FCC adopted the major elements of the ATSC’s work on terrestrial digital television, known amongst the techno-intelligentsia as DTV Standard A/53. An advanced television system standard, it describes methods for digital broadcasting of both standard-resolution and high-definition television signals. It also has been adopted by the governments of Canada, Mexico, South Korea and Argentina.
Before that could happen, though, the ATSC had to sift through several major digital television proposals. “Major” meant AT&T (now Lucent), General Instrument (now a division of Motorola), North American Philips, the Massachusetts Institute of Technology, the David Sarnoff Research Center (now Sarnoff Corp.), Thomson Consumer Electronics, and Zenith (now LG).
In 1993, the ATSC essentially said “Hey! Why don’t you all work together?”
They merged their work, calling it “The Grand Alliance.” (Or, the “grand appliance,” as one engineer quipped at the time.)
Everyone had their part. Video encoders came from AT&T and GI. Philips contributed the video decoder. Dolby Laboratories came in with audio. Thomson and Sarnoff dispatched transport systems, and Zenith contributed a transmission subsystem. Sarnoff did the integration. Testing started in April of 1995, and ended in August.
The FCC gave the big green light to the ATSC’s work in late 1996, making it the “way things work” for digital terrestrial broadcast, to this day.
Usage: The ATSC remains active in all things digital TV, with a specific focus on interactive, and broadband multimedia.
Also known as a “pad,” attenuators are the opposite of amplifiers. Amplifiers boost a signal’s power; attenuators dampen it.
For example, most network amplifiers carry specific rules about the allowable strength of an incoming signal. At the same time network amplifiers are in-line devices — one feeds the next, which feeds the next, and so on down the line. In some cases, the output of, say, the first amplifier in the cascade, may be higher than what the second can feasibly accept — in which case the signal coming into the second amplifier is deliberately padded, or attenuated.
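In numbers, padding is simple subtraction of decibel levels. The signal levels below (in dBmV) are illustrative.

```python
def required_pad_db(incoming_level_dbmv, max_allowed_input_dbmv):
    """Pad value needed to bring a hot signal down to an amplifier's spec."""
    excess = incoming_level_dbmv - max_allowed_input_dbmv
    return max(excess, 0)  # no pad needed if the level is already in spec

print(required_pad_db(46, 38))  # hot input: needs an 8 dB pad
print(required_pad_db(35, 38))  # already in spec: no pad
```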
Once a small contributor to cable industry revenues, the business of filling local avails with local advertising has grown into something big. The Cabletelevision Advertising Bureau expected U.S. cable operators to generate nearly $5 billion in 2005 from selling local advertising time. Advertising today contributes from 5 to 10 percent of the total cable revenues reported by the industry’s larger companies.
© 2000-2016 translation-please.com. All Rights Reserved.