A Market Overview of BPL in the United States
Brett Kilbourne — Dec 01, 2007
Broadband over Power Line (BPL) has evolved considerably since its advent in the United States in 2001. Whereas the early focus was on commercial services such as high-speed Internet access and VoIP, today there is at least as much interest in using BPL for so-called smart-grid applications.
Meanwhile, the digital divide continues to gape in rural areas, and some BPL providers are focusing on providing broadband services in these areas. BPL is also uniquely suited to provide broadband services to multiple dwelling units (MDUs) and campus environments, and there are several hotel chains that have signed deals with in-building BPL providers.
At the same time, there is a budding market for using BPL for home networking and consumer electronics applications. Companies like Sony, Panasonic and Intel are actively involved in developing standards for BPL, reflecting their interest in the technology to stream HDTV and other high-bandwidth applications in the home.
All these developments are catalyzing the market for BPL deployment.
Smart-Grid
There are several factors behind the increased focus on using BPL for smart-grid applications. First, utilities are investing heavily in upgrading their infrastructure to improve efficiency and reliability. Second, policymakers are supporting smart-grid initiatives to address global warming and manage demand for electricity. Third, the technology has matured to the point where cost-effective smart-grid solutions are possible, and BPL is uniquely suited to the variety of environments where smart-grid applications are needed. Finally, and probably most importantly, the market is changing dramatically: generation costs are skyrocketing and demand is outstripping supply.
This trend has contributed to the widespread deployment of BPL by several utilities. In Dallas, Oncor Electric Delivery is partnering with Current Group to deploy BPL to two million customers by 2010, and today the network already passes over 108,000 homes and businesses in both suburban and rural areas.
In Houston, CenterPoint Energy is working with IBM to deploy BPL to 45,000 electric and gas customers, and the system currently passes approximately 10,000 homes and businesses.
The system in Dallas will provide smart-grid and commercial services, and the system in Houston will be used exclusively to provide smart-grid services.
Not coincidentally, Texas has laws that promote the deployment of BPL. In addition, the Texas PUC established regulations for advanced metering this year, and these regulations will help utilities recover investments in BPL networks used for advanced metering. Other states, such as Ohio, Michigan and Massachusetts, are conducting advanced-metering proceedings as well.
The federal government is trying to encourage this trend further: Congress passed legislation this year that promotes the deployment of smart-grid technologies by funding research and development and creating other incentives for utilities to invest. In short, there is widespread support among state and federal policymakers for advanced metering in particular and smart-grid in general. BPL stands to gain from this, particularly in states (including California, Arkansas and New York) that have laws and regulations encouraging the deployment of BPL.
Broadband
The deployment of BPL for smart-grid is a win-win for broadband subscribers as well as electric customers. That’s because BPL networks are usually designed to support commercial broadband services, as well as smart-grid applications. Moreover, utilities will tend to deploy BPL for smart-grid across their entire service territory, including areas that may not otherwise be considered economic for broadband services alone. Finally, utilities are willing wholesalers, creating an opportunity for third parties to provide broadband services to their customers by piggy-backing on the BPL network.
There are BPL providers that focus on providing broadband services in rural areas, and there are cooperative and municipal utilities deploying BPL networks in rural and isolated communities in several parts of the country. One such provider, International Broadband Electric Communications Inc. (IBEC), recently announced that the FCC has certified its equipment, clearing the way for installations at three rural electric cooperatives in Alabama, Indiana and Virginia that will provide broadband service to some 105,000 customers.
IBEC was also granted a $19.2 million loan from the USDA Rural Utilities Service to deploy BPL in rural areas. Another BPL provider, utility.net, uses IBEC’s technology to offer BPL in rural areas served by investor-owned utilities, and it has announced that it will deploy BPL to 20,000 customers in Grand Ledge, Mich., by the end of the year. These are just a few examples of rural BPL deployments around the country.
There are also BPL providers that focus on in-building deployments. The advantage of BPL over other technologies is that it provides ubiquitous coverage throughout a building without requiring any new wires. That means broadband connectivity not only at every outlet but also to elevators and HVAC systems. As a result, BPL enables landlords to provide enhanced services to their tenants while reducing operational expenses for heating, cooling and electricity. One such in-building provider, Telkonet, has deployed BPL in the Trump Properties in Manhattan and on the Queen Mary in Long Beach, Calif. These are some of the more notable examples of BPL being used to support broadband services and smart-building applications in MDUs and campus environments.
Home Networking and Consumer Electronics
There is already a robust market for BPL devices used for home networking, and there is a burgeoning market for BPL-enabled consumer electronics devices. Most home networking devices in the United States are manufactured under a specification developed by the HomePlug Powerline Alliance (HomePlug).
The first specification, HomePlug 1.0, provided raw throughput of 14 megabits per second (Mbps), and the latest, HomePlug AV (audio-visual), provides raw throughput of up to 200 Mbps. One HomePlug manufacturer, Intellon, recently announced that it had shipped 10 million chipsets worldwide.
In addition to HomePlug, there is the Universal Powerline Association (UPA), which has produced its own BPL specifications. One of its members, DS2, has announced that it has shipped 3 million chipsets. Most recently, the consumer electronics industry has created its own BPL specifications organization, the Consumer Electronics Powerline Communication Alliance (CEPCA). One of its members, Panasonic, has developed a chip that it claims can provide raw throughput of 190 Mbps over existing home power lines, enabling simultaneous use of two HDTVs, an IP telephone and data transmission.
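As a rough, back-of-the-envelope check on that kind of claim, the sketch below budgets two HDTV streams plus a VoIP call against a 190 Mbps raw rate. The per-stream rate and the 50 percent MAC-efficiency figure are assumptions for illustration, not numbers published by Panasonic or CEPCA:

```python
# Back-of-the-envelope bandwidth budget for a 190 Mbps (raw) powerline chip.
# The per-stream rate and MAC efficiency below are illustrative assumptions,
# not vendor-published figures.

RAW_PHY_MBPS = 190          # raw throughput claimed for the chip
MAC_EFFICIENCY = 0.5        # assume ~50% of the raw rate survives as usable throughput
HDTV_STREAM_MBPS = 20       # assumed rate for one compressed HDTV stream
VOIP_MBPS = 0.1             # one IP telephone call

usable = RAW_PHY_MBPS * MAC_EFFICIENCY
load = 2 * HDTV_STREAM_MBPS + VOIP_MBPS
print(f"usable ~{usable:.0f} Mbps; two HDTV streams + VoIP need ~{load:.1f} Mbps; "
      f"~{usable - load:.0f} Mbps left for data")
```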
Standards and Regulations
These various specifications are being proposed to the Institute of Electrical and Electronics Engineers (IEEE), which is developing standards for interoperability and coexistence between various BPL devices and systems. Recently, the IEEE P1901 BPL standards group voted to adopt a merged standard proposal by HomePlug and Panasonic for interoperability between in-home and access BPL systems. The standard still needs to be ratified, but this is a significant step toward achieving an industrywide standard.
The development of a standard will not only ensure coexistence between different BPL devices and systems but may also enable devices from different vendors to interoperate. Such a standard is expected to lead to mass-market production of BPL devices, which would increase economies of scale and drive down prices, further improving the business case for deploying BPL systems. The IEEE could complete the BPL standard in 2008.
Of course, the FCC already adopted technical standards for BPL operations in 2004. Those standards protect against potential radio-frequency interference from BPL systems and are backed by the National Telecommunications and Information Administration (NTIA), which oversees federal use of the radio spectrum. The standards were challenged in federal court, and the case is winding up now that briefs have been filed and arguments heard. Its resolution should remove any uncertainty that still hangs over BPL. Thus far, the FCC’s standards appear to be effective, and BPL operators have remained in compliance with them.
Alternative Technologies
BPL is just one of many technology options for broadband services and utility applications, but it has certain features and functionalities that distinguish it from alternative technologies.
First, BPL provides symmetric speeds, offering the same bandwidth upstream as downstream. Some technologies, such as DSL, provide much higher download speeds than upload speeds. Symmetric bandwidth is better suited for real-time applications such as voice and video gaming, and the additional upload capacity also helps with file sharing, which is particularly important for SOHO customers.
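To make the value of symmetry concrete, here is a small illustrative comparison of upload times on a symmetric link versus an asymmetric one; the speeds are hypothetical examples rather than measured BPL or DSL figures:

```python
# Illustrative upload-time comparison between a symmetric link and an
# asymmetric (DSL-style) link. All speeds are hypothetical examples.

FILE_MB = 100                       # size of the file to upload, in megabytes
FILE_MBITS = FILE_MB * 8

symmetric_up_mbps = 3.0             # e.g. a service sold as 3 Mbps in both directions
asymmetric_up_mbps = 0.5            # e.g. a tier with 3 Mbps down / 0.5 Mbps up

for name, up_mbps in [("symmetric", symmetric_up_mbps),
                      ("asymmetric", asymmetric_up_mbps)]:
    minutes = FILE_MBITS / up_mbps / 60
    print(f"{name}: about {minutes:.1f} minutes to upload {FILE_MB} MB")
```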
Second, BPL can reach areas that wireless can’t, and it can do so without the need to drill or run cable. For utilities looking to monitor their underground electric networks, that is a distinct advantage. Likewise, consumers may prefer BPL to fiber or coax in the home or office because it offers high bandwidth and is easy to install.
Conclusion
With all of this going for it, you may wonder when BPL will be available in your area. The answer is: it depends. First, it depends on the extent to which utilities adopt BPL for their smart-grid deployments. That, in turn, depends on manufacturers producing cost-effective equipment built to utility specifications.
Finally, it will depend on the success of the large-scale deployments of BPL, which are underway now. If all goes well, other utilities will also adopt BPL, and consumers will reap the benefits both in terms of access and competition in the broadband market and in terms of smart-grid applications that improve the efficiency, reliability and security of the nation’s critical infrastructure.
Brett Kilbourne is director of regulatory services at the United Power Line Council.
Saturday, December 29, 2007
* U.S. Smart Grid investment from 2007-2020 is forecast to be in the range of $70-120 billion *
Touch, Reach, Digitize:
Are utilities looking hard enough at Smart Grid’s communications backbone?
By Robert Robinson & Mark Hoffman
Upgrading the U.S. electric power distribution grid is one of the last greenfield technology transformations awaiting our economy, and the ramifications of the utility investment that will be required should not be underestimated. Technology innovations in power delivery have been fermenting for years, but only now is the confluence of physical needs and social expectations creating an environment in which real and sustained monetary commitments are being made to create a “Smart Grid,” built on information-based devices, digital communication, and advanced analytics.
We believe that the feasibility and merits of a highly integrated Smart Grid are now at hand, and that tangible functionality is available to be deployed in the near term. The issue is no longer whether such a grid can be created, but when it will be implemented and what functionality it will contain.
To achieve the full potential of Smart Grid, a communications network must be in place to allow the existing power distribution grid to monitor and measure usage in real time, visualize network performance, and create an enablement platform that engages everyone from system operators to customers very differently.
One key decision that utilities and regulators now face is to select the backbone digital communications infrastructure that provides the most prudent long-term platform from which to extract end-to-end Smart Grid benefits. No single technology is appropriate for every situation across the U.S. power industry’s distribution footprint, but if you envision an end-game that incorporates true nodal digitization of the grid, we believe a compelling case can be made for a communications infrastructure based on Broadband over Power Line (BPL) technology. For utilities that adopt the infrastructure philosophy of touch, reach and digitize, BPL can provide a sustainable, long-term foundation for differentiated Smart Grid outcomes.
Making a Decision
For utility executives, a technology investment of the scope and scale of Smart Grid is daunting. All-in, the cumulative U.S. Smart Grid investment from 2007-2020 is forecast to be in the range of $70-120 billion, a very wide range highlighting many uncertainties. The risks posed by committing to significant infrastructure additions are very real, and overcoming these risks necessitates broad regulatory and stakeholder engagement. A healthy and open debate on end-game Smart Grid requirements is a must.
For utility regulators, the “bargain” for long-term return-on-investment certainty requires the extraction of full and fair technology potential on behalf of customers, while also allowing the business case for Smart Grid to claim forward social benefits that the utility cannot realistically capture within its regulated cost of service. As regulators decide the merits of grid digitization, they need full visibility into how these backbone decisions might play out.
For many, Smart Grid business cases have been a long time in the making, emanating largely from ~15-year efforts to justify the economics of automated meter reading (AMR). While the terminology and technology have evolved, an advanced metering infrastructure (AMI) is still the anchor benefit in virtually all Smart Grid analyses. Digital meter functionality and the associated communications infrastructure to enable it are the two largest investment costs that utilities must recover in rate base.
READ MORE: http://www.currentgroup.com/news/releases/11-07_Utility_Products.pdf?prid=1094
Monday, December 24, 2007
Gadgets affected by analog shutdown
By The Associated Press
Fri Dec 21, 3:49 PM ET
Carriers will start shutting down the country's oldest cellular network, for analog devices, in February. How to know if you will be affected:
• Cell phones. If your phone is less than five years old, or has features like texting, Internet access or a built-in camera, it's not analog. An unknown number of analog handsets are still in use. Carriers say it's less than 1 percent of all U.S. cell phones. But with 250 million cell phones in use, that could still mean a million phones.
In particular, check phones that are kept around as 911-only phones. Such phones, which don't have a phone number and aren't initialized with a carrier, were given out by some donation programs that collected old phones.
The main carriers with analog service are AT&T Inc., Verizon Wireless and Alltel. Carriers have been telling analog customers about the shutdown and offering them new digital service plans and phones. Sprint Nextel and T-Mobile USA have no analog networks.
Separately from the analog shutdown, Alltel and AT&T will finish phasing out networks that use a first-generation digital technology known as D-AMPS or TDMA (for Time Division Multiple Access). This affects only cell phones — and only older ones. AT&T and its predecessor companies have been phasing out TDMA since 2001.
• Car communication systems. Generally, cars from the 2003 model year and older with OnStar from General Motors Corp., TeleAid from Mercedes-Benz or Lexus Link are affected, and most won't be upgradable. Upgrade kits are available for most OnStar systems from model years 2004 and 2005.
Class action lawsuits, consolidated in federal court in Detroit, are seeking compensation for the lowered value of the more than 500,000 affected cars with OnStar plus about 200,000 with other systems.
• Home alarms. Affected are burglar and fire alarms that use the analog network as a sole or backup link between the home and an alarm center. Generally, only homes with no wired phone service have used analog wireless service. Homes that have them will lose wireless backup alarms, which kick in if someone cuts the phone line. Alarm systems using digital wireless links became available in 2006.
Friday, December 21, 2007
BPL not Dead Yet at Duke Energy !!!
Designing the Utility of the Future: Duke Energy Takes a Holistic View of Distribution
By Steven M. Brown, editor in chief
A number of utilities are beginning to take what might best be termed a “holistic” approach to distribution system improvement. Rather than piecemeal, siloed projects focusing on one specific area—like advanced metering, distribution automation or substation automation—these utilities are undertaking broader visions. They’re looking at how technology implementations in one part of the distribution system can interact with and work toward the betterment of enhancements in other parts of the distribution system and the power system as a whole.
Kansas City Power & Light is a prime example of this philosophy of holistic system enhancement with its “Comprehensive Energy Plan” (reported on in the September 2007 issue of this magazine; see “Issue Archives” at www.utility-automation.com). KCP&L’s Comprehensive Energy Plan was designed to meet the growing need for energy in the KCP&L service area. The plan includes proposals for new coal-fired and wind-fueled generation, investment in demand response programs, and six projects related to distribution automation. Also following this “holistic” approach to system improvement, two major Texas utilities, CenterPoint Energy Houston Electric and Oncor (formerly TXU Electric Delivery), are putting in broadband over powerline networks that promise to power a host of intelligent distribution system applications, including advanced metering, distribution automation and outage restoration.
Add to the list of utilities taking broad approaches to distribution system improvement Duke Energy with its “Utility of the Future” project.
Duke’s Utility of the Future initiative spans the entire distribution system and encompasses advanced metering, distribution automation, substation automation and even the integration of small-scale distributed generation. Duke is in the early stages of this initiative, which will culminate in a five- to seven-year build-out across the company’s service territory at a cost of just under a billion dollars.
Duke Energy has a service territory of approximately 47,000 square miles and delivers electric power to nearly 4 million customers in the Carolinas, Ohio, Kentucky and Indiana. The company delivers power to those customers over a network that consists of 20,000 miles of transmission lines and 106,000 miles of distribution lines. The primary focus of Duke’s Utility of the Future initiative is to build a networked infrastructure of intelligent devices on those 106,000 miles of distribution lines.
“What we envision is combining our new and existing power delivery assets—meters, capacitors, line sensors, substations, everything that’s on our distribution grid—and connecting those with sensing, monitoring and communication devices, creating a network to retrieve information from and deliver information to those assets,” said Matt Smith, Duke’s director of technology development. Smith also serves as director of the Utility of the Future initiative.
“For our end state, we envision a network of devices interacting to increase system efficiency, both for us and for our customers,” he said.
Unlike much of the current wave of “smart grid” programs, Duke’s Utility of the Future plan doesn’t necessarily have the customer meter as its focal point. While advanced metering is an integral part of the Utility of the Future plan and several of the pilot programs associated with it, Smith said Duke is looking at the meter as one of many endpoints that can serve as a source of distribution system information.
“It’s just like we look at our own company’s internal computer network,” Smith said. “Every computer or printer on the network is an endpoint. They all serve different purposes, but one doesn’t necessarily provide information that’s more important than another.”
Smith does acknowledge that smart meters will provide Duke Energy with important information about how the utility’s customers use energy. Pushing energy efficiency initiatives out to customers—something Smith refers to as “universal access to energy efficiency”—is one of the main goals of the Utility of the Future effort. Smith said that, currently, the meter provides the best interface between Duke Energy and its customers, but, in the future, this interface could move either closer to the utility or closer to the customer. “It may be devices in the home that we interact with,” he said. “Or it may move further into our system. A wireless communication device may sit at the transformer and communicate inside the home without going through the meter.
“We’re pursuing a concept that would be a sort of dashboard in the home where the customer would have direct access to information about their usage and what’s happening on their side of the meter,” Smith said.
He noted that the information Duke currently provides its electric customers is identical to the type of information most electric utilities provide: a rearward view of how much energy a customer has used on a monthly basis for the past year. But, taking a retrospective look at energy usage does little to empower consumers to manage their usage in the here and now.
“What we want to do is increase the amount of information, give them more granular insight, whether it’s on an hourly basis or every 15 minutes, some increment where they can see with more clarity what’s happening (with their energy usage),” Smith said. He added that he would like to see this energy usage data driven down to the device level so customers can see, for instance, what their top five energy consuming devices are. He also wants to be able to deliver this information in near-real-time, as opposed to the current method of providing historical information.
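At its core, the device-level reporting Smith describes reduces to aggregating interval readings per device and ranking the results. A minimal sketch, using made-up 15-minute readings rather than any Duke data:

```python
# Rank the top five energy-consuming devices from 15-minute interval readings.
# The readings below are hypothetical, for illustration only.

from collections import defaultdict

# (device name, kWh consumed in one 15-minute interval)
readings = [
    ("HVAC", 0.9), ("water heater", 0.6), ("refrigerator", 0.05),
    ("HVAC", 1.1), ("dryer", 1.3), ("television", 0.04),
    ("water heater", 0.7), ("HVAC", 0.8), ("pool pump", 0.5),
]

totals = defaultdict(float)
for device, kwh in readings:
    totals[device] += kwh          # accumulate per-device energy use

top_five = sorted(totals.items(), key=lambda item: item[1], reverse=True)[:5]
for device, kwh in top_five:
    print(f"{device}: {kwh:.2f} kWh")
```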
“Our focus is not just the ‘smart grid’ but how we enable our customers to participate in energy efficiency.”
Besides promoting energy efficiency, Smith said Duke may also be able to offer such metering-related programs as prepaid metering and remote connect/disconnect to customers in the future. (See pages 40-44 of this issue for more information on remote connect/disconnect programs.) He also noted that power reliability, power quality and outage restoration are other areas that will benefit from the Utility of the Future projects related to metering.
Moving back onto the Duke system from the customer meter, the Utility of the Future program will also include such distribution automation functionality as line sectionalizing so Duke is better able to isolate faults on its system and improve power reliability. Substation automation and communication with intelligent devices inside the substation also fall under the Utility of the Future umbrella.
“As we add new substations or upgrade existing substations, we’re making sure we put in devices that are capable of connecting to a network, that they have standard interfaces, like an Ethernet-type connection. And we’re focusing on interfacing with the right communication systems.”
Therein lies one of the main challenges Smith believes Duke will encounter as they move forward with the Utility of the Future initiative: technology selection.
“We’re looking for the best combination of technologies,” Smith said. “We don’t think one technology will work across all our territories. We believe we’ll need a combination of both communications and endpoint technologies. We’re not seeing one vendor who can come in and meet all our needs from the substation to the customer home in every service territory. Our challenge isn’t so much finding technology that works as it is finding the right combination of technologies that will work.”
Smith noted that the proprietary nature of many vendor offerings also poses a problem. Open standards, Smith says, are crucial to Duke’s vision.
“Our number one obstacle is interoperability of devices, without a doubt. What we see in the vendor community are isolated products. They generally offer a product that will work across our system but not across other vendors.”
Currently, Smith said Duke is in the process of determining exactly what communication and endpoint technologies they need in place to arrive at the Utility of the Future. He said this technology evaluation will continue through the second quarter of 2008. Starting at the end of 2008, Duke should be ready to make decisions on exactly how they will deploy this new technology. After that, Smith envisions a five- to seven-year build-out across Duke’s service territories, with the bulk of the build-out coming in years one through five.
Smith said Duke’s capital plan over the next five years for the Utility of the Future initiative is estimated at $975 million.
BPL not Dead Yet at Duke Energy
Matt Smith, Duke Energy’s director of technology development and director of the company’s Utility of the Future initiative, says a recent media report that the utility is abandoning efforts in broadband over powerline (BPL) communications technology isn’t entirely accurate. He said BPL as a communications medium, though not without its shortcomings, is still in the mix as the company looks to build a broad network of intelligent devices throughout its distribution system.
Duke Energy currently has broadband over powerline technology deployed in Cincinnati, Ohio, and Charlotte, N.C.
“We’re finding that the technology (BPL) is efficient in delivering information (in the form of broadband Internet access) to the home. We’ve had positive response from customers,” Smith said. “On the utility side of the meter, we’re finding that the equipment is fairly expensive at this point and that we need more of it than we anticipated.”
Smith said that in the early stages of Duke’s evaluation of BPL, there was an assumption that BPL couplers could be placed at every other transformer or every other customer drop. (A coupler is a device that allows data on power lines to bypass the transformer, ensuring optimal strength of the BPL signal.) “We’re finding we need more BPL equipment than we had anticipated, and so the cost-benefit has been challenging,” he said.
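A toy model shows why coupler density dominates that cost-benefit calculation. The transformer count and per-coupler cost below are hypothetical assumptions for illustration, not Duke figures:

```python
# Toy sensitivity check: how coupler density changes equipment cost.
# Transformer count and per-coupler installed cost are hypothetical assumptions.

TRANSFORMERS = 10_000            # transformers in a hypothetical deployment area
COUPLER_UNIT_COST = 400          # assumed installed cost per coupler, in dollars

for label, transformers_per_coupler in [("every other transformer", 2),
                                        ("every transformer", 1)]:
    couplers = TRANSFORMERS // transformers_per_coupler
    cost = couplers * COUPLER_UNIT_COST
    print(f"{label}: {couplers:,} couplers, roughly ${cost:,}")
```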
Smith said that rather than abandoning BPL, Duke will look to leverage existing BPL assets to interface with the intelligent devices the company is installing as part of its Utility of the Future effort. While he said Duke is not currently installing new BPL equipment to further the Utility of the Future project, the company is trying to determine whether there is a cost-effective way to use the BPL assets already in place to form at least part of the communications network that will interface with intelligent devices such as meters, transformers, line sensors and equipment within substations.
Utility Automation & Engineering T&D December, 2007
Author(s) : Steven Brown
Bush Signs US Energy Bill into law Wednesday (Dec. 19, 2007)
The new US energy bill also establishes the creation of a smart grid and a demand-response network as US policy. Among the many provisions of the new act are:
• a "National Action Plan" for Demand Response;
• a national commitment to "smart grid" technologies (the act creates a smart grid task force to report on the development of this policy within the year, establishes a smart grid advisory panel to advise on the development of new smart grid technologies, and authorizes $100 million to support smart grid technology research and development);
• a requirement that every state ensure that it promotes energy efficiency by providing the proper economic rewards and by removing regulatory and management disincentives (commonly known in the trade as "decoupling");
• a requirement that every state consider offering demand response programs to reduce peak electricity consumption.
(See a previous post in this blog about this issue: US Legislation on Smart Grids)
Intellon Shares Rise in Debut on Nasdaq
Friday December 14, 12:27 pm ET
Intellon Corp. Shares Gain More Than 20 Percent After IPO Prices Well Below Expectations
NEW YORK (AP) -- Shares of Intellon Corp. gained more than 20 percent in their first day of trading Friday despite the circuit-maker's initial public offering pricing well below the anticipated price range.
Shares rose $1.32, or 22 percent, to $7.32 in midday trading. After pricing at $6, the shares have traded as high as $7.90 and have yet to trade below $7.
The Ocala, Fla.-based company, which makes integrated circuits that enable high-speed communications over electrical wiring, had expected the offering of 7.5 million shares to price between $9 and $11 per share.
Based on the offering price, Intellon raised about $45 million, before expenses.
The company plans to use proceeds from the IPO for working capital, capital expenditures and other general corporate purposes, including potential acquisitions.
Intellon's products are designed to allow customers to share media among personal computers and consumer electronics throughout the home using existing electrical wiring.
For the nine months ended Sept. 30, Intellon narrowed its loss to $4.4 million, from $5.3 million in the prior-year period. At the same time, the company's revenue grew to $36.6 million, from $24.8 million in the first nine months of 2006.
Deutsche Bank Securities, Jefferies & Co., Piper Jaffray and Oppenheimer & Co. served as underwriters.
Intellon's shares are trading on the Nasdaq Global Market under the symbol "ITLN."
Thursday, December 20, 2007
Corinex Introduces the AV200 Powerline Ethernet Wall Mount F for Multimedia Networks without New Wiring (19/12/2007)
Corinex Communications introduces the AV200 Powerline Ethernet Wall Mount F, designed to support the distribution of video on demand (VOD), IPTV, voice and broadband Internet access over existing electrical wires. Made with an electrical pass-through female outlet, the AV200 Powerline Ethernet Wall Mount F is a convenient solution for consumers wanting to create multimedia network applications without adding any new wiring to their home. The pass-through outlet allows consumers to plug the Wall Mount F into a standard electrical outlet and still have an available outlet to power other devices such as TVs and computers.
The Wall Mount F has an integrated noise filter that eliminates noise and interference from devices such as hair dryers and vacuums, thus ensuring perfect video streaming. In addition, Wall Mount F includes a video performance indicator which allows a consumer or telecom operator helpdesk to see the status of the Powerline networking in an instant via a multicolor LED.
"The AV200 Powerline Ethernet Wall Mount F provides consumers with an easy-to-install alternative to wiring their entire home in an effort to create a digital home network," commented Brian Donnelly, Corinex's Vice President of Marketing. "Not only is the Wall Mount F easy to install, but it guarantees perfect, uninterrupted video streaming without even taking up a wall outlet."
AV200 Powerline technology by Corinex creates a secure, faster than wireless, 200 Mbps connection with numerous uses. By simply plugging in one Corinex Wall Mount F into a modem or router and a second into any Ethernet enabled computing or media device, all the electrical outlets in the home are ready to receive high bandwidth multimedia signals. Installation takes just minutes.
With the Wall Mount F, VoIP, broadcast television and multiplayer head-to-head games do not experience glitches, frame loss or delays, even if other users in the home network are downloading large files, Websurfing or streaming MP3 songs.
The AV200 Powerline Ethernet Wall Mount F is available starting today for $124 MSRP.
www.corinex.com
Friday, December 14, 2007
Voices: Chano Gómez on powerline networking's "universal" hope
By Brian Dipert, Senior Technical Editor -- 12/14/2007
EDN
A profusion of incompatible "standards," the lingering memory of poor initial products, and the sheer technical challenge of the application have thus far slowed the adoption of powerline networking. Here, Chano Gómez, vice president of technology and strategic partnerships at chipmaker DS2, offers technical and strategic insight into the UPA (Universal Powerline Association) technology his company champions. A future installment of Voices will feature an interview with Andreas Melder of Intellon, which leads the opposing HomePlug Powerline Alliance.
OVERVIEW
I'd like to begin by asking you to provide an introductory summary of the historical development and current status of UPA technology, as it applies to LANs, to broadband Internet-access service distribution (WANs), and to other past and present applications, such as power-meter monitoring.
First, let me clarify that I'm not a UPA official, or even DS2's representative at UPA, so what I'll explain here is my personal view of the historical developments, and not UPA's or DS2's official position.
To understand why UPA's technology is the way it is, we have to go back to 2000, many years before UPA was founded. In 2000, a few months after I joined the company as a junior system architecture engineer, DS2 was focused on developing technology for broadband-access applications that would allow power companies to provide Internet access and VOIP services to their energy customers. At that time, some companies were already working on applying powerline technology for home-networking applications, but as far as I remember, we were 100% focused on access.
When we started working on the next-generation platform (200-Mbps data rate), DS2 realized that we had an opportunity to address the needs of additional market segments, in particular higher speeds for whole-home multimedia networking, so the architecture was changed to accommodate requirements for future home-networking usage scenarios. That turned out to be a pretty good idea, which allowed us to reuse the same platform to deliver different products for different market segments and differentiate ourselves from other players in the in-home powerline market who were offering lower-speed product, basically for data. Both markets are very interesting and complementary: Broadband access so far is a low-volume, high-margin segment, while home networking is a high-volume, low-margin segment.
Why am I providing all these historic details? Because they explain very well DS2's unique vision of the powerline-networking market. We really believe that access applications and home-networking applications represent legitimate uses of powerline technology and both have legitimate requirements that must be addressed by vendors and industry standards. For a long time, there were companies in our industry that thought that only home-networking applications had the right to use the powerline medium, while other companies had the opposite view. Over time, those positions have become less radical when discussed in public forums, but still many companies have that bias in their DNA. ("Home networks must take 90% of the bandwidth, and access should just get the remains." Or "Access is the really critical application, and home networks can use WiFi if needed.") DS2 has a large number of customers and partners in both camps, so we really believe that we must create standards that allow both applications to share bandwidth in a fair way. It's in our DNA. Our engineering resources are split 50-50 between both markets.
For many years, we were members of an organization [HomePlug] that did not share that vision. We tried to change that, unsuccessfully. In December 2004 we left that organization and, along with a group of partners who shared our vision, founded the Universal Powerline Association. The original goal of UPA was to create standards for coexistence between access and home networks. However, feedback from the market pointed to the need for real interoperability standards, so UPA decided to extend its scope in order to create a standard called UPA DHS (Digital Home System) and also to certify product compliance to that standard to deliver on interoperability where others had failed.
That effort has been quite successful. The first 200-Mbps product introduced in the US consumer market was based on the UPA standard, and according to The NPD Group, more than 50% of 200-Mbps products sold in the US retail market are based on the UPA specification and have the UPA logo. Right now, UPA includes members from very varied backgrounds: Companies from North America, Europe, and Japan; companies developing access products and home-networking products; semiconductor companies, power companies, and service providers. There are specific groups focused on high-speed applications, while others are developing standards for low-speed control applications.
In your mind, how do UPA technology's attributes enable it to coexist, supplement, and/or supplant other traditional data-distribution technologies, for both LAN and WAN applications, and both today and in the future?
From the very beginning, the foundation for UPA technology had to be very flexible, because it had to provide solutions for many different markets. Initially that represented more work than what would be required for a technology focused on a single narrow application, but in the end, we think the effort paid off, because the technology is now being used in many different environments, as a complement to many different LAN/WAN technologies.
I'll just give some examples of how our customers and partners are using UPA technology as a complement to other technologies. The most popular scenario is service providers using powerline technology as an extension of DSL, ADSL2+, VDSL, and FTTH for IPTV distribution inside customers' homes. UPA technology's flexibility in terms of frequency band is very useful here, as it allows device manufacturers to tune the spectrum used by the powerline transmitter in order to ensure coexistence with other technologies (such as VDSL) whose spectrum partially overlaps the one used by powerline technology.
Another very popular scenario is using powerline to provide Internet and VOIP access to individual apartments in MDUs [multidwelling units] in FTTB [fiber-to-the-building] deployments.
In the consumer space, many combined products are already in demand by users: using powerline to extend the range of existing wireless networks or using powerline as a backbone for interconnecting wireless access points in enterprise or commercial environments. Also, using powerline as a backbone for short-range UWB [ultrawideband] networks could be an interesting application once UWB becomes more popular.
In general, as more applications and services converge to IP-based protocols, it becomes easier for manufacturers, consumers, and service providers to interface them with powerline networks. You should expect to see powerline technology as an extension of WiMax in high-rise buildings, or as a backbone connection for nanocells and femtocells in cellular networks.
TECHNOLOGY SPECIFICS
How does UPA modulate data on the ac power signal, and how does it handle detection, correction, and/or retransmission of errors in that data?
UPA's physical layer is based on OFDM [orthogonal frequency division multiplexing] modulation. OFDM was chosen as the modulation technique because of its inherent adaptability in the presence of frequency-selective channels, its resilience to jamming signals, its robustness to impulsive noise, and its capacity to achieve high spectral efficiency.
Detection and correction of errors is achieved by a concatenation of four-dimensional trellis codes and Reed-Solomon forward error correction, specifically tuned to cope with the peculiar impairments of the powerline channel. For those cases in which packets become so corrupted by noise that they cannot be recovered, a retransmission mechanism is used. Packet fragments are numbered individually, and each transmitter-receiver pair keeps track of which fragments have been received correctly and which need to be retransmitted, using an ACK protocol.
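[Editor's note: For readers who want a concrete picture of the retransmission scheme Gómez describes, here is a rough Python sketch of selective retransmission over individually numbered fragments. The class and parameter names are our own invention for illustration; the actual UPA MAC is far more elaborate.]
    # Illustrative selective-repeat retransmission over numbered fragments.
    # "link.send" stands in for the PHY/MAC transmission plus ACK reception.
    class FragmentSender:
        def __init__(self, link):
            self.link = link      # assumed to expose send(seq, payload) -> bool (True if ACKed)
            self.pending = {}     # seq -> payload still awaiting an ACK

        def transmit(self, fragments, max_rounds=4):
            # Number each fragment individually so the receiver can ACK them one by one.
            for seq, payload in enumerate(fragments):
                self.pending[seq] = payload
            for _ in range(max_rounds):
                if not self.pending:
                    return True
                # Resend only fragments that were corrupted or lost (no ACK received).
                for seq in list(self.pending):
                    if self.link.send(seq, self.pending[seq]):
                        del self.pending[seq]
            return not self.pending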
How does UPA compensate for varying noise levels on the power grid, caused by fluorescent lights and motor-driven products such as vacuum cleaners, hair dryers, and heating and air-conditioning fans? How do you educate consumers on the potential need to install noise filters on the power inputs of these interference sources, in order to ensure reliable powerline-network operation?
The interesting feature of noise found in powerline is that it's not like the famous "white Gaussian noise" found in any digital-communications textbook. It's "colored" noise (stronger in some frequencies and weaker in others), non-Gaussian (you have very strong peaks that do not follow a normal distribution), and nonstatic (you have short periods of silence followed by short periods of strong noise). So, a powerline device has to find out which are the "clean" time/frequency slots and make sure to avoid the noisy ones. And once this is done, somebody will plug/unplug something in a room nearby, and you have to repeat the time/frequency analysis all over again, in only a few milliseconds, to ensure that the user does not experience any service interruption.
Fortunately, advances in DSP and ASIC technology provide enough computing power to perform a pretty accurate time/frequency analysis of the communication medium, and we are able to ascertain which are the "slots" where we can transmit with efficiencies of up to 10 bits/second/Hz and which are the ones where maximum efficiency is lower (or even zero).
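[Editor's note: The time/frequency analysis Gómez mentions is essentially adaptive bit loading. The following toy Python sketch maps a per-subcarrier SNR estimate to a bit load, capped at the 10-bits/second/Hz figure he quotes; the 9.8-dB SNR gap is a textbook value for uncoded QAM, not a DS2 parameter.]
    import math

    SNR_GAP_DB = 9.8            # textbook SNR gap for uncoded QAM (illustrative)
    MAX_BITS_PER_CARRIER = 10   # cap corresponding to ~10 bits/second/Hz

    def bits_for_carrier(snr_db):
        # Shannon-style loading: log2(1 + SNR/gap), truncated and capped.
        snr_linear = 10 ** ((snr_db - SNR_GAP_DB) / 10)
        return min(int(math.log2(1 + snr_linear)), MAX_BITS_PER_CARRIER)

    # A clean slot, a mediocre slot, and a slot drowned in appliance noise:
    print([bits_for_carrier(snr) for snr in (45.0, 20.0, 5.0)])   # -> [10, 3, 0]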
Regarding the issue of how to educate consumers about "best practices" for maximizing the performance of their networks, we work on two different fronts. On one hand, we work with our partners with more experience in the consumer market (companies such as Netgear, D-Link, and Buffalo Technology) to ensure that their product packaging and user manuals explain the best way to use the product (for example, always recommending that users connect the adapter directly into a wall socket and not into a surge-protected power strip). Additionally, we try to provide useful feedback to users so that they can easily recognize the best way to use the product. In January 2007 we launched a reference design (code name DH10PF) with multicolored LEDs so that users can easily see if the network is operating at full performance (green for excellent performance, yellow for good performance, red for bad performance). The feedback we have received so far, both from consumers and service providers, is that the system is very intuitive and achieves the goal for which it was designed. Since then, other powerline vendors have started to "borrow" the idea, so it must be good.
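[Editor's note: The multicolored-LED feedback amounts to a simple thresholding of measured link quality, as in the short Python sketch below. The thresholds are hypothetical; the interview does not disclose the DH10PF's actual values or measurement method.]
    def led_color(estimated_link_mbps):
        # Hypothetical thresholds mapping estimated link rate to the LED scheme above.
        if estimated_link_mbps >= 100:
            return "green"    # excellent performance
        if estimated_link_mbps >= 40:
            return "yellow"   # good performance
        return "red"          # poor performance; try another outlet

    print(led_color(145), led_color(60), led_color(12))   # green yellow red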
How does UPA handle the reality that two power outlets (either in close proximity or not) that a consumer may want to interconnect via UPA may be fed by different circuit breakers, and (even more challenging) may derive from opposite phases of the 220V (US) source feed?
First, let me clarify that the fact that two outlets are on different circuits does not necessarily mean that powerline technology won't work there. In general, there is sufficient signal coupling between the wires (due to capacitance) to ensure that a connection can be established with adequate performance. My impression is that a lot of people have bad memories of legacy home-control powerline technologies that operated at much lower frequencies (kHz instead of MHz) and that didn't work very well when sockets were on different circuits and/or phases.
Having said that, it's also true that having signals "jumping" from circuit to circuit means that signal strength is slightly lower, so there is usually a certain amount of performance degradation in that situation. If you combine this with additional factors, such as a damaged wire, a strong noise source, or low impedance caused by several devices connected in parallel on the same socket, you may find sockets where a connection cannot be established.
There are cases (in extremely large homes, or industrial/commercial environments) where a combination of long distance and circuit/phase change could result in bad performance in some socket pairs. UPA provides a very elegant and efficient way of solving this problem using repeaters. The user just needs to connect an additional powerline adapter in a socket close to the electrical switch panel (which is the electrical "center of gravity" of the building), and that adapter will automatically become a signal regenerator. As far as we know, this feature is unique to UPA technology. The solution is simple and elegant, because no configuration and no special hardware or firmware is required (any powerline adapter can be a repeater).
UPA is specified as a "200-Mbps" technology, but testing results suggest that it delivers only a limited percentage of that speed in real-life usage. Why was the peak PHY rate chosen as the technology designator? What range of TCP and UDP speeds do you believe most consumers will experience in real-life settings, and what kinds of applications are supportable (and conversely unsupportable) by those speeds, both in single- and multiple-coincident-data-stream situations? And how do you address potential consumer confusion and frustration when consumers don't get the performance results that the "200-Mbps" stamp on the outside of the product box might otherwise suggest they'll achieve?
If you take a look at the 802.11g specification, it's supposed to provide a data rate of up to 54 Mbps. Most of the 802.11g products have a label somewhere on the box that mentions "54 Mbps." If you measure TCP/IP throughput yourself, using standard tools such as "iperf," you'll see that the maximum you'll get (with all possible optimizations enabled) will be around 25 Mbps if you test between your access point and an end point, and around 12 Mbps if you test between two end points. That's in the best possible conditions (short distance) and it will degrade as you increase distance or obstacles between devices or in case you have other wireless networks or cordless phones in your vicinity.
If you take two UPA powerline adapters and perform the same test, in the same conditions, you'll get a maximum speed of 95 Mbps. Most UPA-based powerline products for the consumer market have a Fast-Ethernet interface, which is the reason why you cannot get beyond 95 Mbps. UPA products for access applications that have a Gigabit-Ethernet interface can provide up to 120 Mbps. As in wireless networks, this data rate will decrease as you increase transmission distance and introduce noise sources (exactly in the same way as with cordless phones and 802.11g networks). So, the ratio between maximum throughput and PHY data rate in UPA powerline technology (50% to 60% depending on the test equipment) is no worse than in 802.11g technology (25% to 50% depending on the test setup).
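[Editor's note: The ratios Gómez cites follow directly from the throughput figures he gives. The short Python calculation below simply restates his numbers as percentages of the nominal PHY rates.]
    cases = {
        "UPA, Fast-Ethernet interface":    (95, 200),
        "UPA, Gigabit-Ethernet interface": (120, 200),
        "802.11g, AP to client":           (25, 54),
        "802.11g, client to client":       (12, 54),
    }

    for name, (throughput_mbps, phy_mbps) in cases.items():
        # Throughput expressed as a fraction of the advertised PHY rate.
        print(f"{name}: {throughput_mbps / phy_mbps:.0%}")   # ~48%, 60%, 46%, 22%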
Does this mean that it's OK for the powerline industry to use the "200 Mbps" as the technology designator? To be honest, if I had the opportunity to start all over again, without the historic context provided by other networking technologies, I wouldn't have used the 200-Mbps label. I'd rather use a label that describes the expected application of the technology, without references to specific data rates. But given that we had to introduce a new technology in a market where consumers already knew that 802.11b worked at 11 Mbps, 802.11g at 54 Mbps, and HomePlug 1.0 at 14 Mbps, we needed to provide a reference that consumers could use for performing an apples-to-apples comparison.
UPA technology seems from my standpoint to focus on optimizing UDP performance, versus TCP. Assuming you agree with my perspective, why and how was UDP prioritized in the definition and implementation of the technology? And how does UPA stack up (from cost and other standpoints) against other powerline technologies that offer consumers more limited (or nonexistent) TCP-versus-UDP prioritization capabilities?
UPA technology itself was not designed to optimize UDP versus TCP. I think that some test results may create that perception because, in general, TCP performance is more sensitive to "events" in the communication channel than UDP. Most UDP test tools will generate packets as fast as possible and will flood all the bandwidth provided by the network. On the other hand, TCP has built-in mechanisms that reduce the data rate at which the transmitter generates packets when the protocol "thinks" that the network is congested. For example, if a packet experiences higher latency than the rest (for example, if the packet had to be retransmitted because of channel noise), the TCP stack may think that the network is congested and will reduce the data rate. Also, most TCP implementations limit the amount of in-flight data (the TCP window size) that can be transmitted before an acknowledgment (ACK) is received. This can also artificially limit the data rate obtained by TCP.
In my tests at home, the parameter with the most impact on performance is the TCP window size. This is because the "bandwidth × latency" product in powerline networks is higher than in Fast-Ethernet or Gigabit-Ethernet networks, so unless you have a large TCP window, your PC won't be able to fill the pipe with enough data. End users can change that on their computers by changing a registry value.
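[Editor's note: The bandwidth-delay-product argument can be made concrete with a little arithmetic. The 90-Mbps throughput and 10-ms round-trip time used in the Python lines below are illustrative assumptions, not measured UPA figures.]
    def min_window_bytes(bandwidth_mbps, rtt_ms):
        # Bandwidth-delay product: bytes that must be in flight to keep the pipe full.
        return int(bandwidth_mbps * 1e6 / 8 * rtt_ms / 1e3)

    print(min_window_bytes(90, 10), "bytes needed")        # ~112,500 bytes
    # A fixed 64-KB window caps TCP throughput at window/RTT:
    print(64 * 1024 * 8 / 0.010 / 1e6, "Mbps ceiling")     # ~52 Mbps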
Another thing to consider is whether specific implementations of UPA products provide default prioritization schemes that give higher priority to UDP streams than TCP streams in case both are present and the network is congested. I know of at least one vendor who provides this default scheme, justified because in most cases UDP streams are used for multimedia applications (VOIP, video streaming) and it could make sense to prioritize those instead of TCP flows for nonmultimedia applications. Other UPA vendors provide different default priority schemes, based on 802.1p tags or on specific UDP port numbers.
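[Editor's note: A default prioritization policy of the kind described might look like the Python sketch below. The priority values and RTP-style port range are invented for illustration; actual defaults are vendor-specific, as Gómez notes.]
    RTP_LIKE_PORTS = range(16384, 32768)   # common dynamic RTP port range (assumption)

    def default_priority(protocol, dst_port, dot1p_tag=None):
        if dot1p_tag is not None:
            return dot1p_tag               # honor an explicit 802.1p tag when present
        if protocol == "udp" and dst_port in RTP_LIKE_PORTS:
            return 5                       # treat as voice/video
        if protocol == "udp":
            return 3                       # other UDP traffic
        return 1                           # best-effort TCP

    print(default_priority("udp", 20000), default_priority("tcp", 80))   # 5 1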
UPA allows for QOS (quality of service) prioritization of particular types of data streams. How do you balance the desire for a robust consumer experience from a QOS (or other) standpoint against consumers' desire for a robust out-of-box experience (one that doesn't require tedious and confusing calibration and customization of QOS and other technology parameters)?
I don't expect consumers to perform any QOS configuration at all. Most of the QOS configuration options you may have seen in products that use DS2 silicon are there for other purposes: in some cases, the same product sold in retail is also used for the service-provider market. Those QOS options are there to allow service providers to customize prioritization for their specific service requirements. You need to be familiar with IEEE 802 standards and IETF RFC documents in order to do anything useful with most of those QOS settings.
In my hands-on experience, powerline often exhibits extended latencies as compared with other networking technologies. What (if any) applications are therefore not candidates for using powerline as their transport scheme? And how can applications (and the operating systems they run on top of) compensate for powerline networks' extended latencies?
According to my experience, the effect of extra latency can be compensated for by increasing the maximum TCP window size used by applications and the operating system. This can be done easily with a registry setting in Windows. I understand that the configuration tool on the CD provided by most vendors already makes this registry change automatically, as it does not have any negative effect on other applications.
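[Editor's note: On XP-era Windows TCP stacks, the registry change Gómez refers to typically involves the TcpWindowSize and Tcp1323Opts values; the Python sketch below shows one way to set them. The 256-KB figure is an arbitrary example, administrator rights are required, and later Windows versions autotune the receive window and ignore these values.]
    import winreg

    PARAMS = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PARAMS, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Tcp1323Opts", 0, winreg.REG_DWORD, 1)             # enable window scaling
        winreg.SetValueEx(key, "TcpWindowSize", 0, winreg.REG_DWORD, 256 * 1024)  # ~256-KB window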
I'm glad to see that UPA appears to support consumer-upgradeable firmware. But, perhaps obviously, a no-upgrade-required scenario is even more preferable. What did (and does) the UPA standards body do to ensure robust out-of-box technology support, both for various network protocols and for interoperability of various manufacturers' UPA-cognizant equipment?
For me, these are totally unrelated things. Regardless of whether the final product is upgradeable or not, UPA has certification procedures in place to ensure that any product that has the UPA logo interoperates with any other products with the same logo. If you go to Best Buy and get a Netgear HDX101 unit and a D-Link DHP-300 unit and put them in your network, they will communicate with each other, regardless of the firmware version.
Firmware upgradeability is not a mechanism for vendors to ship nonstandard products with the hope of fixing problems later. Firmware upgradeability is a mechanism to add new features for installed products. If we look beyond the consumer market and consider also the service provider market, the requirements are even stronger. Most of our customers wouldn't even consider installing a single box that was not firmware upgradeable from a remote-management center using standard TCP/IP protocols.
Actually, this single feature is one of the main reasons why UPA technology is so popular with IPTV service providers. Right now, only UPA products provide remote management using an embedded TCP/IP stack. No other powerline product provides that today. They don't even have an IP address, let alone a complete TCP/IP stack.
How much of a concern is data security across a shared power-distribution topology, both in a multiresident neighborhood environment and in a multiapartment, single-premises setting? And how do you consequently educate consumers on the potential need to change the default encryption password and make other security adjustments? What encryption scheme(s) does UPA use, is encryption enabled by default (and at what performance impact versus an encryption-disabled alternative configuration), and can the encryption protocol be upgraded or otherwise enhanced on a situation-by-situation basis?
The latest AITANA chipset announced by DS2 at IDF Fall 2007 provides 256-bit AES encryption, which, as far as I know, is the strongest encryption available in any powerline product today. The encryption engine is hardware-based, and the system has been designed in a way that provides full performance regardless of whether encryption is used or not—unlike wireless systems, which usually have degraded performance when encryption is enabled. AITANA also supports 168-bit Triple-DES encryption for backwards compatibility with previous products. Key-exchange protocols are software-based, which means that they can be upgraded easily if better protocols are created.
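[Editor's note: As a purely software illustration of what 256-bit AES protection of a payload involves, here is a short Python sketch using the third-party "cryptography" package. It is not DS2 code; in the chipset described the cipher runs in hardware, and the interview does not state which AES mode is used (AES-GCM is chosen here only for convenience).]
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)    # 256-bit network encryption key
    aesgcm = AESGCM(key)

    frame = b"MPEG-TS packets for the set-top box"
    nonce = os.urandom(12)                       # fresh nonce per frame
    ciphertext = aesgcm.encrypt(nonce, frame, None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == frame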
One aspect on which we have spent a lot of time and engineering resources is finding ways in which users can enable security as easily as possible, even without using a computer. This no-computer goal may seem extreme, but it's important to note that one of the most popular applications of UPA technology is for in-home distribution of IPTV content (service providers like British Telecom, Telefonica, and Portugal Telecom are good examples). In many cases, subscribers to IPTV services may not even have a computer. They just signed up for the service because it was cheaper than regular cable or maybe had better content than satellite. They have no idea that their TV service is delivered via an ADSL2+ modem and don't even know what IPTV stands for.
For our DH10PF reference design, we came up with a feature called OBUS (One-BUtton Security), which basically allows the user to set up an encrypted network just by pressing a button on each powerline adapter within 30 seconds of each other. No computer required. No passwords to remember. If the LED is green, your secure network is up and running.
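[Editor's note: The one-button pairing flow can be modeled as a timing window plus a shared-key handoff, as in the toy Python sketch below. The 30-second window comes from Gómez's description; everything else (names, key size, key distribution) is invented for illustration and glosses over the real key-exchange protocol.]
    import os, time

    PAIRING_WINDOW_S = 30

    class Adapter:
        def __init__(self, name):
            self.name = name
            self.pressed_at = None
            self.network_key = None

        def press_button(self):
            self.pressed_at = time.monotonic()

    def try_pair(a, b):
        # Pair only if both buttons were pressed within the 30-second window.
        if a.pressed_at is None or b.pressed_at is None:
            return False
        if abs(a.pressed_at - b.pressed_at) > PAIRING_WINDOW_S:
            return False
        a.network_key = b.network_key = os.urandom(32)   # shared 256-bit network key
        return True

    living_room, bedroom = Adapter("living room"), Adapter("bedroom")
    living_room.press_button(); bedroom.press_button()
    print("LED green" if try_pair(living_room, bedroom) else "LED red")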
Other vendors tried similar ideas in the past, but with severe limitations: the units had to be connected physically close to each other for initial setup, and complex switches had to be configured for the system to work. We think our approach is the most user-friendly, and the fact that other vendors have started to "borrow" the idea seems to validate that.
The amateur (aka "ham") radio community has been quite vocal with its concerns regarding potential interference in the presence of an active powerline network, due to radiated powerline noise and consequent inductive coupling to the amateur radio setup's broadcast and reception antenna. Are the concerns valid, and if so, how has UPA technology been architected to mitigate these concerns (via notch filters or other schemes)? And what potential impact does such mitigation have on powerline performance and other robustness measures? What other potential destructive interference scenarios (wireless keyboards and mice, for example) exist?
I was asked about this specific topic more than a year ago. [Editor's note: Gomez here provided a link to this August 2006 interview with Computing Unplugged, and specifically called our attention to the following quotes from that interview.]
When the first trials of BPL [broadband over powerline] technology started, around seven or eight years ago, BPL systems transmitted high power levels and did not have special mechanisms to protect radio services. As the industry has learned more about the problems found with real installations, it has improved the technology, reducing power levels and providing sophisticated notching techniques to avoid interference.
In 2003 (three years ago), DS2 introduced its second-generation powerline chipset, which was the first in the industry to provide speeds up to 200 Mbps and 40-dB programmable notches. These chips have been designed to allow BPL vendors to design equipment that meets FCC requirements, to adequately protect ham radio bands, and to provide additional mitigation mechanisms in case any isolated interference case is detected in a BPL network. The ARRL lab tested this technology in April this year and issued a favorable review.
You can see ARRL's view of DS2's second generation technology here.
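[Editor's note: A programmable notch simply forces to zero the OFDM subcarriers that fall inside a protected band. The Python sketch below illustrates the idea; the carrier count, spacing, and band edges are assumptions for illustration, not the parameters of any shipping BPL product.]
    NUM_CARRIERS = 1536
    CARRIER_SPACING_HZ = 19_531                    # ~30 MHz spread over 1536 carriers (assumption)
    NOTCHED_BANDS_HZ = [(7_000_000, 7_300_000)]    # e.g. the 40-m amateur band

    def carrier_mask():
        mask = []
        for k in range(NUM_CARRIERS):
            f = k * CARRIER_SPACING_HZ
            blocked = any(lo <= f <= hi for lo, hi in NOTCHED_BANDS_HZ)
            mask.append(0.0 if blocked else 1.0)   # 0.0 means the carrier transmits nothing
        return mask

    mask = carrier_mask()
    print(sum(1 for m in mask if m == 0.0), "of", NUM_CARRIERS, "carriers notched")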
As I understand it, powerline technology is unable to work through a surge protector or UPS (battery-backed uninterruptible power supply). How do you educate consumers about dealing with this issue? And how do you deal with the fact that surge-protection filters are increasingly being built directly into ac outlets?
It's true that most surge protector and UPS devices will block the frequencies used by powerline technology. We have done two things to avoid this problem: 1) Reduce the number of cases in which the user is forced to connect the powerline adapter to a power strip, and 2) Make it obvious for the user that connecting to a surge-protected power strip is a "bad idea."
The first problem is avoided by one of the most visible features of our DH10PF reference design: a passthrough socket on the powerline adapter, which allows the user to connect the adapter directly to a wall socket, while a power strip can still be connected to the passthrough socket. With this design, there is no reason why a user could not connect the adapter directly to a wall socket.
The second problem is solved with visual indicators, so that the user can immediately see that connecting the adapter directly into a wall socket produces a solid green LED, while connecting to a surge-protected power strip gives you a yellow or red LED.
Regarding surge-protection filters built directly into ac outlets, so far we have not seen many of them. It's possible that they have become more popular in recently built homes, but those are coincidentally the kind of homes where CAT5 wiring is also built in, so this probably is not the target market for powerline technology anyway.
COMPETITION, COMPATIBILITY, AND STANDARDIZATION
In contrast to HomePlug, UPA (along with the other standards and certification bodies) seems from my viewpoint to be heavily DS2-influenced. Pragmatically, are these true standards bodies, open to influence from numerous industry participants, or are they "standards bodies" in name only? And how do DS2 and the UPA plan to evolve and mature in the future?
I don't agree with that view. I don't like talking about my competition, but I'll answer your question. You give the example of HomePlug versus UPA. I think UPA is as influenced by DS2 as HomePlug is by Intellon or HD-PLC is by Panasonic. We all like to call ourselves "standard" and call everybody else "proprietary," but in practice, if you are a device manufacturer and you want to buy HomePlug AV silicon, right now you can only buy from one vendor (Intellon); if you want to buy HD-PLC silicon, you can only buy from Panasonic; and if you want to buy UPA silicon, you can only buy from DS2. As an industry, we now have the chance to solve this problem at IEEE P1901. This is the only opportunity we have to make this industry grow instead of recreating a Blu-ray versus HD-DVD battle (but worse, with three competing specifications).
We still have a huge challenge in front of us. We need to work together as an industry to improve the quality of the existing proposal so that it can provide a technical solution to meet the needs of the BPL market. A large part of the consumer-electronics industry has been reluctant to integrate powerline technology in its products because of a lack of a single global standard. We now have an excellent opportunity to remove that obstacle by working together to create a single PHY/MAC specification that ensures complete interoperability between silicon vendors.
But we really need to make an effort to make it work, and so far the proposal on the table does not seem to achieve the most important goal: interoperability. The latest "2 PHY, 1 MAC" proposal discussed at IEEE P1901, if left unchanged, will create a situation where two products supposedly compliant with the IEEE P1901 specification may not be interoperable, because one of them is based on an OFDM PHY while the other is based on a Wavelet PHY.
Does the UPA specification provide room for proprietary (and backward-compatible) standards-based enhancements, as was the case with HomePlug 1.0 Turbo versus HomePlug 1.0? If so, how does the standards body plan to handle cases of companies that decide to implement such enhancements?
Yes, it does. I would even say that any standard that is well designed will always provide room for performance enhancements. DS2 recently announced the development of 400-Mbps powerline technology that is backward compatible with the existing UPA 200-Mbps products. So far, this is a technology exclusively developed by DS2, although I don't see any reason why UPA could not extend the specification to make 400 Mbps standard.
Your two primary competitors, as I view the marketplace, are the Intellon-championed HomePlug technology and the Panasonic-developed HD-PLC approach. Do you agree, or are there other powerline schemes that I've overlooked? How do you position yourself, as a technology and as a company creating products based on that technology, against your competitors? And how do you see the competitive market both today (worldwide and geography specific) and as it will evolve over time?
I think you made a pretty accurate description of the competitive landscape. There are other companies developing powerline technology, but they are either at a very early development stage or focused on specific niche applications.
The main positioning difference between UPA and other organizations, such as HomePlug and/or HD-PLC, is that while the other organizations are mainly focused on developing specifications and products for the home-networking market (with BPL access as a second-category "afterthought" in the case of HomePlug or simply ignored in the case of HD-PLC), UPA has always been focused on developing the best possible technology that can satisfy the needs of both markets simultaneously.
So the summary is, unlike HomePlug and HD-PLC, which are focused on home networks only, UPA provides universal solutions for all markets.
Now, once the positioning of UPA is clear, what is the DS2 position in the framework of UPA? DS2 is a company that has consistently been the technology leader in powerline technology: We work hard to always be the first company to introduce the next performance level or the next key feature. We were the first company to introduce 45-Mbps products (at a time when the state-of-the-art in powerline was 14 Mbps), the first to introduce 200 Mbps (when, again, the competition was stuck at 14 Mbps), and we are now the first to introduce 400-Mbps technology. From the point of view of features, we were the first company to introduce elements that are now the standard reference in the industry: programmable notches, TDMA MAC, programmable QOS, single-chip repeaters, frequency-division repeaters, IP-based remote management, and more.
Currently, as I understand it (please confirm), multiple concurrently operating "200-Mbps" powerline technologies will actually degrade each other—far from coexisting (or, ideally, interoperating). Could you go into more detail regarding the coexistence and interoperability work going on in the IEEE?
Let me clarify a key issue regarding the different coexistence proposals discussed at IEEE P1901. As of today, the most mature proposal for solving the coexistence issue has been authored by engineers of DS2, Panasonic, and several other members of UPA and CEPCA [Consumer Electronics Powerline Communication Alliance] like Mitsubishi Electric and SiConnect. This proposal is the result of almost two years of hard work between all these companies. It includes well-defined mechanisms for achieving coexistence between access and in-home systems, and between different in-home systems. The proposal includes well-defined common signals that can be understood by otherwise incompatible OFDM and Wavelet systems. It also includes mechanisms for coexistence with non-P1901 and legacy devices. It is a very good specification, and we are very proud of the work we have done along with the rest of the industry.
Currently, multiple incompatible "200-Mbps" products sit side-by-side on retailers' shelves, and some networking vendors even offer incompatible powerline technologies within the same product line. How much consumer confusion currently exists, and how are you working with your customers (the networking vendors, along with their customers, the retailers) to minimize it? Until either a standards body such as the IEEE mandates interoperability, or might-makes-right market pressures cull out technology alternatives, won't consumer frustration inevitably grow as the powerline-networking market grows? And won't this incompatibility frustration put an unfortunate cap on powerline networking's market-growth potential?
I completely agree with you on the analysis of the current fragmentation of the market. The powerline market is growing fast, but nowhere near as fast as it could if a single standard existed.
Our position here is clear: We need a single standard, with a single PHY and a single MAC, so that all products are interoperable. We are willing to do whatever is needed to achieve this, even if this means re-designing silicon and departing from our current PHY and/or MAC. IEEE P1901 represents an excellent opportunity to achieve this, but so far it looks like we as an industry may miss this opportunity again, at least based on the current proposals being discussed at P1901, which will allow non-interoperable products to be labeled as P1901-compliant. Users will buy those products only to find later that they don't interoperate.
The current situation in the market is like having three groups of people speaking in three different languages, say German, French, and Chinese.
Ideally, a reasonable solution would mean choosing one of the competing languages as the standard (say, Chinese), choosing a "neutral" fourth language as a standard (say, Greek), or creating a new "best-of-breed" language with the best elements of all of them (say, Esperanto). Any of those solutions would be acceptable, although the last one is probably the best for the industry.
The problem is that the scheme being proposed at IEEE P1901 consists of arbitrarily labeling German and French (but not Chinese) as "the standard." Obviously, this does not solve the problem, but it allows the Germans and French to get rid of the Chinese. Because presenting the idea in those terms would be laughable, it is instead presented as "dual PHY, single MAC," which basically means that although German speakers won't talk to French speakers, both will have some common elements (words made of letters, spaces between words, etc.). The value for consumers is essentially zero, but still the Germans and French can put a P1901 logo on their boxes.
FUTURE DIRECTIONS
What, in your mind, are current shortcomings of UPA technology that the standards body (and/or proprietary UPA-based enhancements) plan to address in the future, and what are the timeframes for these enhancements?
I wouldn't use the word "shortcomings" here. In general, everybody wants products that work faster, work better, and cost less. This is something that can be stated about any technology or industry. In general, we (not only UPA, but also HomePlug and HD-PLC) need to address the interoperability issue. This is the most important shortcoming, and a historic one. If we can now move forward and address the interoperability issue with a good solution at IEEE P1901, all players will be successful in a market that will be orders of magnitude larger than it is today.
Getting back to DS2's "400-Mbps" announcement, what kind of performance can users reasonably expect, what is the timeframe for the technology's high-volume production implementation, and will it be backward-compatible with today's UPA?
From the performance point of view, you can expect roughly twice the performance of current 200-Mbps systems. We are not providing product details yet, but if you take a look at our press release, we say: "DS2 400-Mbps technology will be available in next-generation products from DS2 on time to satisfy the demands for extra bandwidth in the digital home and last-mile applications that most analysts predict will happen from 2009 onwards." That's all the information we can provide now, but you'll probably see more details at CES in January.
The technology will be backward-compatible (in the sense of "fully interoperable") with existing 200-Mbps UPA-compliant products, thus offering an easy migration path to our current customers. In the past, other vendors broke backward-interoperability when they introduced new performance levels (that's the case with HomePlug AV products, which are not interoperable with HomePlug 1.0 or HomePlug Turbo products). We want to make sure we don't make that mistake in this case.
While working on a recent home-automation project (please see "Homeland security: monitoring and manipulating remote residences") I've discovered that current powerline home-control technologies, such as X10 and Insteon, have numerous shortcomings. The recently ratified HomePlug Command and Control 1.0 specification is therefore admittedly of great interest to me. Please describe any UPA work with respect to augmenting today's specifications with command-and-control capabilities, including anticipated product-availability timeframes.
UPA recently announced the creation of a working group with the purpose of addressing the needs of that market. The effort started in September 2007 and the working group plans to publish a specification in nine months.
Currently, powerline transceivers reside outside system power supplies, but AMD and Intel have both demonstrated systems containing powerline-networking-cognizant power supplies. When will integrated powerline networking be widely available for PC and other applications? And should EDN's readers anticipate cost savings, and/or other benefits, resulting from this integration?
Although the integration of powerline technology inside power supplies will bring some cost savings, I think the main benefit will come from simplifying the user experience: users will just need to plug in a single power cord to get everything interconnected.
Right now, most PC vendors are waiting for the standards situation to settle before making significant investments in this kind of application. Hopefully, once IEEE P1901 finishes its work sometime in 2009, and if the end result is a single-PHY, single-MAC standard, PC manufacturers will start to demand this kind of integrated product.
WRAPUP
Thanks for your time. In closing, what topics have we not yet covered in the above questions that you'd like to briefly comment on?
I'd like to finish again with the issue of standards. This is really the single most significant issue faced by our industry now. IEEE P1901 is the only chance we have to solve it, but we must solve it with real solutions, not shortsighted solutions that perpetuate the existence of non-interoperable products (such as Wavelet-based and OFDM-based devices) and keep manufacturers and consumers locked in to the same vendor forever.
By Brian Dipert, Senior Technical Editor -- 12/14/2007
EDN
A profusion of incompatible "standards," the lingering memory of poor initial products, and the sheer technical challenge of the application have thus far retarded the adoption of powerline networking. Here, Chano Gómez, vice president of technology and strategic partnerships with chipmaker DS2, offers technical and strategic insight into the UPA (Universal Powerline Association) technology his company champions. A future installment of Voices will feature an interview with Andraes Melder of Intellon, which leads the opposing HomePlug Powerline Alliance.
OVERVIEW
I'd like to begin by asking you to provide an introductory summary of the historical development and current status of UPA technology, as it applies to LANs, to broadband Internet-access service distribution (WANs), and to other past and present applications, such as power-meter monitoring.
First, let me clarify that I'm not a UPA official, or even DS2's representative at UPA, so what I'll explain here is my personal view of the historical developments, and not UPA's or DS2's official position.
To understand why UPA's technology is the way it is, we have to go back to 2000, many years before UPA was founded. In 2000, a few months after I joined the company as a junior system architecture engineer, DS2 was focused on developing technology for broadband-access applications that would allow power companies to provide Internet access and VOIP services to their energy customers. At that time, some companies were already working on applying powerline technology for home-networking applications, but as far as I remember, we were 100% focused on access.
When we started working on the next-generation platform (200-Mbps data rate), DS2 realized that we had an opportunity to address the needs of additional market segments, in particular higher speeds for whole-home multimedia networking, so the architecture was changed to accommodate requirements for future home-networking usage scenarios. That turned out to be a pretty good idea, which allowed us to reuse the same platform to deliver different products for different market segments and differentiate ourselves from other players in the in-home powerline market who were offering lower-speed product, basically for data. Both markets are very interesting and complementary: Broadband access so far is a low-volume, high-margin segment, while home networking is a high-volume, low-margin segment.
Why am I providing all these historic details? Because they explain very well DS2's unique vision of the powerline-networking market. We really believe that access applications and home-networking applications represent legitimate uses of powerline technology and both have legitimate requirements that must be addressed by vendors and industry standards. For a long time, there were companies in our industry that thought that only home-networking applications had the right to use the powerline medium, while other companies had the opposite view. Over time, those positions have become less radical when discussed in public forums, but still many companies have that bias in their DNA. ("Home networks must take 90% of the bandwidth, and access should just get the remains." Or "Access is the really critical application, and home networks can use WiFi if needed.") DS2 has a large number of customers and partners in both camps, so we really believe that we must create standards that allow both applications to share bandwidth in a fair way. It's in our DNA. Our engineering resources are split 50-50 between both markets.
For many years, we were members of an organization [HomePlug] that did not share that vision. We tried to change that, unsuccessfully. In December 2004 we left that organization and, along with a group of partners who shared our vision, founded the Universal Powerline Association. The original goal of UPA was to create standards for coexistence between access and home networks. However, feedback from the market pointed to the need for real interoperability standards, so UPA decided to extend its scope in order to create a standard called UPA DHS (Digital Home System) and also to certify product compliance to that standard to deliver on interoperability where others had failed.
That effort has been quite successful. The first 200-Mbps product introduced in the US consumer market was based on the UPA standard, and according to The NPD Group, more than 50% of 200-Mbps products sold in the US retail market are based on the UPA specification and have the UPA logo. Right now, UPA includes members from very varied backgrounds: Companies from North America, Europe, and Japan; companies developing access products and home-networking products; semiconductor companies, power companies, and service providers. There are specific groups focused on high-speed applications, while others are developing standards for low-speed control applications.
In your mind, how do UPA technology's attributes enable it to coexist, supplement, and/or supplant other traditional data-distribution technologies, for both LAN and WAN applications, and both today and in the future?
From the very beginning, the foundation for UPA technology had to be very flexible, because it had to provide solutions for many different markets. Initially that represented more work than what would be required for a technology focused on a single narrow application, but at the end, we think the effort paid off, because the technology is now being used in many different environments, as a complement to many different LAN/WAN technologies.
I'll just give some examples of how our customers and partners are using UPA technology as a complement to other technologies. The most popular scenario is service providers using powerline technology as an extension of DSL, DSL2+, VDSL, and FFTH for IPTV distribution inside customers' homes. UPA technology's flexibility in terms of frequency band is very useful here, as it allows device manufacturers to tune the spectrum used by the powerline transmitter in order to ensure coexistence with other technologies (such as VDSL) whose spectrum partially overlaps the one used by powerline technology.
Another very popular scenario is using powerline to provide Internet and VOIP access to individual apartments in MDUs [multidwelling units] in FTTB [fiber-to-the-building] deployments.
In the consumer space, many combined products are already in demand by users: using powerline to extend the range of existing wireless networks or using powerline as a backbone for interconnecting wireless access points in enterprise or commercial environments. Also, using powerline as a backbone for short-range UWB [ultrawideband] networks could be an interesting application once UWB becomes more popular.
In general, as more applications and services converge to IP-based protocols, it becomes easier for manufacturers, consumers, and service providers to interface them with powerline networks. You should expect to see powerline technology as an extension of WiMax in high-rise buildings, or as a backbone connection for nanocells and femtocells in cellular networks.
TECHNOLOGY SPECIFICS
How does UPA modulate data on the ac power signal, and how does it handle detection, correction, and/or retransmission of errors in that data?
UPA's physical layer is based on OFDM [orthogonal frequency division multiplexing] modulation. OFDM was chosen as the modulation technique because of its inherent adaptability in the presence of frequency-selective channels, its resilience to jamming signals, its robustness to impulsive noise, and its capacity of achieving high spectral efficiency.
Detection and correction of errors is achieved by a concatenation of four-dimensional trellis codes and Reed-Solomon forward error correction, specially tuned to cope with the very special powerline channel impairments. For those cases in which packets become so corrupted by noise that they cannot be recovered, a retransmission mechanism is used. Packet fragments are numbered individually, and each pair of transmitter and receiver keep track of which one has been received correctly and which one needs to be retransmitted, using an ACK protocol.
How does UPA compensate for varying noise levels on the power grid, caused by fluorescent lights and motor-driven products such as vacuum cleaners, hair dryers, and heating and air-conditioning fans? How do you educate consumers on the potential need to install noise filters on the power inputs of these interference sources, in order to ensure reliable powerline-network operation?
The interesting feature of noise found in powerline is that it's not like the famous "white Gaussian noise" found in any digital-communications textbook. It's "colored" noise (stronger in some frequencies and weaker in others), non-Gaussian (you have very strong peaks that do not follow a normal distribution), and nonstatic (you have shorts periods of silence followed by shorts periods of strong noise). So, a powerline device has to find out which are the "clean" time/frequency slots and make sure to avoid the noisy ones. And once this is done, somebody will plug/unplug something in a room nearby, and you have to repeat the time/frequency analysis all over again, in only a few milliseconds, to ensure that the user does not experience any service interruption.
Fortunately, advances in DSP and ASIC technology provide enough computing power to perform a pretty accurate time/frequency analysis of the communication medium, and we are able to ascertain which are the "slots" where we can transmit with efficiencies of up to 10 bits/second/Hz and which are the ones where maximum efficiency is lower (or even zero).
Regarding the issue of how to educate consumers about "best practices" for maximizing the performance of their networks, we work on two different fronts. On one hand, we work with our partners with more experience in the consumer market (companies such as Netgear, D-Link, and Buffalo Technology) to ensure that their product packaging and user manuals explain the best way to use the product (for example, always recommending users to connect the adapter directly into a wall socket and not into a surge-protected power strip). Additionally, we try to provide useful feedback to users so that they can easily recognize which is the best way to use the product. In January 2007 we launched a reference design (code name DH10PF) with multicolored LEDs so that users can easily see if the network is operating at full performance (green for excellent performance, yellow for good performance, red for bad performance). The feedback we have received so far, both from consumers and service providers, is that the system is very intuitive and achieves the goal for which it was designed. Since then, other powerline vendors have started to "borrow" the idea, so it must be good.
How does UPA handle the reality that two power outlets (either in close proximity or not) that a consumer may want to interconnect via UPA may be fed by different circuit breakers, and (even more challenging) may derive from opposite phases of the 220V (US) source feed?
First, let me clarify that the fact that two outlets are in different circuits does not necessarily mean that powerline technology won't work there. In general, there is sufficient signal coupling between the wires (due to capacitance) to ensure that a connection can be done with adequate performance. My impression is that a lot of people have bad memories from legacy home-control powerline technologies that operated at much lower frequencies (kHz instead of MHz) and that didn't work very well when sockets were in different circuits and/or phases.
Having said that, it's also true that having signals "jumping" from circuit to circuit means that signal strength is slightly lower, so there is usually a certain amount of performance degradation in that situation. If you combine this with additional factors like a damaged wire and a strong noise source and low impedance due to several devices being connected in parallel in the same socket, you may find sockets where a connection cannot be established.
There are cases (in extremely large homes, or industrial/commercial environments) where a combination of long distance and circuit/phase change could result in bad performance in some socket pairs. UPA provides a very elegant and efficient way of solving this problem using repeaters. The user just needs to connect an additional powerline adapter in a socket close to the electrical switch panel (which is the electrical "center of gravity" of the building), and that adapter will automatically become a signal regenerator. As far as we know, this feature is unique to UPA technology. The solution is simple and elegant, because no configuration and no special hardware or firmware is required (any powerline adapter can be a repeater).
UPA is specified as a "200-Mbps" technology, but testing results suggest that it delivers only a limited percentage of that speed in real-life usage. Why was the peak PHY rate chosen as the technology designator? What range of TCP and UDP speeds do you believe most consumers will experience in real-life settings, and what kinds of applications are supportable (and conversely unsupportable) by those speeds, both in single- and multiple-coincident-data-stream situations? And do you address potential consumer confusion and frustration, when they don't get the performance results that the "200-Mbps" stamp on the outside of the product box might otherwise suggest they'll achieve?
If you take a look at the 802.11g specification, it's supposed to provide a data rate of up to 54 Mbps. Most of the 802.11g products have a label somewhere on the box that mentions "54 Mbps." If you measure TCP/IP throughput yourself, using standard tools such as "iperf," you'll see that the maximum you'll get (with all possible optimizations enabled) will be around 25 Mbps if you test between your access point and an end point, and around 12 Mbps if you test between two end points. That's in the best possible conditions (short distance) and it will degrade as you increase distance or obstacles between devices or in case you have other wireless networks or cordless phones in your vicinity.
If you take two UPA powerline adapters and perform the same test, in the same conditions, you'll get a maximum speed of 95 Mbps. Most UPA-based powerline products for the consumer market have a Fast-Ethernet interface, which is the reason why you cannot get beyond 95 Mbps. UPA products for access applications that have a Gigabit-Ethernet interface can provide up to 120 Mbps. Like in wireless networks, this data rate will decrease as you increase transmission distance and introduce noise sources (exactly in the same way as with cordless phones and 802.11g networks). So, the ratio between maximum throughput and PHY data rate in UPA powerline technology (50% to 60% depending on the test equipment) is not worse than in 802.11g technology (50% to 25% depending on test set up).
Does this mean that it's OK for the powerline industry to use the "200 Mbps" as the technology designator? To be honest, if I had the opportunity to start all over again, without the historic context provided by other networking technologies, I wouldn't have used the 200-Mbps label. I'd rather use a label that describes the expected application of the technology, without references to specific data rates. But given that we had to introduce a new technology in a market where consumers already knew that 802.11b worked at 11 Mbps, 802.11g at 54 Mbps, and HomePlug 1.0 at 14 Mbps, we needed to provide a reference that consumers could use for performing an apples-to-apples comparison.
UPA technology seems from my standpoint to focus on optimizing UDP performance, versus TCP. Assuming you agree with my perspective, why and how was UDP prioritized in the definition and implementation of the technology? And how does UPA stack up (from cost and other standpoints) against other powerline technologies that offer consumers more limited (or nonexistent) TCP-versus-UDP prioritization capabilities?
UPA technology itself was not designed to optimize UDP versus TCP. I think that some test results may create that perception because, in general, TCP performance is more sensitive to "events" in the communication channel than UDP. Most UDP test tools will generate packets as fast as possible and will flood all the bandwidth provided by the network. On the other hand, TCP has built-in mechanisms that reduce the data rate at which the transmitter generates packets when the protocol "thinks" that the network is congested. For example, if a packet experiences higher latency than the rest (for example, if the packet had to be retransmitted because of channel noise), the TCP stack may think that the network is congested and will reduce the data rate. Also, most TCP implementations limit the amount of packets (TCP window size) that can be transmitted before an acknowledgment (ACK) is received. This can also artificially limit the data rate obtained by TCP.
In my tests at home, the parameter that has most impact in performance is TCP window size. This is due to the fact that the "bandwidth × latency" product in powerline networks is higher than in Fast-Ethernet or Gigabit-Ethernet networks, so unless you have a large TCP window, your PC won't be able to fill the pipe with enough data. End users can change that in their computers by changing a registry value.
Another thing to consider is whether specific implementations of UPA products provide default prioritization schemes that give higher priority to UDP streams than TCP streams in case both are present and the network is congested. I know of at least one vendor who provides this default scheme, justified because in most cases UDP streams are used for multimedia applications (VOIP, video streaming) and it could make sense to prioritize those instead of TCP flows for nonmultimedia applications. Other UPA vendors provide different default priority schemes, based on 802.1p tags or on specific UDP port numbers.
UPA allows for QOS (quality of service) prioritization of particular types of data streams. How do you balance the desire for a robust consumer experience from a QOS (or other) standpoint against consumers' desire for a robust out-of-box experience (one that doesn't require tedious and confusing calibration and customization of QOS and other technology parameters)?
I don't expect consumers to perform any QOS configuration at all. Most of the QOS configuration options you may have seen in products that use DS2 silicon are there for other purposes: in some cases, the same product sold in retail is also used for the service-provider market. Those QOS options are there to allow service providers to customize prioritization for their specific service requirements. You need to be familiar with IEEE 802 standards and IETF RFC document in order to do anything useful with most of those QOS settings.
In my hands-on experience, powerline often exhibits extended latencies as compared with other networking technologies. What (if any) applications are therefore not candidates for using powerline as their transport scheme? And how can applications (and the operating systems they run on top of) compensate for powerline networks' extended latencies?
According to my experience, the effect of extra latency can be compensated for by increasing the maximum TCP window size used by applications and the operating system. This can be done easily with a registry setting in Windows. I understand that the configuration tool that comes with the CD provided by most vendors already does this registry change automatically, as it does not have any negative effect in any other application.
I'm glad to see that UPA appears to support consumer-upgradeable firmware. But, perhaps obviously, a no-upgrade-required scenario is even more preferable. What did (and does) the UPA standards body do to ensure robust out-of-box technology support, both for various network protocols and for interoperability of various manufacturers' UPA-cognizant equipment?
For me, these are totally unrelated things. Regardless of whether the final product is upgradeable or not, UPA has certification procedures in place to ensure that any product that has the UPA logo interoperates with any other products with the same logo. If you go to Best Buy and get a Netgear HDX101 unit and a D-Link DHP-300 unit and put them in your network, they will communicate with each other, regardless of the firmware version.
Firmware upgradeability is not a mechanism for vendors to ship nonstandard products with the hope of fixing problems later. Firmware upgradeability is a mechanism to add new features for installed products. If we look beyond the consumer market and consider also the service provider market, the requirements are even stronger. Most of our customers wouldn't even consider installing a single box that was not firmware upgradeable from a remote-management center using standard TCP/IP protocols.
Actually, this single feature is one of the main reasons why UPA technology is so popular with IPTV service providers. Right now, only UPA products provide remote management using an embedded TCP/IP stack. No other powerline product provides that today. They don't even have an IP address, let alone a complete TCP/IP stack.
How much of a concern is data security across a shared power-distribution topology, both in a multiresident neighborhood environment and in a multiapartment, single-premises setting? And how do you consequently educate consumers on the potential need to change the default encryption password and make other security adjustments? What encryption scheme(s) does UPA use, is encryption enabled by default (and at what performance impact versus an encryption-disabled alternative configuration), and can the encryption protocol be upgraded or otherwise enhanced on a situation-by-situation basis?
The latest AITANA chipset announced by DS2 at IDF Fall 2007 provides 256-bit AES encryption, which, as far as I know, is the strongest encryption available in any powerline product today. The encryption engine is hardware-based, and the system has been designed in a way that provides full performance regardless of whether encryption is used or not—unlike wireless systems, which usually suffer degraded performance when encryption is enabled. AITANA also supports 168-bit Triple-DES encryption for backward compatibility with previous products. Key-exchange protocols are software-based, which means they can be upgraded easily if better protocols are created.
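For orientation only, the short Python snippet below (using the third-party cryptography package) shows what 256-bit AES payload encryption looks like in software. It illustrates the key strength involved, not DS2's hardware engine, its actual cipher mode, or its key-exchange protocol, none of which are described here.

    # Software illustration of 256-bit AES payload protection (requires "cryptography").
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # a 256-bit network encryption key
    aead = AESGCM(key)

    payload = b"example frame payload"
    nonce = os.urandom(12)                      # must be unique per encrypted frame
    ciphertext = aead.encrypt(nonce, payload, None)
    assert aead.decrypt(nonce, ciphertext, None) == payload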
One aspect on which we have spent a lot of time and engineering resources is finding ways in which users can enable security as easily as possible, even without using a computer. This no-computer goal may seem extreme, but it's important to note that one of the most popular applications of UPA technology is for in-home distribution of IPTV content (service providers like British Telecom, Telefonica, and Portugal Telecom are good examples). In many cases, subscribers to IPTV services may not even have a computer. They just signed up for the service because it was cheaper than regular cable or maybe had better content than satellite. They have no idea that their TV service is delivered via an ADSL2+ modem and don't even know what IPTV stands for.
For our DH10PF reference design, we came up with a feature called OBUS (One-BUtton Security), which basically allows the user to set up an encrypted network just by pressing a button on each powerline adapter within 30 seconds of each other. No computer required. No passwords to remember. If the LED is green, your secure network is up and running.
Other vendors tried similar ideas in the past, but with severe limitations: the units had to be connected physically close to each other for initial setup, and complex switches had to be configured for the system to work. We think our approach is the most user-friendly, and the fact that other vendors have started to "borrow" the idea seems to validate that.
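Purely as a sketch, the user-visible pairing flow described above can be modeled roughly as follows in Python. The actual OBUS protocol and key derivation are DS2-specific and not public, so everything here beyond the 30-second window mentioned above is an illustrative assumption.

    # Toy model of a one-button pairing flow: both buttons pressed within 30 seconds.
    import time
    from typing import Optional

    PAIRING_WINDOW_S = 30.0   # both buttons must be pressed within this many seconds

    class Adapter:
        def __init__(self, name: str) -> None:
            self.name = name
            self.button_pressed_at: Optional[float] = None
            self.network_key: Optional[bytes] = None   # None means "not yet secured"

        def press_button(self) -> None:
            self.button_pressed_at = time.monotonic()

    def try_pair(a: Adapter, b: Adapter, fresh_key: bytes) -> bool:
        """Join both adapters to an encrypted network if their button presses overlap."""
        if a.button_pressed_at is None or b.button_pressed_at is None:
            return False
        if abs(a.button_pressed_at - b.button_pressed_at) > PAIRING_WINDOW_S:
            return False              # presses too far apart: pairing fails
        a.network_key = b.network_key = fresh_key   # in a real product, the LED turns green here
        return True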
The amateur (aka "ham") radio community has been quite vocal with its concerns regarding potential interference in the presence of an active powerline network, due to radiated powerline noise and consequent inductive coupling to the amateur radio setup's broadcast and reception antenna. Are the concerns valid, and if so, how has UPA technology been architected to mitigate these concerns (via notch filters or other schemes)? And what potential impact does such mitigation have on powerline performance and other robustness measures? What other potential destructive interference scenarios (wireless keyboards and mice, for example) exist?
I was asked about this specific topic more than a year ago. [Editor's note: Gomez here provided a link to this August 2006 interview with Computing Unplugged, and specifically called our attention to the following quotes from that interview.]
When the first trials of BPL [broadband over powerline] technology started, around seven or eight years ago, BPL systems transmitted high power levels and did not have special mechanisms to protect radio services. As the industry has learned more about the problems found with real installations, it has improved the technology, reducing power levels and providing sophisticated notching techniques to avoid interference.
In 2003 (three years ago), DS2 introduced its second Generation powerline chipset, which was the first in the industry to provide speeds up to 200 Mbps, and 40 dB programmable notches. These chips have been designed to allow BPL vendors to design equipment that meets FCC requirements, to adequately protect ham radio bands and to provide additional mitigation mechanism in case any isolated interference case is detected in a BPL network. The ARRL lab tested this technology in April this year and issued a favorable review.
You can see ARRL's view of DS2's second generation technology here.
As I understand it, powerline technology is unable to work through a surge protector or UPS (battery-backed uninterruptable power supply). How do you educate consumers about dealing with this issue? And how do you deal with the fact that surge-protection filters are increasingly being built directly into ac outlets?
It's true that most surge protector and UPS devices will block the frequencies used by powerline technology. We have done two things to avoid this problem: 1) Reduce the number of cases in which the user is forced to connect the powerline adapter to a power strip, and 2) Make it obvious for the user that connecting to a surge-protected power strip is a "bad idea."
The first problem is avoided by one of the most visible features of our DH10PF reference design: a passthrough socket on the powerline adapter, which allows the user to connect the adapter directly to a wall socket, while a power strip can still be connected to the passthrough socket. With this design, there is no reason why a user could not connect the adapter directly to a wall socket.
The second problem is solved with visual indicators, so that the user can immediately see that connecting the adapter in the wall socket provides a solid green LED, while connecting to a surge-protected power strip gives you a yellow or red LED.
Regarding surge-protection filters built directly into ac outlets, so far we have not seen many of them. They may have become more popular in recently built homes, but those are coincidentally the kind of homes where CAT5 wiring is also built in, so this is probably not the target market for powerline technology anyway.
COMPETITION, COMPATIBILITY, AND STANDARDIZATION
In contrast to HomePlug, UPA (along with the other standards and certification bodies) seems from my viewpoint to be heavily DS2-influenced. Pragmatically, are these true standards bodies, open to influence from numerous industry participants, or are they "standards bodies" in name only? And how do DS2 and the UPA plan to evolve and mature in the future?
I don't agree with that view. I don't like talking about my competition, but I'll answer your question. You give the example of HomePlug versus UPA. I think UPA is as influenced by DS2 as HomePlug is by Intellon or HD-PLC is by Panasonic. We all like to call ourselves "standard" and call everybody else "proprietary," but in practice, if you are a device manufacturer and you want to buy HomePlug AV silicon, right now you can only buy from one vendor (Intellon); if you want to buy HD-PLC silicon, you can only buy from Panasonic; and if you want to buy UPA silicon, you can only buy from DS2. As an industry, we now have the chance to solve this problem at IEEE P1901. This is the only opportunity we have to make this industry grow instead of recreating a Blu-ray versus HD-DVD battle (but worse, with three competing specifications).
We still have a huge challenge in front of us. We need to work together as an industry to improve the quality of the existing proposal so that it can provide a technical solution to meet the needs of the BPL market. A large part of the consumer-electronics industry has been reluctant to integrate powerline technology in its products because of a lack of a single global standard. We now have an excellent opportunity to remove that obstacle by working together to create a single PHY/MAC specification that ensures complete interoperability between silicon vendors.
But we really need to make an effort to make it work, and so far the proposal on the table does not seem to achieve the most important goal: interoperability. The latest "2 PHY, 1 MAC" proposal discussed at IEEE P1901, if left unchanged, will create a situation where two products supposedly compliant with the IEEE P1901 specification may not be interoperable, because one of them is based on an OFDM PHY while the other is based on a Wavelet PHY.
Does the UPA specification provide room for proprietary (and backward-compatible) standards-based enhancements, as was the case with HomePlug 1.0 Turbo versus HomePlug 1.0? If so, how does the standards body plan to handle cases of companies that decide to implement such enhancements?
Yes, it does. I would even say that any standard that is well designed will always provide room for performance enhancements. DS2 recently announced the development of 400-Mbps powerline technology that is backward compatible with the existing UPA 200-Mbps products. So far, this is a technology exclusively developed by DS2, although I don't see any reason why UPA could not extend the specification to make 400 Mbps standard.
Your two primary competitors, as I view the marketplace, are the Intellon-championed HomePlug technology and the Panasonic-developed HD-PLC approach. Do you agree, or are there other powerline schemes that I've overlooked? How do you position yourself, as a technology and as a company creating products based on that technology, against your competitors? And how do you see the competitive market both today (worldwide and geography specific) and as it will evolve over time?
I think you made a pretty accurate description of the competitive landscape. There are other companies developing powerline technology, but they are either at a very early stage of development or focused on specific niche applications.
The main positioning difference between UPA and other organizations, such as HomePlug and/or HD-PLC, is that while the other organizations are mainly focused on developing specifications and products for the home-networking market (with BPL access as a second-category "afterthought" in the case of HomePlug or simply ignored in the case of HD-PLC), UPA has always been focused on developing the best possible technology that can satisfy the needs of both markets simultaneously.
So the summary is, unlike HomePlug and HD-PLC, which are focused on home networks only, UPA provides universal solutions for all markets.
Now, once the positioning of UPA is clear, what is the DS2 position in the framework of UPA? DS2 is a company that has consistently been the technology leader in powerline technology: We work hard to always be the first company to introduce the next performance level or the next key feature. We were the first company to introduce 45-Mbps products (at a time when the state-of-the-art in powerline was 14 Mbps), the first to introduce 200 Mbps (when, again, the competition was stuck at 14 Mbps), and we are now the first to introduce 400-Mbps technology. From the point of view of features, we were the first company to introduce elements that are now the standard reference in the industry: programmable notches, TDMA MAC, programmable QOS, single-chip repeaters, frequency-division repeaters, IP-based remote management, and more.
Currently, as I understand it (please confirm), multiple coincidentally operating "200-Mbps" powerline technologies will actually degrade each other—far from coexisting (or, ideally, interoperating). Could you go into more detail regarding the coexistence and interoperability work going on in the IEEE?
Let me clarify a key issue regarding the different coexistence proposals discussed at IEEE P1901. As of today, the most mature proposal for solving the coexistence issue has been authored by engineers of DS2, Panasonic, and several other members of UPA and CEPCA [Consumer Electronics Powerline Communication Alliance] like Mitsubishi Electric and SiConnect. This proposal is the result of almost two years of hard work between all these companies. It includes well-defined mechanisms for achieving coexistence between access and in-home systems, and between different in-home systems. The proposal includes well-defined common signals that can be understood by otherwise incompatible OFDM and Wavelet systems. It also includes mechanisms for coexistence with non-P1901 and legacy devices. It is a very good specification, and we are very proud of the work we have done along with the rest of the industry.
Currently, multiple incompatible "200-Mbps" products sit side-by-side on retailers' shelves, and some networking vendors even offer incompatible powerline technologies within the same product line. How much consumer confusion currently exists, and how are you working with your customers (the networking vendors, along with their customers, the retailers) to minimize it? Until either a standards body such as the IEEE mandates interoperability, or might-makes-right market pressures cull out technology alternatives, won't consumer frustration inevitably grow as the powerline-networking market grows? And won't this incompatibility frustration put an unfortunate cap on powerline networking's market-growth potential?
I completely agree with you on the analysis of the current fragmentation of the market. The powerline market is growing fast, but nowhere near as fast as it could if a single standard existed.
Our position here is clear: We need a single standard, with a single PHY and a single MAC, so that all products are interoperable. We are willing to do whatever is needed to achieve this, even if this means re-designing silicon and departing from our current PHY and/or MAC. IEEE P1901 represents an excellent opportunity to achieve this, but so far it looks like we as an industry may miss this opportunity again, at least based on the current proposals being discussed at P1901, which will allow non-interoperable products to be labeled as P1901-compliant. Users will buy those products only to find later that they don't interoperate.
The current situation in the market is like having three groups of people speaking in three different languages, say German, French, and Chinese.
Ideally, a reasonable solution would mean choosing one of the competing languages as the standard (say, Chinese), choosing a "neutral" fourth language as the standard (say, Greek), or creating a new "best-of-breed" language with the best elements of all of them (say, Esperanto). Any of those solutions would be acceptable, although the last one is probably the best for the industry.
The problem is that the scheme being proposed at IEEE P1901 consists of arbitrarily labeling German and French (but not Chinese) as "the standard." Obviously, this does not solve the problem; it merely allows the Germans and the French to get rid of the Chinese. Because presenting the idea that baldly would be laughable, it is instead presented as "dual PHY, single MAC," which basically means that although German speakers won't talk to French speakers, both will share some common elements (words made of letters, spaces between words, and so on). The value for consumers is essentially zero, but the Germans and French can still put a P1901 logo on their boxes.
FUTURE DIRECTIONS
What, in your mind, are current shortcomings of UPA technology that the standards body (and/or proprietary UPA-based enhancements) plan to address in the future, and what are the timeframes for these enhancements?
I wouldn't use the word "shortcomings" here. In general, everybody wants products that work faster, work better, and cost less; that can be said of any technology or industry. What we (not only UPA, but also HomePlug and HD-PLC) need to address is the interoperability issue. This is the most important shortcoming, and a historic one. If we can now move forward and address it with a good solution at IEEE P1901, all players will be successful in a market that will be orders of magnitude larger than it is today.
Getting back to DS2's "400-Mbps" announcement, what kind of performance can users reasonably expect, what is the timeframe for the technology's high-volume production implementation, and will it be backward-compatible with today's UPA?
From the performance point of view, you can expect roughly twice the performance of current 200-Mbps systems. We are not providing product details yet, but if you take a look at our press release, we say: "DS2 400-Mbps technology will be available in next-generation products from DS2 on time to satisfy the demands for extra bandwidth in the digital home and last-mile applications that most analysts predict will happen from 2009 onwards." That's all the information we can provide now, but you'll probably see more details at CES in January.
The technology will be backward-compatible (in the sense of "fully interoperable") with existing 200-Mbps UPA-compliant products, thus offering an easy migration path to our current customers. In the past, other vendors broke backward-interoperability when they introduced new performance levels (that's the case with HomePlug AV products, which are not interoperable with HomePlug 1.0 or HomePlug Turbo products). We want to make sure we don't make that mistake in this case.
While working on a recent home-automation project (please see "Homeland security: monitoring and manipulating remote residences") I've discovered that current powerline home-control technologies, such as X10 and Insteon, have numerous shortcomings. The recently ratified HomePlug Command and Control 1.0 specification is therefore admittedly of great interest to me. Please describe any UPA work with respect to augmenting today's specifications with command-and-control capabilities, including anticipated product-availability timeframes.
UPA recently announced the creation of a working group with the purpose of addressing the needs of that market. The effort started in September 2007 and the working group plans to publish a specification in nine months.
Currently, powerline transceivers are located external to system power supplies, but AMD and Intel have both demonstrated systems containing powerline-networking-cognizant power supplies. When will integrated powerline networking be widely available for PCs and other applications? And should EDN's readers anticipate cost savings, and/or other benefits, resulting from this integration?
Although the integration of powerline technology inside power supplies will bring some cost savings, I think the main benefit will come from simplifying the user experience: users will just need to plug in a single power cord to get everything interconnected.
Right now, most PC vendors are waiting for the standards situation to settle before making significant investments in this kind of application. Hopefully, once IEEE P1901 finishes its work sometime in 2009, and if the end result is a single-PHY, single-MAC standard, PC manufacturers will start to demand this kind of integrated product.
WRAPUP
Thanks for your time. In closing, what topics have we not yet covered in the above questions that you'd like to briefly comment on?
I'd like to finish again with the issue of standards. This is really the single most significant issue facing our industry now. IEEE P1901 is the only chance we have to solve it, but we must solve it with real solutions, not shortsighted ones that perpetuate the existence of non-interoperable products (such as Wavelet-based and OFDM-based devices) and keep manufacturers and consumers locked in to the same vendor forever.
Thursday, December 13, 2007
Powerline battles WiFi to network the broadband home
By Stuart Corner
Thursday, 13 December 2007
Market research firm In-Stat claims that broadband over powerline (BPL) networking is emerging as a winner in the race for multimedia home networking worldwide, and the leading manufacturer of BPL chipsets agrees, naturally.
In-Stat says that the 2007 market for broadband powerline networking equipment will be almost double its value in 2006. And according to BPL chip maker DS2, the demand is being driven by a growing realisation that wireless technologies in the home, even those using the new high speed 802.11n standard, just won't be able to deliver the bandwidth needed by current and emerging multimedia applications.
DS2 - a Spanish company that claims to have delivered the first 200Mbps powerline communications chip - says that, by 2009, average bandwidth demands to support the home user could be three times the amount actually available. "Many of the wireless and wired technology options available today are not robust enough to meet the performance demands of expanding network use within the home."
According to DS2, whose current chipsets are used in equipment able to deliver up to 400Mbps over in-home wiring, "Consumer bandwidth needs will start facing increased pressures in the next 12 months and research from the NPD Group forecasts that lower speed powerline chipsets will be replaced during 2008 - they will simply be unable to cope with the pressures of home networking - just as DS2's 400Mbps technology is set to increase at a rapid pace from 2009 onwards."
The latest generation of WiFi technology, conforming to the IEEE 802.11n standard, is claimed to offer up to 600Mbps, but in practice these speeds are likely to be achieved only over short distances with few obstacles such as walls.
Jorge Blasco, DS2 CEO, commented: "Consumer demand for increased bandwidth is rising as people add more applications and consumer electronics products to the network. More homes are becoming digital networks and more service providers are extending their IPTV and entertainment offerings - such as British Telecom, with its BT-Vision service, Telefónica, with its Imagenio, and Verizon, which this month announced a five-fold increase in the amount of HD channels to be available on FiOS TV next year. The growth and popularity of the home network is fantastic and we now have to ensure that the bandwidth available to consumers is able to sustain the multimedia applications of the future."
Earlier this year DS2 was pushing the claimed lower latency of its BPL technology against WiFi as a key benefit to online gamers.
In recent years a number of service providers have adopted and supplied to end users both BPL and wireless technologies to ensure that users are able to distribute their bandwidth-intensive multimedia services around their homes.
In 2005 Spanish carrier Telefónica purchased 30,000 powerline Ethernet adaptors from Corinex (which used DS2 chips) to solve the problem of getting its DSL triple-play service from the phone socket to where people want to watch TV in their homes. According to a joint Corinex/Telefónica press release, "A major obstacle for telecoms delivering IPTV has been sending the signal from the ADSL modem to other rooms in the home...[but] the new generation of powerline technology offers the speed and quality of service required to distribute video within the home."
The release went on to say: "Telefónica went through extensive efforts to research and evaluate all the different technology options available today and those still in development. Corinex's AV Powerline product was the only commercially viable solution enabling Telefónica to deploy their [IPTV] service anywhere in the home. Neither wireless nor other powerline technologies could meet their needs."
In Hong Kong in mid-2006, BPL technology developer Intellon announced that PCCW was using its products to "capture broadband customers unwilling to install new wiring in their homes". PCCW had at the time over 800,000 subscribers, of whom 550,000 had also signed up for nowTV, then the world's largest IPTV service.
Also last year, Belgian carrier Belgacom hedged its bets, choosing both broadband over powerline technology from Corinex and Ruckus Wireless' enhanced WiFi technology to enable customers of its IPTV service to distribute signals to devices around their homes.
Wednesday, December 12, 2007
PSE&G Seeks Approval to Test Advanced Metering Technologies as Additional Way to Combat Climate Change
New technology would empower customers to save money as they conserve energy
December 12, 2007: 10:00 AM EST
NEWARK, N.J., Dec. 12 /PRNewswire-FirstCall/ -- Public Service Electric and Gas Company (PSE&G) today announced it has requested approval from state regulators to deploy and test advanced metering infrastructure (AMI) technologies, capable of enabling customers to monitor energy use, conserve energy and lower their costs during periods of peak electric demand. The technologies will also be useful in reducing carbon emissions that contribute to global climate change.
Speaking at the New Jersey Energy Summit in New Brunswick today, PSE&G President and COO Ralph LaRossa said the company has filed a petition with the New Jersey Board of Public Utilities (BPU) that could pave the way for the AMI deployment and test as early as next summer. AMI has these key elements: smart meters that collect interval meter data, a two-way communications component that transmits information to and from the utility, and a meter data management system that stores and manages the information received.
"We are proposing this program to explore yet another way for the company and its customers to contribute to the state's aggressive energy conservation and carbon reduction goals," LaRossa said. "The key to conservation is enabling customers to have the information they need to make the right choices. AMI provides real-time information, and can reduce energy usage remotely during times of peak demand. In the absence of an industry standard, this technology deployment will enable us to evaluate the most appropriate AMI strategy to pursue."
PSE&G will compare performance and cost differences of three AMI technologies under different operating conditions. The technologies are: Mesh Network, Radio Frequency (RF) Hybrid (Point-to-Point) and Broadband over Power Line (BPL). The company will install 32,500 advanced meters in homes and businesses of customers in the Passaic County towns of Wayne, Paterson, and Totowa. In addition to testing the technology in residential, commercial and industrial settings, the initiative will determine how the various systems perform in urban, suburban and sparsely populated neighborhoods that have a mix of indoor and outdoor meters, a high rate of radio frequency interference and varied terrain.
PSE&G has requested expedited approval from the BPU to install equipment in customers' homes and businesses and begin transmitting customer data beginning next summer. If approved by the BPU, PSE&G proposes to spend about $15 million to install the advanced metering infrastructure for this deployment in Passaic County.
As proposed in the filing, the results of the deployment will be analyzed by PSE&G and representatives of BPU staff, the Division of Rate Counsel, large industrial users, and members of environmental, consumer advocate and academic groups. A report outlining the strategic and public policy benefits will be provided to the BPU to assist the Board in evaluating and establishing a universal AMI approach in NJ. If the initial testing proves successful, LaRossa said PSE&G would expand the technology trial to a larger number of customers in its service territory.
In a recently concluded trial of a related initiative, called myPower, PSE&G tested customers' reactions to a pricing program and technology that told them when energy was most expensive so that they could reduce their energy consumption and shift their energy usage to lower price periods. Customers participating in the program who were provided with thermostats that responded automatically to pricing signals were able to reduce summer peak electric demand by about 47 percent on peak days, and achieved a 3-4 percent energy savings during the summer months. A majority of participants saved money on their energy bills. Participants also reported that they believe that utilities should offer more programs like myPower, would recommend the program to a friend or relative, and believe that programs like this will benefit the environment.
In addition to helping customers conserve energy and money, AMI provides a host of other benefits to the utility and its customers, including faster and more accurate detection of power outages, faster activation of electric service, and no estimated bills.
Public Service Electric and Gas Company (PSE&G) is New Jersey's oldest and largest regulated gas and electric delivery utility, serving nearly three-quarters of the state's population. PSE&G is the winner of the ReliabilityOne Award for superior electric system reliability. PSE&G is a subsidiary of Public Service Enterprise Group Incorporated (PSEG), a diversified energy company (www.pseg.com).
In-Stat: Broadband Over Powerline a Home Networking Winner
Wednesday December 12, 10:33 am ET
SCOTTSDALE, Ariz.--(BUSINESS WIRE)--Broadband over Powerline (BPL) has been emerging steadily over the past several years for in-home networking, access/utility company applications, and the technology is continuing strong growth, reports In-Stat (http://www.in-stat.com). With no new cabling needed, broadband powerline networking is emerging as a winner in the race for multimedia home networking worldwide, the high-tech market research firm says.
“Management and conservation of energy has become the overriding driver for smart grid, utility applications, where both broadband and low-speed powerline communications will play a role. As a result, we expect solutions using HomePlug Command and Control solutions to emerge in a big way, although we envision many combination solutions evolving, including powerline and low-speed wireless technologies,” says Joyce Putscher, In-Stat analyst.
Recent research by In-Stat found the following:
Surpassing the inflection point in 2006, worldwide shipments of broadband powerline equipment based on HomePlug, CEPCA and UPA technology reached 5.4 million units.
Global growth for broadband powerline networking equipment will approach 100% in 2007.
Although broadband has gained most of the attention, the HomePlug Command & Control (HPCC) low-speed specification has recently been approved with meaningful shipments expected in 2008. Worldwide market acceptance is expected to be strong over the next five years, driven by many regional mandates for energy management and savings.
The research, “Powerline Home Networking 2007 Update: Gaining Power in the Global Market” (#IN0703456RC), covers the worldwide market for home networking over powerline. It provides service providers, equipment and semiconductor vendors valuable guidance on market trends and expected progress, opportunities, segmentations, and market sizing. Worldwide forecasts include unit segmentation by geographic region, product categories, technology, bandwidth, PHY/MAC chipset ASP, retail vs. service provider channel, and in-home networking vs. access and utility use. This research also references data from In-Stat’s annual home networking consumer survey, conducted during the first half of 2007.
Saturday, December 08, 2007
NY Governor Calls for Statewide Broadband
Richard Koman, newsfactor.com
Fri Dec 7, 11:22 PM ET
Friday, December 07, 2007
Comtrend Takes Home Networking to 400Mbps Over Powerlines
DEC. 7, 2007
Telecom Equipment Manufacturer announces the PowerGrid 904, First Ethernet-Powerline Adaptor That Enables 400Mbps Speeds Over Existing Wiring in a Home.
Comtrend Corp., a leading supplier of broadband, VoIP and data networking equipment, today announced it has become the first manufacturer to offer home networking adaptors that support speeds up to 400 Mbps. Comtrend's new product, called the PowerGrid 904, is an Ethernet Powerline adaptor that plugs into any standard power plug in a home. The PowerGrid 904 provides Comtrend's carrier customers a competitive advantage in deploying next-generation triple-play services, particularly IPTV, over existing wiring within a consumer's home.
The PowerGrid 904 plugs into any power outlet in the home and provides Ethernet ports to turn the home's power lines into a network. Each power outlet with a PowerGrid 904 becomes a place to connect residential gateways, IP set-top boxes and computers to a high-speed home network. The PowerGrid 904 uses advanced chipset technology from DS2 to achieve speeds up to 400Mbps. The PowerGrid 904 and DS2 chipset technology are compliant with the Universal Powerline Association (UPA) standard.
With this new technology, carriers have a simple way to deploy IPTV and other advanced triple-play services within a consumer's home. Using the power lines within a home to create a network eliminates the costly and time-consuming need to re-wire with Ethernet cables. With features such as Quality of Service (QoS), remote management and a repeater function, the PowerGrid 904 is capable of distributing reliable services in any home environment.
"Reliable distribution of high speed data within a consumer's home is the key to a successful deployment of triple play services," said Andrew Morton, Comtrend's General Manager. "With the advancement of high speed technologies over copper or fiber for carrier deployment the home network is the final frontier to deliver such services as High Definition TV to any room in a home. The PowerGrid 904 meets our telco customers' current and future needs with breakneck speeds up to 400Mbps over power lines, QoS, remote management and repeater function."
The PowerGrid 904 will start shipping early next year.
www.comtrend.com