Voices: Chano Gómez on powerline networking's "universal" hope
By Brian Dipert, Senior Technical Editor -- 12/14/2007
EDN
A profusion of incompatible "standards," the lingering memory of poor initial products, and the sheer technical challenge of the application have thus far slowed the adoption of powerline networking. Here, Chano Gómez, vice president of technology and strategic partnerships with chipmaker DS2, offers technical and strategic insight into the UPA (Universal Powerline Association) technology his company champions. A future installment of Voices will feature an interview with Andreas Melder of Intellon, which leads the opposing HomePlug Powerline Alliance.
OVERVIEW
I'd like to begin by asking you to provide an introductory summary of the historical development and current status of UPA technology, as it applies to LANs, to broadband Internet-access service distribution (WANs), and to other past and present applications, such as power-meter monitoring.
First, let me clarify that I'm not a UPA official, or even DS2's representative at UPA, so what I'll explain here is my personal view of the historical developments, and not UPA's or DS2's official position.
To understand why UPA's technology is the way it is, we have to go back to 2000, many years before UPA was founded. In 2000, a few months after I joined the company as a junior system architecture engineer, DS2 was focused on developing technology for broadband-access applications that would allow power companies to provide Internet access and VOIP services to their energy customers. At that time, some companies were already working on applying powerline technology for home-networking applications, but as far as I remember, we were 100% focused on access.
When we started working on the next-generation platform (200-Mbps data rate), DS2 realized that we had an opportunity to address the needs of additional market segments, in particular higher speeds for whole-home multimedia networking, so the architecture was changed to accommodate requirements for future home-networking usage scenarios. That turned out to be a pretty good idea, which allowed us to reuse the same platform to deliver different products for different market segments and differentiate ourselves from other players in the in-home powerline market who were offering lower-speed products, basically for data. Both markets are very interesting and complementary: Broadband access so far is a low-volume, high-margin segment, while home networking is a high-volume, low-margin segment.
Why am I providing all these historic details? Because they explain very well DS2's unique vision of the powerline-networking market. We really believe that access applications and home-networking applications represent legitimate uses of powerline technology and both have legitimate requirements that must be addressed by vendors and industry standards. For a long time, there were companies in our industry that thought that only home-networking applications had the right to use the powerline medium, while other companies had the opposite view. Over time, those positions have become less radical when discussed in public forums, but still many companies have that bias in their DNA. ("Home networks must take 90% of the bandwidth, and access should just get the remains." Or "Access is the really critical application, and home networks can use WiFi if needed.") DS2 has a large number of customers and partners in both camps, so we really believe that we must create standards that allow both applications to share bandwidth in a fair way. It's in our DNA. Our engineering resources are split 50-50 between both markets.
For many years, we were members of an organization [HomePlug] that did not share that vision. We tried to change that, unsuccessfully. In December 2004 we left that organization and, along with a group of partners who shared our vision, founded the Universal Powerline Association. The original goal of UPA was to create standards for coexistence between access and home networks. However, feedback from the market pointed to the need for real interoperability standards, so UPA decided to extend its scope in order to create a standard called UPA DHS (Digital Home System) and also to certify product compliance to that standard to deliver on interoperability where others had failed.
That effort has been quite successful. The first 200-Mbps product introduced in the US consumer market was based on the UPA standard, and according to The NPD Group, more than 50% of 200-Mbps products sold in the US retail market are based on the UPA specification and have the UPA logo. Right now, UPA includes members from very varied backgrounds: Companies from North America, Europe, and Japan; companies developing access products and home-networking products; semiconductor companies, power companies, and service providers. There are specific groups focused on high-speed applications, while others are developing standards for low-speed control applications.
In your mind, how do UPA technology's attributes enable it to coexist, supplement, and/or supplant other traditional data-distribution technologies, for both LAN and WAN applications, and both today and in the future?
From the very beginning, the foundation for UPA technology had to be very flexible, because it had to provide solutions for many different markets. Initially that represented more work than what would be required for a technology focused on a single narrow application, but in the end, we think the effort paid off, because the technology is now being used in many different environments, as a complement to many different LAN/WAN technologies.
I'll just give some examples of how our customers and partners are using UPA technology as a complement to other technologies. The most popular scenario is service providers using powerline technology as an extension of DSL, ADSL2+, VDSL, and FTTH for IPTV distribution inside customers' homes. UPA technology's flexibility in terms of frequency band is very useful here, as it allows device manufacturers to tune the spectrum used by the powerline transmitter in order to ensure coexistence with other technologies (such as VDSL) whose spectrum partially overlaps the one used by powerline technology.
Another very popular scenario is using powerline to provide Internet and VOIP access to individual apartments in MDUs [multidwelling units] in FTTB [fiber-to-the-building] deployments.
In the consumer space, several combined uses are already in demand: using powerline to extend the range of existing wireless networks, or using powerline as a backbone for interconnecting wireless access points in enterprise or commercial environments. Also, using powerline as a backbone for short-range UWB [ultrawideband] networks could be an interesting application once UWB becomes more popular.
In general, as more applications and services converge to IP-based protocols, it becomes easier for manufacturers, consumers, and service providers to interface them with powerline networks. You should expect to see powerline technology as an extension of WiMax in high-rise buildings, or as a backbone connection for nanocells and femtocells in cellular networks.
TECHNOLOGY SPECIFICS
How does UPA modulate data on the ac power signal, and how does it handle detection, correction, and/or retransmission of errors in that data?
UPA's physical layer is based on OFDM [orthogonal frequency-division multiplexing] modulation. OFDM was chosen as the modulation technique because of its inherent adaptability in the presence of frequency-selective channels, its resilience to jamming signals, its robustness to impulsive noise, and its ability to achieve high spectral efficiency.
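To make the OFDM idea concrete, here is a minimal Python sketch of how bits could be mapped onto orthogonal subcarriers with an inverse FFT and a cyclic prefix. The subcarrier count, constellation, and prefix length are illustrative placeholders, not UPA's actual physical-layer parameters.

```python
import numpy as np

def ofdm_symbol(bits, n_subcarriers=256, cp_len=32):
    """Build one OFDM symbol: QPSK-map bits onto subcarriers, inverse-FFT
    to the time domain, and prepend a cyclic prefix. All parameters are
    illustrative, not UPA's actual physical-layer numbers."""
    assert len(bits) == 2 * n_subcarriers        # QPSK carries 2 bits per subcarrier
    b = np.asarray(bits).reshape(-1, 2)
    symbols = ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)  # QPSK points
    time_domain = np.fft.ifft(symbols)           # orthogonal subcarriers -> time samples
    return np.concatenate([time_domain[-cp_len:], time_domain])  # prefix + symbol

rng = np.random.default_rng(0)
tx = ofdm_symbol(rng.integers(0, 2, size=512))   # one symbol carrying 512 random bits
print(len(tx))                                   # 288 samples (256 + 32-sample prefix)
```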
Detection and correction of errors is achieved by a concatenation of four-dimensional trellis codes and Reed-Solomon forward error correction, specially tuned to cope with the powerline channel's very particular impairments. For those cases in which packets become so corrupted by noise that they cannot be recovered, a retransmission mechanism is used. Packet fragments are numbered individually, and each transmitter-receiver pair keeps track of which fragments have been received correctly and which need to be retransmitted, using an ACK protocol.
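The fragment numbering and ACK bookkeeping Gómez describes is, in essence, a selective-repeat retransmission scheme. The sketch below is a simplified conceptual model of that bookkeeping, not the UPA frame format:

```python
class SelectiveRepeatSender:
    """Minimal selective-repeat ARQ bookkeeping: each fragment is numbered,
    and only fragments missing from the receiver's ACK are resent.
    This is a conceptual sketch, not the UPA MAC frame format."""

    def __init__(self, fragments):
        self.pending = dict(enumerate(fragments))   # sequence number -> payload

    def to_send(self):
        return dict(self.pending)                   # everything still unacknowledged

    def process_ack(self, acked_seqs):
        for seq in acked_seqs:
            self.pending.pop(seq, None)             # drop acknowledged fragments
        return dict(self.pending)                   # fragments to retransmit

sender = SelectiveRepeatSender([b"frag0", b"frag1", b"frag2", b"frag3"])
sender.to_send()                        # transmit all four fragments
resend = sender.process_ack({0, 2, 3})  # receiver reports 0, 2, 3 arrived intact
print(resend)                           # {1: b'frag1'} -> retransmit fragment 1
```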
How does UPA compensate for varying noise levels on the power grid, caused by fluorescent lights and motor-driven products such as vacuum cleaners, hair dryers, and heating and air-conditioning fans? How do you educate consumers on the potential need to install noise filters on the power inputs of these interference sources, in order to ensure reliable powerline-network operation?
The interesting feature of noise found in powerline is that it's not like the famous "white Gaussian noise" found in any digital-communications textbook. It's "colored" noise (stronger at some frequencies and weaker at others), non-Gaussian (you have very strong peaks that do not follow a normal distribution), and nonstatic (you have short periods of silence followed by short periods of strong noise). So, a powerline device has to find out which time/frequency slots are "clean" and make sure to avoid the noisy ones. And once this is done, somebody will plug in or unplug something in a room nearby, and you have to repeat the time/frequency analysis all over again, in only a few milliseconds, to ensure that the user does not experience any service interruption.
Fortunately, advances in DSP and ASIC technology provide enough computing power to perform a pretty accurate time/frequency analysis of the communication medium, and we are able to ascertain which are the "slots" where we can transmit with efficiencies of up to 10 bits/second/Hz and which are the ones where maximum efficiency is lower (or even zero).
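Conceptually, the result of that time/frequency analysis is a bit-loading map: each subcarrier (or time/frequency slot) gets a constellation size matched to its measured SNR, up to the roughly 10-bits/s/Hz ceiling mentioned above. A minimal sketch, with invented SNR thresholds rather than DS2's actual tone-mapping rules:

```python
import numpy as np

def bit_loading(snr_db, max_bits=10):
    """Assign bits per subcarrier from measured SNR.
    Thresholds (roughly 3 dB per extra bit) are illustrative only."""
    bits = np.floor((np.asarray(snr_db, dtype=float) - 6.0) / 3.0).astype(int)
    return np.clip(bits, 0, max_bits)

snr = [35, 28, 9, -2, 21]        # per-subcarrier SNR estimates in dB
print(bit_loading(snr))          # [9 7 1 0 5] bits per symbol on each subcarrier
```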
Regarding the issue of how to educate consumers about "best practices" for maximizing the performance of their networks, we work on two different fronts. On one hand, we work with our partners with more experience in the consumer market (companies such as Netgear, D-Link, and Buffalo Technology) to ensure that their product packaging and user manuals explain the best way to use the product (for example, always recommending that users connect the adapter directly into a wall socket and not into a surge-protected power strip). Additionally, we try to provide useful feedback to users so that they can easily recognize the best way to use the product. In January 2007 we launched a reference design (code name DH10PF) with multicolored LEDs so that users can easily see if the network is operating at full performance (green for excellent performance, yellow for good performance, red for bad performance). The feedback we have received so far, both from consumers and service providers, is that the system is very intuitive and achieves the goal for which it was designed. Since then, other powerline vendors have started to "borrow" the idea, so it must be good.
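The LED feedback described above boils down to mapping an estimated link rate to a color. A toy sketch follows; the thresholds are hypothetical, since the DH10PF's actual cut-off points aren't stated here:

```python
def link_led(estimated_mbps, good=60, fair=20):
    """Map an estimated link rate to the LED colors described above.
    The thresholds are hypothetical, not the DH10PF's actual cut-offs."""
    if estimated_mbps >= good:
        return "green"    # excellent performance
    if estimated_mbps >= fair:
        return "yellow"   # usable, but consider another socket
    return "red"          # poor; try a wall socket or a closer outlet

print(link_led(85), link_led(35), link_led(5))   # green yellow red
```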
How does UPA handle the reality that two power outlets (either in close proximity or not) that a consumer may want to interconnect via UPA may be fed by different circuit breakers, and (even more challenging) may derive from opposite phases of the 220V (US) source feed?
First, let me clarify that the fact that two outlets are on different circuits does not necessarily mean that powerline technology won't work there. In general, there is sufficient signal coupling between the wires (due to capacitance) to ensure that a connection can be established with adequate performance. My impression is that a lot of people have bad memories of legacy home-control powerline technologies that operated at much lower frequencies (kHz instead of MHz) and that didn't work very well when sockets were on different circuits and/or phases.
Having said that, it's also true that having signals "jump" from circuit to circuit means that signal strength is slightly lower, so there is usually a certain amount of performance degradation in that situation. If you combine this with additional factors, like a damaged wire, a strong noise source, or low impedance due to several devices connected in parallel on the same socket, you may find sockets where a connection cannot be established.
There are cases (in extremely large homes, or industrial/commercial environments) where a combination of long distance and circuit/phase change could result in bad performance in some socket pairs. UPA provides a very elegant and efficient way of solving this problem using repeaters. The user just needs to connect an additional powerline adapter in a socket close to the electrical switch panel (which is the electrical "center of gravity" of the building), and that adapter will automatically become a signal regenerator. As far as we know, this feature is unique to UPA technology. The solution is simple and elegant, because no configuration and no special hardware or firmware is required (any powerline adapter can be a repeater).
UPA is specified as a "200-Mbps" technology, but testing results suggest that it delivers only a limited percentage of that speed in real-life usage. Why was the peak PHY rate chosen as the technology designator? What range of TCP and UDP speeds do you believe most consumers will experience in real-life settings, and what kinds of applications are supportable (and conversely unsupportable) by those speeds, both in single- and multiple-coincident-data-stream situations? And how do you address potential consumer confusion and frustration when they don't get the performance results that the "200-Mbps" stamp on the outside of the product box might otherwise suggest they'll achieve?
If you take a look at the 802.11g specification, it's supposed to provide a data rate of up to 54 Mbps. Most 802.11g products have a label somewhere on the box that mentions "54 Mbps." If you measure TCP/IP throughput yourself, using standard tools such as "iperf," you'll see that the maximum you'll get (with all possible optimizations enabled) will be around 25 Mbps if you test between your access point and an end point, and around 12 Mbps if you test between two end points. That's in the best possible conditions (short distance), and it will degrade as you increase distance or obstacles between devices, or if you have other wireless networks or cordless phones in your vicinity.
If you take two UPA powerline adapters and perform the same test, in the same conditions, you'll get a maximum speed of 95 Mbps. Most UPA-based powerline products for the consumer market have a Fast-Ethernet interface, which is the reason why you cannot get beyond 95 Mbps. UPA products for access applications that have a Gigabit-Ethernet interface can provide up to 120 Mbps. As in wireless networks, this data rate will decrease as you increase transmission distance and introduce noise sources (in exactly the same way as with cordless phones and 802.11g networks). So, the ratio between maximum throughput and PHY data rate in UPA powerline technology (50% to 60%, depending on the test equipment) is no worse than in 802.11g technology (25% to 50%, depending on the test setup).
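Those throughput-to-PHY-rate ratios follow directly from the figures quoted above; a quick arithmetic check:

```python
# Throughput-to-PHY-rate ratios quoted above, computed explicitly
upa_fast_eth    = 95 / 200    # Fast-Ethernet-limited UPA adapter vs. 200-Mbps PHY
upa_gig_eth     = 120 / 200   # Gigabit-Ethernet UPA access product vs. 200-Mbps PHY
wifi_ap_to_sta  = 25 / 54     # 802.11g, access point to end point
wifi_sta_to_sta = 12 / 54     # 802.11g, end point to end point

print(f"UPA: {upa_fast_eth:.1%} to {upa_gig_eth:.1%}")             # 47.5% to 60.0%
print(f"802.11g: {wifi_sta_to_sta:.1%} to {wifi_ap_to_sta:.1%}")   # 22.2% to 46.3%
```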
Does this mean that it's OK for the powerline industry to use "200 Mbps" as the technology designator? To be honest, if I had the opportunity to start all over again, without the historic context provided by other networking technologies, I wouldn't have used the 200-Mbps label. I would rather have used a label that describes the expected application of the technology, without references to specific data rates. But given that we had to introduce a new technology in a market where consumers already knew that 802.11b worked at 11 Mbps, 802.11g at 54 Mbps, and HomePlug 1.0 at 14 Mbps, we needed to provide a reference that consumers could use for an apples-to-apples comparison.
UPA technology seems from my standpoint to focus on optimizing UDP performance, versus TCP. Assuming you agree with my perspective, why and how was UDP prioritized in the definition and implementation of the technology? And how does UPA stack up (from cost and other standpoints) against other powerline technologies that offer consumers more limited (or nonexistent) TCP-versus-UDP prioritization capabilities?
UPA technology itself was not designed to optimize UDP versus TCP. I think that some test results may create that perception because, in general, TCP performance is more sensitive to "events" in the communication channel than UDP. Most UDP test tools will generate packets as fast as possible and will flood all the bandwidth provided by the network. On the other hand, TCP has built-in mechanisms that reduce the data rate at which the transmitter generates packets when the protocol "thinks" that the network is congested. For example, if a packet experiences higher latency than the rest (for example, because it had to be retransmitted due to channel noise), the TCP stack may think that the network is congested and will reduce the data rate. Also, most TCP implementations limit the number of packets (TCP window size) that can be transmitted before an acknowledgment (ACK) is received. This can also artificially limit the data rate obtained by TCP.
In my tests at home, the parameter that has the most impact on performance is TCP window size. This is due to the fact that the "bandwidth × latency" product in powerline networks is higher than in Fast-Ethernet or Gigabit-Ethernet networks, so unless you have a large TCP window, your PC won't be able to fill the pipe with enough data. End users can change that on their computers by changing a registry value.
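A worked example of the bandwidth × latency point: with the common 64-KB default window of that era and an assumed (not measured) 10-ms round-trip time, TCP tops out well below the link's capability, which is why enlarging the window helps.

```python
# TCP throughput is capped at roughly window_size / round_trip_time.
# The RTT below is an assumed illustrative value, not a measured UPA figure.
window_bytes = 64 * 1024                       # 64-KB default window on many older stacks
rtt_seconds  = 0.010                           # assume a 10-ms round trip over powerline
ceiling_mbps = window_bytes * 8 / rtt_seconds / 1e6
print(f"{ceiling_mbps:.1f} Mbps ceiling")      # 52.4 Mbps, well under a 95-Mbps link

# Window needed to fill a 95-Mbps pipe at the same RTT:
needed_window_kb = 95e6 * rtt_seconds / 8 / 1024
print(f"{needed_window_kb:.0f} KB window needed")   # ~116 KB
```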
Another thing to consider is whether specific implementations of UPA products provide default prioritization schemes that give higher priority to UDP streams than to TCP streams when both are present and the network is congested. I know of at least one vendor who provides this default scheme, the rationale being that in most cases UDP streams carry multimedia applications (VOIP, video streaming), so it can make sense to prioritize those over TCP flows carrying nonmultimedia traffic. Other UPA vendors provide different default priority schemes, based on 802.1p tags or on specific UDP port numbers.
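Such a default scheme amounts to a simple per-frame classifier. The sketch below illustrates the general idea; the priority values and port numbers are invented for illustration and do not come from any specific UPA product:

```python
def classify_priority(is_udp, vlan_pcp=None, udp_dst_port=None):
    """Return a transmit-queue priority for a frame, mimicking the kind of
    default schemes described above. All specific values are hypothetical."""
    if vlan_pcp is not None:                      # honor an 802.1p tag if present
        return vlan_pcp                           # 0 (lowest) .. 7 (highest)
    if is_udp and udp_dst_port in (5004, 5060):   # e.g. RTP media, SIP signaling
        return 6                                  # multimedia gets a high default priority
    if is_udp:
        return 4                                  # other UDP above best-effort TCP
    return 0                                      # best-effort TCP

print(classify_priority(True, udp_dst_port=5004))   # 6
print(classify_priority(False))                      # 0
```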
UPA allows for QOS (quality of service) prioritization of particular types of data streams. How do you balance the desire for a robust consumer experience from a QOS (or other) standpoint against consumers' desire for a robust out-of-box experience (one that doesn't require tedious and confusing calibration and customization of QOS and other technology parameters)?
I don't expect consumers to perform any QOS configuration at all. Most of the QOS configuration options you may have seen in products that use DS2 silicon are there for other purposes: In some cases, the same product sold at retail is also used in the service-provider market. Those QOS options are there to allow service providers to customize prioritization for their specific service requirements. You need to be familiar with IEEE 802 standards and IETF RFC documents in order to do anything useful with most of those QOS settings.
In my hands-on experience, powerline often exhibits extended latencies as compared with other networking technologies. What (if any) applications are therefore not candidates for using powerline as their transport scheme? And how can applications (and the operating systems they run on top of) compensate for powerline networks' extended latencies?
According to my experience, the effect of extra latency can be compensated for by increasing the maximum TCP window size used by applications and the operating system. This can be done easily with a registry setting in Windows. I understand that the configuration tool on the CD provided by most vendors already makes this registry change automatically, as it does not have any negative effect on any other application.
I'm glad to see that UPA appears to support consumer-upgradeable firmware. But, perhaps obviously, a no-upgrade-required scenario is even more preferable. What did (and does) the UPA standards body do to ensure robust out-of-box technology support, both for various network protocols and for interoperability of various manufacturers' UPA-cognizant equipment?
For me, these are totally unrelated things. Regardless of whether the final product is upgradeable or not, UPA has certification procedures in place to ensure that any product that has the UPA logo interoperates with any other products with the same logo. If you go to Best Buy and get a Netgear HDX101 unit and a D-Link DHP-300 unit and put them in your network, they will communicate with each other, regardless of the firmware version.
Firmware upgradeability is not a mechanism for vendors to ship nonstandard products with the hope of fixing problems later. Firmware upgradeability is a mechanism to add new features for installed products. If we look beyond the consumer market and consider also the service provider market, the requirements are even stronger. Most of our customers wouldn't even consider installing a single box that was not firmware upgradeable from a remote-management center using standard TCP/IP protocols.
Actually, this single feature is one of the main reasons why UPA technology is so popular with IPTV service providers. Right now, only UPA products provide remote management using an embedded TCP/IP stack. No other powerline product provides that today. They don't even have an IP address, let alone a complete TCP/IP stack.
How much of a concern is data security across a shared power-distribution topology, both in a multiresident neighborhood environment and in a multiapartment, single-premises setting? And how do you consequently educate consumers on the potential need to change the default encryption password and make other security adjustments? What encryption scheme(s) does UPA use, is encryption enabled by default (and at what performance impact versus an encryption-disabled alternative configuration), and can the encryption protocol be upgraded or otherwise enhanced on a situation-by-situation basis?
The latest AITANA chipset announced by DS2 at IDF Fall 2007 provides 256-bit AES encryption, which, as far as I know, is the strongest encryption available in any powerline product today. The encryption engine is hardware-based, and the system has been designed in a way that provides full performance regardless of whether encryption is used, unlike wireless systems, which usually suffer degraded performance when encryption is enabled. AITANA also supports 168-bit Triple-DES encryption for backward compatibility with previous products. Key-exchange protocols are software-based, which means that they can be upgraded easily if better protocols are created.
One aspect on which we have spent a lot of time and engineering resources is finding ways in which users can enable security as easily as possible, even without using a computer. This no-computer goal may seem extreme, but it's important to note that one of the most popular applications of UPA technology is for in-home distribution of IPTV content (service providers like British Telecom, Telefonica, and Portugal Telecom are good examples). In many cases, subscribers to IPTV services may not even have a computer. They just signed up for the service because it was cheaper than regular cable or maybe had better content than satellite. They have no idea that their TV service is delivered via an ADSL2+ modem and don't even know what IPTV stands for.
For our DH10PF reference design, we came up with a feature called OBUS (One-BUtton Security), which basically allows the user to set up an encrypted network just by pressing a button on each powerline adapter within 30 seconds of each other. No computer required. No passwords to remember. If the LED is green, your secure network is up and running.
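As a rough illustration of the pairing-window idea (not the actual OBUS key-exchange protocol, which isn't detailed here), the toy model below generates and shares a random network key only when both buttons are pressed within the 30-second window:

```python
import secrets

PAIRING_WINDOW_S = 30          # both buttons must be pressed within this window

class Adapter:
    """Toy model of one-button pairing; not the actual OBUS protocol."""
    def __init__(self):
        self.network_key = None

def press_buttons(adapter_a, adapter_b, seconds_apart):
    """If the second press falls inside the window, both adapters end up
    sharing a freshly generated random network key."""
    if seconds_apart <= PAIRING_WINDOW_S:
        key = secrets.token_bytes(32)                    # 256-bit network key
        adapter_a.network_key = adapter_b.network_key = key
        return "green"                                   # secure network up and running
    return "no pairing"                                  # window expired; press again

a, b = Adapter(), Adapter()
print(press_buttons(a, b, seconds_apart=12))             # green
print(a.network_key == b.network_key)                    # True
```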
Other vendors tried similar ideas in the past, but with severe limitations: the units had to be connected physically close to each other for initial setup, and complex switches had to be configured for the system to work. We think our approach is the most user-friendly, and the fact that other vendors have started to "borrow" the idea seems to validate that.
The amateur (aka "ham") radio community has been quite vocal with its concerns regarding potential interference in the presence of an active powerline network, due to radiated powerline noise and consequent inductive coupling to the amateur radio setup's broadcast and reception antenna. Are the concerns valid, and if so, how has UPA technology been architected to mitigate these concerns (via notch filters or other schemes)? And what potential impact does such mitigation have on powerline performance and other robustness measures? What other potential destructive interference scenarios (wireless keyboards and mice, for example) exist?
I was asked about this specific topic more than a year ago. [Editor's note: Gómez here provided a link to this August 2006 interview with Computing Unplugged, and specifically called our attention to the following quotes from that interview.]
When the first trials of BPL [broadband over powerline] technology started, around seven or eight years ago, BPL systems transmitted high power levels and did not have special mechanisms to protect radio services. As the industry has learned more about the problems found with real installations, it has improved the technology, reducing power levels and providing sophisticated notching techniques to avoid interference.
In 2003 (three years ago), DS2 introduced its second-generation powerline chipset, which was the first in the industry to provide speeds up to 200 Mbps and 40-dB programmable notches. These chips have been designed to allow BPL vendors to design equipment that meets FCC requirements, to adequately protect ham-radio bands, and to provide additional mitigation mechanisms in case any isolated interference is detected in a BPL network. The ARRL lab tested this technology in April this year [April 2006] and issued a favorable review.
You can see ARRL's view of DS2's second-generation technology here.
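Programmable notching amounts to forcing to zero (in practice, attenuating by 40 dB or more) the OFDM subcarriers that fall inside protected bands. The sketch below illustrates the idea; the carrier grid and band edges are illustrative examples, not a regulatory mask:

```python
import numpy as np

def apply_notches(carrier_freqs_mhz, tone_map, notched_bands_mhz):
    """Disable (zero) subcarriers that fall inside protected radio bands.
    In real hardware this corresponds to deep (40-dB-plus) programmable notches;
    the carrier grid and band edges used below are illustrative only."""
    freqs = np.asarray(carrier_freqs_mhz)
    tones = np.asarray(tone_map, dtype=float).copy()
    for lo, hi in notched_bands_mhz:
        tones[(freqs >= lo) & (freqs <= hi)] = 0.0
    return tones

freqs = np.linspace(2, 30, 1536)                 # example carrier grid, 2-30 MHz
tones = np.ones_like(freqs)                      # start with every carrier active
amateur_40m = [(7.0, 7.3)]                       # 40-m amateur band (region-dependent edges)
active = apply_notches(freqs, tones, amateur_40m)
print(int(active.sum()), "of", len(freqs), "carriers remain active")
```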
As I understand it, powerline technology is unable to work through a surge protector or UPS (battery-backed uninterruptible power supply). How do you educate consumers about dealing with this issue? And how do you deal with the fact that surge-protection filters are increasingly being built directly into ac outlets?
It's true that most surge protectors and UPS devices will block the frequencies used by powerline technology. We have done two things to avoid this problem: 1) reduce the number of cases in which the user is forced to connect the powerline adapter to a power strip, and 2) make it obvious to the user that connecting to a surge-protected power strip is a "bad idea."
The first problem is avoided by one of the most visible features of our DH10PF reference design: a passthrough socket on the powerline adapter, which allows the user to connect the adapter directly to a wall socket, while a power strip can still be connected to the passthrough socket. With this design, there is no reason why a user could not connect the adapter directly to a wall socket.
The second problem is solved with visual indicators, so that the user can immediately see that connecting the adapter in the wall socket provides a solid green LED, while connecting to a surge-protected power strip gives you a yellow or red LED.
Regarding surge-protection filters built directly into ac outlets, so far we have not seen many of them. It's possible that they have become more popular in recently built homes, but those are coincidentally the kind of homes where CAT5 wiring is also built in, so this probably is not the target market for powerline technology anyway.
COMPETITION, COMPATIBILITY, AND STANDARDIZATION
In contrast to HomePlug, UPA (along with the other standards and certification bodies) seems from my viewpoint to be heavily DS2-influenced. Pragmatically, are these true standards bodies, open to influence from numerous industry participants, or are they "standards bodies" in name only? And how do DS2 and the UPA plan to evolve and mature in the future?
I don't agree with that view. I don't like talking about my competition, but I'll answer your question. You give the example of HomePlug versus UPA. I think UPA is as influenced by DS2 as HomePlug is by Intellon or HD-PLC is by Panasonic. We all like to call ourselves "standard" and call everybody else "proprietary," but in practice, if you are a device manufacturer and you want to buy HomePlug AV silicon, right now you can only buy from one vendor (Intellon); if you want to buy HD-PLC silicon, you can only buy from Panasonic; and if you want to buy UPA silicon, you can only buy from DS2. As an industry, we now have the chance to solve this problem at IEEE P1901. This is the only opportunity we have to make this industry grow instead of recreating a Blu-ray versus HD DVD battle (but worse, with three competing specifications).
We still have a huge challenge in front of us. We need to work together as an industry to improve the quality of the existing proposal so that it can provide a technical solution to meet the needs of the BPL market. A large part of the consumer-electronics industry has been reluctant to integrate powerline technology in its products because of a lack of a single global standard. We now have an excellent opportunity to remove that obstacle by working together to create a single PHY/MAC specification that ensures complete interoperability between silicon vendors.
But we really need to make an effort to make it work, and so far the proposal on the table does not seem to achieve the most important goal: interoperability. The latest "2 PHY, 1 MAC" proposal discussed at IEEE P1901, if left unchanged, will create a situation where two products supposedly compliant with the IEEE P1901 specification may not be interoperable, because one of them is based on an OFDM PHY while the other is based on a Wavelet PHY.
Does the UPA specification provide room for proprietary (and backward-compatible) standards-based enhancements, as was the case with HomePlug 1.0 Turbo versus HomePlug 1.0? If so, how does the standards body plan to handle cases of companies that decide to implement such enhancements?
Yes, it does. I would even say that any standard that is well designed will always provide room for performance enhancements. DS2 recently announced the development of 400-Mbps powerline technology that is backward compatible with the existing UPA 200-Mbps products. So far, this is a technology exclusively developed by DS2, although I don't see any reason why UPA could not extend the specification to make 400 Mbps standard.
Your two primary competitors, as I view the marketplace, are the Intellon-championed HomePlug technology and the Panasonic-developed HD-PLC approach. Do you agree, or are there other powerline schemes that I've overlooked? How do you position yourself, as a technology and as a company creating products based on that technology, against your competitors? And how do you see the competitive market both today (worldwide and geography specific) and as it will evolve over time?
I think you made a pretty accurate description of the competitive landscape. There are other companies developing powerline technology, but they are either at a very early development stage or focused only on specific niche applications.
The main positioning difference between UPA and other organizations, such as HomePlug and/or HD-PLC, is that while the other organizations are mainly focused on developing specifications and products for the home-networking market (with BPL access as a second-category "afterthought" in the case of HomePlug or simply ignored in the case of HD-PLC), UPA has always been focused on developing the best possible technology that can satisfy the needs of both markets simultaneously.
So the summary is, unlike HomePlug and HD-PLC, which are focused on home networks only, UPA provides universal solutions for all markets.
Now, once the positioning of UPA is clear, what is the DS2 position in the framework of UPA? DS2 is a company that has consistently been the technology leader in powerline technology: We work hard to always be the first company to introduce the next performance level or the next key feature. We were the first company to introduce 45-Mbps products (at a time when the state-of-the-art in powerline was 14 Mbps), the first to introduce 200 Mbps (when, again, the competition was stuck at 14 Mbps), and we are now the first to introduce 400-Mbps technology. From the point of view of features, we were the first company to introduce elements that are now the standard reference in the industry: programmable notches, TDMA MAC, programmable QOS, single-chip repeaters, frequency-division repeaters, IP-based remote management, and more.
Currently, as I understand it (please confirm), multiple coincidentally operating "200-Mbps" powerline technologies will actually degrade each other—far from coexisting (or, ideally, interoperating). Could you go into more detail regarding the coexistence and interoperability work going on in the IEEE?
Let me clarify a key issue regarding the different coexistence proposals discussed at IEEE P1901. As of today, the most mature proposal for solving the coexistence issue has been authored by engineers of DS2, Panasonic, and several other members of UPA and CEPCA [Consumer Electronics Powerline Communication Alliance] like Mitsubishi Electric and SiConnect. This proposal is the result of almost two years of hard work between all these companies. It includes well-defined mechanisms for achieving coexistence between access and in-home systems, and between different in-home systems. The proposal includes well-defined common signals that can be understood by otherwise incompatible OFDM and Wavelet systems. It also includes mechanisms for coexistence with non-P1901 and legacy devices. It is a very good specification, and we are very proud of the work we have done along with the rest of the industry.
Currently, multiple incompatible "200-Mbps" products sit side-by-side on retailers' shelves, and some networking vendors even offer incompatible powerline technologies within the same product line. How much consumer confusion currently exists, and how are you working with your customers (the networking vendors, along with their customers, the retailers) to minimize it? Until either a standards body such as the IEEE mandates interoperability, or might-makes-right market pressures cull out technology alternatives, won't consumer frustration inevitably grow as the powerline-networking market grows? And won't this incompatibility frustration put an unfortunate cap on powerline networking's market-growth potential?
I completely agree with you on the analysis of the current fragmentation of the market. The powerline market is growing fast, but nowhere near as fast as it could if a single standard existed.
Our position here is clear: We need a single standard, with a single PHY and a single MAC, so that all products are interoperable. We are willing to do whatever is needed to achieve this, even if this means re-designing silicon and departing from our current PHY and/or MAC. IEEE P1901 represents an excellent opportunity to achieve this, but so far it looks like we as an industry may miss this opportunity again, at least based on the current proposals being discussed at P1901, which will allow non-interoperable products to be labeled as P1901-compliant. Users will buy those products only to find later that they don't interoperate.
The current situation in the market is like having three groups of people speaking in three different languages, say German, French, and Chinese.
Ideally, a reasonable solution would mean choosing one of the competing languages as the standard (say, Chinese), choosing a "neutral" fourth language as the standard (say, Greek), or creating a new "best-of-breed" language with the best elements of all of them (say, Esperanto). Any of those solutions would be acceptable, although the last one is probably the best for the industry.
The problem is that the scheme being proposed at IEEE P1901 consists of arbitrarily labeling German and French (but not Chinese) as "the standard." Obviously, this does not solve the problem, but it allows the Germans and French to get rid of the Chinese. Since presenting the idea in those terms would be laughable, it is instead being presented as "dual PHY, single MAC," which basically means that although German speakers won't talk to French speakers, both will have some common elements (words made of letters, spaces between words, and so on). The value for consumers is essentially zero, but the Germans and French can still put a P1901 logo on their boxes.
FUTURE DIRECTIONS
What, in your mind, are current shortcomings of UPA technology that the standards body (and/or proprietary UPA-based enhancements) plan to address in the future, and what are the timeframes for these enhancements?
I wouldn't use the word "shortcomings" here. In general, everybody wants products that work faster, work better, and cost less. This is something that can be said about any technology or industry. In general, we (not only UPA, but also HomePlug and HD-PLC) need to address the interoperability issue. This is the most important shortcoming, and a historic one. If we can now move forward and address the interoperability issue with a good solution at IEEE P1901, all players will be successful in a market that will be orders of magnitude larger than it is today.
Getting back to DS2's "400-Mbps" announcement, what kind of performance can users reasonably expect, what is the timeframe for the technology's high-volume production implementation, and will it be backward-compatible with today's UPA?
From the performance point of view, you can expect roughly twice the performance of current 200-Mbps systems. We are not providing product details yet, but if you take a look at our press release, we say: "DS2 400-Mbps technology will be available in next-generation products from DS2 on time to satisfy the demands for extra bandwidth in the digital home and last-mile applications that most analysts predict will happen from 2009 onwards." That's all the information we can provide now, but you'll probably see more details at CES in January.
The technology will be backward-compatible (in the sense of "fully interoperable") with existing 200-Mbps UPA-compliant products, thus offering an easy migration path to our current customers. In the past, other vendors broke backward-interoperability when they introduced new performance levels (that's the case with HomePlug AV products, which are not interoperable with HomePlug 1.0 or HomePlug Turbo products). We want to make sure we don't make that mistake in this case.
While working on a recent home-automation project (please see "Homeland security: monitoring and manipulating remote residences") I've discovered that current powerline home-control technologies, such as X10 and Insteon, have numerous shortcomings. The recently ratified HomePlug Command and Control 1.0 specification is therefore admittedly of great interest to me. Please describe any UPA work with respect to augmenting today's specifications with command-and-control capabilities, including anticipated product-availability timeframes.
UPA recently announced the creation of a working group with the purpose of addressing the needs of that market. The effort started in September 2007 and the working group plans to publish a specification in nine months.
Currently, powerline transceivers sit external to system power supplies, but AMD and Intel have both demonstrated systems containing powerline-networking-cognizant power supplies. When will integrated powerline networking be widely available for PC and other applications? And should EDN's readers anticipate cost savings and/or other benefits resulting from this integration?
Although the integration of powerline technology inside power supplies will bring some cost savings, I think the main benefit will come from simplifying the user experience: Users will just need to plug in a single power cord to get everything interconnected.
Right now, most PC vendors are waiting for the standards situation to settle before making significant investments in this kind of application. Hopefully, once IEEE P1901 finishes its work sometime in 2009, and if the end result is a single-PHY, single-MAC standard, PC manufacturers will start to demand this kind of integrated product.
WRAPUP
Thanks for your time. In closing, what topics have we not yet covered in the above questions that you'd like to briefly comment on?
I'd like to finish again with the issue of standards. This is really the single most significant issue our industry faces now. IEEE P1901 is the only chance we have to solve it, but we must solve it with real solutions, not shortsighted solutions that perpetuate the existence of non-interoperable products (such as Wavelet-based and OFDM-based devices) and keep manufacturers and consumers locked in to the same vendor forever.