Could the Global Network Be a Thousand Times More Efficient?

By Ron Wilson, Executive editor — EDN, 1/12/2010

A research project that apparently began as a speculation among Bell Labs engineers in Ireland and the UK has grown into a global consortium determined to reduce energy consumption in the global information network. The Green Touch initiative, announced Monday, includes service providers AT&T, China Mobile, Portugal Telecom, Swisscom, and Telefonica; academic research labs the Massachusetts Institute of Technology (MIT) Research Laboratory of Electronics (RLE), the Stanford University Wireless Systems Lab (WSL), and the University of Melbourne Institute for a Broadband-Enabled Society (IBES); government and nonprofit research institutions the CEA-LETI applied research institute for microelectronics (Grenoble, France), IMEC (headquartered in Leuven, Belgium), and the French National Institute for Research in Computer Science and Control (INRIA); and industrial labs Bell Labs, the Samsung Advanced Institute of Technology (SAIT), and Freescale Semiconductor.

These disparate organizations have joined forces to find three orders of magnitude of energy savings in global communications, even as demand for bandwidth continues to grow exponentially.

“In the next five years we hope to identify the enabling technologies for three orders of magnitude improvement in efficiency,” explained Sam Samuel, Bell Labs executive director for Ireland and the UK. “The actual equipment to implement those savings will come later, after the technologies are identified.”

Samuel said the project began within the Labs as a theoretical question: Just how much more efficient could the global network be? The researchers started with an idealized model: a network in which each endpoint was connected to every other by an ideal link, and each endpoint contained an ideal, massive switch to deal with all those connections. In order to minimize the number of switching events, there was no hierarchy: just a single, flat, fully populated mesh. In this case, Samuel said, physical limits on transmission efficiency and switching energy would allow the network to be 10^12 times more efficient than today’s technology.
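
A quick back-of-envelope calculation shows why that idealized model was never a candidate design. The sketch below (in Python; the endpoint counts are illustrative assumptions, not figures from the Bell Labs study) counts the dedicated links a fully populated mesh would require, N(N-1)/2 for N endpoints:

    # Back-of-envelope: links in a flat, fully populated mesh, where
    # every endpoint gets a dedicated link to every other endpoint.
    # Endpoint counts below are illustrative assumptions only.
    def mesh_links(n: int) -> int:
        """Point-to-point links in a full mesh of n endpoints: n(n-1)/2."""
        return n * (n - 1) // 2

    for n in (1_000, 1_000_000, 1_000_000_000):
        print(f"{n:>13,} endpoints -> {mesh_links(n):,} links")
    # A million endpoints already needs ~5 x 10^11 links, which is why
    # the model sets a theoretical bound rather than a buildable design.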

That study proved that the question was worth pursuing. But it was quantitatively almost meaningless, since nothing approaching the model could actually be built. So the researchers moved their modeling closer to reality. The next step was to isolate the core technologies in today’s network and explore what savings might be physically possible for them.

The researchers found that, without breakthroughs in fundamental physics, wireless transmission could be 10^3 times more efficient, and fixed fiber transmission 10^8 times more efficient, than they are now. These assessments assumed today’s physics but presumed massive deployment of as-yet-unidentified energy-saving technology. Samuel cited such ideas as near-threshold power-supply voltages, adiabatic circuit design, drastically smaller cell sizes for wireless networks, and holographic coding for optical transmission as the sorts of technologies that might be needed. But he said the program has not yet begun to identify actual target technologies.

Next, the researchers looked at projections. Everywhere they looked, Samuel said, data rates appeared to be growing at annual rates of 30 to 50%. Wireless data rates were the exception, growing much faster. “It appears from the growth rates as if wireless networking is in its infancy and just experiencing its first growth spurt,” Samuel said. “Our model suggests that by somewhere between 2011 and 2013, wireless networks will dominate energy consumption.” A shift to smaller cell sizes, in effect moving traffic from the air into fiber, could delay that crossover.
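
The crossover logic is simple compound growth. As a rough sketch (the starting energy figures and growth rates below are assumptions chosen for illustration, not Green Touch data), wireless energy consumption overtakes the rest of the network within a few years once its growth rate is high enough:

    # Illustrative crossover: when does faster-growing wireless energy
    # consumption overtake the fixed network's? All starting values and
    # growth rates here are assumptions, not consortium figures.
    fixed_energy, wireless_energy = 100.0, 30.0  # arbitrary units, 2009
    fixed_growth, wireless_growth = 1.40, 2.00   # 40%/yr vs. 100%/yr

    year = 2009
    while wireless_energy <= fixed_energy:
        year += 1
        fixed_energy *= fixed_growth
        wireless_energy *= wireless_growth
    print(f"Wireless dominates from {year}")
    # Prints 2013 with these inputs, inside the 2011-2013 window Samuel
    # described; gentler wireless growth pushes the crossover later.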

Such studies, correlated and combined with models of network growth, led to the conclusion that an aggregate thousand-fold improvement in network efficiency is feasible. But the effort will require new technologies in computing, switching, and transmission, and the goal will demand new ways of thinking about how the network is structured. For example, computing is energetically expensive, so you want to minimize it whenever possible. That may mean thinking very differently about how to employ the computing cloud in tandem with local computing resources. It suggests embedding provisions for quality of service and security deeply in the protocol, rather than trying to implement those functions by applying intensive computing to a packet stream that was never intended to enable such things. And the goal could mean emphasizing optical transmission, which can be made nearly lossless, over wireless transmission, which cannot.
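
Why a thousand-fold aggregate, rather than the 10^8 that fiber alone might allow? The arithmetic resembles Amdahl’s law: total savings are capped by the segments that dominate energy use. The sketch below illustrates the point; the energy shares and the switching/compute ceiling are assumptions, while the 10^3 and 10^8 ceilings echo the limits Samuel described:

    # Aggregate efficiency bound when different network segments have
    # different improvement ceilings. Energy shares are assumed for
    # illustration; the wireless and fiber ceilings follow the article.
    segments = {
        "wireless access":   (0.60, 1e3),  # (assumed share, max factor)
        "fixed fiber":       (0.15, 1e8),
        "switching/compute": (0.25, 1e4),  # ceiling assumed
    }

    residual = sum(share / factor for share, factor in segments.values())
    print(f"Aggregate bound: ~{1.0 / residual:,.0f}x")
    # Prints ~1,600x: on the order of a thousand-fold, even though
    # fiber alone could in principle improve by 10^8.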

Yet Samuel cautioned that the enabling technologies the consortium identified had to be reachable by an evolution of today’s physical network. “No one is going to throw out their hardware and software,” he observed. “And if you come up with an idea that no one adopts, you haven’t solved the problem.”

So why is the beginning of 2010 the right time for a global 14-organization consortium? Because there is no time for inaction, Samuel says. “As an engineer, how often is it that I work on a problem that not only helps my company, but could change the lives of my children? I can’t walk away from this sort of problem.”