Lynne Kiesling
Today in Technology Review, Jonathan Koomey has an interesting analysis of computational energy efficiency. We’re all familiar with Moore’s Law — Gordon Moore’s prediction that the number of transistors on a chip will double approximately every two years — but I did not realize that Moore’s Law is also borne out in improvements in the electrical efficiency of computation. Not only do we have more and more computational capacity per unit of chip area; each computation is also performed with less electricity. Koomey’s graphic showing this result over time is striking:
If this trend continues, Koomey claims, “the power needed to perform a task requiring a fixed number of computations will continue to fall by half every 1.5 years (or a factor of 100 every decade). As a result, even smaller and less power-intensive computing devices will proliferate, paving the way for new mobile computing and communications applications that vastly increase our ability to collect and use data in real time.”
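The decade figure follows directly from the halving rate: ten years contains about 6.7 halving periods of 1.5 years, and 2 raised to the 6.7 is roughly 100. A quick sanity check in Python, using only the rates Koomey states:

```python
# Check Koomey's arithmetic: if energy per computation halves every
# 1.5 years, how much does efficiency improve over a decade?
halving_period_years = 1.5
decade_years = 10

# Number of halvings in a decade, then the cumulative efficiency factor.
halvings = decade_years / halving_period_years   # ~6.67 halvings
efficiency_factor = 2 ** halvings                # ~101.6

print(f"Halvings per decade: {halvings:.2f}")
print(f"Efficiency gain per decade: {efficiency_factor:.1f}x")
```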
The ability to do more work with less effort is one of the most meaningful consequences of technological change, whether we’re talking about horse harnesses, water wheels, diesel engines, or digital sensors. One of the fascinating aspects of this improvement in computational electrical efficiency is that it makes feasible large numbers of distributed low-power sensors that get enough electricity to operate by harvesting “background energy flows”; Koomey’s example is small weather sensors that harvest stray energy from television and radio signals to send weather condition updates every five seconds. Imagine how a distributed network of such sensors could improve severe weather preparation, for example.
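To see why such a sensor is plausible, consider a back-of-envelope energy budget. The harvested-power and transmit-cost figures below are illustrative assumptions of mine, not numbers from Koomey’s article:

```python
# Back-of-envelope energy budget for an ambient-RF-harvesting weather sensor.
# All numbers here are illustrative assumptions, not figures from the article.
harvested_power_watts = 50e-6   # assume ~50 microwatts scavenged from RF
report_interval_seconds = 5     # one weather update every five seconds

# Energy accumulated between reports.
energy_per_interval_joules = harvested_power_watts * report_interval_seconds

# Assume a sense-and-transmit cycle costs on the order of 100 microjoules;
# the sensor is viable if the harvested budget covers that cost.
cycle_cost_joules = 100e-6
print(f"Budget per interval: {energy_per_interval_joules * 1e6:.0f} uJ")
print(f"Cycle cost:          {cycle_cost_joules * 1e6:.0f} uJ")
print("Feasible:", energy_per_interval_joules >= cycle_cost_joules)
```

As computational efficiency keeps improving on Koomey’s trend, that per-cycle cost keeps falling, which widens the range of ambient energy sources that can power a sensor.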
In the rest of this very interesting article, Koomey discusses the research and design efforts going into achieving such energy efficiency in data transmission and taking a system-level perspective on the electricity use of an entire network of devices. He also claims, and I think he’s right, that without such energy efficiency the “Internet of things” cannot become a reality.
The “Internet of things” framing of the Internet envisions interconnected networks of devices able to communicate their states, generate more granular information, and/or trigger tasks autonomously, without human intervention. For example, right now the water filter in my refrigerator needs to be replaced, which means I go down to the basement to see if I have one (which I do), and if using it reduces my filter inventory to one, I get online and order three more. It would economize on the scarcest resource in this supply chain — my time — if the filters carried RFID tags and the refrigerator had an algorithm that would implement the inventory query and ordering process for me. I still have to install the new filter, but if that installation triggered an automated query and order, I’d come home from work in a few days to find a box of three water filters, with little effort on my part. That’s an example of the potential of the Internet of things; I’m sure you can come up with more examples that you would find valuable in your own work or personal lives, and I know you can see where this IoT framework intersects with consumer-focused smart grid networks.
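For the curious, here is a minimal sketch of what that refrigerator logic might look like. Everything in it is hypothetical; real appliances and retailers would expose their own RFID-reading and ordering interfaces:

```python
# Minimal sketch of the refrigerator's filter-inventory logic described above.
# All names and interfaces here are hypothetical placeholders.

REORDER_THRESHOLD = 1   # reorder once installing a filter leaves one spare
REORDER_QUANTITY = 3    # order three replacements at a time

def read_rfid_inventory(location: str) -> int:
    """Count RFID-tagged filters at a storage location (simulated here)."""
    return 1  # pretend the basement shelf holds one spare filter

def place_order(item: str, quantity: int) -> None:
    """Submit a replacement order (simulated; a real call would be secured)."""
    print(f"Ordering {quantity} x {item}")

def on_filter_installed() -> None:
    """Runs when a new filter is installed in the refrigerator."""
    spares = read_rfid_inventory("basement")
    if spares <= REORDER_THRESHOLD:
        place_order("water filter", REORDER_QUANTITY)

on_filter_installed()  # prints: Ordering 3 x water filter
```

The point of the sketch is how little logic is needed once the devices can sense and communicate: the scarce input being saved is my attention, not computation.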
Of course, details matter, such as getting the interoperability rules and security right so that only the refrigerator can query the filter inventory in the house (no infiltrators, including the government), and so that the refrigerator’s connection to order replacements is secure. The same applies to electricity devices in the home and the digital meter, which is why one of the important phases in the process of smart grid development is establishing laws that protect consumer privacy and property rights in data. Innovation in both computational power and computational energy efficiency has created the potential to generate more value while economizing on the scarce resources of human time and attention.
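On the security point, one standard building block is a message authentication code over each order request, so the retailer can verify that a message really came from the refrigerator and wasn’t tampered with in transit. A minimal sketch using Python’s standard hmac module, with a hypothetical shared key provisioned when the device is registered (a real design would also need key management, replay protection, and an encrypted channel):

```python
import hmac
import hashlib

# Hypothetical shared secret, provisioned at device registration.
SHARED_SECRET = b"provisioned-at-device-registration"

def sign_request(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag so the retailer can verify the sender."""
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify_request(message: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches, rejecting tampered orders."""
    expected = sign_request(message)
    return hmac.compare_digest(expected, tag)

order = b"item=water-filter&qty=3&device=fridge-01"
tag = sign_request(order)
assert verify_request(order, tag)                 # authentic order accepted
assert not verify_request(order + b"0", tag)      # tampered order rejected
```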
UPDATE: And check this out: carbon nanotubes that can dump heat into a separate device, apart from the electrical current they carry, which should contribute to continued gains in computational energy efficiency.