Data Center Power Use, Storage, and Smart Grid

Lynne Kiesling

Data center power use has become substantial over the past decade: computing has proliferated and processors have grown denser, so smaller chips do ever more work and draw ever more electricity. Those processors also give off more waste heat, requiring more cooling, which requires still more electricity; indeed, data center cooling has become a hot (pun intended) topic for enterprises, IT professionals, and building managers and engineers. This eWeek article from August 2006 highlights some of the data center power use issues:

“The people who spec and build the data centers are not the ones who pay the electric bill,” said Neil Rasmussen, chief technology officer and co-founder of American Power Conversion, in West Kingston, R.I. “Many of them didn’t know what it was or even who paid it.”

As a result, data center managers are doubling as HVAC (heating, ventilating and air conditioning) experts as well as certified IT administrators.

In their efforts to “green” the data center, they are learning to unlearn a lot of data center architecture design that has been handed down over the years.

Any data center, but especially one crammed with servers stacked in compact chassis, is “a radical consumption of power, and the exhaust of power is heat; there is no way you can consume one without the other,” Oliver said.

But as the typical server unit has shrunk from a stand-alone pedestal the size of a filing cabinet to 2U (3.5-inch) stackables, 1U (1.75-inch) pizza boxes and even blades, both power and heat cause problems.

“The whole industry has gotten hotter and more power-hungry. Within the last five years, servers went from using around 30 watts per processor to now more like 135 watts per processor,” Oliver said. “You used to be able to put in up to six servers per rack; now it’s up to 42.”

Every kilowatt burned by those servers requires another 1 to 1.5 kW to cool and support them, according to Jon Koomey, a staff scientist at Berkeley National Laboratory, in Berkeley, Calif., and a consulting professor at Stanford University.
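
Koomey's rule of thumb above (an extra 1 to 1.5 kW of cooling and support per kW of server load) corresponds to what is now called a Power Usage Effectiveness, or PUE, of roughly 2.0 to 2.5. A minimal back-of-the-envelope sketch in Python; the function names and the 500 kW sample load are illustrative, not from the article:

```python
def total_facility_power(it_load_kw, overhead_ratio):
    """Total draw when every kW of IT load needs `overhead_ratio` kW
    of cooling and support (Koomey's 1.0-1.5 rule of thumb)."""
    return it_load_kw * (1 + overhead_ratio)

def pue(it_load_kw, overhead_ratio):
    """Power Usage Effectiveness: total facility power / IT power."""
    return total_facility_power(it_load_kw, overhead_ratio) / it_load_kw

# A hypothetical 500 kW server room:
low = total_facility_power(500, 1.0)   # 1000 kW total draw
high = total_facility_power(500, 1.5)  # 1250 kW total draw
```

The striking implication: at the high end of Koomey's range, more electricity goes to infrastructure than to the servers themselves.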

In early 2007 Congress authorized the EPA to evaluate data center power use and cooling, and a separate industry report found that data center power consumption doubled between 2000 and 2005. Yes, doubled.
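
For scale, doubling over five years implies a compound annual growth rate of roughly 15 percent; a quick sanity check in Python:

```python
# Power use doubled from 2000 to 2005, so the implied compound
# annual growth rate is 2^(1/5) - 1.
years = 5
annual_growth = 2 ** (1 / years) - 1  # roughly 0.149, i.e. ~15% per year
```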

Meanwhile, a data center power outage in July 2007 disrupted Internet activity at such popular sites as LiveJournal, Craigslist, Technorati, and TypePad. This very interesting analysis from O’Reilly discusses the continuous power system (CPS) flywheel backup system that the data center had in place:

The advantage of a CPS over a battery-based system is that the power going to the datacenter is decoupled from the utility power. This eliminates the complex electrical switching required from most battery-based systems, making many CPS systems simpler and sometimes more reliable.

In this incident, latent defects caused three generators to fail during start-up. No customers were affected until a fourth generator failed 30 seconds later, which overloaded the surviving backup system and caused power failures to 3 of 8 customer areas.

That failure was an interesting example of a cascading failure in the backup system (but cascading failures are a subject for another post!). The large power draw of data centers necessitates more sophisticated, and therefore more complex, backup systems, so reducing data center power use would take some strain off those redundant backup systems as well as reducing overall resource use.
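
The arithmetic of backup redundancy is simple but unforgiving: each generator that fails shifts its share of the load onto the survivors. A toy model of that dynamic (the generator count, sizes, and load here are hypothetical, not the actual facility's specifications):

```python
def surviving_capacity(n_generators, failed, kw_each):
    """Backup capacity remaining after `failed` generators drop out."""
    return (n_generators - failed) * kw_each

# Hypothetical plant: 8 generators of 2,000 kW each backing a 10,000 kW load.
load_kw = 10_000
after_three = surviving_capacity(8, 3, 2_000)  # 10,000 kW -- just enough
after_four = surviving_capacity(8, 4, 2_000)   # 8,000 kW -- now overloaded
```

In a configuration like this, the system tolerates the first three failures with no margin to spare, and the fourth failure tips the survivors into overload, which is the shape of the cascade described above.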

More recently, this SmartCool blog discusses IBM’s efforts to “green” their data centers, and this post makes a hugely important point:

The other aspect of the greening of datacenters is going to be green building techniques themselves. Intelligent management solutions like a SmartCool system or smart grid technology will go a long way to reducing the datacenter’s infrastructure electricity demands, which makes up a considerable portion of the usage. Some suggestions exist out there for building more robust hardware that can withstand higher temperatures, but aside from a concern over expenses that do not offset, there’s still going to be a need for air conditioning no matter how hardy the servers are; that kind of thing must be handled at the building infrastructure level.

I think data centers, and the enterprises that establish them, run them, and pay their power bills, should be, and are, in the vanguard of synthesizing hardware design and building design: designs that take electricity prices and use patterns into account, that get the most “bang for the buck” out of each kilowatt consumed, and that will push the development of smart grid capabilities at the customer level.

To build a little on my criticism last week of Duke’s Jim Rogers and his top-down approach to energy efficiency: customer response, action, and innovation on the data center power use issue illustrate how price signals and the dynamics of economic change give such customers incentives to invest in energy efficiency technologies in a decentralized, distributed way that, in aggregate, can contribute substantially to reducing overall energy and resource use. When the incentives are there, presented through the transparency of true costs, customers will act.

2 thoughts on “Data Center Power Use, Storage, and Smart Grid”

  1. This exemplifies a problem I’ve been thinking about lately: the “invisible” nature of power supply and demand, and how that invisibility has become a barrier to progress.
    The average person takes electricity for granted. In large part because of the regulatory compact, people have come to think of electricity as a fundamental right rather than a service or a convenience. The utility industry has done a great job of hiding from consumers the technical challenges involved in generating and delivering power reliably and cheaply. So if you’re designing a data center, you don’t really worry about having enough energy supply from the grid, or about the cost of that supply. Instead, you worry about cooling capacity and backup power.
    Now we’ve come to a crossroads where that model doesn’t work as well as it used to, and financing and regulatory structures discourage utilities from pursuing any different models. As the data-center example illustrates, the old model may not be sustainable. Moore’s law means power demands for IT systems effectively will double every few years. If utilities continue insulating consumers from the real costs of electricity, the system eventually will become unable to support spiraling growth in peak demand, or to address climate-change concerns, for that matter.
    The time has come for the utility industry to emerge from under its invisibility cloak: the regulatory compact and cost-of-service ratemaking. How it accomplishes this transformation in the coming years and decades will be one of the most interesting stories in the history of the U.S. economy.

  2. The problem is, like you said, that the people who buy IT gear don’t pay the power bill. Data centers will not “green up” on a large scale until companies begin to change that.

    (I sell enterprise hardware & software for a living)
