Knowledge Problem

Just How “Wasteful” Are Data Centers?

Lynne Kiesling

You may have seen the article in Sunday’s New York Times on how “wasteful” data centers are: they use large amounts of electricity to provide the redundancy required to deliver the reliability and uptime that consumers expect from their Internet activities. I put the word “waste” in quotes because I think Don Boudreaux has a point in his letter to the editor responding to the article: where’s the line between “waste” and “use”? The NYT article presents data center power use as wasteful, implying that the author thinks either that data center operators should figure out ways to deliver the same reliability with less electricity, or that we consumers should change our preferences so we don’t place as much value on reliability. I’ll argue later that data center operators have high-powered incentives to do the former; as for the latter, I invite the NYT author to imagine how he thinks NYT readers would respond to a slowdown or lack of server availability that made it hard for them to access NYT articles.
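To make the redundancy-versus-power tradeoff concrete, here is a small back-of-the-envelope sketch in Python. The availability and power figures are hypothetical, chosen only to illustrate the arithmetic, not drawn from the article:

```python
# Rough sketch of the redundancy/reliability tradeoff described above.
# All figures are hypothetical, chosen only to illustrate the arithmetic.

def availability(unit_availability: float, redundant_units: int) -> float:
    """Probability that at least one of `redundant_units` independent units is up."""
    return 1 - (1 - unit_availability) ** redundant_units

SERVER_AVAILABILITY = 0.99   # a single server that is up 99% of the time
SERVER_POWER_KW = 0.5        # hypothetical average draw per server, in kilowatts
HOURS_PER_YEAR = 8760

for n in (1, 2, 3):
    a = availability(SERVER_AVAILABILITY, n)
    downtime_hours = (1 - a) * HOURS_PER_YEAR   # expected hours of downtime per year
    power_kw = n * SERVER_POWER_KW              # power cost of carrying n copies
    print(f"{n} unit(s): availability {a:.6f}, "
          f"~{downtime_hours:.2f} h/yr of downtime, {power_kw:.1f} kW")
```

In this toy model, each additional redundant unit buys roughly another “nine” of availability, but it draws power whether or not it is ever needed; that standby draw is precisely what the NYT piece calls “waste” and what Boudreaux would call “use.”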

Of course the undercurrent here is the argument that the price of our Internet activity does not include the environmental cost associated with power use, and consequently we should use public policy to impose a price on data centers, or on Internet use, to reflect that cost. The article isn’t explicit about carbon policy, but that’s the implication.

My initial reaction to the article was that it was biased and somewhat inaccurate, and that it overlooked a wide array of innovations that chip manufacturers, data center operators, and architects have created over the past few years to reduce power use per calculation as well as overall power use. Fortunately, Katie Fehrenbacher (who is more knowledgeable than I in these matters) had a similar reaction, and wrote up her assessment:

As my headline suggests they sound like the author, who spent over a year reporting out the series, jumped into a time machine and did his reporting a couple years ago. One of the reasons is that both articles so far start with anecdotes from 2006 about Microsoft and Facebook. The data centers that Facebook recently built in Forest City, North Carolina and Prineville, Oregon, are industry pioneers in terms of energy efficiency and openness. Microsoft, too, has more recently pledged to get rid of its diesel generators for its facilities, and has been using less air conditioning in its new data centers.

The data center operators at the largest Internet companies like Google, Facebook, Apple, Microsoft, Yahoo and eBay are so focused on energy efficiency of their newest data centers that new designs are starting to be widely adopted with technologies like open air cooling (getting rid of the huge air conditioning units in these things). New types of low power chips and servers are being developed by startups like SeaMicro, which was recently bought by AMD. The articles so far don’t mention these innovations.

She does, though, think that there’s value in the NYT series because it will shine some light on data center operators who aren’t thinking about energy efficiency and power use. She wrote a 4-part series on data center power use, which I recommend and to which she links in her article.

From a policy perspective, is there an “externality” here to be addressed? Data centers are expensive and take up a lot of space, and if you are incurring the cost of one, electricity is your top expense item. Firms therefore have strong incentives to minimize those costs while still delivering the services and the degree of reliability they have promised their customers. That’s a high-powered incentive to pursue energy efficiency innovations even without a policy intervention, and that incentive has been inducing those innovations over the past five years, as Fehrenbacher notes in her article and her data center series. Companies like Google, Amazon, Apple, Microsoft, and Facebook have been driving those innovations; because they are in aggregate the largest data center operators, they are moving the majority of data center traffic in a more energy-efficient direction. As is typical with innovation and new technology adoption, others will follow as the innovations are refined and made easier to implement.
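To put a rough number on that incentive, here is a hypothetical back-of-the-envelope calculation in Python. The facility size, electricity price, and PUE values (Power Usage Effectiveness, the Green Grid’s ratio of total facility power to IT power) are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope look at the operator's incentive: what a modest
# efficiency gain is worth. Facility size, electricity price, and PUE values
# are hypothetical, for illustration only.

FACILITY_IT_LOAD_MW = 10     # power drawn by the IT equipment itself
PRICE_PER_KWH = 0.07         # assumed industrial electricity rate, $/kWh
HOURS_PER_YEAR = 8760

def annual_electricity_cost(it_load_mw: float, pue: float) -> float:
    """Annual bill given IT load and PUE (total facility power / IT power)."""
    total_kw = it_load_mw * 1000 * pue
    return total_kw * HOURS_PER_YEAR * PRICE_PER_KWH

legacy = annual_electricity_cost(FACILITY_IT_LOAD_MW, pue=2.0)   # older design
modern = annual_electricity_cost(FACILITY_IT_LOAD_MW, pue=1.2)   # efficient new build
print(f"Legacy design:    ${legacy:,.0f} per year")
print(f"Efficient design: ${modern:,.0f} per year")
print(f"Savings:          ${legacy - modern:,.0f} per year")
```

Under these assumed numbers the efficient design saves several million dollars a year. Savings on that order are visible to any operator without a carbon price being imposed; that is what a high-powered incentive looks like.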

Another important innovation that has implications for energy efficiency, but has the Bastiat-esque problem of being unseen, is the dramatic move toward server virtualization in data centers. With server virtualization, data center operators can run several virtual servers on one physical server. This increases the computing and storage capacity of the data center without increasing its physical assets, and without an appreciable change in power use: more computing per watt of power consumed. In the absence of virtualization, achieving that same capacity would have required a dramatic increase in physical servers, and in the electricity to power them. Neither the NYT article nor Fehrenbacher’s series addresses the role that virtualization has played in enabling capacity optimization and high reliability at lower power use levels. Here’s a concise Green Grid white paper on the subject.
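For readers who want a feel for the consolidation arithmetic behind virtualization, here is a stylized Python sketch. The utilization and power-draw numbers are hypothetical, since real figures vary widely by workload and hardware:

```python
import math

# Stylized illustration of the consolidation arithmetic behind server
# virtualization. Utilization and power figures are hypothetical; real numbers
# vary by workload and hardware.

PHYSICAL_SERVERS = 100      # dedicated boxes, one application each
AVG_UTILIZATION = 0.10      # lightly used: 10% average utilization
IDLE_POWER_W = 200          # servers draw substantial power even when idle
PEAK_POWER_W = 350
TARGET_UTILIZATION = 0.70   # what a virtualized host can comfortably sustain

def server_power_watts(utilization: float) -> float:
    """Simple linear power model between idle and peak draw."""
    return IDLE_POWER_W + utilization * (PEAK_POWER_W - IDLE_POWER_W)

# Before: 100 dedicated servers, each mostly idle.
before_kw = PHYSICAL_SERVERS * server_power_watts(AVG_UTILIZATION) / 1000

# After: the same aggregate workload packed onto virtualized hosts.
total_work = PHYSICAL_SERVERS * AVG_UTILIZATION          # in "busy server" units
hosts = math.ceil(total_work / TARGET_UTILIZATION)       # 15 hosts in this example
after_kw = hosts * server_power_watts(TARGET_UTILIZATION) / 1000

print(f"Before: {PHYSICAL_SERVERS} servers drawing {before_kw:.1f} kW")
print(f"After:  {hosts} hosts drawing {after_kw:.1f} kW")
```

In this toy example the same aggregate workload runs on 15 hosts instead of 100 dedicated servers and draws roughly a fifth of the power, which is exactly the kind of unseen saving described above.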

Yes, there is some energy wasted in data center operations, just as there is in every single way that we use energy — we won’t be repealing the laws of thermodynamics any time soon. But data center operators have economic incentives to pursue energy efficiency, and a wide array of inventors, architects, and other entrepreneurs see opportunities in those incentives. We are seeing this process play out before our eyes.