Five years ago this week, a large blackout in North America affected 50 million people and cost $6 billion (see the KP archive from August 2003 for posts from that time). The scale of the blackout led to a large forensic report and lots of follow-up, including recommendations designed to solidify and reinforce reliability regulation in the industry. These recommendations included making NERC’s reliability standards mandatory and legally enforceable, a recommendation that has since been implemented.
Not surprisingly, this 5-year blackout-a-versary is leading to retrospective analysis: have we made progress? Are such large blackouts less likely than they were then? Has infrastructure investment in the electric power network improved? In general, I think the answers to those questions are yes, no, and no.
The best retrospective article I’ve seen is in Scientific American. It summarizes the event, the post-event analysis, and the various activities that have occurred since then. It usefully points to time-series research from Carnegie Mellon on blackout incidence:
If the standards have reduced the number of blackouts, the evidence has yet to bear it out. A study of NERC blackout data by researchers at Carnegie Mellon University in Pittsburgh found that the frequency of blackouts affecting more than 50,000 people has held fairly constant at about 12 per year from 1984 to 2006. Co-author Paul Hines, now assistant professor of engineering at the University of Vermont in Burlington, says current statistics indicate that a 2003-level blackout will occur every 25 years.
He says many researchers believe that cascading blackouts may be inherent in the grid’s complexity, but he still sees room for improvement. “I think we can definitely make it less frequent than once every 25 years.”
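To put the quoted once-every-25-years estimate in perspective, a quick back-of-the-envelope calculation (my own illustration, not from the Carnegie Mellon study) treats 2003-scale blackouts as a Poisson process with a rate of 1/25 per year and asks how likely we are to see at least one over a given horizon:

```python
import math

RATE = 1.0 / 25.0  # assumed average 2003-scale blackouts per year

def prob_at_least_one(years: float, rate: float = RATE) -> float:
    """Probability of at least one such blackout within `years` years,
    assuming independent occurrences at a constant average rate."""
    return 1.0 - math.exp(-rate * years)

print(f"P(at least one in 10 years) = {prob_at_least_one(10):.2f}")  # 0.33
print(f"P(at least one in 25 years) = {prob_at_least_one(25):.2f}")  # 0.63
```

On those assumptions, even over the "expected" 25-year window a giant blackout is far from certain (about 63 percent), but the odds of one within a decade are already about one in three.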
The Scientific American article also talks about the development and installation of phasor measurement units, which provide distributed monitoring of voltage and current and use GPS to generate reliable time-stamped data on physical network conditions. It talks about these PMUs in the context of discussing how smart grid technologies are the “holy grail” of reducing the incidence and magnitude of future blackouts. That is certainly correct; both transmission-level PMUs and distribution-level technologies for distribution automation and sensing will reduce the incidence and magnitude of blackouts. Most blackouts occur in the local distribution network, not in the larger high-voltage transmission network.
While I think this Scientific American article is pretty good, it does focus disproportionately on the use of smart grid technology for distributed sensing and monitoring, particularly in the transmission network. Sure, that will generate benefits. But the real value, the real promise of smart grid technology with respect to reducing the incidence and magnitude of blackouts is how smart grid technologies work together to enable consumers to change their electricity use, particularly in response to price signals, when strain on the network is highest. Decentralized coordination of individual demand via price signals and automation technology promotes reliability, and it does so in a very granular way that disrupts the everyday activities of consumers only minimally (and gives them economic incentives to change their behavior, so they benefit from the disruption!).
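The decentralized coordination idea can be made concrete with a toy sketch (entirely hypothetical, not any vendor's actual product or API): a price-responsive thermostat that automatically relaxes its cooling setpoint as the real-time retail price rises, so the household's load falls precisely when the network is most strained, without anyone in a control room issuing commands.

```python
def cooling_setpoint(price_per_kwh: float,
                     base_setpoint: float = 72.0,
                     threshold: float = 0.15,
                     degrees_per_dollar: float = 20.0,
                     max_offset: float = 6.0) -> float:
    """Return a thermostat setpoint (deg F) given the current retail price.

    All parameter values are illustrative. Below the consumer-chosen
    price threshold, the preferred setpoint is used; above it, the
    setpoint rises with price, capped at max_offset degrees so comfort
    is never sacrificed beyond what the consumer agreed to.
    """
    if price_per_kwh <= threshold:
        return base_setpoint
    offset = min((price_per_kwh - threshold) * degrees_per_dollar, max_offset)
    return base_setpoint + offset

# Off-peak price: normal comfort. Peak price: automatic curtailment.
print(cooling_setpoint(0.10))  # 72.0
print(cooling_setpoint(0.45))  # 78.0
```

The point of the sketch is that each consumer sets the threshold and the cap themselves, so curtailment reflects their own willingness to pay rather than a centrally dictated rationing rule.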
Too much of the focus on electricity policy and network management is on centralized control. Smart grid technologies, when coupled with economically sensible regulatory policies that allow dynamic pricing and retail consumer choice, make decentralized coordination possible.
UPDATE: Keith Johnson has a similarly retrospective post at WSJ’s Environmental Capital.