Lynne Kiesling
For the past seven years or so, the phrase “resource adequacy” has received increasing attention in electricity policy. The basic idea is this: before the Energy Policy Act of 1992, vertically integrated utilities met their regulatory “obligation to serve” mandate through integrated resource planning (IRP). Customers paid fixed, average, regulated retail rates, and utilities ensured reliability entirely by focusing on supply-side adequacy, particularly generation adequacy. But technological change eroded the economies of scale that generation had historically enjoyed, and the incentive to over-invest in order to get more assets into the utility’s rate base (the Averch-Johnson effect) drove up the cost of providing such a high level of reliability.
Thus the development of independent generation and the sale of power through wholesale markets over the past decade have raised this question: if utilities are not doing IRP any more, how do we ensure future reliability?
There are a lot of dysfunctional dimensions to this question, and the question has led to similarly dysfunctional policy responses. In most vertical supply chains, when vertical integration is no longer the cheapest organizational approach and it becomes efficient to transact through markets instead (thank you, Mr. Coase), we use contracts to stipulate terms, and that is the foundation of the reliability of supply chain delivery. In this sense, electricity and future commitments to deliver it are no different from any other service. Sometimes people invoke the long construction time for new generation or transmission assets as a problem, but is that more of a problem in electricity than in other service industries? I don’t think so, particularly when you consider that transactions are two-sided: there is a demand side here, and developing demand-side capability is unlikely to take as much time as building new generation or transmission.
I also bristle when I hear the “ensure reliability” language. What is the cost of such a high guarantee? Would some of us be willing to accept lower reliability and pay less for the service? Probably. Right now that’s still a technological hurdle, but we’re getting closer to being able to sell reliability as a differentiated product to customers with heterogeneous preferences.
But this discussion of contracts, demand, and product differentiation is very different from the policy discussion of resource adequacy over the past five or so years, in which the discussion and action have all been on the supply side, and have involved construction of elaborate capacity markets as a substitute for forward contracts in financial markets. In electricity, policy has not applied the lessons and tools of other industries, the most relevant of which is that integrated spot and forward markets provide the most robust and fluid way to send the investment signals that lead to network reliability. The focus on building generation also retains the narrow physical-asset definition of the “electric power network”; it does not acknowledge that the network is actually composed of assets and humans, and that its performance is a function of the interaction of physical assets and human actions. Is it any surprise that when you ignore humans you end up with policy focused on building more assets?
This is helping to clarify my thinking, but one thing really puzzles me about electricity. Look at all of the other network industries that have been liberalized over the past two-plus decades: airlines, railroads, trucking, natural gas pipelines, telecommunications. None of these industries has had the hand-wringing, the anxiety, and the dirigiste policy commanding a particular approach to resource adequacy. Yet in each of these industries, investment occurs and resources are generally deemed adequate (in some instances, like telecom in the 1990s, more than adequate!). Why are our policymakers so terrified, so risk averse, about applying the lessons of these other industries in electricity? The need for real-time balancing in electricity does not imply that the investment dynamics we see in other industries will not apply here. The fear and the inertia are massively costly, but much of that cost is Bastiat’s unseen cost. How can we make that cost seen? Will that be enough to move policy?
In most states, electricity regulators are statutorily charged with ensuring that ratepayers receive reliable, adequate electricity at the most reasonable cost — not with ensuring that market forces are allowed to work. In the long term, a deregulated electricity market might be able to provide reliable, reasonably priced electricity, but if there are any “bumps in the road” as market forces balance, those regulators will be bumped out of their jobs.
I am way behind on my reading of this blog, and on this subject more generally. I may have several posts related to it over the coming week or so. But first, let me quibble with the oversimplifications of the first paragraph.
I should say that I have never believed in Averch-Johnson. As a regulated system planner from the 70s into the 90s, I can say that building rate base was never a planning criterion or a serious concern for planners. Everything I did and saw was guided by the planners’ objective function: minimizing the present value of revenue requirements over the long term. If this methodology was biased toward capital investment, it was because of its long-term analytics, not because of a preference for investing money so we could earn a return on it. It still hits me as an insulting suggestion. I was doing the work and supporting the decisions of a large utility. What do I know?
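For readers who don’t speak the jargon, that objective function can be written as a simple sketch (the notation here is mine, not anything the commenter specifies): minimize the present value of revenue requirements,

$$\min_{\text{plan}} \; \mathrm{PVRR} = \sum_{t=0}^{T} \frac{RR_t}{(1+d)^{t}},$$

where $RR_t$ is the revenue requirement in year $t$ (capital carrying charges plus fuel and O&M) and $d$ is the discount rate; candidate expansion plans that meet the reliability target are ranked on this one number.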
Next I will offer that IRP was a buzzword that had barely come into full stride by 1992. It was a term of the late 80s. IRP was marked by the explicit consideration of demand-side along with supply-side alternatives. Nevertheless, IRP was driven by the same overall planning objective function, even if the demand-side alternatives altered it a bit.
The high cost of reliability is here confused with the high cost of nuclear plants, especially as many were caught mid-stream by the TMI accident in 1979. It was the nuclear plants and the latter-day mammoth coal plants that had the high cost, not reliability. Reliability was used as an excuse (and you fell for that?). Reliability has never been that expensive, but reliability never called for coal plants and nuclear plants. We put those in because the combined total cost of our systems over the long term was lower with a large slug of them at the base. Hell, diesels can provide reliability for those rare times that they might be needed! I’m sure that every over/under study that was ever done has shown that total cost rises slowly as you increase your target reliability above the “optimal” point, but outage costs dominate very quickly as you decrease it below that point.
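A stylized way to see that asymmetry (my own toy formulation, not any particular utility study) is to write total cost at a target reserve margin $R$ as

$$TC(R) = K(R) + V \cdot U(R),$$

where $K(R)$ is the roughly linear cost of carrying capacity, $U(R)$ is expected unserved energy, and $V$ is the value of lost load. Because $U(R)$ falls off very steeply as $R$ rises, $TC$ is nearly flat to the right of its minimum and climbs sharply to the left of it: exactly the slow-above, fast-below shape described above.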
So, how did the late 70s and 80s happen? Look at the growth rates beginning in the 50s. The US electricity system was doubling every 7 years, and power plants were taking longer and longer to build. In 1974, more capacity was added to the system than ever before in history. But system planners in 1974 were very concerned about having to double the entire system again over the next 7 years. So in 1974 there was a doubling of the system… under construction! But what had appeared to be a Red Queen scenario was wrong. Demand growth dropped suddenly and for a long time. Those plants that were under construction were not needed for many years.
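For scale, the seven-year doubling time mentioned above corresponds to roughly 10 percent annual growth:

$$(1+g)^{7} = 2 \;\Rightarrow\; g = 2^{1/7} - 1 \approx 0.104,$$

so planners were projecting compounding on the order of 10 percent a year even as plants were taking longer and longer to build.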
What happened next was not driven by Averch-Johnson. It was much more basic. Utilities were faced with completing their plants or forever losing their sunk investments. In regulated utilities, sunk cost is not sunk until it’s in rate base. It’s at risk until it’s in rate base. So, with all of those investment dollars hanging in the balance, the economic equation changes. It wasn’t “I want to build a plant so I can earn a return on the investment.” It was much more like “If I don’t complete this plant and get it into rate base, we’ll lose millions and I’ll lose my job.” So, those committed plants were declared inviolate, and they were built and crammed into rate bases. It was never for reliability. It was for providing energy for a future that didn’t happen.
It was this overhang from the 70s, combined with the regulatory response, that caused the regulatory compact to fracture. The regulatory response was our own doing. We convinced everybody that capacity cost billions and that it took 12 years to build. That was just so we could finish spending our billions over long periods of time, get them into rate base, and justify the ample reserve levels that resulted. It came back to bite us in some states, such as CA and NY, where avoided-cost PURPA prices were administratively set at nuclear-level prices. These are among the reasons that those two states were and are among the highest-cost states, and why they were the states where the regulatory compact broke down first and worst.
I AM D.O.U.G. The Dumb Old Utility Guy!