Elementary error misleads APPA on electricity pricing in states with retail electric choice

The American Public Power Association (APPA) recently published an analysis of retail power prices, but it makes an elementary mistake and gets the conclusion wrong.

The APPA analysis, “2014 Retail Electric Rates in Deregulated and Regulated States,” uses U.S. Energy Information Administration data to compare retail electric prices in “deregulated” and “regulated” states. The report itself presents its analysis without much in the way of evaluation, but the APPA blog post accompanying its release was clear on the message:

after nearly two decades of retail and wholesale electric market restructuring, the promise of reduced rates has failed to materialize. In fact, customers in states with retail choice programs located within RTO-operated markets are now paying more for their electricity.

In 1997, the retail electric rate in deregulated states — the ones offering retail choice and located within an RTO — was 2.8 cents per kilowatt-hour (kWh) higher than rates in the regulated states with no retail choice. The gap has increased over the last two decades. In 2014, customers in deregulated states paid, on average, 3.3 cents per kWh more than customers in regulated states.

But the APPA neglects the effects of inflation over the 17-year period of analysis. It is an elementary mistake. Merely adjusting for inflation from 1997 to 2014 reverses the conclusion.

The elementary mistake is easily corrected: inflation data can be found at the St. Louis Fed site. Expressed in 2014 dollars, average prices per kWh in the states the APPA classifies as regulated were 8.4 cents in 1997 and 9.4 cents in 2014. In the states the APPA classifies as deregulated, average prices per kWh were 12.5 cents in 1997 and 12.7 cents in 2014.

Prices were up for both groups after adjusting for inflation, but prices increased more in the regulated states (1 cent per kWh, up about 11.3 percent) than in the deregulated states (0.2 cents, up about 1.4 percent). The inflation-adjusted “gap” fell from nearly 4.1 cents in 1997 to 3.3 cents in 2014.
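The adjustment itself is simple enough to sketch. Here is a minimal Python example; the CPI-U annual averages are approximate, and the nominal 1997 prices are back-derived from the report’s figures (hypothetical inputs of mine, not values taken from the APPA data file):

```python
# Sketch of the inflation adjustment. CPI-U annual averages are approximate,
# and the nominal 1997 prices are back-derived from the report's figures
# (hypothetical inputs, not taken directly from the APPA data file).
CPI_1997 = 160.5
CPI_2014 = 236.7

def to_2014_cents(nominal_1997_price):
    """Convert a 1997 price (cents/kWh) into 2014 dollars."""
    return nominal_1997_price * CPI_2014 / CPI_1997

nominal_1997 = {"regulated": 5.7, "deregulated": 8.5}   # cents/kWh, nominal
actual_2014 = {"regulated": 9.4, "deregulated": 12.7}   # cents/kWh, 2014 dollars

for group in nominal_1997:
    real_1997 = to_2014_cents(nominal_1997[group])
    change = actual_2014[group] - real_1997
    print(f"{group}: {real_1997:.1f} cents (1997, real) -> "
          f"{actual_2014[group]:.1f} cents (2014), change {change:+.1f}")
```

Run this and the regulated-state price rises by about 1 cent in real terms while the deregulated-state price rises by about 0.2 cents, reversing the nominal-dollar story; small differences from the figures above are rounding.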


Surprisingly, the APPA knows that an inflation adjustment would change its answer. The report itself ignores the issue completely, but the APPA blog said:

For example, a recent analysis by the Compete Coalition finds that, after accounting for inflation, rates in restructured states decreased by 1.3 percent and increased by 9.8 percent in regulated states since 1997. The data in the APPA study, which does not account for inflation, show that rates in the deregulated states grew by 48 percent compared to a 62 percent increase for the regulated states.

However, a percentage-based comparison obscures the important fact that the 1997 rates in deregulated states were much greater than those in regulated states.

The Compete Coalition report is not linked in the APPA post, but the data points mentioned are here: “Consumers Continue To Fare Better With Competitive Markets, Both at Retail and Wholesale.”

The remaining differences between my inflation-adjusted APPA values and those of the Compete Coalition likely arise because Texas is in the Compete Coalition’s restructured states category but not in the APPA’s deregulated states category. That seems an odd omission, given that most power in Texas is sold in a quite competitive retail power market. The APPA does not say why Texas is excluded from its deregulated category.

According to EIA data [XLS], average power prices in Texas were 9 cents per kWh in 1997 but had fallen to 8.7 cents by 2013. Both numbers are adjusted for inflation using CPI-U values from the St. Louis Fed website and reported in 2014 dollars. The 2013 numbers were the latest shown in the EIA dataset.

How can the market price of oil fall so far so fast?

If the oil market is reasonably efficient, then the price of a barrel of oil should reflect something like the cost of production of the highest-cost barrel of oil needed to just satisfy demand. In other words, the market price of oil should reflect the marginal cost of production.

The price of oil on the world market was about $110 per barrel in June 2014 and now sits just under $50 per barrel. Can it be possible that the marginal cost of producing oil was $110 per barrel in June 2014 and is only $50 per barrel in January 2015?


Here is how: in the first half of June 2014 oil consumption was very high relative to the then-existing world oil production capability. In addition, existing oil production capability is always declining as producing fields deplete. The marginal cost of a barrel of oil under such tight market conditions has to cover the capital cost of developing new resources as well as the operating costs.

Toward the end of 2014, additions to world oil production capability exceeded growth in consumption, meaning additions to production capability were no longer necessary, and so the marginal cost of producing the last barrel of oil no longer needed to cover that capital cost. Sure, some oil company somewhere had to make the capital investment necessary to develop the resource, but most of those costs are sunk, and competition in the market means producers cannot force consumers to cover them. The market price under today’s looser market conditions only needs to cover the operating costs of production.

Given the large sunk-cost component of investment in developing oil production capability, it is quite possible that the oil market was efficient at $110 per barrel and remains efficient today with prices under $50 per barrel.
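The logic can be illustrated with a stylized sketch. The dollar figures below are hypothetical round numbers chosen only to echo the $110 and sub-$50 prices in the text, not estimates of actual industry costs:

```python
# Stylized competitive-market illustration. OPERATING_COST and CAPITAL_COST
# are hypothetical round numbers, not estimates of actual industry costs.
OPERATING_COST = 45  # $/bbl to produce from existing capacity
CAPITAL_COST = 65    # $/bbl amortized cost of developing new capacity

def marginal_price(demand, existing_capacity):
    """Price of the marginal barrel (the quantity units are arbitrary)."""
    if demand > existing_capacity:
        # Tight market: the marginal barrel requires new capacity, so the
        # price must cover development (capital) plus operating costs.
        return OPERATING_COST + CAPITAL_COST
    # Loose market: existing capacity suffices; development costs are sunk,
    # and competition drives the price toward operating cost alone.
    return OPERATING_COST

print(marginal_price(demand=93, existing_capacity=92))  # tight market: 110
print(marginal_price(demand=92, existing_capacity=94))  # loose market: 45
```

The same cost structure supports a $110 price when demand presses against capacity and a $45 price when it does not; nothing about the drop requires inefficiency.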

NOTE: Related data on world oil production and consumption is available in the U.S. Department of Energy’s Short Term Energy Outlook. Commentary prompting this explainer comes from the UC-Berkeley Energy Institute at Haas blog.

When does state utility regulation distort costs?

I suspect the simplest answer to the title question is “always.” Maybe the answer depends on your definition of “distort,” but both the intended and generally expected consequences of state utility rate regulation have always been to push costs to be something other than what would naturally emerge in the absence of rate regulation.

More substantive, though, is the analysis provided in Steve Cicala’s article in the January 2015 American Economic Review, “When Does Regulation Distort Costs? Lessons from Fuel Procurement in US Electricity Generation.” (Here is an earlier ungated version of the paper.)

Here is a summary from the University of Chicago press release:

A study in the latest issue of the American Economic Review used recent state regulatory changes in electricity markets as a laboratory to evaluate which factors can contribute to a regulation causing a bigger mess than the problem it was meant to fix….

Cicala used data on almost $1 trillion worth of fuel deliveries to power plants to look at what happens when a power plant becomes deregulated. He found that the deregulated plants combined save about $1 billion a year compared to those that remained regulated. This is because a lack of transparency, political influence and poorly designed reimbursement rates led the regulated plants to pursue inefficient strategies when purchasing coal.

The $1 billion that deregulated plants save stems from paying about 12 percent less for their coal because they shop around for the best prices. Regulated plants have no incentive to shop around because their profits do not depend on how much they pay for fuel. They also are looked upon more favorably by regulators if they purchase from mines within their state, even if those mines don’t sell the cheapest coal. To make matters worse, regulators have a difficult time figuring out if they are being overcharged because coal is typically purchased through confidential contracts.

Although power plants that burned natural gas were subject to the exact same regulations as the coal-fired plants, there was no drop in the price paid for gas after deregulation. Cicala attributed the difference to the fact that natural gas is sold on a transparent, open market. This prevents political influences from sneaking through and allows regulators to know when plants are paying too much.

What’s different about the buying strategy of deregulated coal plant operators? Cicala dove deep into two decades of detailed, restricted-access procurement data to answer this question. First, he found that deregulated plants switch to cheaper, low-sulfur coal. This not only saves them money, but also allows them to comply with environmental regulations. On the other hand, regulated plants often comply with regulations by installing expensive “scrubber” technology, which allows them to make money from the capital improvements.

“It’s ironic to hear supporters of Eastern coal complain about ‘regulation’: they’re losing business from the deregulated plants,” said Cicala, a scholar at the Harris School of Public Policy.

Deregulated plants also increase purchases from out-of-state mines by about 25 percent. As mentioned, regulated plants are looked upon more favorably if they buy from in-state mines. Finally, deregulated plants purchase their coal from more productive mines (coal seams are thicker and closer to the surface) that require about 25 percent less labor to extract from the ground and that pay 5 percent higher wages.

“Recognizing that there are failures in financial markets, health care markets, energy markets, etc., it’s critical to know what makes for ‘bad’ regulations when designing new ones to avoid making the problem worse,” Cicala said. [Emphasis added.]

Moody’s concludes: mass grid defection not yet on the horizon

Yes, solar power systems are getting cheaper and battery storage is improving. The combination has many folks worried (or elated) about the future prospects of grid-based electric utilities when consumers can get the power they want at home. (See Lynne’s post from last summer for background.)

An analysis by Moody’s concludes that battery storage costs remain an order of magnitude too high, so grid defections are not yet a demonstrable threat. Analysis of consumer power-use data leads Moody’s to project a need for a larger home system than other analysts have assumed. Moody’s further suggests that consumers will be reluctant to make the required lifestyle changes (frequent monitoring of battery levels, forced conservation during extended low-solar periods), so grid defection may be yet slower than a simple engineering-economics computation would suggest.

COMMENT: I’ll project that in a world of widespread consumer power defections, we will see two developments to help consumers avoid forced conservation. Nobody will have to miss watching Super Bowl LXXX because it was cloudy the week before in Boston. First, plug-in hybrid vehicle hook-ups, so that home batteries can be recharged by the consumer’s gasoline or diesel engine. Second, home battery service companies will provide similar mobile recharge services (or hot-swapping of home battery systems, etc.). Who knows, in a world of widespread defection, maybe the local electric company will offer spot recharge services at a market-based rate?

[HT to Clean Beta]

Should regulated utilities participate in the residential solar market?

I recently argued that the regulated utility is not likely to enter a “death spiral”, but that the regulated utility business model is indeed under pressure, and the conversation about the future of that business model is a valuable one.

One area of pressure on the regulated utility business model is the market for residential solar power. Even two years later, this New York Times Magazine article on the residential solar market is fresh and relevant, and even more so given the declining production costs of solar technologies: “Thanks to increased Chinese production of photovoltaic panels, innovative financing techniques, investment from large institutional investors and a patchwork of semi-effective public-policy efforts, residential solar power has never been more affordable.” In states like California, a combination of plentiful sun and state policies designed to induce more use of renewables brought growth in the residential solar market starting in the 1980s. This growth was also grounded in the federal PURPA legislation of 1978 (“conservation by decree”), which required regulated utilities to buy energy from renewable and cogeneration providers at a price determined by the state public utility commission.

Since then, a small but growing independent solar industry has developed in California and elsewhere, and the NYT Magazine article ably summarizes that development as well as the historical disinterest of regulated utilities in getting involved in renewables themselves. Why generate using a fuel and enabling technology that is intermittent, for which economical storage does not exist, and that does not have the economies of scale that drive the economics of the regulated vertically-integrated cost-recovery-based business model? Why indeed.

Over the ensuing decades, though, policy priorities have changed, and environmental quality now joins energy security and the social objectives of utility regulation. Air quality and global warming concerns joined the mix and at the margin shifted the policy balance, leading several states to adopt renewable portfolio standards (RPSs) and net metering regulations. California, always a pioneer, has a portfolio of residential renewables policies, including a state RPS and net metering. Note, in particular, the recent changes in California policy regarding residential renewables:

The CPUC’s California Solar Initiative (CPUC ruling – R.04-03-017) moved the consumer renewable energy rebate program for existing homes from the Energy Commission to the utility companies under the direction of the CPUC. This incentive program also provides cash back for solar energy systems of less than one megawatt to existing and new commercial, industrial, government, nonprofit, and agricultural properties. The CSI has a budget of $2 billion over 10 years, and the goal is to reach 1,940 MW of installed solar capacity by 2016.

The CSI provides rebates to residential customers installing solar technologies who are retail customers of one of the state’s investor-owned utilities. Each IOU has a cap on the number of its residential customers who can receive these subsidies, and PG&E has already reached that cap.

Whether the policy is rebates to induce the renewables switch, net metering, or a state RPS (or feed-in tariffs such as those used in Spain and Germany), these policies reflect a new objective in the portfolio of utility regulation, and at the margin they have changed the incentives of regulated utilities. Starting in 2012, when residential solar installations increased, regulated utilities stepped up their objections to solar power, both on reliability grounds and based on the inequities and cross-subsidization built into regulated retail rates (in a state like California, the smallest monthly users of electricity pay much less than their proportional share of the fixed costs of what they consume). My reading has also left me with the impression that if the regulated utilities are going to be subject to renewables mandates to achieve environmental objectives, they would prefer not to have to compete with the existing, and growing, independent producers operating in the residential solar market. The way a regulated monopolist benefits from environmental mandates is by owning assets to meet the mandates.

While this case requires much deeper analysis, as a first pass I want to step back and ask why the regulated distribution utility should be involved in the residential solar market at all. The growth of producers in the residential solar market (Sungevity, SunEdison, Solar City, etc.) suggests that this is a competitive or potentially competitive market.

I remember asking that question back when this NYT Magazine article first came out, and I stand by my observation then:

Consider an alternative scenario in which regulated distribution monopolists like PG&E are precluded from offering retail services, including rooftop solar, and the competing firms that Himmelman profiled can compete both in how they structure the transactions (equipment purchase, lease, PPA, etc.) and in the prices they offer. One of Rubin’s complaints is that the regulated net metering rate reimburses the rooftop solar homeowner at the full regulated retail price per kilowatt hour, which over-compensates the homeowner for the market value of the electricity product. In a rivalrous market, competing solar services firms would experiment with different prices, perhaps, say, reimbursing the homeowner a fixed price based on a long-term contract, or a varying price based on the wholesale market spot price in the hours in which the homeowner puts power back into the grid. Then it’s up to the retailer to contract with the wires company for the wires charge for those customers — that’s the source of the regulated monopolist’s revenue stream, the wires charge, and it can and should be separated from the net metering transaction and contract.

The presence of the regulated monopolist in that retail market for rooftop solar services is a distortion in and of itself, in addition to the regulation-induced distortions that Rubin identified.

The regulated distribution utility’s main objective is, and should be, reliable delivery of energy. The existing regulatory structure gives regulated utilities incentives to increase their asset base in order to increase their rate base, and thus when a new environmental policy objective joins the existing ones, if regulated utilities can acquire new solar assets to meet that objective, then they have an incentive to do so. Cost recovery and a guaranteed rate of return is a powerful motivator. But why should they even be a participant in that market, given the demonstrable degree of competition that already exists?

Energy poverty and clean technology

For the past three years, I’ve team-taught a class that’s part of our Institute for Sustainability and Energy at Northwestern (ISEN) curriculum. It’s an introductory class, primarily focused on ethics and philosophy. One of my earth science colleagues kicks us off with the carbon cycle, the evidence for anthropogenic global warming, and interpretations of that evidence. Then one of my philosophy colleagues presents moral theories that we can use to think about the morality of our relationship with nature, environmental ethics, moral obligations to future generations, and so on. Consequentialism, Kantian ethics, virtue ethics. I learn so much from my colleagues every time!

Then I, the social scientist, come in and throw cold water on everyone’s utopias and dystopias — “no, really, this is how people really are going to behave, and the likely outcomes we’ll see from political processes.” Basic economic principles (scarcity, opportunity cost, tradeoffs, incentives, property rights, intertemporal substitution, discounting), tied in with the philosophical foundations of these principles, and then used to generate an economic analysis of politics (i.e., public choice). We finish up with a discussion of technological dynamism and the role that human creativity and innovation can play in making the balance of economic well-being and environmental sustainability more aligned and harmonious.

Energy poverty emerges as an overarching theme in the course — long-term environmental sustainability is an important issue to bear in mind when we think about consumption, investment, and innovation actions we take in the near term … but so are living standards, human health, and longevity. If people in developing countries have the basic human right to the liberty to flourish and to improve their living standards, then energy use is part of that process.

Thus when I saw this post from Bill Gates on the Gates Foundation blog it caught my attention, particularly where he says succinctly that

But even as we push to get serious about confronting climate change, we should not try to solve the problem on the backs of the poor. For one thing, poor countries represent a small part of the carbon-emissions problem. And they desperately need cheap sources of energy now to fuel the economic growth that lifts families out of poverty. They can’t afford today’s expensive clean energy solutions, and we can’t expect them to wait for the technology to get cheaper.

Instead of putting constraints on poor countries that will hold back their ability to fight poverty, we should be investing dramatically more money in R&D to make fossil fuels cleaner and make clean energy cheaper than any fossil fuel.


In it Gates highlights two short videos from Bjorn Lomborg that emphasize two things: enabling people in poverty to get out of poverty using inexpensive natural gas rather than expensive renewables will improve the lives of many millions more people, and innovation and new ideas are the processes through which we will drive down the costs of currently-expensive clean energy. The first video makes the R&D claim and offers some useful data for contextualizing the extent of energy poverty in Africa. The second video points out that 3 billion people burn dung and twigs inside their homes as fuel sources, and that access to modern energy (i.e., electricity) would improve their health conditions.

The post and videos are worth your time. I would add one logical step in the chain, to make the economics-sustainability alignment point even more explicit — the argument that environmental quality is a normal good, so that as people leave poverty and their incomes rise, at the margin they will shift toward consumption bundles that include more environmental quality. At lower income levels there may still be incrementally more emissions (offset by the reduction in emissions from dung fires in the home), but if environmental quality is a normal good, then as incomes continue to rise, consumption bundles will shift. If you know the economics literature on the environmental Kuznets curve (EKC), this argument sounds familiar. One of the best summary articles on the EKC is David Stern (2004), who shows that there is little statistical evidence for a simple EKC, although better models have been developed, and with a more nuanced story and better statistical techniques we may be able to decompose the separate effects.

Gates is paying more attention to energy because he thinks the anti-poverty agenda should include a focus on affordable energy, and energy that’s cleaner than what’s currently being used indoors for cooking in many places.

“Grid defection” and the regulated utility business model

The conversations about the “utility death spiral” to which I alluded in my recent post have included discussion of the potential for “grid defection”. Grid defection is an important phenomenon in any network industry — what if you use scarce resources to build a network that provides value for consumers, and then over time, with innovation and dynamism, those consumers find alternative ways of capturing that value (and/or more or different value)? Whether it’s a public transportation network, a wired telecommunications network, a water and sewer network, or a wired electricity distribution network, consumers can and do exit when they perceive the alternatives available to them as more valuable than the network alternative. Of course, those four cases differ because of differences in transaction costs and regulatory institutions — making exit from a public transportation network illegal (i.e., making private transportation illegal) is much less likely, and less valuable, than making private water supply in a municipality illegal. But two of the common elements across these four infrastructure industries are interesting: the high-fixed-cost nature of the network infrastructure and the resulting economies of scale, and the potential for innovation and technological change to alter the relative value of the network.

The first common element in network industries is the high fixed costs associated with constructing and maintaining the network, and the associated economies of scale typically found in such industries. This cost structure has long been the justification for either economic regulation or municipal supply in the industry — the cheapest per-unit way to provide large quantities is to have one provider and not to build duplicate networks, and to stipulate product quality and degrees of infrastructure redundancy to provide reliable service at the lowest feasible cost.

What does that entail? Cost-based regulation. Spreading those fixed costs out over as many consumers as possible to keep the product’s regulated price as low as feasible. If there are different consumers that can be categorized into different customer classes, and if for economic or political reasons the utility and/or the regulator have an incentive to keep prices low for one class (say, residential customers), then other types of consumers may bear a larger share of the fixed costs than they would if, for example, the fixed costs were allocated according to share of the volume of network use (this is called cross-subsidization). Cost-based regulation has been the typical regulatory approach in these industries, and cross-subsidization has been a characteristic of regulated rate structures. The classic reference for this analysis is Faulhaber (American Economic Review, 1975).

Both in theory and in practice these institutions can work as long as the technological environment is static. But the technological environment is anything but static; it has had periods of stability but has always been dynamic, and that dynamism is the foundation of the increase in living standards over the past three centuries. Technological dynamism creates new alternatives to the existing network industry. We have seen this happen in the past two decades as mobile communications eroded the value of wired communications at a rapid rate, and that history animates the concern in electricity that distributed generation will make the distribution network less valuable and will disintermediate the regulated distribution utility, the wires owner, which relies on the distribution transaction for its revenue. The utility also traditionally relies on the ability to cross-subsidize, charging different portions of those fixed costs to different customer classes, a pricing practice that mobile telephony also made obsolete in the communications market.

Alternatives to the network grid may have higher value to consumers in their own estimation (never forget that value is subjective), and they may be willing to pay more to achieve that value. This is why most of us now pay more per month for communications services than we did pre-1984 in our monthly phone bill. As customers leave the traditional network to capture that value, though, those network fixed costs are spread over fewer network customers. That is the Achilles heel of cost-based regulation, and it is a big part of what drives the “death spiral” concern — if customers increasingly self-generate and leave the network, who will pay the fixed costs? This question has traditionally been the justification for regulators approving utility standby charges, so that a customer who self-generates and has a failure can connect to the grid and get electricity. Set those rates too high, and distributed generation’s economic value falls; set them too low, and the distribution utility may not cover the incremental costs of serving that customer. The range between those two levels can be large.
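The feedback loop at the heart of the death-spiral concern shows up even in a toy calculation. All of the numbers below are hypothetical, chosen only to make the mechanism visible:

```python
# Toy model of cost-based rates under customer exit. All numbers are
# hypothetical; the point is the mechanism, not the magnitudes.
FIXED_COSTS = 120_000_000   # $/yr network fixed costs
VARIABLE_COST = 0.06        # $/kWh incremental cost of delivered energy
USE_PER_CUSTOMER = 10_000   # kWh/yr per customer

def regulated_rate(customers):
    """Cost-based rate: variable cost plus each customer's share of fixed costs."""
    fixed_share = FIXED_COSTS / (customers * USE_PER_CUSTOMER)
    return VARIABLE_COST + fixed_share

# As customers defect, the same fixed costs fall on fewer payers, the rate
# rises, and defection becomes more attractive for those who remain.
for n in (1_000_000, 800_000, 600_000):
    print(f"{n:>9,} customers: {100 * regulated_rate(n):.1f} cents/kWh")
```

With these inputs the rate climbs from 7.2 to 8.0 cents per kWh as 40 percent of customers leave, which in turn raises the payoff to leaving — the self-reinforcing part of the spiral.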

This is not a new conversation in the industry or among policy makers and academics. In fact, here’s a 2003 Electricity Journal article arguing against standby charges by friend-of-KP Sean Casten, who works in recycled energy and combined heat and power (CHP). In 2002 I presented a paper at the International Association of Energy Economics annual meetings in which I argued that distributed generation and storage would make the distribution network contestable, and after the Northeast blackout in 2003 Reason released a version of the paper as a policy study. One typical static argument for a single, regulated wires network is to eliminate costly duplication of infrastructure in the presence of economies of scale. But my argument is dynamic: innovation and technological change that competes with the wires network need not be duplicative wires, and DG+storage is an example of innovation that makes a wires network contestable.

Another older conversation that is new again was the DISCO of the Future Forum, hosted over a year or so in 2001-2002 by the Center for the Advancement of Energy Markets. I participated in this forum, in which industry, regulators, and researchers worked together to “game out” different scenarios for the distribution company business model in the context of competitive wholesale and retail markets. This 2002 Electric Light & Power article summarizes the effort and the ultimate report; note in particular this description of the forum from Jamie Wimberly, then-CAEM president (and now CEO of EcoAlign):

“The primary purpose of the forum was to thoroughly examine the issues and challenges facing distribution companies and to make consensus-based recommendations that work to ensure healthy companies and happy customers in the future,” he said. “There is no question much more needs to be discussed and debated, particularly the role of the regulated utility in the provision of new product offerings and services.”

Technological dynamism is starting to make the distribution network contestable. Now what?