Should regulated utilities participate in the residential solar market?

I recently argued that the regulated utility is not likely to enter a “death spiral”, but that the regulated utility business model is indeed under pressure, and the conversation about the future of that business model is a valuable one.

One area of pressure on the regulated utility business model is the market for residential solar power. Even two years on, this New York Times Magazine article on the residential solar market remains fresh and relevant, all the more so given the declining production costs of solar technologies: “Thanks to increased Chinese production of photovoltaic panels, innovative financing techniques, investment from large institutional investors and a patchwork of semi-effective public-policy efforts, residential solar power has never been more affordable.” In states like California, a combination of plentiful sun and state policies designed to induce more use of renewables brought growth in the residential solar market starting in the 1980s. This growth was also grounded in the federal PURPA legislation of 1978 (“conservation by decree”), which required regulated utilities to buy some of their energy from renewable and cogeneration providers at a price determined by the state public utility commission.

Since then, a small but growing independent solar industry has developed in California and elsewhere, and the NYT Magazine article ably summarizes that development as well as the historical lack of interest among regulated utilities in getting involved in renewables themselves. Why generate using a fuel and enabling technology that is intermittent, for which economical storage does not exist, and that does not have the economies of scale that drive the economics of the regulated vertically-integrated cost-recovery-based business model? Why indeed.

Over the ensuing decades, though, policy priorities have changed, and environmental quality now joins energy security and the traditional social objectives of utility regulation. Air quality and global warming concerns shifted the policy balance at the margin, leading several states to adopt renewable portfolio standards (RPSs) and net metering regulations. California, always a pioneer, has a portfolio of residential renewables policies, including net metering alongside its state RPS. Note, in particular, the recent changes in California policy regarding residential renewables:

The CPUC’s California Solar Initiative (CPUC ruling – R.04-03-017) moved the consumer renewable energy rebate program for existing homes from the Energy Commission to the utility companies under the direction of the CPUC. This incentive program also provides cash back for solar energy systems of less than one megawatt to existing and new commercial, industrial, government, nonprofit, and agricultural properties. The CSI has a budget of $2 billion over 10 years, and the goal is to reach 1,940 MW of installed solar capacity by 2016.

The CSI provides rebates to residential customers installing solar technologies who are retail customers of one of the state’s investor-owned utilities. Each IOU has a cap on the number of its residential customers who can receive these subsidies, and PG&E has already reached that cap.

Whether the policy is rebates to induce the renewables switch, net metering, a state RPS, or feed-in tariffs such as those used in Spain and Germany, these policies reflect a new objective in the portfolio of utility regulation, and at the margin they have changed the incentives of regulated utilities. As residential solar installations grew starting in 2012, regulated utilities stepped up their objections to solar power, both on reliability grounds and based on the inequities and cross-subsidization built into existing regulated retail rates (in a state like California, the smallest monthly users of electricity pay much less than their proportional share of the fixed costs of serving them). My reading has also left me with the impression that if the regulated utilities are going to be subject to renewables mandates to achieve environmental objectives, they would prefer not to have to compete with the existing, and growing, independent producers in the residential solar market. The way a regulated monopolist benefits from environmental mandates is by owning assets to meet the mandates.

While this case requires much deeper analysis, as a first pass I want to step back and ask why the regulated distribution utility should be involved in the residential solar market at all. The growth of producers in the residential solar market (Sungevity, SunEdison, SolarCity, etc.) suggests that this is a competitive, or at least potentially competitive, market.

I remember asking that question back when this NYT Magazine article first came out, and I stand by my observation then:

Consider an alternative scenario in which regulated distribution monopolists like PG&E are precluded from offering retail services, including rooftop solar, and the competing firms that Himmelman profiled can compete both in how they structure the transactions (equipment purchase, lease, PPA, etc.) and in the prices they offer. One of Rubin’s complaints is that the regulated net metering rate reimburses the rooftop solar homeowner at the full regulated retail price per kilowatt hour, which over-compensates the homeowner for the market value of the electricity product. In a rivalrous market, competing solar services firms would experiment with different prices, perhaps, say, reimbursing the homeowner a fixed price based on a long-term contract, or a varying price based on the wholesale market spot price in the hours in which the homeowner puts power back into the grid. Then it’s up to the retailer to contract with the wires company for the wires charge for those customers — that’s the source of the regulated monopolist’s revenue stream, the wires charge, and it can and should be separated from the net metering transaction and contract.

The presence of the regulated monopolist in that retail market for rooftop solar services is a distortion in and of itself, in addition to the regulation-induced distortions that Rubin identified.
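To make the over-compensation point concrete, here is a toy comparison (all prices and quantities are invented for illustration, not taken from any actual tariff): the same exported kilowatt-hours credited at the full regulated retail rate under net metering versus at the hourly wholesale spot price under the kind of market-based contract described above.

```python
# Toy net-metering comparison -- all prices and quantities are illustrative.
RETAIL_RATE = 0.20  # $/kWh, full regulated retail price

# Hourly exports (kWh) and the wholesale spot price ($/kWh) in those hours.
exports_kwh = [2.0, 3.0, 3.0, 2.0]
spot_prices = [0.04, 0.06, 0.09, 0.05]

# Net metering credits every exported kWh at the retail rate.
net_metering_credit = sum(q * RETAIL_RATE for q in exports_kwh)

# A market-based contract credits each hour at that hour's spot price.
spot_credit = sum(q * p for q, p in zip(exports_kwh, spot_prices))

print(f"net metering credit: ${net_metering_credit:.2f}")
print(f"spot-priced credit:  ${spot_credit:.2f}")
print(f"over-compensation:   ${net_metering_credit - spot_credit:.2f}")
```

With these made-up numbers the homeowner is credited roughly three times the wholesale market value of the exported energy, which is the distortion Rubin points to.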

The regulated distribution utility’s main objective is, and should be, reliable delivery of energy. The existing regulatory structure gives regulated utilities incentives to increase their asset base in order to increase their rate base, and thus when a new environmental policy objective joins the existing ones, if regulated utilities can acquire new solar assets to meet that objective, they have an incentive to do so. Cost recovery and a guaranteed rate of return is a powerful motivator. But why should they even be a participant in that market, given the demonstrable degree of competition that already exists?

Energy poverty and clean technology

For the past three years, I’ve team-taught a class that’s part of our Institute for Sustainability and Energy at Northwestern (ISEN) curriculum. It’s an introductory class, primarily focused on ethics and philosophy. One of my earth science colleagues kicks us off with the carbon cycle, the evidence for anthropogenic global warming, and interpretations of that evidence. Then one of my philosophy colleagues presents moral theories that we can use to think about the morality of our relationship with nature, environmental ethics, moral obligations to future generations, and so on: consequentialism, Kantian ethics, virtue ethics. I learn so much from my colleagues every time!

Then I, the social scientist, come in and throw cold water on everyone’s utopias and dystopias — “no, really, this is how people really are going to behave, and the likely outcomes we’ll see from political processes.” Basic economic principles (scarcity, opportunity cost, tradeoffs, incentives, property rights, intertemporal substitution, discounting), tied in with the philosophical foundations of these principles, and then used to generate an economic analysis of politics (i.e., public choice). We finish up with a discussion of technological dynamism and the role that human creativity and innovation can play in making the balance of economic well-being and environmental sustainability more aligned and harmonious.

Energy poverty emerges as an overarching theme in the course — long-term environmental sustainability is an important issue to bear in mind when we think about consumption, investment, and innovation actions we take in the near term … but so are living standards, human health, and longevity. If people in developing countries have the basic human right to the liberty to flourish and to improve their living standards, then energy use is part of that process.

Thus when I saw this post from Bill Gates on the Gates Foundation blog, it caught my attention, particularly where he says succinctly that

But even as we push to get serious about confronting climate change, we should not try to solve the problem on the backs of the poor. For one thing, poor countries represent a small part of the carbon-emissions problem. And they desperately need cheap sources of energy now to fuel the economic growth that lifts families out of poverty. They can’t afford today’s expensive clean energy solutions, and we can’t expect them to wait for the technology to get cheaper.

Instead of putting constraints on poor countries that will hold back their ability to fight poverty, we should be investing dramatically more money in R&D to make fossil fuels cleaner and make clean energy cheaper than any fossil fuel.


In it Gates highlights two short videos from Bjorn Lomborg that emphasize two things: enabling people in poverty to get out of poverty using inexpensive natural gas rather than expensive renewables will improve the lives of many millions more people, and innovation and new ideas are the processes through which we will drive down the costs of currently-expensive clean energy. The first video makes the R&D claim and offers some useful data for contextualizing the extent of energy poverty in Africa. The second video points out that 3 billion people burn dung and twigs inside their homes as fuel sources, and that access to modern energy (i.e., electricity) would improve their health conditions.

The post and videos are worth your time. I would add one logical step in the chain, to make the economics-sustainability alignment point even more explicit: the argument that environmental quality is a normal good, and that as people leave poverty and their incomes rise, at the margin they will shift toward consumption bundles that include more environmental quality. At lower income levels, rising incomes may still bring incrementally more emissions (offset by the reduction in emissions from dung fires in the home), but if environmental quality is a normal good, consumption bundles will shift as incomes continue to rise. If you know the economics literature on the environmental Kuznets curve (EKC), this argument sounds familiar. One of the best summary articles on the EKC is David Stern (2004), who shows that there is little statistical evidence for a simple EKC, although better-specified models and statistical techniques may allow us to decompose the various effects at work.
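The simple EKC is just an inverted-U relationship between emissions and income, e = b0 + b1·y + b2·y², with b2 < 0 so that emissions rise with income and then fall. A minimal sketch with made-up coefficients (purely illustrative, not estimates from any data):

```python
# Illustrative environmental Kuznets curve: emissions first rise, then fall,
# as income grows. Coefficients are invented for illustration only.
b0, b1, b2 = 1.0, 0.8, -0.02  # b2 < 0 gives the inverted U

def emissions(income):
    """Quadratic EKC: e = b0 + b1*y + b2*y^2."""
    return b0 + b1 * income + b2 * income ** 2

# Peak emissions occur where de/dy = 0, i.e. y* = -b1 / (2 * b2).
turning_point = -b1 / (2 * b2)

for y in range(0, 41, 5):
    bar = "#" * int(emissions(y))
    print(f"income {y:2d}: emissions {emissions(y):5.1f} {bar}")
print(f"peak emissions at income {turning_point:.1f}")
```

The empirical debate Stern surveys is over whether real data trace out anything like this tidy curve once better statistical techniques are applied; the sketch only shows the shape the simple hypothesis asserts.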

Gates is paying more attention to energy because he thinks the anti-poverty agenda should include a focus on affordable energy, and energy that’s cleaner than what’s currently being used indoors for cooking in many places.

“Grid defection” and the regulated utility business model

The conversations about the “utility death spiral” to which I alluded in my recent post have included discussion of the potential for “grid defection”. Grid defection is an important phenomenon in any network industry: what happens if you use scarce resources to build a network that provides value for consumers, and then over time, through innovation and dynamism, they find alternative ways of capturing that value (and/or more or different value)? Whether it’s a public transportation network, a wired telecommunications network, a water and sewer network, or a wired electricity distribution network, consumers can and do exit when they perceive the alternatives available to them as more valuable than the network alternative. Of course, those four cases differ because of differences in transaction costs and regulatory institutions: making exit from a public transportation network illegal (i.e., making private transportation illegal) is much less likely, and less valuable, than making private water supply in a municipality illegal. But two common elements across these four infrastructure industries are interesting: the high fixed costs of the network infrastructure and the resulting economies of scale, and the potential for innovation and technological change to alter the relative value of the network.

The first common element in network industries is the high fixed costs associated with constructing and maintaining the network, and the associated economies of scale typically found in such industries. This cost structure has long been the justification for either economic regulation or municipal supply in the industry — the cheapest per-unit way to provide large quantities is to have one provider and not to build duplicate networks, and to stipulate product quality and degrees of infrastructure redundancy to provide reliable service at the lowest feasible cost.

What does that entail? Cost-based regulation: spreading those fixed costs over as many consumers as possible to keep the product’s regulated price as low as feasible. If consumers can be categorized into different customer classes, and if for economic or political reasons the utility and/or the regulator has an incentive to keep prices low for one class (say, residential customers), then other classes may bear a larger share of the fixed costs than they would if, for example, the fixed costs were allocated according to each class’s share of network volume (this is called cross-subsidization). Cost-based regulation has been the typical regulatory approach in these industries, and cross-subsidization has been a characteristic of regulated rate structures. The classic reference for this analysis is Faulhaber, American Economic Review (1975).
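The cross-subsidy arithmetic is easy to see in a toy example (the shares and cost figures are invented): compare allocating fixed network costs by each class’s share of volume with a politically chosen allocation that favors residential customers.

```python
# Toy cross-subsidization example -- all numbers are invented.
FIXED_COSTS = 1_000_000.0  # $ of total network fixed costs to recover

# Share of network volume used by each customer class.
volume_share = {"residential": 0.40, "commercial": 0.35, "industrial": 0.25}

# A politically chosen allocation that keeps residential rates low.
political_share = {"residential": 0.25, "commercial": 0.40, "industrial": 0.35}

subsidies = {}
for klass in volume_share:
    cost_by_volume = FIXED_COSTS * volume_share[klass]
    cost_political = FIXED_COSTS * political_share[klass]
    # Positive means the class pays less than its volume-based share.
    subsidies[klass] = cost_by_volume - cost_political
    print(f"{klass:11s}: volume-based ${cost_by_volume:>9,.0f}, "
          f"actual ${cost_political:>9,.0f}, subsidy ${subsidies[klass]:>+9,.0f}")
```

The subsidies net to zero by construction: whatever the favored class avoids, the other classes pick up, which is exactly the fragility exposed when those other classes gain an exit option.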

Both in theory and in practice these institutions can work as long as the technological environment is static. But the technological environment is anything but static: it has had periods of stability, but its long-run dynamism is the foundation of the increase in living standards over the past three centuries. Technological dynamism creates new alternatives to the existing network industry. We have seen this happen in the past two decades as mobile communications eroded the value of wired communications at a rapid rate, and that history animates the concern in electricity that distributed generation will make the distribution network less valuable and disintermediate the regulated distribution utility, the wires owner, which relies on the distribution transaction for its revenue. The utility also traditionally relies on the ability to cross-subsidize across customer classes, charging different portions of those fixed costs to different types of customers, a pricing practice that mobile telephony also made obsolete in the communications market.

Consumers may judge alternatives to the network grid as higher-value (never forget that value is subjective), and they may be willing to pay more to achieve that value. This is why most of us now pay more per month for communications services than we did pre-1984 in our monthly phone bill. As customers leave the traditional network to capture that value, though, the network’s fixed costs are spread over fewer remaining customers. That’s the Achilles heel of cost-based regulation, and it’s a big part of what drives the “death spiral” concern: if customers increasingly self-generate and leave the network, who will pay the fixed costs? This question has traditionally been the justification for regulators approving utility standby charges, so that a customer who self-generates and has a failure can connect to the grid and get electricity. Set those rates too high, and distributed generation’s economic value falls; set them too low, and the distribution utility may not cover the incremental costs of serving that customer. The range between those two can be large.
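The “fixed costs spread over fewer customers” feedback can be sketched in a few lines of stylized arithmetic (the parameters are invented, and this is a caricature, not a forecast): each round, the fixed charge per remaining customer rises, and higher charges induce further defection.

```python
# Stylized "death spiral" iteration -- all parameters are invented.
FIXED_COSTS = 100_000_000.0  # $ of network fixed costs, unchanged as customers leave
customers = 1_000_000.0
SENSITIVITY = 0.05           # 5% of customers defect per 10% rise in the charge

baseline = FIXED_COSTS / customers  # initial fixed charge per customer
charges = []
customers *= 0.95  # an initial 5% adopt self-generation and leave the grid
for year in range(5):
    charge = FIXED_COSTS / customers  # remaining customers bear all fixed costs
    pct_rise = (charge / baseline - 1) * 100
    charges.append(charge)
    print(f"year {year}: {customers:,.0f} customers, charge ${charge:.2f} "
          f"({pct_rise:+.1f}% vs baseline)")
    # Higher charges induce further defection the following year.
    customers *= 1 - SENSITIVITY * (charge / baseline - 1) * 10
```

Each round feeds the next: the charge only ratchets upward as long as fixed costs stay fixed and defection responds to the charge, which is why the policy argument centers on who bears those costs.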

This is not a new conversation in the industry or among policy makers and academics. In fact, here’s a 2003 Electricity Journal article arguing against standby charges by friend-of-KP Sean Casten, who works in recycled energy and combined heat and power (CHP). In 2002 I presented a paper at the International Association for Energy Economics annual meetings in which I argued that distributed generation and storage would make the distribution network contestable, and after the Northeast blackout in 2003 Reason released a version of the paper as a policy study. One typical static argument for a single, regulated wires network is that it eliminates costly duplication of infrastructure in the presence of economies of scale. But my argument is dynamic: innovation and technological change that competes with the wires network need not duplicate the wires, and DG+storage is an example of innovation that makes a wires network contestable.

Another older conversation that is new again is the DISCO of the Future Forum, hosted over a year or so in 2001-2002 by the Center for the Advancement of Energy Markets. I participated in this forum, in which industry, regulators, and researchers worked together to “game out” different scenarios for the distribution company business model in the context of competitive wholesale and retail markets. This 2002 Electric Light & Power article summarizes the effort and the ultimate report; note in particular this description of the forum from Jamie Wimberly, then-CAEM president (and now CEO of EcoAlign):

“The primary purpose of the forum was to thoroughly examine the issues and challenges facing distribution companies and to make consensus-based recommendations that work to ensure healthy companies and happy customers in the future,” he said. “There is no question much more needs to be discussed and debated, particularly the role of the regulated utility in the provision of new product offerings and services.”

Technological dynamism is starting to make the distribution network contestable. Now what?

The spin on wind, or, an example of bullshit in the field of energy policy

The Wall Street Journal recently opined against President Obama’s nominee for Federal Energy Regulatory Commission chairman, Norman Bay, and in the process took a modest swipe at subsidies for wind energy.

The context here is Bay’s action while leading FERC’s enforcement division, and in particular his prosecution of electric power market participants who manage to run afoul of FERC’s vague definition of market manipulation even though their trading behavior complied with all laws, regulations, and market rules.

So here the WSJ‘s editorial board pokes a little at subsidized wind in the process of making a point about reckless prosecutions:

As a thought experiment, consider the production tax credit for wind energy. In certain places at certain times, the subsidy is lucrative enough that wind generators make bids at negative prices: Instead of selling their product, they pay the market to drive prices below zero or “buy” electricity that would otherwise go unsold to qualify for the credit.

That strategy harms unsubsidized energy sources, distorts competition and may be an offense against taxpayers. But it isn’t a crime in the conventional legal sense because wind outfits are merely exploiting the subsidy in the open. The rational solution would be to end the subsidies that create negative bids, not to indict the wind farms. But for Mr. Bay, the same logic doesn’t apply to FERC.

The first quoted paragraph seems descriptive of reality and doesn’t cast wind energy in any negative light. The second quoted paragraph suggests the subsidy harms unsubsidized competitors, also plainly true, and that it “distorts competition” and “may be an offense against taxpayers.” These last two characterizations also strike me as fair descriptions of current public policy, and perhaps as mildly negative in tone.
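The negative-bid logic the WSJ describes is straightforward arithmetic. A sketch, using $23/MWh as an approximation of the production tax credit’s recent level (treat all the numbers as illustrative assumptions):

```python
# Why a subsidized wind generator can rationally bid negative prices.
# The $23/MWh figure approximates the U.S. PTC; other numbers are illustrative.
PTC = 23.0           # $/MWh tax credit, earned only on energy actually generated
MARGINAL_COST = 2.0  # $/MWh variable O&M for wind (no fuel cost)

def profit_per_mwh(market_price):
    """Profit from generating one MWh at the given market price."""
    return market_price + PTC - MARGINAL_COST

# The generator keeps running as long as price + PTC covers marginal cost,
# i.e. down to a break-even price of MARGINAL_COST - PTC.
break_even = MARGINAL_COST - PTC
for price in (20.0, 0.0, -10.0, -21.0, -25.0):
    p = profit_per_mwh(price)
    action = "generate" if p >= 0 else "curtail"
    print(f"price ${price:+6.1f}/MWh -> profit ${p:+6.1f}/MWh -> {action}")
```

Because the credit is paid per MWh generated, bidding down to -$21/MWh is profit-maximizing here, while an unsubsidized competitor with the same costs would exit the market at any price below its marginal cost.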

Of course folks at the wind industry’s lobby shop are eager to challenge any little perceived slight, so the AWEA’s Michael Goggin sent a letter to the editor:

Your editorial “Electric Prosecutor Acid Test” (May 19) ignores wind energy’s real consumer benefits by mentioning the red herring of negative electricity prices. Negative prices are extremely rare and are usually highly localized in remote areas where they have little to no impact on other power plants, are caused by inflexible nuclear power plants much of the time, and are being eliminated as long-needed grid upgrades are completed.

Wind energy’s real impact is saving consumers money by displacing more expensive forms of energy, which is precisely why utilities bought wind in the first place. This impact is entirely market-driven, occurs with or without the tax credit, and applies to all low-fuel-cost sources of energy, including nuclear.

The tax relief provided to wind energy more than pays for itself by enabling economic development that generates additional tax revenue and represents a small fraction of the cumulative incentives given to other energy sources.

Michael Goggin
American Wind Energy Association
Washington, DC

Let’s just say I’ll believe the “impact is entirely market-driven” claim when someone produces a convincing study showing that the exact same wind energy capacity build-out would have happened over the last 20 years in the absence of the U.S. federal Production Tax Credit and state renewable energy purchase mandates. Without the tax credit, the wind energy industry likely would be (I’m guessing) less than one-tenth of its current size, and it wouldn’t be the target of much public policy debate.

Of course, without much public policy debate, the wind energy industry wouldn’t need to hire so many lobbyists. Hence the AWEA’s urge to jump on any perceived slight, stir the pot, and keep debate going.

MORE on the lobbying against the Bay nomination. See also this WSJ op-ed.


Did ERCOT’s shift from zonal to nodal market design reduce electric power prices?

Jay Zarnikau, C.K. Woo, and Ross Baldick have examined whether the shift from a zonal to nodal market design in the ERCOT power market had a noticeable effect on electric energy prices. The resulting article, published in the Journal of Regulatory Economics, and this post may be a bit geekier than we usually get around here. I’ll try to tone it down and explain the ERCOT change and the effect on prices as clearly as I can.

The topic is important because the shift from zonal to nodal market structure was controversial, complicated, expensive, and took longer than expected. Problems had emerged shortly after launch of the initial zonal-based market and the nodal approach was offered as a solution. Some market participants had their doubts, but rather quickly ERCOT began the move to a nodal design. Note that phrasing: “rather quickly ERCOT began the move.” It took several years for ERCOT to actually complete the process.

In part the shift was promoted as a more efficient way to run the market. Zarnikau, Woo, and Baldick looked at the effect on prices as one way to assess whether or not the resulting market has worked more efficiently. They conclude energy prices are about 2 percent lower because of the nodal market design.

Don’t get hung up on the 2 percent number itself, but think of the shift as having a modest downward pressure on prices.

The result is consistent with an understanding one would gain from the study of power systems engineering as well as with what power system simulations showed. The point of the Zarnikau et al. study was to investigate whether data analysis after the fact supported expectations offered by theory and simulation. Because there is no better empirical study (so far as I am aware) and because their results are consistent with well-founded expectations, I have no reason to doubt their result. I will contest one interpretation they offer concerning the current resource adequacy debate in Texas.

Some background (which beginners should read and others can skip).

The delivery of electric energy to consumers is a joint effort between the generators that produce the power and the wires that bring it to the consumer. The wires are not simple links between generators and consumers, but rather a complicated network in which consumers and generators are connected in multiple ways. The added flexibility that comes with networking helps the system work more reliably and at lower cost.

The network comes with a big coordination problem, too. Power flows on the network are not individually controllable. With many generators producing power for many consumers, parts of the power grid may become overloaded. One key job of the power system operator is to watch the power flows on the electric grid and intervene as needed to prevent a transmission line from being overloaded. The intervention generally takes the form of directing a generator (or generators) contributing to the potential overload to reduce output and directing other generators to increase output. In areas outside of regional system operators, this function is done on a piecemeal basis as problems arise. A significant benefit of full-scale regional power markets integrated with system operations (such as ERCOT in Texas after the switch to a nodal market, and other similar ISO/RTO markets) is that such coordination can be done in advance, with more information, mostly automatically, and more efficiently than piecemeal adjustments.

Described in simpler terms, the regional power system operator helps generators and consumers coordinate use of the power grid in the effort to efficiently satisfy consumer demands for electric energy. A zonal market design, like ERCOT started with, did minimal advance coordination. The nodal market design and related changes implemented by ERCOT allowed the market to do more sophisticated and efficient coordination of grid use.
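The redispatch intervention described above can be reduced to a toy calculation (the network, costs, and limits are all invented): a cheap remote generator would serve the whole load, but a line limit forces the operator to back it down and ramp up a more expensive local unit to keep supply equal to demand.

```python
# Toy congestion redispatch -- all megawatt and cost numbers are invented.
LOAD = 100.0       # MW demanded at bus 2
LINE_LIMIT = 80.0  # MW limit on the line from bus 1 to bus 2
COST_A = 20.0      # $/MWh, cheap generator at bus 1 (remote from the load)
COST_B = 50.0      # $/MWh, expensive generator at bus 2 (at the load)

# Unconstrained least-cost dispatch: run the cheap unit for everything.
gen_a, gen_b = LOAD, 0.0
flow = gen_a  # all of A's output must cross the line to reach the load

if flow > LINE_LIMIT:
    # Operator intervention: back A down to the line limit, ramp B up
    # to keep total generation equal to load.
    gen_a = LINE_LIMIT
    gen_b = LOAD - gen_a
    flow = gen_a

total_cost = gen_a * COST_A + gen_b * COST_B
print(f"dispatch: A={gen_a:.0f} MW, B={gen_b:.0f} MW, line flow={flow:.0f} MW")
print(f"total cost: ${total_cost:,.0f}/h (vs ${LOAD * COST_A:,.0f}/h unconstrained)")
```

A nodal market does this kind of calculation system-wide and in advance, and the $600/h gap between constrained and unconstrained cost in this toy case is the sort of congestion cost that nodal prices make visible at each location.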

About data challenges.

In order to assess the effects on prices, the authors couldn’t simply average prices before and after the December 1, 2010 change in the market. The power system is a dynamic thing, and many other factors known to affect electric power prices changed between the two periods. Most significantly, natural gas prices were much lower on average after the market change than during the years before. Other changes include growing consumer load, higher offer caps, and increasing amounts of wind energy capacity. In addition, the way prices are generated by the system has changed, making simple before-and-after comparisons insufficient. For example, rather than four zonal prices produced every 15 minutes, the nodal market yields thousands of prices every 5 minutes.

One potentially significant data-related decision was a choice to omit “outliers,” prices that were substantially higher or lower than usual. The authors explain that extreme price spikes were much more frequent in 2011, after the change, largely because the summer of 2011 was among the hottest on record. At the same time, offer caps had been increased, so prices spiked higher than they could have before, but not because of the zonal-to-nodal market shift. Omitting outliers reduces the impact of these otherwise confounding changes and should produce a better sense of the effect of the market change during more normal conditions.
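The outlier-screening idea can be sketched simply (synthetic prices and an arbitrary percentile band; the paper’s actual screens and controls are more involved): drop observations outside a percentile band before comparing the two periods, so a handful of scarcity-driven spikes don’t dominate the comparison.

```python
import statistics

# Synthetic hourly prices ($/MWh); the 2011-style spikes are inserted by hand.
prices_after = [30, 28, 35, 32, 29, 31, 33, 27, 30, 2500, 3000, 34]
prices_before = [36, 33, 38, 35, 34, 37, 39, 32, 36, 35, 40, 38]

def trimmed(data, lo_pct=0.1, hi_pct=0.9):
    """Keep observations between the lo_pct and hi_pct order statistics."""
    s = sorted(data)
    lo = s[int(lo_pct * (len(s) - 1))]
    hi = s[int(hi_pct * (len(s) - 1))]
    return [x for x in data if lo <= x <= hi]

raw_gap = statistics.mean(prices_after) - statistics.mean(prices_before)
trim_gap = (statistics.mean(trimmed(prices_after))
            - statistics.mean(trimmed(prices_before)))
print(f"raw mean difference:     {raw_gap:+.1f} $/MWh (spikes dominate)")
print(f"trimmed mean difference: {trim_gap:+.1f} $/MWh")
```

In the raw comparison the two heat-wave spikes swamp everything else and make the “after” period look far more expensive; after trimming, the typical-conditions difference (lower prices after the change, in this synthetic example) becomes visible.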

Their conclusion and a mistaken interpretation.

Zarnikau, Woo, and Baldick conducted their price analysis on four ERCOT sub-regions separately, to see whether the changeover had differing impacts across zones. The West zone stood out in the analysis, largely because that zone has seen the most significant other changes in the power system. The two main changes: continued sizable wind energy capacity additions in the zone and, more substantially, dramatic electrical load growth due to the recent oil and gas drilling boom in west Texas. Because the West results were a bit flaky, they based their conclusions on results from the other three zones. Across a number of minor variations in specifications, the authors found a price suppression effect ranging from 1.3 to 3.3 percent, the load-weighted average of which is right around 2 percent.

So far, so good.

But next they offered what is surely a misinterpretation of their results. They wrote:

[T]he reduction in wholesale prices from the implementation of the nodal market might be viewed by some as a concern. In recent years, low natural gas prices and increased wind farm generation have also reduced electricity prices in ERCOT which has, in turn, impaired the economics of power plant construction. … It appears as though the nodal market’s design may have contributed to the drop in prices that the PUCT has now sought to reverse.

Strictly speaking, the goal of the Public Utility Commission of Texas hasn’t been to reverse the drop in prices, but to ensure sufficient investment in supply resources to reliably meet projected future demand. Lower prices appear to offer smaller investment incentives than higher prices, but there is a subtle factor in play.

The real incentive to investment isn’t higher prices, it is higher profits. Remember, one of the most important reasons to make the switch from a zonal to a nodal market is that the nodal market is supposed to operate more efficiently. Zarnikau, Woo, and Baldick notice that marginal heat rates declined after the shift, evidence consistent with more efficient operations. The efficiency gain suggests generators are operating at an overall lower cost, which means even with lower prices generator profits could be higher now than they would have been. It all depends on whether the drop in cost was larger or smaller than the drop in prices.
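That comparison is simple margin arithmetic. A sketch with invented numbers (heat rates in MMBtu/MWh, gas in $/MMBtu) of how a roughly 2 percent price drop can coexist with higher operating margins:

```python
# Margin arithmetic for a gas-fired generator -- all numbers are invented.
GAS_PRICE = 4.0  # $/MMBtu

def margin(price, heat_rate):
    """$/MWh operating margin: energy price minus fuel cost."""
    return price - heat_rate * GAS_PRICE

# Suppose the nodal market cut the effective marginal heat rate (more
# efficient dispatch) at the same time prices fell about 2 percent.
price_zonal, heat_rate_zonal = 50.0, 9.0  # margin: 50 - 36 = $14/MWh
price_nodal, heat_rate_nodal = 49.0, 8.5  # margin: 49 - 34 = $15/MWh

m_zonal = margin(price_zonal, heat_rate_zonal)
m_nodal = margin(price_nodal, heat_rate_nodal)
print(f"zonal margin: ${m_zonal:.0f}/MWh, nodal margin: ${m_nodal:.0f}/MWh")
# Lower prices, yet a higher margin: the cost drop exceeded the price drop.
```

With these made-up figures the $1/MWh price decline is more than offset by a $2/MWh fuel-cost decline, so the investment signal improves even as prices fall; reverse the relative sizes and the opposite holds, which is exactly the author’s point.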

The cost and profit changes will differ across generators depending on where they are located, what fuel they use, and how they typically operate. I’ll hazard the guess that relatively efficient natural gas plants have seen their profits increase a bit, whereas less efficient gas plants, nuclear plants, and coal plants have likely seen profits fall a little.

FULL CITE: Zarnikau, J., C. K. Woo, and R. Baldick. “Did the introduction of a nodal market structure impact wholesale electricity prices in the Texas (ERCOT) market?” Journal of Regulatory Economics 45.2 (2014): 194-208.

Here is a link to a non-gated preliminary version if you don’t have direct access to the Journal of Regulatory Economics.

AN ASIDE: One modest irony out of Texas: the multi-billion dollar CREZ transmission line expansion, mostly intended to support delivery of wind energy from West Texas into the rest of the state, has turned out to be used more to support the import of power from elsewhere in the state to meet the demands of a rapidly growing Permian Basin-based oil and gas industry.

Court says no to FERC’s negawatt payment rule

Jeremy Jacobs and Hannah Northey at Greenwire report “Appeals court throws out FERC’s demand-response order“:

A federal appeals court today threw out a high-profile Federal Energy Regulatory Commission order that provided incentives for electricity users to consume less power, a practice dubbed demand response.

In a divided ruling, the U.S. Court of Appeals for the District of Columbia Circuit struck a blow to the Obama administration’s energy efficiency efforts, vacating a 2011 FERC order requiring grid operators to pay customers and demand-response providers the market value of unused electricity.

Among environmentalists this demand-response-enabled “unused electricity” is sometimes described as negawatts. The rule required FERC-regulated wholesale electric power markets to pay demand-response providers the full market price of electricity. It is, of course, economic nonsense, pursued in the effort to boost demand response programs in FERC-regulated markets.

The court held that FERC significantly overstepped the commission’s authority under the Federal Power Act.

The Federal Power Act assigns most regulatory authority over retail electricity prices to the states, and the court said FERC’s demand response pricing rule interfered with state regulators’ authority.

Personally, I would have dinged FERC’s rule for economic stupidity, but maybe that isn’t the court’s job. Actually, I did ding FERC’s rule for its economic stupidity: I was one of twenty economists joining an amicus brief in the case arguing that the FERC pricing rule didn’t make sense. The court’s decision gave our brief a nod:

Although we need not delve now into the dispute among experts, see, e.g., Br. of Leading Economists as Amicus Curiae in Support of Pet’rs, the potential windfall to demand response resources seems troubling, and the Commissioner’s concerns are certainly valid. Indeed, “overcompensation cannot be just and reasonable,” Order 745-A, 2011 WL 6523756, at *38 (Moeller, dissenting), and the Commission has not adequately explained how their system results in just compensation.
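
The overcompensation point is easy to put in numbers. A customer who curtails consumption already avoids paying the retail rate for that energy, so paying the full wholesale price on top of the avoided bill yields a total gain above the market value of the power. A sketch with made-up prices (the $100 and $40 figures are illustrative assumptions, not from the case record):

```python
# Hypothetical prices, for illustration only.
lmp = 100.0   # wholesale locational marginal price, $/MWh
g = 40.0      # retail rate the customer avoids by not consuming, $/MWh

# Under the vacated rule, curtailing 1 MWh earns the full LMP,
# on top of the retail charge the customer already avoids paying:
gain_under_rule = lmp + g          # 140: more than the energy is worth

# The alternative argued in the economists' brief, roughly "LMP minus G,"
# makes the curtailer's total gain equal the market value of the energy:
gain_under_lmp_minus_g = (lmp - g) + g   # 100: exactly the market value
```

The $40 gap between the two is the windfall the court found troubling.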

But if this negawatt-market price idea survives the appeals court rejection and takes off in the energy policy area, I have the following idea: I’d really like a Tesla automobile, but the current price indicates that Teslas are in high demand, so I’m not going to buy one today. Okay, now who is going to pay me $90,000 for the nega-Tesla I just made?

The case for allowing negative electricity prices – Benedettini and Stagnaro

Simona Benedettini and Carlo Stagnaro make the case for allowing negative prices in electric power markets in Europe. A few of the larger power markets in Europe allow prices to go negative, but others retain a zero-price lower limit. Benedettini and Stagnaro explain both why it is reasonable, economically speaking, to allow electricity prices to go negative and the hazards of retaining a zero-price minimum in a market interconnected with markets that allow the more efficient negative prices.

It is all good, but I can’t resist quoting this part:

Negative prices are not just the result of some abstruse algorithm underlying the power exchange and the functioning of the power system. They are also, and more fundamentally, the way in which the market conveys the decentralized information that is distributed among all market participants, and that cannot be centralized in one single brain, as Nobel-prize winner Friedrich Hayek would say. That information is translated into two major market signals, which are embodied in negative prices.

In the short run, negative prices show that there is a local condition of oversupply under which electricity is not an economic good which society is willing to pay for, but an economic bad for which consumers should be compensated. Therefore, negative prices create an economic incentive for consumers to shift their consumption patterns so as to capture the opportunity of being paid, instead of paying, to receive energy….

However, in the long run, negative prices talk to energy producers, not to energy consumers. The emergence of negative prices, although strongly conditioned by demand-side constraints, shows that the generating fleet encompasses too much “rigid” capacity (i.e. too much nuclear and coal-fuelled plants) and too little “flexible” capacity (for example CCGTs or turbo-gas power plants); or that grid interconnections are insufficient to properly exploit the spare, flexible capacity available within a market area.

So far as I know, all of the regional power markets in the United States now allow prices to go negative. The connections between wind power policy and negative prices have politicized the issue a bit in the United States. Benedettini and Stagnaro explain in a straightforward manner why, no matter what you think of renewable energy policies, you ought to favor allowing wholesale power market prices to go negative.
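
Their long-run point about “rigid” capacity can be illustrated with toy numbers: an inflexible plant will rationally offer power below zero whenever the loss from running through a negative-price hour is smaller than the cost of shutting down and restarting. All figures here are hypothetical:

```python
# Hypothetical figures for a "rigid" baseload unit in an oversupplied hour.
neg_price = -20.0      # $/MWh: consumers are paid to take power
mwh = 100              # the unit's output during the hour
fuel_cost = 5.0        # marginal running cost, $/MWh
cycling_cost = 40_000  # cost of shutting the unit down and restarting it

loss_if_running = (neg_price - fuel_cost) * mwh   # -$2,500 for the hour

# Running at a loss still beats paying to cycle off and back on,
# so the plant keeps generating and the price stays negative.
keeps_running = loss_if_running > -cycling_cost
```

A flexible CCGT with a low cycling cost would shut down instead, which is exactly why a fleet heavy in rigid capacity produces more negative-price hours.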