The conversations about the “utility death spiral” to which I alluded in my recent post have included discussion of the potential for “grid defection”. Grid defection is an important phenomenon in any network industry — what happens when you use scarce resources to build a network that provides value for consumers, and then over time, with innovation and dynamism, they find alternative ways of capturing that value (and/or more or different value)? Whether it’s a public transportation network, a wired telecommunications network, a water and sewer network, or a wired electricity distribution network, consumers can and do exit when they perceive the alternatives available to them as more valuable than the network alternative. Of course, those four cases differ because of differences in transaction costs and regulatory institutions — making exit from a public transportation network illegal (i.e., making private transportation illegal) is much less likely, and less valuable, than making private water supply in a municipality illegal. But two common elements across these four infrastructure industries are interesting: the high-fixed-cost nature of the network infrastructure and the resulting economies of scale, and the potential for innovation and technological change to alter the relative value of the network.
The first common element in network industries is the high fixed costs associated with constructing and maintaining the network, and the associated economies of scale typically found in such industries. This cost structure has long been the justification for either economic regulation or municipal supply in the industry — the cheapest per-unit way to provide large quantities is to have one provider and not to build duplicate networks, and to stipulate product quality and degrees of infrastructure redundancy to provide reliable service at the lowest feasible cost.
What does that entail? Cost-based regulation. Spreading those fixed costs out over as many consumers as possible to keep the product’s regulated price as low as feasible. If there are different consumers that can be categorized into different customer classes, and if for economic or political reasons the utility and/or the regulator have an incentive to keep prices low for one class (say, residential customers), then other types of consumers may bear a larger share of the fixed costs than they would if, for example, the fixed costs were allocated according to each class’s share of the volume of network use (this is called cross-subsidization). Cost-based regulation has been the typical regulatory approach in these industries, and cross-subsidization has been a characteristic of regulated rate structures. The classic reference for this analysis is Faulhaber (American Economic Review, 1975).
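The mechanics of cross-subsidization can be sketched with a toy allocation of fixed costs across customer classes. All of the numbers below (the fixed-cost total, the class shares) are made up for illustration, not drawn from any actual rate case:

```python
# Stylized illustration of cross-subsidization in fixed-cost allocation.
# All numbers are hypothetical.

FIXED_COSTS = 1_000_000  # annual network fixed costs ($)

# Each class's share of total network volume (kWh)
volume_share = {"residential": 0.40, "commercial": 0.35, "industrial": 0.25}

# Volume-based allocation: each class pays in proportion to its use
volume_based = {c: FIXED_COSTS * s for c, s in volume_share.items()}

# Politically driven allocation: the regulator holds the residential share
# down, shifting fixed costs onto the other classes (a cross-subsidy)
regulated_share = {"residential": 0.25, "commercial": 0.40, "industrial": 0.35}
regulated = {c: FIXED_COSTS * s for c, s in regulated_share.items()}

for c in volume_share:
    subsidy = volume_based[c] - regulated[c]
    direction = "receives" if subsidy > 0 else "provides"
    print(f"{c:11s} pays ${regulated[c]:>9,.0f}; "
          f"{direction} ${abs(subsidy):,.0f} in cross-subsidy")
```

The point of the sketch is only that both allocations recover the same fixed-cost total; what differs is who bears it, and that difference is invisible in a volumetric price.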
Both in theory and in practice these institutions can work as long as the technological environment is static. But the technological environment is anything but static; it has had periods of stability but has always been dynamic, and that dynamism is the foundation of increased living standards over the past three centuries. Technological dynamism creates new alternatives to the existing network industry. We have seen this happen in the past two decades with mobile communications eroding the value of wired communications at a rapid rate, and that history animates the concern in electricity that distributed generation will make the distribution network less valuable and will disintermediate the regulated distribution utility, the wires owner, which relies on the distribution transaction for its revenue. It also traditionally relies on the ability to cross-subsidize across different types of customers, by charging different portions of those fixed costs to different types of customers, and that’s a pricing practice that mobile telephony also made obsolete in the communications market.
Alternatives to the network grid may have higher value to consumers in their estimation (never forget that value is subjective), and they may be willing to pay more to achieve that value. This is why most of us now pay more per month for communications services than we did pre-1984 in our monthly phone bill. As customers leave the traditional network to capture that value, though, those network fixed costs are now spread over fewer network customers. That’s the Achilles heel of cost-based regulation. And that’s a big part of what drives the “death spiral” concern — if customers increasingly self-generate and leave the network, who will pay the fixed costs? This question has traditionally been the justification for regulators approving utility standby charges, so that if a customer self-generates and has a failure, that customer can connect to the grid and get electricity. Set those rates too high, and distributed generation’s economic value falls; set those rates too low, and the distribution utility may not cover the incremental costs of serving that customer. That range can be large.
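The feedback loop behind the “death spiral” concern can be sketched as a toy simulation: fixed costs spread over fewer remaining customers raise the per-customer charge, which pushes more customers to defect. Every number here (fixed costs, the alternative’s cost, the 10% exit rate) is an illustrative assumption, not an estimate:

```python
# Toy "death spiral" dynamic under cost-based regulation.
# All parameters are hypothetical assumptions for illustration.

FIXED_COSTS = 1_500_000   # network fixed costs to recover ($/yr)
customers = 10_000        # customers initially on the network
defection_cost = 130.0    # assumed annualized cost of self-generation ($/yr)

for year in range(1, 6):
    charge = FIXED_COSTS / customers  # per-customer fixed-cost recovery
    if charge > defection_cost:
        # Suppose 10% of remaining customers defect whenever the grid
        # charge exceeds their alternative's cost
        customers = int(customers * 0.9)
    print(f"year {year}: charge ${charge:,.2f}/customer, {customers:,} remain")
```

Because the charge rises every time customers leave, the gap between the grid charge and the alternative widens each round; the spiral feeds itself rather than converging.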
This is not a new conversation in the industry or among policy makers and academics. In fact, here’s a 2003 Electricity Journal article by friend-of-KP Sean Casten, who works in recycled energy and combined heat and power (CHP), arguing against standby charges. In 2002 I presented a paper at the International Association for Energy Economics annual meetings in which I argued that distributed generation and storage would make the distribution network contestable, and after the Northeast blackout in 2003 Reason released a version of the paper as a policy study. One typical static argument for a single, regulated wires network is to eliminate costly duplication of infrastructure in the presence of economies of scale. But my argument is dynamic: innovation and technological change that competes with the wires network need not be duplicative wires, and DG+storage is an example of innovation that makes a wires network contestable.
Another older conversation that is new again was the DISCO of the Future Forum, hosted over a year or so in 2001-2002 by the Center for the Advancement of Energy Markets. I participated in this forum, in which industry, regulators, and researchers worked together to “game out” different scenarios for the distribution company business model in the context of competitive wholesale and retail markets. This 2002 Electric Light & Power article summarizes the effort and the ultimate report; note in particular this description of the forum from Jamie Wimberly, then-CAEM president (and now CEO of EcoAlign):
“The primary purpose of the forum was to thoroughly examine the issues and challenges facing distribution companies and to make consensus-based recommendations that work to ensure healthy companies and happy customers in the future,” he said. “There is no question much more needs to be discussed and debated, particularly the role of the regulated utility in the provision of new product offerings and services.”
Technological dynamism is starting to make the distribution network contestable. Now what?
The first step should be for regulators to properly allocate utilities’ fixed and variable costs to the fixed and variable portions of their rates, rather than requiring that a significant portion of the fixed costs be recovered through the variable portion of their rates. That step would largely solve the problem of overcompensation of self generators under net metering. It would also largely eliminate the variations in utility earnings which result from significant seasonal weather variations or changes in the level of economic activity.
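The rate design proposed in the comment above can be illustrated with a toy two-part tariff. The numbers are hypothetical; the point is only that a fully net-metered self-generator escapes the fixed costs entirely when those costs ride in the volumetric (per-kWh) rate, but still pays them under a properly sized fixed charge:

```python
# Sketch of the proposal: recover fixed costs through the fixed (per-customer)
# charge and variable costs through the per-kWh rate. Numbers are hypothetical.

fixed_cost_per_customer = 20.0  # $/month of network fixed costs
variable_cost_per_kwh = 0.08    # $/kWh of energy-related cost
usage_kwh = 500                 # monthly consumption
self_gen_kwh = 500              # net-metered customer offsets all consumption

def bill(fixed_charge, volumetric_rate, net_kwh):
    """Two-part tariff: fixed charge plus rate on net (non-negative) usage."""
    return fixed_charge + volumetric_rate * max(net_kwh, 0)

# (a) Common practice: fixed costs folded into the volumetric rate
rate_a = variable_cost_per_kwh + fixed_cost_per_customer / usage_kwh  # $0.12/kWh
bill_a = bill(0.0, rate_a, usage_kwh - self_gen_kwh)   # fixed costs go unrecovered

# (b) The proposal: fixed costs recovered in the fixed charge
bill_b = bill(fixed_cost_per_customer, variable_cost_per_kwh,
              usage_kwh - self_gen_kwh)                # fixed costs still recovered

print(f"volumetric-only bill: ${bill_a:.2f}; fixed+variable bill: ${bill_b:.2f}")
```

Under design (a) the self-generator’s bill is zero and the utility’s fixed costs shift to other customers; under design (b) the self-generator still contributes its share of the network’s fixed costs.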
Ed, I couldn’t agree more.
And I remember that the CAEM DISCO of the Future Forum is where you and I met! You, Tom Casten, and several other people whose thinking has really contributed to mine.
Lynne, everything is fine, but to what extent is the death spiral due to a combination of taxes on conventional energy and subsidies to self-consumed renewables? That’s certainly the major issue in the EU. What we see is not genuine market dynamics but a policy-driven mess.
Carlo,
The situation in the US is much the same. Utilities are taxed. Utilities are required to meet Renewable Portfolio Standards, which increase wholesale power costs. Utilities are required to take all available wholesale renewable power, regardless of cost or operating penalties. Wholesale renewable power is subsidized.
Also, on-site renewable power installations are subsidized. Utilities are required to use net metering, netting full retail price to generators, which includes a portion of the utilities’ fixed costs and return on ratebase. The fixed cost and return fractions must eventually be passed on to non-generating customers, raising their rates and increasing the appeal of self-generation.
@firetoice2014 @Lynne : The fact that a significant portion of fixed costs is recovered through the variable rates means that large consumers subsidize the fixed part of the consumption of small consumers. The trouble with changing that is that electricity would become unaffordable for many small consumers, given the staggering price relative to the amount actually consumed.
As a result, many small consumers would be pushed outside the market which would be negative even for the large ones who would end up with a larger part of the fixed cost to pay.
So this system is optimal in the sense that it gets as many people as possible into the fixed-costs boat, and it is not at all obvious how to change it.
@jmdesp
The costs are what they are; and, they should be borne by those who impose them. The overriding issues are: that the allocation of these costs varies from state to state, at the whim of the state regulators; and, that these allocations create cross-subsidies which distort the rational function of the markets. Net metering merely adds to these distortions, to the detriment of the utilities and their non-generating customers.
I seriously doubt that you could demonstrate that “this system is optimal”. Actually, I doubt that you could demonstrate that current utility regulation is optimal.