Charging for non-customer-specific fixed costs

UC Berkeley economist Severin Borenstein has a really, really great post at the Energy at Haas blog on utility fixed charges to recoup system fixed costs. If you want a primer on volumetric versus two-part pricing, this is a good one. After a very clear and cogent explanation and illustration of the differences among variable costs, customer-specific fixed costs, and system fixed costs, he says

Second, as everyone who studies electricity markets knows (and even much of the energy media have grown to understand), the marginal cost of electricity generation goes up at higher-demand times, and all generation gets paid those high peak prices.  That means extra revenue for the baseload plants above their lower marginal cost, and that revenue can go to pay the fixed costs of those plants, as I discussed in a paper back in 1999. …

The same is not true, however, for distribution costs.  Retail prices don’t rise at peak times and create extra revenue that covers fixed costs of distribution.  That creates a revenue shortfall that has to be made up somewhere. Likewise, the cost of customer-specific fixed costs don’t get compensated in a system where the volumetric charge for electricity reflects its true marginal cost.

He continues with a good discussion of the lack of a theoretical economic principle informing distribution fixed costs.

I want to take it in another, complementary, direction. The asymmetry he points out is, of course, an artifact of cost-based regulated rate recovery, which means that this challenge will arise even under retail competition, even though his explanation of it is articulated under fixed, regulated rates. And the fact that late-night regulated rates are higher than energy costs may not generate an excess sufficient to cover the system fixed cost portion in the way he describes happening with wholesale markets and transmission fixed costs. This is a thorny problem of cost-based regulation.

Consider a regulated, vertically-integrated distribution utility. This utility offers a menu of contracts — a fixed price, a TOU price, and a real-time price (the attentive among you will notice that this setup approximates what we studied in the GridWise Olympic Peninsula Project). It’s possible, as David Chassin and Ross Guttromson demonstrated, for the utility to find an efficient frontier among these three contract types to maximize expected revenue in aggregate across the groups of customers choosing among those contracts. That’s a situation in which retail revenue does vary, driven especially by the RTP customers, and revenue can be higher to the extent that there’s a core of inelastic retail demand. But they still have to figure out a principle, a rule, a metric, an algorithm for sharing those distribution system fixed costs, or for taking them into account when setting their fixed and TOU prices. And then to be non-discriminatory, they’d probably have to allocate the same system fixed costs to the RTP customers too. So we’re back where we started.
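To make the allocation issue concrete, here is a toy numerical sketch (all prices, loads, and costs are invented for illustration; this is not the Chassin-Guttromson model): compute expected daily energy revenue across the three contract types, then allocate the distribution system fixed costs as a uniform, non-discriminatory per-customer charge.

```python
# Toy sketch (illustrative numbers, not the GridWise Olympic Peninsula model):
# daily revenue from a menu of fixed, TOU, and RTP contracts, plus a uniform,
# non-discriminatory per-customer allocation of distribution fixed costs.

HOURS_OFFPEAK, HOURS_PEAK = 16, 8
WHOLESALE = {"offpeak": 0.03, "peak": 0.12}    # $/kWh energy cost

FIXED_PRICE = 0.09                             # $/kWh, flat all day
TOU_PRICE = {"offpeak": 0.05, "peak": 0.14}    # $/kWh by period
RTP_MARGIN = 0.01                              # $/kWh adder over wholesale

SYSTEM_FIXED_COST = 6000.0                     # $/day of distribution fixed costs

# (customers, kWh/hour off-peak, kWh/hour on-peak, contract choice)
groups = [
    (1000, 1.0, 1.5, "fixed"),
    (500,  1.0, 1.2, "tou"),
    (100,  1.0, 1.0, "rtp"),
]

def per_customer_revenue(off_kwh, peak_kwh, contract):
    off_total, peak_total = off_kwh * HOURS_OFFPEAK, peak_kwh * HOURS_PEAK
    if contract == "fixed":
        return FIXED_PRICE * (off_total + peak_total)
    if contract == "tou":
        return TOU_PRICE["offpeak"] * off_total + TOU_PRICE["peak"] * peak_total
    # rtp: pay the wholesale price plus a margin, period by period
    return ((WHOLESALE["offpeak"] + RTP_MARGIN) * off_total
            + (WHOLESALE["peak"] + RTP_MARGIN) * peak_total)

energy_revenue = sum(n * per_customer_revenue(off, pk, c)
                     for n, off, pk, c in groups)

# Non-discriminatory allocation: every customer bears the same fixed charge,
# regardless of which contract they chose.
fixed_charge = SYSTEM_FIXED_COST / sum(n for n, *_ in groups)

print(f"energy revenue: ${energy_revenue:,.2f}/day")
print(f"uniform fixed charge: ${fixed_charge:.2f}/customer/day")
```

Note that the fixed-cost question is entirely separate from the energy-price question: nothing in the energy tariffs tells you how to set the fixed charge, which is exactly the point.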

And this is also the case under retail competition. Take, for example, this table of delivery charges in Texas, where the regulated utilities are transmission and distribution wires companies.  It breaks them down between customer fixed charges and system fixed charges, but it’s still the same type of scenario as Severin describes.

As long as there’s a component of the value chain that’s cost-recovery regulated, and as long as that component has system-specific and customer-specific fixed costs, this question will have to be part of the analysis.

A related question is whether, or how, the regulated utility will be permitted to provide services that generate new revenue streams that will allow them to cover those costs. That’s a thicket I’ll crawl into another day.

Platform economics and “unscaling” the electricity industry

A few weeks ago I mused over the question of whether there would ever be an Uber or Airbnb for the electricity grid. This question is a platform question — both Uber and Airbnb have business models in which they bring together two parties for mutual benefit, and the platform provider’s revenue stream can come from charging one or both parties for facilitating the transaction (although there are other means too). I said that a “P2P platform very explicitly reduces transaction costs that prevent exchanges between buyer and seller”, and that’s really the core of a platform business model. Platform providers exist to make exchanges feasible that were not before, to make them easier, and ultimately to make them either cheaper or more valuable (or some combination of the two).

In this sense the Nobel Prize award to Jean Tirole (pdf, very good summary of his work) this week was timely, because one of the areas of economics to which he has contributed is the economics of two-sided platform markets. Alex Tabarrok wrote an excellent summary of Tirole’s platform economics work. As Alex observes,

Antitrust and regulation of two-sided markets is challenging because the two sets of prices [that the platform firm charges to the two parties] may look discriminatory or unfair even when they are welfare enhancing. … Platform markets mean that pricing at marginal cost can no longer be considered optimal in every market and pricing above marginal cost can no longer be considered as an indication of monopoly power.

One aspect of platform firms is that they connect distinct users in a network. Platform firms are network firms. Not all network firms/industries operate or think of their business models as platform firms, though. That will change.

What role does a network firm provide? It’s connection, facilitating exchange between two parties. This idea is not novel, not original in the digital age. Go back in economic history to the beginnings of canals, say, or rail networks. Transportation is a quintessential non-digital network platform industry. I think you can characterize all network infrastructure industries as having some aspects of platform or two-sided markets; rail networks bring together transportation providers and passengers/freight, postal networks bring together correspondents, pipeline networks bring together buyers and sellers of oil or natural gas, electric wires networks bring together generators and consumers.

What’s novel in the digital age is that by changing transaction costs, the technology changes the transactional boundary of the firm and reduces the economic impetus for vertical integration. A digital platform firm, like Google or Uber, is not vertically integrated upstream or downstream in any of the value chains that its platform enables (although some of Google’s acquisitions are changing that somewhat), whereas historically, railroads and gas companies and electric companies started out vertically integrated. Rail network owners were vertically integrated upstream into train ownership and transportation provision, and electric utilities were integrated upstream into generation. In network infrastructure industries, the platform is physical, and firms bundled the network service into their offering. But they have not been seen or thought of as platforms in the sense that we are coming to understand as such firms and industries emerge; I suspect that’s because of the economic benefit and the historical path dependence of the vertical integration.

Another distinguishing feature of platforms and two-sided markets is that the cost-revenue relationship is not uni-directional, a point summarized well in this Harvard Business Review article overview from 2006:

Two-sided networks can be found in many industries, sharing the space with traditional product and service offerings. However, two-sided networks differ from other offerings in a fundamental way. In the traditional value chain, value moves from left to right: To the left of the company is cost; to the right is revenue. In two-sided networks, cost and revenue are both to the left and the right, because the platform has a distinct group of users on each side. The platform incurs costs in serving both groups and can collect revenue from each, although one side is often subsidized, as we’ll see.

In this sense, I still think that the electricity network and its transactions has platform characteristics — the wires firm incurs costs to deliver energy from generators to consumers, and those costs arise in serving both distinct groups.
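A stylized numerical sketch can make the two-sided logic concrete. The functional forms and numbers below are my own illustration, not drawn from Tirole or the HBR article: a platform chooses a price for each side, each side's participation depends on the other side's size, and a brute-force search over prices shows that the profit-maximizing pair can price one side below the platform's per-user cost, i.e., subsidize it, just as the quote describes.

```python
# A stylized two-sided pricing sketch (all functional forms and numbers are
# illustrative): a platform charges p_a to side A and p_b to side B; each
# side's participation rises with the size of the other side. A grid search
# over prices shows the profit-maximizing pair can price one side BELOW the
# platform's per-user cost, i.e., subsidize it.

COST = 2.0           # platform's cost of serving one user on either side
S_A, S_B = 0.2, 1.2  # cross-network effects: side A cares a little about
                     # B's size; side B cares a lot about A's size

def sizes(p_a, p_b, iters=60):
    """Fixed point of n_a = 10 - p_a + S_A*n_b, n_b = 10 - p_b + S_B*n_a."""
    n_a = n_b = 0.0
    for _ in range(iters):
        n_a = max(0.0, 10.0 - p_a + S_A * n_b)
        n_b = max(0.0, 10.0 - p_b + S_B * n_a)
    return n_a, n_b

best = (float("-inf"), 0.0, 0.0)
for i in range(101):             # prices searched on a 0.0 .. 10.0 grid
    for j in range(101):
        p_a, p_b = i / 10, j / 10
        n_a, n_b = sizes(p_a, p_b)
        profit = (p_a - COST) * n_a + (p_b - COST) * n_b
        if profit > best[0]:
            best = (profit, p_a, p_b)

profit, p_a, p_b = best
print(f"profit-maximizing prices: A = {p_a:.1f}, B = {p_b:.1f}")
print(f"side A is subsidized (priced below cost {COST}): {p_a < COST}")
```

In this parameterization the platform holds side A's price below its per-user cost because side A's presence is what side B is willing to pay for, which is precisely why, as Tabarrok notes, pricing above (or below) marginal cost on one side is no evidence of monopoly power.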

As I apply these concepts to the electricity industry, I think digital technologies have two platform-related types of effects. The first is the reduction in transaction costs that were a big part of the economic drive for vertical integration in the first place — digital technologies make distributed digital sensing, monitoring, and measurement of energy flow and system status possible in ways that were inconceivable or impossibly costly before the invention of the transistor.

The second is the ability that digital technologies create for the network firm to handle more diverse and heterogeneous types of agents in a two-sided market. For example, digital sensors and automated digital switches make it possible to automate rules for the interconnection of distributed generation, electric vehicles, microgrids, and other diverse users into the distribution grid in ways that can be mutually beneficial in a two-sided market sense. The old electro-mechanical sensors could not do that.

This is the sense in which I think a lot of tech entrepreneurs talk about “unscaling the electricity industry”:

If we want secure, clean and affordable energy, we can’t continue down this path. Instead, we need to grow in a very different way, one more akin to the Silicon Valley playbook of unscaling an industry by aggregating individual users onto platforms.

Digitally-enabled distributed resources are becoming increasingly economical at smaller scales, and some of these types of resources — microgrids, electric vehicles — can be either producers or consumers, each having associated costs and revenues, with their identities changing depending on whether they are selling excess energy or buying it.

This is a substantive, meaningful sense in which the distribution wires firm can, and should, operate as a platform and think about platform strategies as the utility business model evolves. An electric distribution platform facilitates exchange in two-sided electricity and energy service markets, charging a fee for doing so. In the near term, much of that facilitation takes the form of distribution, of the transportation and delivery. As distributed resources proliferate, the platform firm must rethink how it creates value, and reaps revenues, by facilitating beneficial exchange in two-sided markets.

Solar generation in key states

I’ve been playing around with some ownership type and fuel source data on electricity generation, using the EIA’s annual data going back to 1990. I looked at solar’s share of the total MWh of electricity generated in eight states (AZ, CA, IL, NC, NJ, NY, OH, TX), 1990-2012, and expressed it as a percentage of that total; here’s what I got:

[Figure: solar share since 1990]

In looking at the data and at this graph, a few things catch my attention. California (the green line) clearly has an active solar market throughout the entire period, much of which I attribute to the implementation of qualifying facilities regulations following PURPA’s passage in 1978 (although I’m happy to be corrected if I’m mistaken). The other seven states here have little or no solar market until the late 2000s; Arizona (which starts having solar in 2001) and Texas (some solar before restructuring, then none, then an increase) are exceptions to the general pattern.

Of course the most striking pattern in these data is the large uptick in solar shares in 2011 and 2012. That uptick is driven by several factors, both economic and regulatory, and trying to disentangle those factors is part of what I’m working on currently. I’m interested in the development of and change in the residential solar market, and how the extent and type of regulatory policy influences the extent and type of innovation and changing market boundaries that ensue. Another way to parse the data is by ownership type, and how that varies by state depending on the regulatory institutions in place. In a state like North Carolina (teal), still vertically-integrated, both the regulated utility and independent power producers own solar. The path to market, and indeed whether or not you can actually say that a residential solar market qua market exists, differs in a vertically-integrated state from, say, New Jersey (orange) or Illinois (purple, but barely visible), where thus far the residential solar market is independent, and the regulated utility does not participate (again, please correct me if I’m mistaken).

It will be interesting to see what the 2013 data tell us, when the EIA releases them in November. But even in California with that large uptick, solar’s share of total MWh generated does not go above 2 percent, and is substantially smaller in other states.

What do you see here? I know some of you will want to snark about subsidies for the uptick, but please keep it substantive :-).
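For anyone who wants to replicate the calculation from the EIA annual files, the arithmetic is simply solar MWh divided by total MWh per state-year. The numbers below are placeholders standing in for the actual EIA data:

```python
# A minimal sketch of the share calculation. The MWh figures here are made-up
# placeholders, NOT actual EIA data; the real series runs 1990-2012 by state.
# solar share = 100 * solar MWh / total MWh, by state and year.

# {state: {year: (solar_mwh, total_mwh)}} -- illustrative values only
generation = {
    "CA": {2010: (900_000, 200_000_000), 2012: (2_600_000, 199_000_000)},
    "NJ": {2010: (100_000,  65_000_000), 2012: (700_000,   65_000_000)},
}

def solar_share(state, year):
    """Solar's percentage share of total generation for one state-year."""
    solar, total = generation[state][year]
    return 100.0 * solar / total

for state, years in sorted(generation.items()):
    for year in sorted(years):
        print(f"{state} {year}: {solar_share(state, year):.2f}% solar")
```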

Why does a theory of competition matter for electricity regulation?

For the firms in regulated industries, for the regulators, for their customers, does the theory underlying the applied regulation matter? I think it matters a lot, even down in the real-world trenches of doing regulation, because regulation’s theoretical foundation influences what regulators and firms do and how they do it. Think about a traditional regulated industry like electricity — vertically integrated because of initial technological constraints, with technologies that enable production of standard electric power service at a particular voltage range with economies of scale over the relevant range of demand.

When these technologies were new and the industry was young, the economic theory of competition underlying the form that regulation took was what we now think of as a static efficiency/allocation-focused model. In this model, production is represented by a known cost function with a given capital-labor ratio; that function is the representation of the firm and of its technology (note here how the organization of the firm fades into the background, to be re-illuminated starting in the mid-20th century by Coase and other organizational and new institutional economists). In the case of a high fixed cost industry with economies of scale, that cost function’s relevant characteristic is declining long-run average cost as output produced increases. On the demand side, consumers have stable preferences for this well-defined, standard good (electric power service at a particular voltage range).

In this model, the question is how to maximize total surplus given the technology, cost function, and preferences. This is the allocation question, and it’s a static question, because the technology, cost function, and preferences are given. The follow-on question in an industry with economies of scale is whether or not competition, rivalry among firms, will yield the best possible allocation, with the largest total surplus. The answer from this model is no: compared to the efficient benchmark where firms compete by lowering price to marginal cost, a “natural monopoly” industry/firm/cost structure cannot sustain P=MC because of the fixed costs, but price equal to average cost (where economic profits are “normal”) is not a stable equilibrium. The model indicates that the stable equilibrium is the monopoly price, with associated deadweight loss. But that P=AC point yields the highest feasible total surplus given the nature of the cost function. Thus this static allocative efficiency model is the justification for regulation of prices and quantities in this market, to make the quantity at which P=AC a stable outcome.
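The argument can be checked with a toy cost and demand specification (the numbers are illustrative, not from any actual utility): at P=MC the firm loses exactly its fixed cost, at P=AC it breaks even, and total surplus at P=AC exceeds surplus at the monopoly price.

```python
# Illustrative natural-monopoly arithmetic: fixed cost F, constant marginal
# cost MC, linear inverse demand P = A - B*Q. At P = MC the firm cannot
# cover F; at P = AC it breaks even; and total surplus at P = AC exceeds
# total surplus at the unregulated monopoly price.

F = 1000.0          # fixed (network) cost
MC = 2.0            # constant marginal cost
A, B = 12.0, 0.01   # inverse demand P = A - B*Q

def quantity(p):
    return (A - p) / B

def profit(p):
    return (p - MC) * quantity(p) - F

def total_surplus(p):
    q = quantity(p)
    consumer = 0.5 * (A - p) * q          # triangle under demand, above price
    return consumer + (p - MC) * q - F    # plus producer surplus net of F

# P = MC: efficient output, but the fixed cost goes unrecovered.
assert profit(MC) == -F

# P = AC: solve (p - MC) * quantity(p) = F, a quadratic in p; lower root.
disc = ((A + MC) ** 2 - 4 * (A * MC + B * F)) ** 0.5
p_ac = (A + MC - disc) / 2
assert abs(profit(p_ac)) < 1e-6           # normal (zero) economic profit

# Unregulated monopoly price: maximize (p - MC)*quantity(p) -> (A + MC)/2.
p_m = (A + MC) / 2

print(f"P=AC price:     {p_ac:.2f}, surplus {total_surplus(p_ac):,.0f}")
print(f"monopoly price: {p_m:.2f}, surplus {total_surplus(p_m):,.0f}")
```

With these numbers the break-even price is about 3.13 versus a monopoly price of 7, and surplus at P=AC is the largest attainable without running the firm at a loss, which is the static rationale for regulating price toward average cost.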

The theory of competition underlying this regulatory model is the static efficiency model, that competition is beneficial because it enables rival firms to bid prices down to P=MC, simultaneously maximizing firm profits, consumer surplus, and output produced (all the output that’s worth producing gets produced). Based on this model, legislators, regulators, and industry all influenced the design of regulation’s institutional details — rate-of-return regulation to target firm profits at “normal” levels, deriving retail prices from that, and erecting an entry barrier to exclude rivals while requiring the firm to serve all customers.

So what? I’ve just argued that regulatory institutional design is grounded in a theory of competition. If institutional designers hold a particular theory about what competition does and how it does it, that theory will inform their design to achieve their policy objectives. Institutional design is a function of the theory of competition, the policy objectives, and the ability/interest of industry to influence the design. If your theory of competition is the static allocative efficiency theory, you will design institutions to target the static efficient outcome in your model (in this case, P=AC). You start with a policy objective or a question to explore and a theory of competition, and out of that you derive an institutional design.

But what if competition is beneficial for other reasons, in other ways? What if the static allocative efficiency benefits of competition are just a single case in a larger set of possible outcomes? What if the phenomena we want to understand, the question to explore, the policy objective, would be better served by a different model? What if the world is not static, so the incumbent model becomes less useful because our questions and policy objectives have changed? Would we design different regulatory institutions if we use a different theory of competition? I want to try to treat that as a non-rhetorical question, even though my visceral reaction is “of course”.

These questions don’t get asked in legislative and regulatory proceedings, but given the pace and nature of dynamism, they should.

The sharing economy and the electricity industry

In a recent essay, the Rocky Mountain Institute’s Matthew Crosby asks “will there ever be an Airbnb or Uber for the electricity grid?” It’s a good question, a complicated question, and one that I have pondered myself a few times. He correctly identifies the characteristics of such platforms that have made them attractive and successful, and relates them to distributed energy resources (DERs):

What’s been missing so far is a trusted, open peer-to-peer (P2P) platform that will allow DERs to “play” in a shared economy. An independent platform underlies the success of many shared economy businesses. At its core, the platform monetizes trust and interconnection among market actors — a driver and a passenger, a homeowner and a visitor, and soon, a power producer and consumer — and allows users to both bypass the central incumbent (such as a taxi service, hotel, or electric utility) and go through a new service provider (Uber, Airbnb, or in the power sector, Google).

Now, as millions gain experience and trust with Airbnb, Uber and Lyft, they may likely begin to ask, “Why couldn’t I share, sell or buy the energy services of consumer-owned and -sited DERs like rooftop solar panels or smart thermostats?” The answer may lie in emerging business models that enable both peer-to-peer sharing of the benefits of DERs and the increased utilization of the electric system and DERs.

A P2P platform very explicitly reduces transaction costs that prevent exchanges between buyer and seller, earning revenue via a commission per transaction (and this is why Uber has in its sights such things as running your errands for you (video)). That reduction allows owners of underutilized assets (cars, apartments, solar panels, and who knows what else will evolve) to make someone else better off by selling them the use of that asset. Saying it that way makes the static welfare gain to the two parties obvious, but think also about the dynamic welfare gain — you are more likely, all other things equal, to invest in such an asset or to invest in a bigger/nicer asset if you can increase its capacity utilization. Deregulation catalyzed this process in the airline industry, and digital technology is catalyzing it now in rides and rooms. This prospect is exciting for those interested in accelerating the growth of DERs.
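The dynamic point reduces to simple break-even arithmetic, sketched here with invented numbers: an asset that is a loss at owner-only utilization becomes worth buying once P2P rental raises its hours of use.

```python
# Back-of-the-envelope utilization arithmetic (all numbers illustrative):
# an asset with a fixed daily ownership cost becomes worth buying once P2P
# rental raises its utilization, even though the owner's own use alone
# would not justify the purchase.

DAILY_FIXED_COST = 30.0   # $/day to own the asset (financing, depreciation)
VALUE_PER_HOUR = 5.0      # $ of value each hour of use generates
MARGINAL_COST = 1.0       # $/hour of wear and operating cost

def daily_net_value(hours_used):
    """Net value of owning the asset at a given daily utilization."""
    return (VALUE_PER_HOUR - MARGINAL_COST) * hours_used - DAILY_FIXED_COST

owner_only = daily_net_value(4)   # owner uses it 4 hours/day: a loss
with_p2p = daily_net_value(10)    # renting out 6 more hours/day: a gain

# Utilization at which ownership breaks even.
break_even_hours = DAILY_FIXED_COST / (VALUE_PER_HOUR - MARGINAL_COST)

print(f"owner-only:  {owner_only:+.2f} $/day")
print(f"with P2P:    {with_p2p:+.2f} $/day")
print(f"break-even utilization: {break_even_hours:.1f} hours/day")
```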

Note also that Crosby makes an insightful observation when he says that such P2P networks are more beneficial if they have access to a central backbone, which in this case would be the existing electricity distribution grid. Technologically, the edge of the network (where all of the cool distributed stuff is getting created) and the core of the network are complements, not substitutes. That is not and has not been the case in the electricity network, in large part because regulation has largely prevented “innovation at the edge of the network” since approximately the early 20th century and the creation of a standard plug for lights and appliances!

The standard static and dynamic welfare gain arguments, though, are not a deep enough analysis — we need to layer on the political economy analysis of the process of getting from here to there. As the controversies over Uber have shown, this process is often contentious and not straightforward, particularly in industries like rides and electricity, where incumbents have had regulatory entry barriers that create and protect regulatory rents. The incumbents may be in a transitional gains trap, where the rents are capitalized into their asset values, and thus to avoid economic losses to themselves and/or their shareholders, they must argue for the maintenance of the regulatory entry barrier even if overall social welfare would be higher without it (i.e., if a Kaldor-Hicks improvement is possible). The concentration of benefits from maintaining the entry barrier may make this regulation persist, even if in aggregate the diffuse benefits across the non-incumbents are larger than the costs.

That’s one way to frame the current institutional design challenge in electricity. Given that the incumbent utility business model is a regulatory construct, what’s a useful and feasible way to adapt the regulatory environment to the new value propositions that new digital and distributed energy technologies have made possible? If it is likely that the diffuse economic and environmental benefits of P2P electricity exchange are larger than the costs, what does a regulatory environment look like that would enable P2P networks and the distribution grid to be complements and not substitutes? And how would we transfer the resources to the incumbents to get them out of the transitional gains trap, to get them to agree that they will serve as the intelligent digital platform for such innovation?

I think this is the question at the guts of all of the debate over the utility “death spiral”, the future utility business model, and other such innovation-induced dynamism in this industry. I’ve long argued that my vision of a technology-enabled value-creating electricity industry would have such P2P characteristics, with plug-level sensors that enable transactive automated control within the home, and with meshed connections that enable neighbors with electric vehicles and/or rooftop solar to exchange with each other (one place I made that argument was in my 2009 Beesley lecture at the IEA, captured in this 2010 Economic Affairs article). Crosby’s analysis here is consistent with that vision, and that future.

Should regulated utilities participate in the residential solar market?

I recently argued that the regulated utility is not likely to enter a “death spiral”, but that the regulated utility business model is indeed under pressure, and the conversation about the future of that business model is a valuable one.

One area of pressure on the regulated utility business model is the market for residential solar power. Even two years on, this New York Times Magazine article on the residential solar market is fresh and relevant, and even more so given the declining production costs of solar technologies: “Thanks to increased Chinese production of photovoltaic panels, innovative financing techniques, investment from large institutional investors and a patchwork of semi-effective public-policy efforts, residential solar power has never been more affordable.” In states like California, a combination of plentiful sun and state policies designed to induce more use of renewables brought growth in the residential solar market starting in the 1980s. This growth was also grounded in the federal PURPA legislation of 1978 (“conservation by decree”), which required regulated utilities to buy energy from renewable and cogeneration providers at a price determined by the state public utility commission.

Since then, a small but growing independent solar industry has developed in California and elsewhere, and the NYT Magazine article ably summarizes that development as well as the historical lack of interest among regulated utilities in getting involved in renewables themselves. Why generate using a fuel and enabling technology that is intermittent, for which economical storage does not exist, and that does not have the economies of scale that drive the economics of the regulated vertically-integrated cost-recovery-based business model? Why indeed.

Over the ensuing decades, though, policy priorities have changed, and environmental quality now joins energy security and the social objectives of utility regulation. Air quality and global warming concerns joined the mix, and at the margin shifted the policy balance, leading several states to adopt renewable portfolio standards (RPSs) and net metering regulations. California, always a pioneer, has a portfolio of residential renewables policies, including net metering and a state RPS. Note, in particular, the recent changes in California policy regarding residential renewables:

The CPUC’s California Solar Initiative (CPUC ruling – R.04-03-017) moved the consumer renewable energy rebate program for existing homes from the Energy Commission to the utility companies under the direction of the CPUC. This incentive program also provides cash back for solar energy systems of less than one megawatt to existing and new commercial, industrial, government, nonprofit, and agricultural properties. The CSI has a budget of $2 billion over 10 years, and the goal is to reach 1,940 MW of installed solar capacity by 2016.

The CSI provides rebates to residential customers installing solar technologies who are retail customers of one of the state’s investor-owned utilities. Each IOU has a cap on the number of its residential customers who can receive these subsidies, and PG&E has already reached that cap.

Whether the policy is rebates to induce the renewables switch, allowing net metering, or a state RPS (or feed-in tariffs such as those used in Spain and Germany), these policies reflect a new objective in the portfolio of utility regulation, and at the margin they have changed the incentives of regulated utilities. Starting in 2012, when residential solar installations increased, regulated utilities increased their objections to solar power, both on reliability grounds and based on the inequities and existing cross-subsidization built into regulated retail rates (in a state like California, the smallest monthly users of electricity pay much less than their proportional share of the fixed costs of what they consume). My reading has also left me with the impression that if the regulated utilities are going to be subject to renewables mandates to achieve environmental objectives, they would prefer not to have to compete with the existing, and growing, independent producers operating in the residential solar market. The way a regulated monopolist benefits from environmental mandates is by owning assets to meet the mandates.

While this case requires much deeper analysis, as a first pass I want to step back and ask why the regulated distribution utility should be involved in the residential solar market at all. The growth of producers in the residential solar market (Sungevity, SunEdison, SolarCity, etc.) suggests that this is a competitive or potentially competitive market.

I remember asking that question back when this NYT Magazine article first came out, and I stand by my observation then:

Consider an alternative scenario in which regulated distribution monopolists like PG&E are precluded from offering retail services, including rooftop solar, and the competing firms that Himmelman profiled can compete both in how they structure the transactions (equipment purchase, lease, PPA, etc.) and in the prices they offer. One of Rubin’s complaints is that the regulated net metering rate reimburses the rooftop solar homeowner at the full regulated retail price per kilowatt hour, which over-compensates the homeowner for the market value of the electricity product. In a rivalrous market, competing solar services firms would experiment with different prices, perhaps, say, reimbursing the homeowner a fixed price based on a long-term contract, or a varying price based on the wholesale market spot price in the hours in which the homeowner puts power back into the grid. Then it’s up to the retailer to contract with the wires company for the wires charge for those customers — that’s the source of the regulated monopolist’s revenue stream, the wires charge, and it can and should be separated from the net metering transaction and contract.

The presence of the regulated monopolist in that retail market for rooftop solar services is a distortion in and of itself, in addition to the regulation-induced distortions that Rubin identified.
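To see the magnitude of the distortion, here is some illustrative arithmetic (the prices are invented): the same exported energy is worth very different amounts when credited at the full retail rate, a contracted fixed price, or the hourly wholesale spot price.

```python
# Illustrative arithmetic for the net-metering point (made-up prices): the
# same 300 kWh of rooftop exports is worth very different amounts depending
# on whether it is credited at the full regulated retail rate, a contracted
# fixed price, or the wholesale spot price in the export hours.

EXPORT_KWH = 300.0        # monthly energy the household puts back on the grid

RETAIL_RATE = 0.20        # $/kWh full regulated retail price (incl. wires)
CONTRACT_PRICE = 0.08     # $/kWh long-term fixed purchase price
SPOT_PRICES = [0.05, 0.06, 0.04, 0.07]       # $/kWh in the export periods
EXPORTS_BY_HOUR = [100.0, 80.0, 60.0, 60.0]  # kWh exported in those periods

retail_credit = RETAIL_RATE * EXPORT_KWH
contract_credit = CONTRACT_PRICE * EXPORT_KWH
spot_credit = sum(p * q for p, q in zip(SPOT_PRICES, EXPORTS_BY_HOUR))

print(f"full retail-rate credit: ${retail_credit:.2f}")
print(f"fixed-contract credit:   ${contract_credit:.2f}")
print(f"spot-price credit:       ${spot_credit:.2f}")
```

The gap between the retail-rate credit and the spot-price credit is exactly the over-compensation at issue, and under the alternative scenario that gap would be competed over by rival retailers rather than baked into a regulated tariff.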

The regulated distribution utility’s main objective is, and should be, reliable delivery of energy. The existing regulatory structure gives regulated utilities incentives to increase their asset base to increase their rate base, and thus when a new environmental policy objective joins the existing ones, if regulated utilities can acquire new solar assets to meet that objective, then they have an incentive to do so. Cost recovery and a guaranteed rate of return are powerful motivators. But why should they even be a participant in that market, given the demonstrable degree of competition that already exists?

“Grid defection” and the regulated utility business model

The conversations about the “utility death spiral” to which I alluded in my recent post have included discussion of the potential for “grid defection”. Grid defection is an important phenomenon in any network industry — what if you use scarce resources to build a network that provides value for consumers, and then over time, with innovation and dynamism, those consumers find alternative ways of capturing that value (and/or more or different value)? Whether it’s a public transportation network, a wired telecommunications network, a water and sewer network, or a wired electricity distribution network, consumers can and do exit when they perceive the alternatives available to them as more valuable than the network alternative. Of course, those four cases differ because of differences in transaction costs and regulatory institutions — making exit from a public transportation network illegal (i.e., making private transportation illegal) is much less likely, and less valuable, than making private water supply in a municipality illegal. But two of the common elements across these four infrastructure industries are interesting: the high fixed cost nature of the network infrastructure and the resulting economies of scale, and the potential for innovation and technological change to alter the relative value of the network.

The first common element in network industries is the high fixed costs associated with constructing and maintaining the network, and the associated economies of scale typically found in such industries. This cost structure has long been the justification for either economic regulation or municipal supply in the industry — the cheapest per-unit way to provide large quantities is to have one provider and not to build duplicate networks, and to stipulate product quality and degrees of infrastructure redundancy to provide reliable service at the lowest feasible cost.
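The scale-economies logic here is simple arithmetic, which a stylized calculation makes concrete (all figures below are hypothetical, purely for illustration): with large fixed costs, the average cost per unit falls as volume rises, so a single network serving all customers delivers at lower per-unit cost than duplicate networks splitting the same load.

```python
# Illustrative economies-of-scale arithmetic; all numbers are hypothetical.
fixed_cost = 50_000_000   # annual cost of building/maintaining the network ($)
marginal_cost = 0.03      # cost per delivered kWh ($)

def average_cost(kwh: float) -> float:
    """Average cost per kWh: fixed costs spread over volume, plus marginal cost."""
    return fixed_cost / kwh + marginal_cost

# One network serving 1 billion kWh vs. two duplicate networks splitting the load
one_network = average_cost(1_000_000_000)
two_networks = average_cost(500_000_000)  # each duplicate carries half the volume
print(f"one network: ${one_network:.3f}/kWh; duplicated: ${two_networks:.3f}/kWh")
```

With these assumed numbers, the single network delivers at $0.08/kWh while each of two duplicates delivers at $0.13/kWh — the traditional natural-monopoly case for one regulated provider.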

What does that entail? Cost-based regulation. Spreading those fixed costs out over as many consumers as possible to keep the product’s regulated price as low as feasible. If consumers can be categorized into different customer classes, and if for economic or political reasons the utility and/or the regulator have an incentive to keep prices low for one class (say, residential customers), then other types of consumers may bear a larger share of the fixed costs than they would if, for example, the fixed costs were allocated according to share of the volume of network use (this is called cross-subsidization). Cost-based regulation has been the typical regulatory approach in these industries, and cross-subsidization has been a characteristic of regulated rate structures. The classic reference for this analysis is Faulhaber’s 1975 article in the American Economic Review.
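The cross-subsidy mechanics can be seen in a stylized calculation (all numbers below are hypothetical): when fixed costs are allocated for political reasons rather than by volume share, the favored class pays less than its volume-proportional share and the other classes make up the difference.

```python
# Stylized cross-subsidization example; all numbers are hypothetical.
fixed_costs = 100_000_000  # total system fixed costs ($)

# Annual delivery volumes by customer class (MWh)
volumes = {"residential": 400_000, "commercial": 350_000, "industrial": 250_000}
total_volume = sum(volumes.values())

# Allocation proportional to each class's share of network use
proportional = {c: fixed_costs * v / total_volume for c, v in volumes.items()}

# A politically chosen allocation that shields residential customers
political = {"residential": 25_000_000,
             "commercial":  40_000_000,
             "industrial":  35_000_000}

for c in volumes:
    subsidy = proportional[c] - political[c]  # positive = class is subsidized
    print(f"{c}: volume-share allocation ${proportional[c]:,.0f}, "
          f"actual ${political[c]:,.0f}, cross-subsidy ${subsidy:,.0f}")
```

In this made-up example, residential customers pay $15M less than their volume-proportional share, with commercial and industrial customers covering the shortfall — exactly the rate-structure pattern that becomes fragile once the subsidizing classes can exit.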

Both in theory and in practice these institutions can work as long as the technological environment is static. But the technological environment is anything but static; it has had periods of stability, but it has always been dynamic, and that dynamism is the foundation of increased living standards over the past three centuries. Technological dynamism creates new alternatives to the existing network industry. We have seen this happen in the past two decades with mobile communications eroding the value of wired communications at a rapid rate, and that history animates the concern in electricity that distributed generation will make the distribution network less valuable and will disintermediate the regulated distribution utility, the wires owner, which relies on the distribution transaction for its revenue. It also traditionally relies on the ability to cross-subsidize across different types of customers, by charging different portions of those fixed costs to different types of customers, and that’s a pricing practice that mobile telephony also made obsolete in the communications market.

Alternatives to the network grid may have higher value to consumers in their estimation (never forget that value is subjective), and they may be willing to pay more to achieve that value. This is why most of us now pay more per month for communications services than we did pre-1984 in our monthly phone bill. As customers leave the traditional network to capture that value, though, those network fixed costs are now spread over fewer network customers. That’s the Achilles heel of cost-based regulation. And that’s a big part of what drives the “death spiral” concern — if customers increasingly self-generate and leave the network, who will pay the fixed costs? This question has traditionally been the justification for regulators approving utility standby charges, so that if a customer self-generates and has a failure, that customer can connect to the grid and get electricity. Set those rates too high, and distributed generation’s economic value falls; set those rates too low, and the distribution utility may not cover the incremental costs of serving that customer. The range between those two rates can be large.
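The death-spiral feedback can be sketched as a toy numerical model (all parameters are assumed, chosen only to make the dynamic visible): each round, defection shrinks the customer base, the per-customer fixed charge rises, and the higher charge pushes more customers past their own defection threshold.

```python
# Toy "death spiral" dynamic; all parameters are hypothetical.
fixed_costs = 200_000.0  # system fixed costs to recover ($/yr)
customers = 1_000        # customers initially on the grid

# Each customer defects once the annual fixed charge exceeds their cost of
# self-supply; thresholds here are spread uniformly from $100 up to ~$500.
thresholds = [100 + 0.4 * i for i in range(customers)]

for round_ in range(10):
    charge = fixed_costs / customers if customers else float("inf")
    remaining = [t for t in thresholds if t > charge]  # who stays at this charge
    if len(remaining) == len(thresholds):
        break  # no further defection; the charge is stable
    thresholds = remaining
    customers = len(thresholds)
    print(f"round {round_}: charge ${charge:.2f}, customers left {customers:,}")
```

With these assumed parameters the spiral runs all the way to complete grid defection in a few rounds: the initial $200 charge drives out the lowest-threshold customers, the charge rises to roughly $267, then $344, and then exceeds every remaining customer’s threshold. Whether a real system spirals or stabilizes depends entirely on the distribution of defection thresholds relative to the fixed costs.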

This is not a new conversation in the industry or among policy makers and academics. In fact, here’s a 2003 Electricity Journal article arguing against standby charges by friend-of-KP Sean Casten, who works in recycled energy and combined heat and power (CHP). In 2002 I presented a paper at the International Association for Energy Economics annual meetings in which I argued that distributed generation and storage would make the distribution network contestable, and after the Northeast blackout in 2003 Reason released a version of the paper as a policy study. One typical static argument for a single, regulated wires network is to eliminate costly duplication of infrastructure in the presence of economies of scale. But my argument is dynamic: innovation and technological change that competes with the wires network need not be duplicative wires, and DG+storage is an example of innovation that makes a wires network contestable.

Another older conversation that is new again was the DISCO of the Future Forum, hosted over a year or so in 2001-2002 by the Center for the Advancement of Energy Markets. I participated in this forum, in which industry, regulators, and researchers worked together to “game out” different scenarios for the distribution company business model in the context of competitive wholesale and retail markets. This 2002 Electric Light & Power article summarizes the effort and the ultimate report; note in particular this description of the forum from Jamie Wimberly, then-CAEM president (and now CEO of EcoAlign):

“The primary purpose of the forum was to thoroughly examine the issues and challenges facing distribution companies and to make consensus-based recommendations that work to ensure healthy companies and happy customers in the future,” he said. “There is no question much more needs to be discussed and debated, particularly the role of the regulated utility in the provision of new product offerings and services.”

Technological dynamism is starting to make the distribution network contestable. Now what?