Platform economics and “unscaling” the electricity industry

A few weeks ago I mused over the question of whether there would ever be an Uber or AirBnB for the electricity grid. This question is a platform question — both Uber and AirBnB have business models in which they bring together two parties for mutual benefit, and the platform provider’s revenue stream can come from charging one or both parties for facilitating the transaction (although there are other means too). I said that a “P2P platform very explicitly reduces transaction costs that prevent exchanges between buyer and seller”, and that’s really the core of a platform business model. Platform providers exist to make exchanges feasible that were not before, to make them easier, and ultimately to make them either cheaper or more valuable (or some combination of the two).

In this sense the Nobel Prize award to Jean Tirole (pdf, very good summary of his work) this week was timely, because one of the areas of economics to which he has contributed is the economics of two-sided platform markets. Alex Tabarrok wrote an excellent summary of Tirole’s platform economics work. As Alex observes,

Antitrust and regulation of two-sided markets is challenging because the two sets of prices [that the platform firm charges to the two parties] may look discriminatory or unfair even when they are welfare enhancing. … Platform markets mean that pricing at marginal cost can no longer be considered optimal in every market and pricing above marginal cost can no longer be considered as an indication of monopoly power.

One aspect of platform firms is that they connect distinct users in a network. Platform firms are network firms. Not all network firms/industries operate as platforms or think of their business models in platform terms, though. That will change.

What role does a network firm provide? It’s connection, facilitating exchange between two parties. This idea is not novel, not original in the digital age. Go back in economic history to the beginnings of canals, say, or rail networks. Transportation is a quintessential non-digital network platform industry. I think you can characterize all network infrastructure industries as having some aspects of platform or two-sided markets; rail networks bring together transportation providers and passengers/freight, postal networks bring together correspondents, pipeline networks bring together buyers and sellers of oil or natural gas, electric wires networks bring together generators and consumers.

What’s novel in the digital age is that by changing transaction costs, the technology changes the transactional boundary of the firm and reduces the economic impetus for vertical integration. A digital platform firm, like Google or Uber, is not vertically integrated upstream or downstream in any of the value chains that its platform enables (although some of Google’s acquisitions are changing that somewhat), whereas historically, railroads and gas companies and electric companies started out vertically integrated. Rail network owners were vertically integrated upstream into train ownership and transportation provision, and electric utilities were integrated upstream into generation. In network infrastructure industries, the platform is physical, and firms bundled the network service into their offering. But they have not been seen or thought of as platforms in the sense we are now coming to understand as digital platform firms and industries emerge; I suspect that’s because of the economic benefits of, and the historical path dependence created by, vertical integration.

Another distinguishing feature of platforms and two-sided markets is that the cost-revenue relationship is not uni-directional, a point summarized well in this Harvard Business Review article overview from 2006:

Two-sided networks can be found in many industries, sharing the space with traditional product and service offerings. However, two-sided networks differ from other offerings in a fundamental way. In the traditional value chain, value moves from left to right: To the left of the company is cost; to the right is revenue. In two-sided networks, cost and revenue are both to the left and the right, because the platform has a distinct group of users on each side. The platform incurs costs in serving both groups and can collect revenue from each, although one side is often subsidized, as we’ll see.

In this sense, I still think that the electricity network and its transactions have platform characteristics — the wires firm incurs costs to deliver energy from generators to consumers, and those costs arise in serving both distinct groups.

As I apply these concepts to the electricity industry, I think digital technologies have two platform-related types of effects. The first is the reduction in transaction costs that were a big part of the economic drive for vertical integration in the first place — digital technologies make distributed digital sensing, monitoring, and measurement of energy flow and system status possible in ways that were inconceivable or impossibly costly before the invention of the transistor.

The second is the ability that digital technologies create for the network firm to handle more diverse and heterogeneous types of agents in a two-sided market. For example, digital sensors and automated digital switches make it possible to automate rules for the interconnection of distributed generation, electric vehicles, microgrids, and other diverse users into the distribution grid in ways that can be mutually beneficial in a two-sided market sense. The old electro-mechanical sensors could not do that.
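To make the automation point concrete, here is a minimal sketch, assuming a hypothetical rule of my own construction rather than any actual interconnection standard; the voltage and frequency bounds and function names are illustrative placeholders.

```python
# Hypothetical sketch: an automated interconnection rule a digital switch might
# evaluate before allowing a distributed resource to export to the distribution grid.
# The thresholds below are illustrative placeholders, not actual standards.

from dataclasses import dataclass

@dataclass
class FeederReading:
    voltage_pu: float    # local voltage, per unit (1.0 = nominal)
    frequency_hz: float  # system frequency

def may_export(reading: FeederReading, requested_kw: float, headroom_kw: float) -> bool:
    """Allow export only if local conditions are within bounds and feeder capacity exists."""
    voltage_ok = 0.95 <= reading.voltage_pu <= 1.05
    frequency_ok = 59.5 <= reading.frequency_hz <= 60.5
    capacity_ok = requested_kw <= headroom_kw
    return voltage_ok and frequency_ok and capacity_ok

# Example: a rooftop system asking to export 4 kW on a feeder with 10 kW of headroom
print(may_export(FeederReading(voltage_pu=1.02, frequency_hz=60.0), 4.0, 10.0))  # True
```

The point is not the particular thresholds but that such a rule can be evaluated and enforced in software, per device and per feeder, which electro-mechanical equipment could not do.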

This is the sense in which I think a lot of tech entrepreneurs talk about “unscaling the electricity industry”:

If we want secure, clean and affordable energy, we can’t continue down this path. Instead, we need to grow in a very different way, one more akin to the Silicon Valley playbook of unscaling an industry by aggregating individual users onto platforms.

Digitally-enabled distributed resources are becoming increasingly economical at smaller scales, and some of these types of resources — microgrids, electric vehicles — can be either producers or consumers, each having associated costs and revenues and with their identities changing depending on whether they are selling excess energy or buying it.

This is a substantive, meaningful sense in which the distribution wires firm can, and should, operate as a platform and think about platform strategies as the utility business model evolves. An electric distribution platform facilitates exchange in two-sided electricity and energy service markets, charging a fee for doing so. In the near term, much of that facilitation takes the form of distribution, of the transportation and delivery. As distributed resources proliferate, the platform firm must rethink how it creates value, and reaps revenues, by facilitating beneficial exchange in two-sided markets.

Solar generation in key states

I’ve been playing around with some ownership type and fuel source data on electricity generation, using the EIA’s annual data going back to 1990. I looked at solar’s share of the total MWh of electricity generated in eight states (AZ, CA, IL, NC, NJ, NY, OH, TX) over 1990-2012, expressed as a percentage of that total; here’s what I got:

[Figure: solar share of total MWh generated since 1990, by state]
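For what it’s worth, the underlying calculation is simple. Here is a minimal sketch of it, assuming a tidy file with year, state, fuel, and MWh columns (the actual EIA annual files need some reshaping first); the file name is purely hypothetical.

```python
# Minimal sketch of the share calculation, assuming a tidy CSV with columns
# year, state, fuel, mwh (the actual EIA annual generation files need reshaping first).
import pandas as pd

STATES = ["AZ", "CA", "IL", "NC", "NJ", "NY", "OH", "TX"]

gen = pd.read_csv("eia_annual_generation.csv")   # hypothetical file name
gen = gen[gen["state"].isin(STATES)]

total = gen.groupby(["state", "year"])["mwh"].sum()
solar = gen[gen["fuel"] == "solar"].groupby(["state", "year"])["mwh"].sum()

# Solar as a percent of total MWh, with years as rows and states as columns
solar_share = (solar / total * 100).unstack("state").fillna(0)
print(solar_share.tail())
```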

In looking at the data and at this graph, a few things catch my attention. California (the green line) clearly has an active solar market throughout the entire period, much of which I attribute to the implementation of the PURPA qualifying facilities regulations stemming from the 1978 legislation (although I’m happy to be corrected if I’m mistaken). The other seven states have little or no solar market until the late 2000s; Arizona (which starts having solar in 2001) and Texas (some solar before restructuring, then none, then an increase) are exceptions to the general pattern.

Of course the most striking pattern in these data is the large uptick in solar shares in 2011 and 2012. That uptick is driven by several factors, both economic and regulatory, and trying to disentangle those factors is part of what I’m working on currently. I’m interested in the development of and change in the residential solar market, and how the extent and type of regulatory policy influences the extent and type of innovation and changing market boundaries that ensue. Another way to parse the data is by ownership type, and how that varies by state depending on the regulatory institutions in place. In a state like North Carolina (teal), still vertically-integrated, both the regulated utility and independent power producers own solar. The path to market, and indeed whether or not you can actually say that a residential solar market qua market exists, differs in a vertically-integrated state from, say, New Jersey (orange) or Illinois (purple, but barely visible), where thus far the residential solar market is independent, and the regulated utility does not participate (again, please correct me if I’m mistaken).

It will be interesting to see what the 2013 data tell us, when the EIA releases them in November. But even in California with that large uptick, solar’s share of total MWh generated does not go above 2 percent, and is substantially smaller in other states.

What do you see here? I know some of you will want to snark about subsidies for the uptick, but please keep it substantive :-).

Why does a theory of competition matter for electricity regulation?

For the firms in regulated industries, for the regulators, for their customers, does the theory underlying the applied regulation matter? I think it matters a lot, even down in the real-world trenches of doing regulation, because regulation’s theoretical foundation influences what regulators and firms do and how they do it. Think about a traditional regulated industry like electricity — vertically integrated because of initial technological constraints, with technologies that enable production of standard electric power service at a particular voltage range with economies of scale over the relevant range of demand.

When these technologies were new and the industry was young, the economic theory of competition underlying the form that regulation took was what we now think of as a static efficiency/allocation-focused model. In this model, production is represented by a known cost function with a given capital-labor ratio; that function is the representation of the firm and of its technology (note here how the organization of the firm fades into the background, to be re-illuminated starting in the mid-20th century by Coase and other organizational and new institutional economists). In the case of a high fixed cost industry with economies of scale, that cost function’s relevant characteristic is declining long-run average cost as output produced increases. On the demand side, consumers have stable preferences for this well-defined, standard good (electric power service at a particular voltage range).

In this model, the question is how to maximize total surplus given the technology, cost function, and preferences. This is the allocation question, and it’s a static question, because the technology, cost function, and preferences are given. The follow-on question in an industry with economies of scale is whether or not competition, rivalry among firms, will yield the best possible allocation, with the largest total surplus. The answer from this model is no: compared to the efficient benchmark where firms compete by lowering price to marginal cost, a “natural monopoly” industry/firm/cost structure cannot sustain P=MC because of the fixed costs, but price equal to average cost (where economic profits are “normal”) is not a stable equilibrium. The model indicates that the stable equilibrium is the monopoly price, with associated deadweight loss. But that P=AC point yields the highest feasible total surplus given the nature of the cost function. Thus this static allocative efficiency model is the justification for regulation of prices and quantities in this market, to make the quantity at which P=AC a stable outcome.
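To make that concrete, consider a stylized cost function of my own choosing (a fixed cost plus constant marginal cost), which is the textbook way to represent a natural monopoly:

```latex
% Stylized natural-monopoly cost structure (illustrative notation, not from the original post)
C(q) = F + cq, \qquad MC(q) = c, \qquad AC(q) = \frac{F}{q} + c
% AC(q) declines in q and exceeds MC(q) for every q > 0, so:
%   P = MC = c            => profit = -F   (marginal-cost pricing cannot cover fixed costs)
%   P = AC(q)             => profit = 0    (the zero-economic-profit, regulated target)
%   unregulated monopoly: choose q^m where MR(q^m) = c, so P^m > AC(q^m) > c, with deadweight loss
```

Regulation in this static framework exists to make the P = AC outcome stick, since rivalry cannot sustain it and the unregulated firm will not choose it.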

The theory of competition underlying this regulatory model is the static efficiency model, that competition is beneficial because it enables rival firms to bid prices down to P=MC, simultaneously maximizing firm profits, consumer surplus, and output produced (all the output that’s worth producing gets produced). Based on this model, legislators, regulators, and industry all influenced the design of regulation’s institutional details — rate-of-return regulation to target firm profits at “normal” levels, deriving retail prices from that, and erecting an entry barrier to exclude rivals while requiring the firm to serve all customers.

So what? I’ve just argued that regulatory institutional design is grounded in a theory of competition. If institutional designers hold a particular theory about what competition does and how it does it, that theory will inform their design to achieve their policy objectives. Institutional design is a function of the theory of competition, the policy objectives, and the ability/interest of industry to influence the design. If your theory of competition is the static allocative efficiency theory, you will design institutions to target the static efficient outcome in your model (in this case, P=AC). You start with a policy objective or a question to explore and a theory of competition, and out of that you derive an institutional design.

But what if competition is beneficial for other reasons, in other ways? What if the static allocative efficiency benefits of competition are just a single case in a larger set of possible outcomes? What if the phenomena we want to understand, the question to explore, the policy objective, would be better served by a different model? What if the world is not static, so the incumbent model becomes less useful because our questions and policy objectives have changed? Would we design different regulatory institutions if we use a different theory of competition? I want to try to treat that as a non-rhetorical question, even though my visceral reaction is “of course”.

These questions don’t get asked in legislative and regulatory proceedings, but given the pace and nature of dynamism, they should.

Technology market experimentation in regulated industries: Are administrative pilot projects bad for retail markets?

Since 2008, multiple smart grid pilot projects have been occurring in the US, funded jointly through regulated utility investments and taxpayer-funded Department of Energy cost sharing. In this bureaucratic market environment, market experimentation takes the form of the large-scale, multi-year pilot project. The regulated utility (after approval from the state public utility commission) publishes a request for proposals from smart grid technology vendors to sell devices and systems that provide a pre-determined range of services specified in the RFP. The regulated utility, not the end user, is thus the vendor’s primary customer.

When regulated incumbent distribution monopolists provide in-home technology to residential customers in states where retail markets are nominally competitive but the incumbent is the default service provider, does that involvement of the regulated incumbent have an anti-competitive effect? Does it reduce experimentation and innovation?

In markets with low entry and exit barriers, entrepreneurship drives new product creation and product differentiation. Market experimentation reveals whether or not consumers value such innovations. In regulated markets like electricity, however, this experimentation occurs in a top-down, procurement-oriented manner, without the organic evolution of market boundaries as entrants generate new products and services. Innovations do not succeed or fail based on their ability to attract end-use customers, but rather on their ability to persuade the regulated monopolist that the product is cost-reducing to the firm rather than value-creating for the consumer (and, similarly, their ability to persuade regulators).

The stated goal of many projects is installing digital technologies that increase the performance and reliability of basic wires distribution service. For that reason, the projects emphasize technologies in the distribution wires network (distribution automation) and the digital meter at each home. The digital meter is the edge of the wires network, from the regulated utility’s perspective, and in restructured states it is the edge of its business, the edge of the regulated footprint. A secondary goal is to explore how some customers actually use technology to control and manage their own energy use; a longer-run consequence of this exploration may be consumer learning with respect to their electricity consumption, now that digital technology exists that can enable them to reduce consumption and save money by automating their actions.

In these cases, consumer technology choices are being made at the firm level by the regulated monopolist, not at the consumer level by consumers. This narrowed path to market for in-home technology changes the nature of the market experimentation – on one hand, the larger-volume purchases by regulated utilities may attract vendors and investors and increase rivalry and experimentation, but on the other hand, the margin at which the technology rivalry occurs is not at the end-user as decision-maker, but instead at the regulated utility. The objective functions of the utility and their heterogeneous residential customers differ substantially, and this more bureaucratic, narrowed experimentation path reduces the role of the different preferences and knowledge of those heterogeneous consumers. In that sense, the in-home technology choice being in the hands of the regulated utility stifles market experimentation with respect to the preferences of the heterogeneous consumers, although it increases experimentation with respect to the features that the regulated monopolist thinks that its customers want.

Focusing any burgeoning consumer demand on a specific technology, specific vendor, and specific firm, while creating critical mass for some technology entrepreneurs, rigidifies and channels experimentation into vendors and technologies chosen by the regulated monopolist, not by end-use consumers. Ask yourself this counterfactual: would the innovation and increase in features and value of mobile technologies have been this high if instead of competing for the end user’s business, Apple and Google had to pitch their offerings to a large, regulated utility?

These regulated incumbent technology choices may have anti-competitive downstream effects. They reduce the set of experimentation and commercialization opportunities available to retail entrants to provide product differentiation, product bundling, or other innovative value propositions beyond the scope of those being tested by the incumbent monopolist. Bundling and product differentiation are the dominant forms that dynamic competition takes, and in this industry such retail bundling and product differentiation would probably include in-home devices. The regulated incumbent providing in-home technology to default customers participating in pilot projects reduces the scope for competing retail providers to engage in either product differentiation or bundling. That limitation undercuts their business models and is potentially anti-competitive.

The regulated incumbent’s default service provision and designation of in-home technology reduces the motive for consumers to search for other providers and other competing products and services. While the incumbent may argue that it is providing a convenience to its customers, it is substituting its judgment of what it thinks its customers want for the individual judgments of those customers.

By offering a competing regulated retail service and leveraging it into the provision of in-home devices for pilot projects, the incumbent reduces the set of potentially valuable profit opportunities facing potential retail competitors, thus reducing entry. Entrants have to be that much more innovative to get a foothold in this market against the incumbent, in the face of consumer switching costs and inertia, when incumbent provision of in-home devices reduces the potential demand facing potential entrants. Even if the customer pays for and owns the device, the anti-competitive effect can arise from the monopolist offering the device as a complement to its regulated default service product.

Leaving in-home technology choice to retailers and consumers contributes to healthy retail competition. Allowing the upstream regulated incumbent to provide in-home technology hampers it, to the detriment of both entrepreneurs and the residential customers who would have gotten more value out of a different device than the one provided by the regulated incumbent. By increasing the number of default service customers with in-home smart grid devices, these projects decrease the potential demand facing independent retailers by removing or diluting one of the service dimensions on which they could compete. The incumbents’ forays into in-home technology may not have anti-competitive intent, but they still may have anti-competitive consequences.

The sharing economy and the electricity industry

In a recent essay, the Rocky Mountain Institute’s Matthew Crosby asks “will there ever be an AirBnB or Uber for the electricity grid?” It’s a good question, a complicated question, and one that I have pondered myself a few times. He correctly identifies the characteristics of such platforms that have made them attractive and successful, and relates them to distributed energy resources (DERs):

What’s been missing so far is a trusted, open peer-to-peer (P2P) platform that will allow DERs to “play” in a shared economy. An independent platform underlies the success of many shared economy businesses. At its core, the platform monetizes trust and interconnection among market actors — a driver and a passenger, a homeowner and a visitor, and soon, a power producer and consumer — and allows users to both bypass the central incumbent (such as a taxi service, hotel, or electric utility) and go through a new service provider (Uber, Airbnb, or in the power sector, Google).

Now, as millions gain experience and trust with Airbnb, Uber and Lyft, they may likely begin to ask, “Why couldn’t I share, sell or buy the energy services of consumer-owned and -sited DERs like rooftop solar panels or smart thermostats?” The answer may lie in emerging business models that enable both peer-to-peer sharing of the benefits of DERs and the increased utilization of the electric system and DERs.

A P2P platform very explicitly reduces transaction costs that prevent exchanges between buyer and seller, earning revenue via a commission per transaction (and this is why Uber has in its sights such things as running your errands for you (video)). That reduction allows owners of underutilized assets (cars, apartments, solar panels, and who knows what else will evolve) to make someone else better off by selling them the use of that asset. Saying it that way makes the static welfare gain to the two parties obvious, but think also about the dynamic welfare gain — you are more likely, all other things equal, to invest in such an asset or to invest in a bigger/nicer asset if you can increase its capacity utilization. Deregulation catalyzed this process in the airline industry, and digital technology is catalyzing it now in rides and rooms. This prospect is exciting for those interested in accelerating the growth of DERs.
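A back-of-the-envelope illustration of that dynamic point, with every number invented for the purpose: higher expected utilization lowers the effective cost of owning the asset, which can flip the decision to buy it (or to buy a nicer one).

```python
# Hypothetical numbers only: how renting out an underutilized asset via a P2P platform
# changes the owner's effective cost per hour of personal use.
asset_cost = 25_000.0   # up-front cost of the asset
own_use_hours = 400     # hours per year the owner uses it personally
rental_hours = 200      # extra hours per year sold via a platform
rental_price = 10.0     # revenue per rented hour
platform_fee = 0.20     # platform commission on each transaction
years = 10              # holding period

net_rental_income = rental_hours * rental_price * (1 - platform_fee) * years
cost_per_hour_solo = asset_cost / (own_use_hours * years)
cost_per_hour_shared = (asset_cost - net_rental_income) / (own_use_hours * years)

print(f"cost per hour of own use, no sharing:   ${cost_per_hour_solo:.2f}")
print(f"cost per hour of own use, with sharing: ${cost_per_hour_shared:.2f}")
```

With these made-up numbers the effective cost per hour of own use falls from $6.25 to $2.25, which is the sense in which higher capacity utilization makes the investment more attractive in the first place.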

Note also that Crosby makes an insightful observation when he says that such P2P networks are more beneficial if they have access to a central backbone, which in this case would be the existing electricity distribution grid. Technologically, the edge of the network (where all of the cool distributed stuff is getting created) and the core of the network are complements, not substitutes. That is not and has not been the case in the electricity network, in large part because regulation has largely prevented “innovation at the edge of the network” since approximately the early 20th century and the creation of a standard plug for lights and appliances!

The standard static and dynamic welfare gain arguments, though, are not a deep enough analysis — we need to layer on the political economy analysis of the process of getting from here to there. As the controversies over Uber have shown, this process is often contentious and not straightforward, particularly in industries like rides and electricity, in which incumbents have had regulatory entry barriers that create and protect regulatory rents. The incumbents may be in a transitional gains trap, where the rents are capitalized into their asset values, and thus to avoid economic losses to themselves and/or their shareholders, they must argue for the maintenance of the regulatory entry barrier even if overall social welfare is higher without it (i.e., if a Kaldor-Hicks improvement is possible). The concentration of benefits from maintaining the entry barrier may make this regulation persist, even if in aggregate the diffuse benefits to non-incumbents are larger than the costs.

That’s one way to frame the current institutional design challenge in electricity. Given that the incumbent utility business model is a regulatory construct, what’s a useful and feasible way to adapt the regulatory environment to the new value propositions that new digital and distributed energy technologies have made possible? If it is likely that the diffuse economic and environmental benefits of P2P electricity exchange are larger than the costs, what does a regulatory environment look like that would enable P2P networks and the distribution grid to be complements and not substitutes? And how would we transfer the resources to the incumbents to get them out of the transitional gains trap, to get them to agree that they will serve as the intelligent digital platform for such innovation?

I think this is the question at the guts of all of the debate over the utility “death spiral”, the future utility business model, and other such innovation-induced dynamism in this industry. I’ve long argued that my vision of a technology-enabled value-creating electricity industry would have such P2P characteristics, with plug-level sensors that enable transactive automated control within the home, and with meshed connections that enable neighbors with electric vehicles and/or rooftop solar to exchange with each other (one place I made that argument was in my 2009 Beesley lecture at the IEA, captured in this 2010 Economic Affairs article). Crosby’s analysis here is consistent with that vision, and that future.

Should regulated utilities participate in the residential solar market?

I recently argued that the regulated utility is not likely to enter a “death spiral”, but that the regulated utility business model is indeed under pressure, and the conversation about the future of that business model is a valuable one.

One area of pressure on the regulated utility business model is the market for residential solar power. Even two years later, this New York Times Magazine article on the residential solar market is fresh and relevant, and even more so given the declining production costs of solar technologies: “Thanks to increased Chinese production of photovoltaic panels, innovative financing techniques, investment from large institutional investors and a patchwork of semi-effective public-policy efforts, residential solar power has never been more affordable.” In states like California, a combination of plentiful sun and state policies designed to induce more use of renewables brought growth in the residential solar market starting in the 1980s. This growth was also grounded in the PURPA (1978) federal legislation (“conservation by decree”) that required regulated utilities to purchase power from qualifying renewable and cogeneration providers at a price determined by the state public utility commission.

Since then, a small but growing independent solar industry has developed in California and elsewhere, and the NYT Magazine article ably summarizes that development as well as the historical lack of interest among regulated utilities in getting involved in renewables themselves. Why generate using a fuel and enabling technology that is intermittent, for which economical storage does not exist, and that does not have the economies of scale that drive the economics of the regulated vertically-integrated cost-recovery-based business model? Why indeed.

Over the ensuing decades, though, policy priorities have changed, and environmental quality now joins energy security among the social objectives of utility regulation. Air quality and global warming concerns joined the mix, and at the margin shifted the policy balance, leading several states to adopt renewable portfolio standards (RPSs) and net metering regulations. California, always a pioneer, has a portfolio of residential renewables policies, including net metering, although it does not have a state RPS. Note, in particular, the recent changes in California policy regarding residential renewables:

The CPUC’s California Solar Initiative (CPUC ruling – R.04-03-017) moved the consumer renewable energy rebate program for existing homes from the Energy Commission to the utility companies under the direction of the CPUC. This incentive program also provides cash back for solar energy systems of less than one megawatt to existing and new commercial, industrial, government, nonprofit, and agricultural properties. The CSI has a budget of $2 billion over 10 years, and the goal is to reach 1,940 MW of installed solar capacity by 2016.

The CSI provides rebates to residential customers installing solar technologies who are retail customers of one of the state’s investor-owned utilities. Each IOU has a cap on the number of its residential customers who can receive these subsidies, and PG&E has already reached that cap.

Whether the policy is rebates to induce the renewables switch, allowing net metering, or a state RPS (or feed-in tariffs such as those used in Spain and Germany), these policies reflect a new objective in the portfolio of utility regulation, and at the margin they have changed the incentives of regulated utilities. Starting in 2012, when residential solar installations increased, regulated utilities increased their objections to solar power both on reliability grounds and based on the inequities and existing cross-subsidization built into regulated retail rates (in a state like California, the smallest monthly users of electricity pay much less than their proportional share of the fixed costs of serving them; see the stylized example after this paragraph). My reading has also left me with the impression that if regulated utilities are going to be subject to renewables mandates to achieve environmental objectives, they would prefer not to have to compete with the existing, and growing, independent producers operating in the residential solar market. The way a regulated monopolist benefits from environmental mandates is by owning assets to meet the mandates.
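On the cross-subsidy point, a stylized example (all numbers invented) of how recovering fixed network costs through a flat per-kWh charge shifts those costs toward larger users:

```python
# Hypothetical illustration of the cross-subsidy embedded in flat volumetric rates.
# Suppose the wires company needs $40 per customer per month in fixed network costs,
# recovered through a single per-kWh charge rather than a fixed monthly charge.
customers_kwh = {"small user": 200, "large user": 1_000}   # monthly consumption, kWh
fixed_cost_per_customer = 40.0                             # $ per customer per month

total_fixed = fixed_cost_per_customer * len(customers_kwh)
total_kwh = sum(customers_kwh.values())
fixed_charge_per_kwh = total_fixed / total_kwh             # embedded in the retail rate

for name, kwh in customers_kwh.items():
    paid = kwh * fixed_charge_per_kwh
    print(f"{name}: pays ${paid:.2f} toward fixed costs (per-customer share: $40.00)")
```

With these made-up numbers the small user contributes about $13 toward a $40 per-customer fixed cost and the large user about $67, which is the kind of cross-subsidy the utilities are pointing to.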

While this case requires much deeper analysis, as a first pass I want to step back and ask why the regulated distribution utility should be involved in the residential solar market at all. The growth of producers in the residential solar market (Sungevity, SunEdison, Solar City, etc.) suggests that this is a competitive or potentially competitive market.

I remember asking that question back when this NYT Magazine article first came out, and I stand by my observation then:

Consider an alternative scenario in which regulated distribution monopolists like PG&E are precluded from offering retail services, including rooftop solar, and the competing firms that Himmelman profiled can compete both in how they structure the transactions (equipment purchase, lease, PPA, etc.) and in the prices they offer. One of Rubin’s complaints is that the regulated net metering rate reimburses the rooftop solar homeowner at the full regulated retail price per kilowatt hour, which over-compensates the homeowner for the market value of the electricity product. In a rivalrous market, competing solar services firms would experiment with different prices, perhaps, say, reimbursing the homeowner a fixed price based on a long-term contract, or a varying price based on the wholesale market spot price in the hours in which the homeowner puts power back into the grid. Then it’s up to the retailer to contract with the wires company for the wires charge for those customers — that’s the source of the regulated monopolist’s revenue stream, the wires charge, and it can and should be separated from the net metering transaction and contract.

The presence of the regulated monopolist in that retail market for rooftop solar services is a distortion in and of itself, in addition to the regulation-induced distortions that Rubin identified.
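To get a feel for the size of the wedge Rubin is pointing to, here is a quick hypothetical comparison, with rates invented for illustration, of crediting exported kilowatt-hours at the full retail rate versus at a wholesale spot price:

```python
# Hypothetical illustration of the net metering wedge: compensating exported
# kilowatt-hours at the full retail rate vs. at a wholesale spot price.
exported_kwh = 300       # kWh the household pushes back to the grid in a month
retail_rate = 0.20       # $/kWh, bundled retail price (includes wires and other fixed costs)
wholesale_price = 0.05   # $/kWh, illustrative average spot price in the export hours

retail_credit = exported_kwh * retail_rate
wholesale_credit = exported_kwh * wholesale_price

print(f"credit at retail rate:      ${retail_credit:.2f}")
print(f"credit at wholesale price:  ${wholesale_credit:.2f}")
print(f"difference per month:       ${retail_credit - wholesale_credit:.2f}")
```

The gap is roughly the portion of the retail rate that recovers wires and other fixed costs, which is why separating the wires charge from the energy transaction matters.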

The regulated distribution utility’s main objective is, and should be, reliable delivery of energy. The existing regulatory structure gives regulated utilities incentives to increase their asset base to increase their rate base, and thus when a new environmental policy objective joins the existing ones, if regulated utilities can acquire new solar assets to meet that objective, then they have an incentive to do so. Cost recovery and a guaranteed rate of return are powerful motivators. But why should they even be a participant in that market, given the demonstrable degree of competition that already exists?

The “utility death spiral”: The utility as a regulatory creation

Unless you follow the electricity industry you may not be aware of the past year’s discussion of the impending “utility death spiral”, ably summarized in this Clean Energy Group post:

There have been several reports out recently predicting that solar + storage systems will soon reach cost parity with grid-purchased electricity, thus presenting the first serious challenge to the centralized utility model.  Customers, the theory goes, will soon be able to cut the cord that has bound them to traditional utilities, opting instead to self-generate using cheap PV, with batteries to regulate the intermittent output and carry them through cloudy spells.  The plummeting cost of solar panels, plus the imminent increased production and decreased cost of electric vehicle batteries that can be used in stationary applications, have combined to create a technological perfect storm. As grid power costs rise and self-generation costs fall, a tipping point will arrive – within a decade, some analysts are predicting – at which time, it will become economically advantageous for millions of Americans to generate their own power.  The “death spiral” for utilities occurs because the more people self-generate, the more utilities will be forced to seek rate increases on a shrinking rate base… thus driving even more customers off the grid.
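The feedback loop in that quote can be caricatured with a stylized simulation; every parameter below is invented, and the point is only to show the mechanism, not to forecast anything.

```python
# Stylized death-spiral feedback loop; every number here is invented for illustration.
fixed_costs = 1_000_000_000   # annual network costs to recover, $
customers = 1_000_000         # customers remaining on the grid
usage_kwh = 10_000            # annual kWh per grid customer
base_defection = 0.01         # share leaving each year regardless of price
price_sensitivity = 0.5       # extra share leaving per $/kWh above the starting rate

start_rate = fixed_costs / (customers * usage_kwh)
rate = start_rate
for year in range(1, 11):
    defection = base_defection + price_sensitivity * max(rate - start_rate, 0.0)
    customers = int(customers * (1 - defection))
    rate = fixed_costs / (customers * usage_kwh)  # same costs spread over fewer customers
    print(f"year {year:2d}: rate = {rate:.3f} $/kWh, customers remaining = {customers:,}")
```

Whether the loop actually runs away depends entirely on how price-sensitive defection turns out to be, which is one of the empirical questions lurking behind the rhetoric.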

A January 2013 analysis from the Edison Electric Institute, Disruptive Challenges: Financial Implications and Strategic Responses to a Changing Retail Electric Business, precipitated this conversation. Focusing on the financial market implications for regulated utilities of distributed energy resources (DER) and technology-enabled demand-side management (an archaic term that I dislike intensely), or DSM, the report notes that:

The financial risks created by disruptive challenges include declining utility revenues, increasing costs, and lower profitability potential, particularly over the long term. As DER and DSM programs continue to capture “market share,” for example, utility revenues will be reduced. Adding the higher costs to integrate DER, increasing subsidies for DSM and direct metering of DER will result in the potential for a squeeze on profitability and, thus, credit metrics. While the regulatory process is expected to allow for recovery of lost revenues in future rate cases, tariff structures in most states call for non-DER customers to pay for (or absorb) lost revenues. As DER penetration increases, this is a cost recovery structure that will lead to political pressure to undo these cross subsidies and may result in utility stranded cost exposure.

I think the apocalyptic “death spiral” rhetoric is overblown and exaggerated, but this is a worthwhile, and perhaps overdue, conversation to have. As it has unfolded over the past year, though, I do think that some of the more essential questions on the topic are not being asked. Over the next few weeks I’m going to explore some of those questions, as I dive into a related new research project.

The theoretical argument for the possibility of a death spiral is straightforward. The vertically-integrated, regulated distribution utility is a regulatory creation, intended to enable a financially sustainable business model for providing reliable basic electricity service to the largest possible number of customers for the least feasible cost, taking account of the economies of scale and scope resulting from the electro-mechanical generation and wires technologies implemented in the early 20th century. From a theoretical/benevolent social planner perspective, the objective is, given a market demand for a specific good/service, to minimize the total cost of providing that good/service subject to a zero economic profit constraint for the firm; this will lead to the highest feasible combination of output and total surplus (and the lowest deadweight loss) consistent with the financial sustainability of the firm.

The regulatory mechanism for implementing this model to achieve this objective is to erect a legal entry barrier into the market for that specific good/service, and to assure the regulated monopolist cost recovery, including its opportunity cost of capital, otherwise known as rate-of-return regulation. In return, the regulated monopolist commits to serve all customers reliably through its vertically-integrated generation, transmission, distribution, and retail functions. The monopolist’s costs and opportunity cost of capital determine its revenue requirement, out of which we can derive flat, averaged retail prices that forecasts suggest will enable the monopolist to earn that amount of revenue.
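Stated schematically (with notation of my own choosing, since the post does not write it out), that calculation is the familiar revenue requirement identity:

```latex
% Schematic rate-of-return regulation (notation mine, for illustration)
RR = E + d + T + s \cdot RB
\qquad\qquad
P = \frac{RR}{\hat{Q}}
% E  = operating expenses,  d = depreciation,  T = taxes,
% s  = allowed rate of return (the opportunity cost of capital),
% RB = rate base of approved capital assets,
% \hat{Q} = forecast kWh sales, so P is the flat, averaged retail price.
```

The flat averaged price only does its job if forecast sales and the rate base behave as expected, which is exactly the premise that the technologies discussed below put under pressure.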

That’s the regulatory model + business model that has existed with little substantive evolution since the early 20th century, and it did achieve the social policy objectives of the 20th century — widespread electrification and low, stable prices, which have enabled follow-on economic growth and well-distributed increased living standards. It’s a regulatory+business model, though, that is premised on a few things:

  1. Defining a market by defining the characteristics of the product/service sold in that market, in this case electricity with a particular physical (volts, amps, hertz) definition and a particular reliability level (paraphrasing Fred Kahn …)
  2. The economies of scale (those big central generators and big wires) and economies of scope (lower total cost when producing two or more products compared to producing those products separately) that exist due to large-scale electro-mechanical technologies
  3. The architectural implications of connecting large-scale electro-mechanical technologies together in a network via a set of centralized control nodes — technology -> architecture -> market environment, and in this case large-scale electro-mechanical technologies -> distributed wires network with centralized control points rather than distributed control points throughout the network, including the edge of the network (paraphrasing Larry Lessig …)
  4. The financial implications of having invested so many resources in long-lived physical assets to create that network and its control nodes — if demand is growing at a stable rate, and regulators can assure cost recovery, then the regulated monopolist can arrange financing for investments at attractive interest rates, as long as this arrangement is likely to be stable for the 30-to-40-year life of the assets

As long as those conditions are stable, regulatory cost recovery will sustain this business model. And that’s precisely the effect of smart grid technologies, distributed generation technologies, microgrid technologies — they violate one or more of those four premises, and can make it not just feasible, but actually beneficial for customers to change their behavior in ways that reduce the regulation-supported revenue of the regulated monopolist.

Digital technologies that enable greater consumer control and more choice of products and services break down the regulatory market boundaries that are required to regulate product quality. Generation innovations, from the combined-cycle gas turbine of the 1980s to small-scale Stirling engines, reduce the economies of scale that have driven the regulation of and investment in the industry for over a century. Wires networks with centralized control built to capitalize on those large-scale technologies may have less value in an environment with smaller-scale generation and digital, automated detection, response, and control. But those generation and wires assets are long-lived, and in a cost-recovery-based business model, have to be paid for even if they become the destruction in creative destruction. We saw that happen in the restructuring that occurred in the 1990s, with the liberalization of wholesale power markets and the unbundling of generation from the vertically-integrated monopolists in those states; part of the political bargain in restructuring was to compensate them for the “stranded costs” associated with having made those investments based on a regulatory commitment that they would receive cost recovery on them.

Thus the death spiral rhetoric, and the concern that the existing utility business model will not survive. But if my framing of the situation is accurate, then what we should be examining in more detail is the regulatory model, since the utility business model is itself a regulatory creation. This relationship between digital innovation (encompassing smart grid, distributed resources, and microgrids) and regulation is what I’m exploring. How should the regulatory model and the associated utility business model change in light of digital innovation?