When does state utility regulation distort costs?

I suspect the simplest answer to the title question is “always.” Maybe the answer depends on your definition of “distort,” but both the intended and generally expected consequences of state utility rate regulation have always been to push costs toward something other than what would naturally emerge in the absence of rate regulation.

More substantive, though, is the analysis provided in Steve Cicala’s article in the January 2015 American Economic Review, “When Does Regulation Distort Costs? Lessons from Fuel Procurement in US Electricity Generation.” (Here is an earlier ungated version of the paper.)

Here is a summary from the University of Chicago press release:

A study in the latest issue of the American Economic Review used recent state regulatory changes in electricity markets as a laboratory to evaluate which factors can contribute to a regulation causing a bigger mess than the problem it was meant to fix….

Cicala used data on almost $1 trillion worth of fuel deliveries to power plants to look at what happens when a power plant becomes deregulated. He found that the deregulated plants combined save about $1 billion a year compared to those that remained regulated. This is because a lack of transparency, political influence and poorly designed reimbursement rates led the regulated plants to pursue inefficient strategies when purchasing coal.

The $1 billion that deregulated plants save stems from paying about 12 percent less for their coal because they shop around for the best prices. Regulated plants have no incentive to shop around because their profits do not depend on how much they pay for fuel. They also are looked upon more favorably by regulators if they purchase from mines within their state, even if those mines don’t sell the cheapest coal. To make matters worse, regulators have a difficult time figuring out if they are being overcharged because coal is typically purchased through confidential contracts.

Although power plants that burned natural gas were subject to the exact same regulations as the coal-fired plants, there was no drop in the price paid for gas after deregulation. Cicala attributed the difference to the fact that natural gas is sold on a transparent, open market. This prevents political influences from sneaking through and allows regulators to know when plants are paying too much.

What’s different about the buying strategy of deregulated coal plant operators? Cicala dove deep into two decades of detailed, restricted-access procurement data to answer this question. First, he found that deregulated plants switch to cheaper, low-sulfur coal. This not only saves them money, but also allows them to comply with environmental regulations. On the other hand, regulated plants often comply with regulations by installing expensive “scrubber” technology, which allows them to make money from the capital improvements.

“It’s ironic to hear supporters of Eastern coal complain about ‘regulation’: they’re losing business from the deregulated plants,” said Cicala, a scholar at the Harris School of Public Policy.

Deregulated plants also increase purchases from out-of-state mines by about 25 percent. As mentioned, regulated plants are looked upon more favorably if they buy from in-state mines. Finally, deregulated plants purchase their coal from more productive mines (coal seams are thicker and closer to the surface) that require about 25 percent less labor to extract from the ground and that pay 5 percent higher wages.

“Recognizing that there are failures in financial markets, health care markets, energy markets, etc., it’s critical to know what makes for ‘bad’ regulations when designing new ones to avoid making the problem worse,” Cicala said. [Emphasis added.]

Moody’s concludes: mass grid defection not yet on the horizon

Yes, solar power systems are getting cheaper and battery storage is improving. The combination has many folks worried (or elated) about the future prospects of grid-based electric utilities when consumers can get the power they want at home. (See Lynne’s post from last summer for background.)

An analysis by Moody’s concludes that battery storage costs remain an order of magnitude too high, so grid defection is not yet a demonstrable threat. Analysis of consumer power use data leads them to project a need for a larger home system than other analysts have assumed. Moody’s further suggests that consumers will be reluctant to make the required lifestyle changes (frequent monitoring of battery levels, forced conservation during extended low-solar periods), so grid defection may come even more slowly than a simple engineering economics computation would suggest.
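For a sense of the order-of-magnitude gap, here is a back-of-envelope sizing sketch. The daily load, days of autonomy, usable depth of discharge, and costs per kWh are illustrative assumptions of mine, not Moody’s figures:

```python
# Back-of-envelope sizing for an off-grid home battery (all numbers
# are illustrative assumptions, not Moody's).

def battery_cost(daily_load_kwh, autonomy_days, usable_fraction, cost_per_kwh):
    """Capital cost of a battery sized to ride through a low-solar stretch."""
    required_kwh = daily_load_kwh * autonomy_days / usable_fraction
    return required_kwh, required_kwh * cost_per_kwh

# A home drawing 30 kWh/day that must ride through 3 cloudy days,
# with 80% of battery capacity usable:
kwh, cost_now = battery_cost(30, 3, 0.8, 500)   # assume ~$500/kWh storage today
_, cost_needed = battery_cost(30, 3, 0.8, 50)   # an order of magnitude cheaper

print(f"{kwh:.0f} kWh battery: ${cost_now:,.0f} today vs ${cost_needed:,.0f} needed")
```

Even before adding the solar array itself, the storage line item alone swings from trivial to prohibitive depending on that cost-per-kWh assumption, which is the crux of the Moody’s argument.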

COMMENT: I’ll project that in a world of widespread consumer power defections, we will see two developments to help consumers avoid forced conservation. Nobody will have to miss watching Super Bowl LXXX because it was cloudy the week before in Boston. First, plug-in hybrid vehicle hook-ups will let home batteries be recharged by the consumer’s gasoline or diesel engine. Second, home battery service companies will provide similar mobile recharge services (or hot-swapping of home battery systems, etc.). Who knows, in a world of widespread defection, maybe the local electric company will offer spot recharge services at a market-based rate?

[HT to Clean Beta]

Charging for non-customer-specific fixed costs

UC Berkeley economist Severin Borenstein has a really, really great post at the Energy at Haas blog on utility fixed charges to recoup system fixed costs. If you want a primer on volumetric versus two-part pricing, this is a good one. After a very clear and cogent explanation and illustration of the differences among variable costs, customer-specific fixed costs, and system fixed costs, he says:

Second, as everyone who studies electricity markets knows (and even much of the energy media have grown to understand), the marginal cost of electricity generation goes up at higher-demand times, and all generation gets paid those high peak prices.  That means extra revenue for the baseload plants above their lower marginal cost, and that revenue that can go to pay the fixed costs of those plants, as I discussed in a paper back in 1999. …

The same is not true, however, for distribution costs.  Retail prices don’t rise at peak times and create extra revenue that covers fixed costs of distribution.  That creates a revenue shortfall that has to be made up somewhere. Likewise, the cost of customer-specific fixed costs don’t get compensated in a system where the volumetric charge for electricity reflects its true marginal cost.

He continues with a good discussion of the lack of a theoretical economic principle informing distribution fixed costs.

I want to take it in another, complementary direction. The asymmetry he points out is, of course, an artifact of cost-based regulated rate recovery, which means that this challenge will arise even under retail competition, although his explanation of it is articulated under fixed, regulated rates. And the fact that late-night regulated rates are higher than energy costs may not generate an excess of revenue sufficient to pay the system fixed costs portion in the way he describes happening with wholesale markets and transmission fixed costs. This is a thorny problem of cost-based regulation.

Consider a regulated, vertically-integrated distribution utility. This utility offers a menu of contracts — a fixed price, a TOU price, and a real-time price (the attentive among you will notice that this setup approximates what we studied in the GridWise Olympic Peninsula Project). It’s possible, as David Chassin and Ross Guttromson demonstrated, for the utility to find an efficient frontier among these three contract types to maximize expected revenue in aggregate across the groups of customers choosing among those contracts. That’s a situation in which retail revenue does vary, driven especially by the RTP customers, and revenue can be higher to the extent that there’s a core of inelastic retail demand. But they still have to figure out a principle, a rule, a metric, an algorithm for sharing those distribution system fixed costs, or for taking them into account when setting their fixed and TOU prices. And then to be non-discriminatory, they’d probably have to allocate the same system fixed costs to the RTP customers too. So we’re back where we started.
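To make the problem concrete, here is a toy sketch of energy revenue under such a three-contract menu. The prices, loads, and fixed-cost figures are entirely hypothetical; this is not the Chassin–Guttromson model, just an illustration of why the fixed-cost allocation question survives even when revenue varies with RTP customers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hourly prices ($/kWh) and loads for one day; none of
# these numbers come from the GridWise Olympic Peninsula Project.
price = 0.03 + 0.09 * rng.random(24)                  # real-time price path
load_fixed = np.full(24, 1.2)                         # kWh/hour, inelastic
load_tou = np.where(np.arange(24) >= 17, 0.8, 1.4)    # shifts off evening peak
load_rtp = 1.2 * (price.mean() / price)               # responds hour by hour

rev_fixed = (0.10 * load_fixed).sum()                 # flat 10 cents/kWh
rev_tou = (np.where(np.arange(24) >= 17, 0.15, 0.08) * load_tou).sum()
rev_rtp = (price * load_rtp).sum()

energy_revenue = rev_fixed + rev_tou + rev_rtp

# The unresolved question: how to allocate system fixed costs. The
# simplest rule (not necessarily the right one) is an equal
# per-customer charge layered on top of the energy tariffs:
system_fixed_cost = 2.40                              # $/day across 3 customers
per_customer_charge = system_fixed_cost / 3
print(energy_revenue, per_customer_charge)
```

The point of the sketch is the last two lines: whatever the menu of energy tariffs does to aggregate revenue, some separate rule still has to divide the system fixed costs among the contract types, and a non-discriminatory rule pulls the RTP customers back in.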

And this is also the case under retail competition. Take, for example, this table of delivery charges in Texas, where the regulated utilities are transmission and distribution wires companies. It breaks the charges down into customer fixed charges and system fixed charges, but it’s still the same type of scenario Severin describes.

As long as there’s a component of the value chain that’s cost-recovery regulated, and as long as that component has system-specific and customer-specific fixed costs, this question will have to be part of the analysis.

A related question is whether, or how, the regulated utility will be permitted to provide services that generate new revenue streams that will allow them to cover those costs. That’s a thicket I’ll crawl into another day.

Platform economics and “unscaling” the electricity industry

A few weeks ago I mused over the question of whether there would ever be an Uber or AirBnB for the electricity grid. This question is a platform question — both Uber and AirBnB have business models in which they bring together two parties for mutual benefit, and the platform provider’s revenue stream can come from charging one or both parties for facilitating the transaction (although there are other means too). I said that a “P2P platform very explicitly reduces transaction costs that prevent exchanges between buyer and seller”, and that’s really the core of a platform business model. Platform providers exist to make exchanges feasible that were not before, to make them easier, and ultimately to make them either cheaper or more valuable (or some combination of the two).

In this sense the Nobel Prize award to Jean Tirole (pdf, very good summary of his work) this week was timely, because one of the areas of economics to which he has contributed is the economics of two-sided platform markets. Alex Tabarrok wrote an excellent summary of Tirole’s platform economics work. As Alex observes,

Antitrust and regulation of two-sided markets is challenging because the two sets of prices [that the platform firm charges to the two parties] may look discriminatory or unfair even when they are welfare enhancing. … Platform markets mean that pricing at marginal cost can no longer be considered optimal in every market and pricing above marginal cost can no longer be considered as an indication of monopoly power.

One aspect of platform firms is that they connect distinct users in a network. Platform firms are network firms. Not all network firms/industries operate or think of their business models as platform firms, though. That will change.

What role does a network firm provide? It’s connection, facilitating exchange between two parties. This idea is not novel, not original in the digital age. Go back in economic history to the beginnings of canals, say, or rail networks. Transportation is a quintessential non-digital network platform industry. I think you can characterize all network infrastructure industries as having some aspects of platform or two-sided markets; rail networks bring together transportation providers and passengers/freight, postal networks bring together correspondents, pipeline networks bring together buyers and sellers of oil or natural gas, electric wires networks bring together generators and consumers.

What’s novel in the digital age is that by changing transaction costs, the technology changes the transactional boundary of the firm and reduces the economic impetus for vertical integration. A digital platform firm, like Google or Uber, is not vertically integrated upstream or downstream in any of the value chains that its platform enables (although some of Google’s acquisitions are changing that somewhat), whereas historically, railroads and gas companies and electric companies started out vertically integrated. Rail network owners were vertically integrated upstream into train ownership and transportation provision, and electric utilities were integrated upstream into generation. In network infrastructure industries, the platform is physical, and firms bundled the network service into their offering. But they have not been seen or thought of as platforms in the sense we are now coming to understand, as such firms and industries emerge; I suspect that’s because of the economic benefits and the historical path dependence of vertical integration.

Another distinguishing feature of platforms and two-sided markets is that the cost-revenue relationship is not uni-directional, a point summarized well in this Harvard Business Review article overview from 2006:

Two-sided networks can be found in many industries, sharing the space with traditional product and service offerings. However, two-sided networks differ from other offerings in a fundamental way. In the traditional value chain, value moves from left to right: To the left of the company is cost; to the right is revenue. In two-sided networks, cost and revenue are both to the left and the right, because the platform has a distinct group of users on each side. The platform incurs costs in serving both groups and can collect revenue from each, although one side is often subsidized, as we’ll see.

In this sense, I still think that the electricity network and its transactions have platform characteristics — the wires firm incurs costs to deliver energy from generators to consumers, and those costs arise in serving both distinct groups.
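A toy pricing model makes the subsidy logic from the HBR excerpt concrete. Every number here is invented for illustration (this is not Tirole’s model, just its flavor): side B values side A’s participation heavily, side A does not care about B, and the profit-maximizing pair of prices ends up putting side A below the platform’s cost of serving it:

```python
import itertools

# Toy two-sided pricing sketch; all parameters are illustrative.
COST = 0.2  # platform's cost of serving a participant on either side

def participation(pa, pb):
    na = max(1 - pa, 0)             # side A joins on its own price alone
    nb = max(1 + 1.5 * na - pb, 0)  # side B joins largely for access to A
    return na, nb

def profit(pa, pb):
    na, nb = participation(pa, pb)
    return (pa - COST) * na + (pb - COST) * nb

# Grid-search the two prices, allowing negative (subsidized) prices.
prices = [round(0.05 * i, 2) for i in range(-20, 41)]
pa, pb = max(itertools.product(prices, prices), key=lambda p: profit(*p))
print(pa, pb)
```

At the optimum the A side is priced below cost, and the margin is earned on the B side — exactly the “one side is often subsidized” pattern, and a pricing pair that would look discriminatory under a one-sided marginal-cost benchmark.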

As I apply these concepts to the electricity industry, I think digital technologies have two platform-related types of effects. The first is the reduction in transaction costs that were a big part of the economic drive for vertical integration in the first place — digital technologies make distributed digital sensing, monitoring, and measurement of energy flow and system status possible in ways that were inconceivable or impossibly costly before the invention of the transistor.

The second is the ability that digital technologies create for the network firm to handle more diverse and heterogeneous types of agents in a two-sided market. For example, digital sensors and automated digital switches make it possible to automate rules for the interconnection of distributed generation, electric vehicles, microgrids, and other diverse users into the distribution grid in ways that can be mutually beneficial in a two-sided market sense. The old electro-mechanical sensors could not do that.

This is the sense in which I think a lot of tech entrepreneurs talk about “unscaling the electricity industry”:

If we want secure, clean and affordable energy, we can’t continue down this path. Instead, we need to grow in a very different way, one more akin to the Silicon Valley playbook of unscaling an industry by aggregating individual users onto platforms.

Digitally-enabled distributed resources are becoming increasingly economical at smaller scales, and some of these types of resources — microgrids, electric vehicles — can be either producers or consumers, each with associated costs and revenues, their identities changing depending on whether they are selling excess energy or buying it.

This is a substantive, meaningful sense in which the distribution wires firm can, and should, operate as a platform and think about platform strategies as the utility business model evolves. An electric distribution platform facilitates exchange in two-sided electricity and energy service markets, charging a fee for doing so. In the near term, much of that facilitation takes the form of distribution, of the transportation and delivery. As distributed resources proliferate, the platform firm must rethink how it creates value, and reaps revenues, by facilitating beneficial exchange in two-sided markets.

Solar generation in key states

I’ve been playing around with some ownership type and fuel source data on electricity generation, using the EIA’s annual data going back to 1990. I looked at solar’s share of the total MWh of electricity generated in eight states (AZ, CA, IL, NC, NJ, NY, OH, TX) over 1990-2012, expressed as a percentage of that total; here’s what I got:

solar share since 1990
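Mechanically, the share computation is simple. Here is a sketch with a tiny illustrative frame; the column names and numbers are my assumptions for demonstration, not the EIA’s published file layout or actual figures:

```python
import pandas as pd

# Hypothetical tidy frame of EIA-style annual generation by state,
# year, and fuel (values are made up for illustration).
gen = pd.DataFrame({
    "state": ["CA", "CA", "TX", "TX"],
    "year":  [2012, 2012, 2012, 2012],
    "fuel":  ["solar", "all_other", "solar", "all_other"],
    "mwh":   [3_000, 197_000, 200, 399_800],
})

# Solar MWh divided by total MWh, by state-year, as a percentage.
total = gen.groupby(["state", "year"])["mwh"].sum()
solar = gen[gen["fuel"] == "solar"].set_index(["state", "year"])["mwh"]
share_pct = (100 * solar / total).rename("solar_share_pct")
print(share_pct)
```

With real EIA data the same three lines produce the state-by-year series plotted above.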

In looking at the data and at this graph, a few things catch my attention. California (the green line) clearly has an active solar market throughout the entire period, much of which I attribute to the implementation of PURPA qualifying facilities regulations starting in 1978 (although I’m happy to be corrected if I’m mistaken). The other seven states have little or no solar market until the last few years of the period; Arizona (which starts generating solar in 2001) and Texas (some solar before restructuring, then none, then an increase) are exceptions to the general pattern.

Of course the most striking pattern in these data is the large uptick in solar shares in 2011 and 2012. That uptick is driven by several factors, both economic and regulatory, and trying to disentangle them is part of what I’m working on currently. I’m interested in the development of and change in the residential solar market, and how the extent and type of regulatory policy influence the extent and type of innovation and changing market boundaries that ensue. Another way to parse the data is by ownership type, and how that varies by state depending on the regulatory institutions in place. In a state like North Carolina (teal), still vertically integrated, both the regulated utility and independent power producers own solar. The path to market, and indeed whether you can actually say that a residential solar market qua market exists, differs in a vertically integrated state from, say, New Jersey (orange) or Illinois (purple, but barely visible), where thus far the residential solar market is independent and the regulated utility does not participate (again, please correct me if I’m mistaken).

It will be interesting to see what the 2013 data tell us when the EIA releases them in November. But even in California with that large uptick, solar’s share of total MWh generated does not go above 2 percent, and it is substantially smaller in the other states.

What do you see here? I know some of you will want to snark about subsidies for the uptick, but please keep it substantive :-).

Why does a theory of competition matter for electricity regulation?

For the firms in regulated industries, for the regulators, for their customers, does the theory underlying the applied regulation matter? I think it matters a lot, even down in the real-world trenches of doing regulation, because regulation’s theoretical foundation influences what regulators and firms do and how they do it. Think about a traditional regulated industry like electricity — vertically integrated because of initial technological constraints, with technologies that enable production of standard electric power service at a particular voltage range with economies of scale over the relevant range of demand.

When these technologies were new and the industry was young, the economic theory of competition underlying the form that regulation took was what we now think of as a static efficiency/allocation-focused model. In this model, production is represented by a known cost function with a given capital-labor ratio; that function is the representation of the firm and of its technology (note here how the organization of the firm fades into the background, to be re-illuminated starting in the mid-20th century by Coase and other organizational and new institutional economists). In the case of a high fixed cost industry with economies of scale, that cost function’s relevant characteristic is declining long-run average cost as output produced increases. On the demand side, consumers have stable preferences for this well-defined, standard good (electric power service at a particular voltage range).

In this model, the question is how to maximize total surplus given the technology, cost function, and preferences. This is the allocation question, and it’s a static question, because the technology, cost function, and preferences are given. The follow-on question in an industry with economies of scale is whether competition, rivalry among firms, will yield the best possible allocation, with the largest total surplus. The answer from this model is no: compared to the efficient benchmark where firms compete by lowering price to marginal cost, a “natural monopoly” industry/firm/cost structure cannot sustain P=MC because of the fixed costs, and price equal to average cost (where economic profits are “normal”) is not a stable equilibrium either. The model indicates that the stable equilibrium is the monopoly price, with associated deadweight loss. But the P=AC point yields the highest feasible total surplus given the nature of the cost function. Thus this static allocative efficiency model is the justification for regulation of prices and quantities in this market: to make the quantity at which P=AC a stable outcome.
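A numeric sketch, with parameters chosen purely for illustration, shows the three candidate prices side by side:

```python
# Static natural-monopoly sketch with assumed parameters: inverse
# demand P = 100 - Q, cost C(Q) = 1200 + 20*Q, so average cost
# AC = 1200/Q + 20 declines throughout the relevant output range.

F, c = 1200.0, 20.0
def p_of(q): return 100 - q
def cost(q): return F + c * q

# Efficient benchmark P = MC: q = 80, but revenue falls short of cost by F.
q_mc = 80.0
loss_at_mc = p_of(q_mc) * q_mc - cost(q_mc)   # negative: unsustainable

# Unregulated monopoly: MR = MC gives 100 - 2q = 20, so q = 40, P = 60.
q_m = 40.0
profit_m = p_of(q_m) * q_m - cost(q_m)        # positive, with deadweight loss

# Regulated target P = AC: 100 - q = 1200/q + 20 gives q = 60, P = 40,
# zero economic profit -- the break-even point regulation aims to stabilize.
q_ac = 60.0
profit_ac = p_of(q_ac) * q_ac - cost(q_ac)

print(loss_at_mc, profit_m, profit_ac)
```

The three profit figures trace the argument in the text: marginal-cost pricing loses the fixed cost, the monopoly price earns a profit at the cost of deadweight loss, and the average-cost price breaks even, which is why rate regulation targets it.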

The theory of competition underlying this regulatory model is the static efficiency model: competition is beneficial because it enables rival firms to bid prices down to P=MC, maximizing total surplus and output produced (all the output that’s worth producing gets produced). Based on this model, legislators, regulators, and industry all influenced the design of regulation’s institutional details — rate-of-return regulation to target firm profits at “normal” levels, deriving retail prices from that, and erecting an entry barrier to exclude rivals while requiring the firm to serve all customers.

So what? I’ve just argued that regulatory institutional design is grounded in a theory of competition. If institutional designers hold a particular theory about what competition does and how it does it, that theory will inform their design to achieve their policy objectives. Institutional design is a function of the theory of competition, the policy objectives, and the ability/interest of industry to influence the design. If your theory of competition is the static allocative efficiency theory, you will design institutions to target the static efficient outcome in your model (in this case, P=AC). You start with a policy objective or a question to explore and a theory of competition, and out of that you derive an institutional design.

But what if competition is beneficial for other reasons, in other ways? What if the static allocative efficiency benefits of competition are just a single case in a larger set of possible outcomes? What if the phenomena we want to understand, the question to explore, the policy objective, would be better served by a different model? What if the world is not static, so the incumbent model becomes less useful because our questions and policy objectives have changed? Would we design different regulatory institutions if we use a different theory of competition? I want to try to treat that as a non-rhetorical question, even though my visceral reaction is “of course”.

These questions don’t get asked in legislative and regulatory proceedings, but given the pace and nature of dynamism, they should.

Technology market experimentation in regulated industries: Are administrative pilot projects bad for retail markets?

Since 2008, multiple smart grid pilot projects have been occurring in the US, funded jointly through regulated utility investments and taxpayer-funded Department of Energy cost sharing. In this bureaucratic market environment, market experimentation takes the form of the large-scale, multi-year pilot project. The regulated utility (after approval from the state public utility commission) publishes a request for proposals from smart grid technology vendors to sell devices and systems that provide a pre-determined range of services specified in the RFP. The regulated utility, not the end user, is thus the vendor’s primary customer.

When regulated incumbent distribution monopolists provide in-home technology to residential customers in states where retail markets are nominally competitive but the incumbent is the default service provider, does that involvement of the regulated incumbent have an anti-competitive effect? Does it reduce experimentation and innovation?

In markets with low entry and exit barriers, entrepreneurship drives new product creation and product differentiation. Market experimentation reveals whether or not consumers value such innovations. In regulated markets like electricity, however, this experimentation occurs in a top-down, procurement-oriented manner, without the organic evolution of market boundaries as entrants generate new products and services. Innovations do not succeed or fail based on their ability to attract end-use customers, but rather on their ability to persuade the regulated monopolist that the product is cost-reducing to the firm rather than value-creating for the consumer (and, similarly, their ability to persuade regulators).

The stated goal of many projects is installing digital technologies that increase the performance and reliability of basic wires distribution service. For that reason, the projects emphasize technologies in the distribution wires network (distribution automation) and the digital meter at each home. The digital meter is the edge of the wires network, from the regulated utility’s perspective, and in restructured states it is the edge of its business, the edge of the regulated footprint. A secondary goal is to explore how some customers actually use technology to control and manage their own energy use; a longer-run consequence of this exploration may be consumer learning about their electricity consumption, now that digital technology exists that can enable them to reduce consumption and save money by automating their actions.

In these cases, consumer technology choices are being made at the firm level by the regulated monopolist, not at the consumer level by consumers. This narrowed path to market for in-home technology changes the nature of the market experimentation – on one hand, the larger-volume purchases by regulated utilities may attract vendors and investors and increase rivalry and experimentation, but on the other hand, the margin at which the technology rivalry occurs is not at the end-user as decision-maker, but instead at the regulated utility. The objective functions of the utility and their heterogeneous residential customers differ substantially, and this more bureaucratic, narrowed experimentation path reduces the role of the different preferences and knowledge of those heterogeneous consumers. In that sense, the in-home technology choice being in the hands of the regulated utility stifles market experimentation with respect to the preferences of the heterogeneous consumers, although it increases experimentation with respect to the features that the regulated monopolist thinks that its customers want.

Focusing any burgeoning consumer demand on a specific technology, specific vendor, and specific firm, while creating critical mass for some technology entrepreneurs, rigidifies and channels experimentation into vendors and technologies chosen by the regulated monopolist, not by end-use consumers. Ask yourself this counterfactual: would the innovation and increase in features and value of mobile technologies have been this high if instead of competing for the end user’s business, Apple and Google had to pitch their offerings to a large, regulated utility?

These regulated incumbent technology choices may have anti-competitive downstream effects. They reduce the set of experimentation and commercialization opportunities available to retail entrants to provide product differentiation, product bundling, or other innovative value propositions beyond the scope of those being tested by the incumbent monopolist. Bundling and product differentiation are the dominant forms that dynamic competition takes, and in this industry such retail bundling and product differentiation would probably include in-home devices. The regulated incumbent providing in-home technology to default customers participating in pilot projects reduces the scope for competing retail providers to engage in either product differentiation or bundling. That limitation undercuts their business models and is potentially anti-competitive.

The regulated incumbent’s default service provision and designation of in-home technology reduce a motive for consumers to search for other providers and other competing products and services. While the incumbent may argue that it is providing a convenience to its customers, it is substituting its judgment of what it thinks its customers want for the individual judgments of those customers.

By offering a competing regulated retail service and leveraging it into the provision of in-home devices for pilot projects, the incumbent reduces the set of feasible, potentially valuable profit opportunities facing potential retail competitors, thus reducing entry. Entrants have to be that much more innovative to get a foothold in this market against the incumbent, in the face of consumer switching costs and inertia, when incumbent provision of in-home devices reduces the potential demand facing potential entrants. Even if the customer pays for and owns the device, the anti-competitive effect can arise from the monopolist offering the device as a complement to its regulated default service product.

Leaving in-home technology choice to retailers and consumers contributes to healthy retail competition. Allowing the upstream regulated incumbent to provide in-home technology hampers it, to the detriment of both entrepreneurs and the residential customers who would have gotten more value out of a different device than the one provided by the regulated incumbent. By increasing the number of default service customers with in-home smart grid devices, these projects decrease the potential demand facing these independent retailers by removing or diluting one of the service dimensions on which they could compete. Their forays into in-home technology may not have anti-competitive intent, but they still may have anti-competitive consequences.