Elementary error misleads APPA on electricity pricing in states with retail electric choice

The American Public Power Association (APPA) recently published an analysis of retail power prices, but it makes an elementary mistake and gets the conclusion wrong.

The APPA analysis, “2014 Retail Electric Rates in Deregulated and Regulated States,” uses U.S. Energy Information Administration data to compare retail electric prices in “deregulated” and “regulated” states. The report itself presents its analysis without much in the way of evaluation, but the APPA blog post accompanying its release was clear on the message:

after nearly two decades of retail and wholesale electric market restructuring, the promise of reduced rates has failed to materialize. In fact, customers in states with retail choice programs located within RTO-operated markets are now paying more for their electricity.

In 1997, the retail electric rate in deregulated states — the ones offering retail choice and located within an RTO — was 2.8 cents per kilowatt-hour (kWh) higher than rates in the regulated states with no retail choice. The gap has increased over the last two decades. In 2014, customers in deregulated states paid, on average, 3.3 cents per kWh more than customers in regulated states.

But the APPA neglects the effects of inflation over the 17-year period of analysis, an elementary mistake. Merely adjusting for inflation from 1997 to 2014 reverses the conclusion.

The elementary mistake is easily corrected: inflation data can be found at the St. Louis Fed site. Expressed in the 2014 value of the dollar, average prices in the states the APPA classifies as regulated were 8.4 cents per kWh in 1997 and 9.4 cents in 2014. In the states the APPA classifies as deregulated, average prices were 12.5 cents per kWh in 1997 and 12.7 cents in 2014.

Prices were up for both groups after adjusting for inflation, but prices increased more in the APPA's regulated states (1 cent per kWh, up about 11.3 percent) than in its deregulated states (0.2 cents, up about 1.4 percent). The inflation-adjusted "gap" fell from nearly 4.1 cents in 1997 to 3.3 cents in 2014.
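For concreteness, here is a minimal sketch of the deflation step. The CPI-U annual averages below are approximate values plugged in for illustration (the calculation above relies on the St. Louis Fed series), so check them against the source before reusing the numbers.

```python
# Minimal sketch of the inflation adjustment; the CPI-U annual averages are
# approximate illustrative values, not figures taken from the APPA report.
CPI_U = {1997: 160.5, 2014: 236.7}

def to_2014_cents(nominal_cents: float, year: int) -> float:
    """Express a nominal cents-per-kWh price in 2014 dollars via the CPI-U ratio."""
    return nominal_cents * CPI_U[2014] / CPI_U[year]

# APPA reports a nominal gap of 2.8 cents/kWh in 1997; in 2014 dollars that is
# roughly 4.1 cents/kWh, larger than the 3.3-cent gap observed in 2014.
print(round(to_2014_cents(2.8, 1997), 1))
```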

ADDENDUM

Surprisingly, the APPA knows that an inflation adjustment would change its answer. The report ignores the issue completely, but the APPA blog said:

For example, a recent analysis by the Compete Coalition finds that, after accounting for inflation, rates in restructured states decreased by 1.3 percent and increased by 9.8 percent in regulated states since 1997. The data in the APPA study, which does not account for inflation, show that rates in the deregulated states grew by 48 percent compared to a 62 percent increase for the regulated states.

However, a percentage-based comparison obscures the important fact that the 1997 rates in deregulated states were much greater than those in regulated states.

The Compete Coalition report is not linked in the APPA post, but the data points mentioned are here: “Consumers Continue To Fare Better With Competitive Markets, Both at Retail and Wholesale.”

The remaining differences between my inflation-adjusted APPA values and those of the Compete Coalition likely arise because Texas is in the Compete Coalition's restructured-states category but not in the APPA's deregulated-states category. The omission seems odd, given that most power in Texas is sold in a quite competitive retail power market. The APPA does not say why Texas is excluded from its deregulated category.

According to EIA data [XLS], average power prices in Texas were 9 cents per kWh in 1997 but had fallen to 8.7 cents by 2013. Both figures are adjusted for inflation using CPI-U values from the St. Louis Fed website and expressed in 2014 dollars. The 2013 figures were the latest shown in the EIA dataset.

Forthcoming paper: Implications of Smart Grid Innovation for Organizational Models in Electricity Distribution

Back in 2001 I participated in a year-long forum on the future of the electricity distribution model. Convened by the Center for the Advancement of Energy Markets, the DISCO of the Future Forum brought together many stakeholders to develop several scenarios and analyze their implications (and several of those folks remain friends, playmates in the intellectual sandbox, and commenters here at KP [waves at Ed]!). As noted in this 2002 Electric Light and Power article,

Among the 100 recommendations that CAEM discusses in the report, the forum gave suggestions ranging from small issues - that regulators should consider requiring a standard form (or a “consumer label”) on pricing and terms and conditions of service for small customers, to be provided to customers at the time of the initial offer (as well as upon request) - to larger ones, including the suggestions that regulators should establish a standard distribution utility reporting format for all significant distribution upgrades and extensions, and that regulated DISCOs should be permitted to recover their reasonable costs for development of grid interface designs and grid interconnect application review.

“The technology exists to support a competitive retail market responsive to price signals and demand constraints,” the report concludes. “The extent to which the market is opened to competition and the extent to which these technologies are applied by suppliers, DISCOS and customers will, in large part, be determined by state legislatures and regulators.”

Now in 2015, technological dynamism has brought to a head many of the same questions, regulatory models, and business models that we “penciled out” 14 years ago.

In a new paper, forthcoming in the Wiley Handbook of Smart Grid Development, I grapple with that question: what are the implications of this technological dynamism for the organizational form of the distribution company? What transactions in the vertically-integrated supply chain should be unbundled, what assets should it own, and what are the practical policy issues being tackled in various places around the world as they deal with these questions? I analyze these questions using a theoretical framework from the economics of organization and new institutional economics. And I start off with a historical overview of the industry’s technology, regulation, and organizational model.

Implications of Smart Grid Innovation for Organizational Models in Electricity Distribution

Abstract: Digital technologies from outside the electricity industry are prompting changes in both regulatory institutions and electric utility business models, leading to the disaggregation or unbundling of historically vertically integrated electricity firms in some jurisdictions and not others, and simultaneously opening the door for competition with the traditional electric utility business. This chapter uses the technological and organizational history of the industry, combined with the transactions cost theory of the firm and of vertical integration, to explore the implications of smart grid technologies for future distribution company business models. Smart grid technologies reduce transactions costs, changing economical firm boundaries and reducing the traditional drivers of vertical integration. Possible business models for the distribution company include an integrated utility, a network manager, or a coordinating platform provider.

The New York REV and the distribution company of the future

We live in interesting times in the electricity industry. Vibrant technological dynamism, the very dynamism that has transformed how we work, play, and live, puts increasing pressure on the early-20th-century physical network, regulatory model, and resulting business model of the vertically-integrated distribution utility.

While the utility “death spiral” rhetoric is overblown, these pressures are real. They reflect the extent to which regulatory and organizational institutions, as well as the architecture of the network, are incompatible with a general social objective of not obstructing such innovation. Reinforcing my innovation-focused claim is the addition of relatively new environmental objectives to the policy mix. Innovation, particularly innovation at the distribution edge, is an expression of human creativity that advances both the older economic policy objective of protecting consumers from concentrations of market power and the newer environmental policy objective of a cleaner and more prosperous energy future.

But institutions change slowly, especially bureaucratic institutions where decision-makers have a stake in the direction and magnitude of institutional change. Institutional change requires imagination to see a different world as possible, practical vision to see how to get from today’s reality toward that different world, and courage to exercise the leadership and navigate the tough tradeoffs that inevitably arise.

That’s the sense in which the New York Reforming the Energy Vision (REV) proceeding of the New York State Public Service Commission (Greentech) is compelling and encouraging. Launched in spring 2014 with a staff paper, REV is looking squarely at institutional change to align the regulatory framework and the business model of the distribution utility more with these policy objectives and with fostering innovation. As Katherine Tweed summarized the goals in the Greentech Media article linked above,

The report calls for an overhaul of the regulation of the state’s distribution utilities to achieve five policy objectives:

  • Increasing customer knowledge and providing tools that support effective management of their total energy bill
  • Market animation and leverage of ratepayer contributions
  • System-wide efficiency
  • Fuel and resource diversity
  • System reliability and resiliency

The PSC acknowledges that the current ratemaking procedure simply doesn’t work and that the distribution system is not equipped for the changes coming to the energy market. New York is already a deregulated market in which distribution is separated from generation and there is retail choice for electricity. Although that’s a step beyond many states, it is hardly enough for what’s coming in the market.

Last week the NY PSC issued its first order in the REV proceeding, directing that the incumbent distribution utilities will serve as distributed system platform providers (DSPPs) and should start planning accordingly. As noted by RTO Insider,

The framework envisions utilities serving a central role in the transition as distributed system platform (DSP) providers, responsible for integrated system planning and grid and market operations.

In most cases, however, utilities will be barred from owning distributed energy resources (DER): demand response, distributed generation, distributed storage and end-use energy efficiency.

The planning function will be reflected in the utilities’ distributed system implementation plan (DSIP), a multi-year forecast proposing capital and operating expenditures to serve the DSP functions and provide third parties the system information they need to plan for market participation.

A platform business model is not a cut-and-dried thing, though, especially in a regulated industry where the regulatory institutions reinforced and perpetuated a vertically integrated model for over a century (a model modified in any significant way only when generator technological change in the 1980s led to generation unbundling). Institutional design and market design, the symbiosis of technology and institutions, will have to be front and center if the vertically integrated, uni-directional delivery model of the 20th century is to evolve into a distribution facilitator for the 21st century.

In fact, the institutional design issues at stake here have been the focus of my research during my sabbatical, so I hope to have more to add to the discussion based on some of my forthcoming work on the subject.

Moody’s concludes: mass grid defection not yet on the horizon

Yes, solar power systems are getting cheaper and battery storage is improving. The combination has many folks worried (or elated) about the future prospects of grid-based electric utilities when consumers can get the power they want at home. (See Lynne’s post from last summer for background.)

An analysis by Moody’s concludes that battery storage costs remain an order of magnitude too high, so grid defection is not yet a demonstrable threat. Analysis of consumer power-use data leads Moody’s to project the need for a larger home system than other analysts have assumed. Moody’s further suggests that consumers will be reluctant to make the required lifestyle changes (frequent monitoring of battery levels, forced conservation during extended periods of low solar resource), so grid defection may be even slower than a simple engineering-economics computation would suggest.

COMMENT: I’ll project that in a world of widespread consumer power defections, we will see two developments to help consumers avoid forced conservation. Nobody will have to miss watching Super Bowl LXXX because it was cloudy the week before in Boston. First, plug-in hybrid vehicle hook-ups, so home batteries can be recharged by the consumer’s gasoline or diesel engine. Second, home battery service companies offering mobile recharge services (or hot-swapping of home battery systems, and so on). Who knows, in a world of widespread defection, maybe the local electric company will offer spot recharge services at a market-based rate?

[HT to Clean Beta]

Platform economics and “unscaling” the electricity industry

A few weeks ago I mused over the question of whether there would ever be an Uber or AirBnB for the electricity grid. This question is a platform question — both Uber and AirBnB have business models in which they bring together two parties for mutual benefit, and the platform provider’s revenue stream can come from charging one or both parties for facilitating the transaction (although there are other means too). I said that a “P2P platform very explicitly reduces transaction costs that prevent exchanges between buyer and seller”, and that’s really the core of a platform business model. Platform providers exist to make exchanges feasible that were not before, to make them easier, and ultimately to make them either cheaper or more valuable (or some combination of the two).

In this sense the Nobel Prize award to Jean Tirole (pdf, very good summary of his work) this week was timely, because one of the areas of economics to which he has contributed is the economics of two-sided platform markets. Alex Tabarrok wrote an excellent summary of Tirole’s platform economics work. As Alex observes,

Antitrust and regulation of two-sided markets is challenging because the two sets of prices [that the platform firm charges to the two parties] may look discriminatory or unfair even when they are welfare enhancing. … Platform markets mean that pricing at marginal cost can no longer be considered optimal in every market and pricing above marginal cost can no longer be considered as an indication of monopoly power.

One aspect of platform firms is that they connect distinct users in a network. Platform firms are network firms. Not all network firms/industries operate or think of their business models as platform firms, though. That will change.

What role does a network firm provide? It’s connection, facilitating exchange between two parties. This idea is not novel, not original in the digital age. Go back in economic history to the beginnings of canals, say, or rail networks. Transportation is a quintessential non-digital network platform industry. I think you can characterize all network infrastructure industries as having some aspects of platform or two-sided markets; rail networks bring together transportation providers and passengers/freight, postal networks bring together correspondents, pipeline networks bring together buyers and sellers of oil or natural gas, electric wires networks bring together generators and consumers.

What’s novel in the digital age is that by changing transaction costs, the technology changes the transactional boundary of the firm and reduces the economic impetus for vertical integration. A digital platform firm, like Google or Uber, is not vertically integrated upstream or downstream in any of the value chains that its platform enables (although some of Google’s acquisitions are changing that somewhat), whereas historically, railroads and gas companies and electric companies started out vertically integrated. Rail network owners were vertically integrated upstream into train ownership and transportation provision, and electric utilities were integrated upstream into generation. In network infrastructure industries, the platform is physical, and firms bundled the network service into their offering. But they have not been seen or thought of as platforms in the sense we are now coming to understand as digital platform firms and industries emerge; I suspect that’s because of the economic benefits and the historical path dependence of vertical integration.

Another distinguishing feature of platforms and two-sided markets is that the cost-revenue relationship is not uni-directional, a point summarized well in this Harvard Business Review article overview from 2006:

Two-sided networks can be found in many industries, sharing the space with traditional product and service offerings. However, two-sided networks differ from other offerings in a fundamental way. In the traditional value chain, value moves from left to right: To the left of the company is cost; to the right is revenue. In two-sided networks, cost and revenue are both to the left and the right, because the platform has a distinct group of users on each side. The platform incurs costs in serving both groups and can collect revenue from each, although one side is often subsidized, as we’ll see.

In this sense, I still think that the electricity network and its transactions have platform characteristics: the wires firm incurs costs to deliver energy from generators to consumers, and those costs arise in serving both distinct groups.

As I apply these concepts to the electricity industry, I think digital technologies have two platform-related types of effects. The first is the reduction in transaction costs that were a big part of the economic drive for vertical integration in the first place — digital technologies make distributed digital sensing, monitoring, and measurement of energy flow and system status possible in ways that were inconceivable or impossibly costly before the invention of the transistor.

The second is the ability that digital technologies create for the network firm to handle more diverse and heterogeneous types of agents in a two-sided market. For example, digital sensors and automated digital switches make it possible to automate rules for the interconnection of distributed generation, electric vehicles, microgrids, and other diverse users into the distribution grid in ways that can be mutually beneficial in a two-sided market sense. The old electro-mechanical sensors could not do that.

This is the sense in which I think a lot of tech entrepreneurs talk about “unscaling the electricity industry”:

If we want secure, clean and affordable energy, we can’t continue down this path. Instead, we need to grow in a very different way, one more akin to the Silicon Valley playbook of unscaling an industry by aggregating individual users onto platforms.

Digitally enabled distributed resources are becoming increasingly economical at smaller scales, and some of these resources (microgrids, electric vehicles) can be either producers or consumers, each with associated costs and revenues, and their identities change depending on whether they are selling excess energy or buying it in a given interval; a toy sketch of that identity-switching follows.
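Here is the sketch, purely illustrative and my own construction rather than anything from the chapter: a distributed resource’s market role in a given settlement interval can be derived from its net energy position.

```python
# Illustrative-only sketch: a distributed resource whose two-sided market role
# flips with its net energy position in a given settlement interval.
from dataclasses import dataclass

@dataclass
class DistributedResource:
    name: str
    generation_kwh: float
    consumption_kwh: float

    @property
    def net_kwh(self) -> float:
        return self.generation_kwh - self.consumption_kwh

    @property
    def role(self) -> str:
        # Seller when it has excess energy to inject, buyer otherwise.
        return "seller" if self.net_kwh > 0 else "buyer"

for r in (DistributedResource("EV charging overnight", 0.0, 12.0),
          DistributedResource("campus microgrid, sunny afternoon", 40.0, 25.0)):
    print(f"{r.name}: {r.role} of {abs(r.net_kwh):.1f} kWh")
```

The distribution platform has to price and settle with the same connected party on different sides of the market at different times.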

This is a substantive, meaningful sense in which the distribution wires firm can, and should, operate as a platform and think about platform strategies as the utility business model evolves. An electric distribution platform facilitates exchange in two-sided electricity and energy service markets, charging a fee for doing so. In the near term, much of that facilitation takes the form of distribution, of the transportation and delivery. As distributed resources proliferate, the platform firm must rethink how it creates value, and reaps revenues, by facilitating beneficial exchange in two-sided markets.

Solar generation in key states

I’ve been playing around with some ownership-type and fuel-source data on electricity generation, using the EIA’s annual data going back to 1990. I looked at solar’s share of total MWh of electricity generated in eight states (AZ, CA, IL, NC, NJ, NY, OH, TX) over 1990-2012, expressed as a percentage of each state’s total; here’s what I got:

[Figure: solar share of total MWh generated, by state, since 1990]
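For anyone who wants to reproduce the calculation, here is a rough sketch assuming a tidy CSV extract of the EIA annual net generation data; the file name and column names below are hypothetical placeholders, not the EIA’s actual layout.

```python
import pandas as pd

# Hypothetical tidy extract of EIA annual net generation with placeholder
# columns: year, state, fuel, generation_mwh.
STATES = ["AZ", "CA", "IL", "NC", "NJ", "NY", "OH", "TX"]

gen = pd.read_csv("eia_annual_generation.csv")
gen = gen[gen["state"].isin(STATES) & gen["year"].between(1990, 2012)]

total = gen.groupby(["state", "year"])["generation_mwh"].sum()
solar = (gen[gen["fuel"] == "Solar"]
         .groupby(["state", "year"])["generation_mwh"].sum())

# Solar's percentage share of each state's total generation, by year.
solar_share_pct = (solar / total * 100).unstack("state").fillna(0.0)
print(solar_share_pct.round(3))
```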

In looking at the data and at this graph, a few things catch my attention. California (the green line) clearly has an active solar market throughout the entire period, much of which I attribute to the implementation of PURPA qualifying-facilities regulations starting in 1978 (although I’m happy to be corrected if I’m mistaken). The other seven states have little or no solar market until near the end of the period; Arizona (where solar appears starting in 2001) and Texas (some solar before restructuring, then none, then an increase) are exceptions to the general pattern.

Of course the most striking pattern in these data is the large uptick in solar shares in 2011 and 2012. That uptick is driven by several factors, both economic and regulatory, and trying to disentangle them is part of what I’m working on currently. I’m interested in the development of and change in the residential solar market, and in how the extent and type of regulatory policy influence the extent and type of innovation and changing market boundaries that ensue. Another way to parse the data is by ownership type, and how that varies by state depending on the regulatory institutions in place. In a state like North Carolina (teal), still vertically integrated, both the regulated utility and independent power producers own solar. The path to market, and indeed whether you can actually say that a residential solar market qua market exists, differs in a vertically integrated state from, say, New Jersey (orange) or Illinois (purple, but barely visible), where thus far the residential solar market is independent and the regulated utility does not participate (again, please correct me if I’m mistaken).

It will be interesting to see what the 2013 data tell us when the EIA releases them in November. But even in California, with that large uptick, solar’s share of total MWh generated does not go above 2 percent, and it is substantially smaller in the other states.

What do you see here? I know some of you will want to snark about subsidies for the uptick, but please keep it substantive :-).

Why does a theory of competition matter for electricity regulation?

For the firms in regulated industries, for the regulators, for their customers, does the theory underlying the applied regulation matter? I think it matters a lot, even down in the real-world trenches of doing regulation, because regulation’s theoretical foundation influences what regulators and firms do and how they do it. Think about a traditional regulated industry like electricity — vertically integrated because of initial technological constraints, with technologies that enable production of standard electric power service at a particular voltage range with economies of scale over the relevant range of demand.

When these technologies were new and the industry was young, the economic theory of competition underlying the form that regulation took was what we now think of as a static efficiency/allocation-focused model. In this model, production is represented by a known cost function with a given capital-labor ratio; that function is the representation of the firm and of its technology (note here how the organization of the firm fades into the background, to be re-illuminated starting in the mid-20th century by Coase and other organizational and new institutional economists). In the case of a high fixed cost industry with economies of scale, that cost function’s relevant characteristic is declining long-run average cost as output produced increases. On the demand side, consumers have stable preferences for this well-defined, standard good (electric power service at a particular voltage range).

In this model, the question is how to maximize total surplus given the technology, cost function, and preferences. This is the allocation question, and it’s a static question, because the technology, cost function, and preferences are given. The follow-on question in an industry with economies of scale is whether or not competition, rivalry among firms, will yield the best possible allocation, with the largest total surplus. The answer from this model is no: compared to the efficient benchmark where firms compete by lowering price to marginal cost, a “natural monopoly” industry/firm/cost structure cannot sustain P=MC because of the fixed costs, and even price equal to average cost (where economic profits are “normal”) is not a stable equilibrium. The model indicates that the stable equilibrium is the monopoly price, with associated deadweight loss. But the P=AC point yields the highest total surplus feasible without a subsidy, given the nature of the cost function. Thus this static allocative efficiency model is the justification for regulation of prices and quantities in this market, to make the quantity at which P=AC a stable outcome.
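To make the comparison concrete, here is a small numerical sketch with made-up parameters (linear demand, constant marginal cost plus a fixed cost, so average cost declines over the relevant output range). It is only an illustration of the textbook argument, not anything from a regulatory proceeding: P=MC maximizes total surplus but leaves the firm with a loss equal to the fixed cost, P=AC is the best the market can do while letting the firm break even, and the unregulated monopoly price yields the smallest surplus.

```python
# Illustrative parameters only: demand P(Q) = 100 - Q, marginal cost 20, fixed cost 1000.
from math import sqrt

a, b, c, F = 100.0, 1.0, 20.0, 1000.0

def outcomes(q: float):
    """Price, consumer surplus, firm profit, and total surplus at output q."""
    p = a - b * q
    cs = 0.5 * (a - p) * q           # triangle under the linear demand curve above price p
    profit = p * q - (F + c * q)
    return p, cs, profit, cs + profit

q_mc = (a - c) / b                                          # P = MC (first best)
q_ac = (a - c + sqrt((a - c) ** 2 - 4 * b * F)) / (2 * b)   # larger root of P = AC
q_mon = (a - c) / (2 * b)                                   # MR = MC (unregulated monopoly)

for label, q in (("P = MC", q_mc), ("P = AC", q_ac), ("monopoly", q_mon)):
    p, cs, profit, total = outcomes(q)
    print(f"{label:8s} Q={q:5.1f}  P={p:5.1f}  CS={cs:7.1f}  profit={profit:7.1f}  total={total:7.1f}")
```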

The theory of competition underlying this regulatory model is the static efficiency model: competition is beneficial because it enables rival firms to bid prices down to P=MC, with each firm maximizing profit while consumer surplus and output produced are as large as possible (all the output that’s worth producing gets produced). Based on this model, legislators, regulators, and industry all influenced the design of regulation’s institutional details: rate-of-return regulation to target firm profits at “normal” levels, retail prices derived from that target, and an entry barrier to exclude rivals paired with an obligation for the firm to serve all customers.

So what? I’ve just argued that regulatory institutional design is grounded in a theory of competition. If institutional designers hold a particular theory about what competition does and how it does it, that theory will inform their design to achieve their policy objectives. Institutional design is a function of the theory of competition, the policy objectives, and the ability/interest of industry to influence the design. If your theory of competition is the static allocative efficiency theory, you will design institutions to target the static efficient outcome in your model (in this case, P=AC). You start with a policy objective or a question to explore and a theory of competition, and out of that you derive an institutional design.

But what if competition is beneficial for other reasons, in other ways? What if the static allocative efficiency benefits of competition are just a single case in a larger set of possible outcomes? What if the phenomena we want to understand, the question to explore, the policy objective, would be better served by a different model? What if the world is not static, so the incumbent model becomes less useful because our questions and policy objectives have changed? Would we design different regulatory institutions if we use a different theory of competition? I want to try to treat that as a non-rhetorical question, even though my visceral reaction is “of course”.

These questions don’t get asked in legislative and regulatory proceedings, but given the pace and nature of dynamism, they should.