Power Up: The framework for a new era of UK energy distribution

The Adam Smith Institute has published a research report I wrote for them, Power Up: The framework for a new era of UK energy distribution. From the press release:

The report … argues that new technologies such as smart grids and distributed energy production can revolutionise old models of energy distribution and pricing, in the same way that apps like Uber are disrupting traditional models of transport.

In a world of expensive energy prices, the report suggests regulators should encourage experimentation with new technologies, rather than cutting them off at inception. Regulating the market too heavily – often justified by claims that consumers are being ‘ripped off’ or overwhelmed by the number of tariffs available – closes down consumer experimentation and prevents technological and economic progress, which keeps energy prices high.

The paper envisions a world of choices in the energy market, where smart meters that relay real-time price changes to encourage better energy use are just the beginning. The author, Dr Lynne Kiesling, imagines consumers being able to see where their energy is coming from, and to choose what kind of green-grey energy mix they want.

Most important, Dr Kiesling argues, is for OFGEM to adopt a structure of ‘permissionless innovation’ – which allows companies to experiment freely without being granted permission from regulators. In the early days of the internet, no-one envisioned a world of Amazon, iPhones and Uber; but these inventions were able to thrive, as they were not limited by regulatory barriers. OFGEM, Kiesling argues, needs to adopt a more relaxed regulatory structure that dismantles the barriers that have been created.

Increasingly throughout the economy we see how decentralized technologies empower us to make decisions and to automate our decisions, leaving us free to pursue other projects with our time. We also see how producer and consumer experimentation with new products and new combinations of products and services are the essence of a vibrant, value-creating economy. In electricity the electrons won’t change, but the applications and services we layer on top of the electrons and bundle with them are changing dramatically, and in ways that can lead to a cleaner and prosperous future if barriers to entry and to innovation are lowered.

ASI’s Charlotte Bowyer wrote a summary of the report in City A.M., and she made some trenchant points:

Digital sensors and two-way devices allow automatic, preset heating and lighting, while dynamic pricing and smart meters will allow households to adjust the type of energy they use with real-time changes in electricity costs; for example, switching to renewable power when it reaches a certain price.

These innovations can reduce waste, improve efficiency, save money, and allow households to move towards cleaner energy usage, tailoring a “green-grey” mix to suit their budget.

However, such technologies will only proliferate within the right regulatory environment – one which the UK currently lacks. According to a new Adam Smith Institute report – Power Up: The framework for a new era of UK energy distribution – the electricity market’s current model of regulation severely limits the level and rate at which large-scale innovations can occur.

The current regulatory model was borne out of the privatisation of a vertically-integrated industry, where everything from electricity generation and transmission to distribution was performed by a single utility. The future of electricity generation and distribution will be nothing like this.

And in supposing a natural monopoly and applying a static model of innovation, even well-intentioned interventions have an adverse effect on competition, creating an institutional framework in which supplier innovation is severely hampered.

I am pleased to be extending the application of the idea of permissionless innovation to the electricity industry, and I am grateful to the Adam Smith Institute for inviting me to engage in this research.

2015 Nobel laureate Angus Deaton

Angus Deaton is the worthy and deserving winner of this year’s economics Nobel. The arc of his work, from theory to data to empirical application, has been consumption, measuring consumption, and consumption as an indicator of well-being, poverty, and inequality. His analyses also incorporate political economy as a factor influencing those relationships and incentives.

If you haven’t read his book The Great Escape, do so. It’s an accessible and optimistic account of the relationship between poverty and economic growth. Here’s an interview with Deaton from Caleb Brown at the Cato Daily Podcast in 2013 on the themes of the book. For a longer discussion based on the book, Russ Roberts’ EconTalk podcast with Deaton from November 2013 is well worth the time.

As always, Tyler’s post and Alex’s post at Marginal Revolution today have good synopses of Deaton’s work and its importance, and many good links to further reading.

In my mind, Deaton’s work sits with the demographic analyses and data visualization of Hans Rosling (if you haven’t seen his 200 countries in 200 years video, put down everything right now and watch it; his TED talk about the impact of the washing machine is guaranteed to bring tears to the eyes of at least this unsentimental economist). It also complements Joel Mokyr’s economic history of innovation and technological change as factors enabling economic growth.

Deaton’s unstinting attention to detail in improving the measurement of consumption has always stood out to me as a defining feature of his work. I recommend it to your attention, and congratulate him for this well-deserved award.

Cooperation between Bird-Watchers and Hot-Rodders

Dwight Lee writes of cooperation between antagonists, fostered by private ownership:

[M]ost members of the Audubon Society surely see the large sport utility vehicles and high-powered cars encouraged by abundant petroleum supplies as environmentally harmful. That perception, along with the environmental risks associated with oil recovery, helps explain why the Audubon Society vehemently opposes drilling for oil in the ANWR as well as in the continental shelves in the Atlantic, the Pacific, and the Gulf of Mexico. Although oil companies promise to take extraordinary precautions to prevent oil spills when drilling in these areas, the Audubon Society’s position is no off-shore drilling, none. One might expect to find Audubon Society members completely unsympathetic with hot-rodding enthusiasts, NASCAR racing fans, and drivers of Chevy Suburbans. Yet, as we have seen, by allowing drilling for gas and oil in the Rainey Sanctuary, the society is accommodating the interests of those with gas-guzzling lifestyles, risking the “integrity” of its prized wildlife sanctuary to make more gasoline available to those whose energy consumption it verbally condemns as excessive.

The incentives provided by private property and market prices not only motivate the Audubon Society to cooperate with NASCAR racing fans, but also motivate those racing enthusiasts to cooperate with the Audubon Society. Imagine the reaction you would get if you went to a stock-car race and tried to convince the spectators to skip the race and go bird-watching instead. Be prepared for some beer bottles tossed your way. Yet by purchasing tickets to their favorite sport, racing fans contribute to the purchase of gasoline that allows the Audubon Society to obtain additional wildlife habitat and to promote bird-watching. Many members of the Audubon Society may feel contempt for racing fans, and most racing fans may laugh at bird-watchers, but because of private property and market prices, they nevertheless act to promote one another’s interests.

From Dwight Lee, “To Drill or Not to Drill: Let the Environmentalists Decide,” The Independent Review, Fall 2001.

About the time Lee was writing his article, the last of the oil and gas leases on the Rainey preserve expired and the Audubon Society banned further oil and gas activity at the site. A few years later, in 2008, as oil and gas prices were reaching new heights, oil and gas companies came calling and Audubon considered leasing development rights again.

As the New Orleans Times-Picayune reported in January 2010, oil and gas surrounded the preserve, and Audubon thought engaging with developers might be better than resistance. Audubon’s Paul Kemp said, “There’s no way of stopping the development of oil and gas out here. A lease gives us some ability to control things.”

Audubon is working with neighboring landowners on a policy to govern oil and gas development around the sanctuary. New directional drilling tools, for example, may enable firms to tap fuel beneath the sanctuary without actually setting foot on the property. In that case, Audubon may arrange a profit-sharing arrangement with neighbors that host the equipment used to reach the sanctuary’s fields.

“We’re talking about ways we can join forces to improve the environmental protections for not just our lands, but our neighbors’ land while oil and gas activity goes on,” Kemp said. “Just forswearing any involvement is really not all that effective in protecting our property.”

Randal Moertle, one of the consultants working with Audubon, believes oil and gas development can coexist with a healthy marsh. But for that to happen, landowners would have to actively monitor their properties, he said.

“The federal regulatory and state regulatory agencies are ill equipped to monitor surface operations by an oil company on private property,” said Moertle. “There are not enough people to police what’s going on.”

The BP Deepwater Horizon oil spill in the Gulf of Mexico, in April 2010, apparently brought development efforts at the Rainey Sanctuary to a halt. This particular example of cooperation — long the go-to example of free market environmentalists (see this article from 1981, this reflection on the article from 2010, and this short piece from Reason from 2015) — is no longer active.

Traces of the cooperation between bird-watchers and hot-rodders live on, however, reflected in the larger size of the preserve today. Audubon used some of the millions of hot-rodder dollars it earned to buy more property and expand its conservation-minded influence over Louisiana’s coastal wetlands.

Should we make it politically profitable for policymakers to do the right thing?

Should we make it politically profitable for policymakers to do the right thing, or should we make it less profitable for policymakers to do anything?

Abigail Hall, writing a pair of posts for the Independent Institute blog The Beacon, urges liberty-minded people not to get too excited about electing the “right people.” (First post, second) The focus of her attention is Rand Paul supporters, but her argument is more general. Drawing on ideas of F.A. Hayek and James Buchanan, she notes that political actors are people like anyone else:

They respond to the incentives they face. These incentives are determined by the institutional context in which they operate. The incentives facing politicians do not necessarily align with those of the population as a whole. As a result, we wind up with pork barrel spending and policies that benefit special interests at the expense of the average American taxpayer.

She concludes by recommending engaging with ideas rather than politics. I thought her message lined up with that in a popular Milton Friedman quote, but Hall’s goal is more ambitious. Here is Friedman:

I do not believe that the solution to our problem is simply to elect the right people. The important thing is to establish a political climate of opinion which will make it politically profitable for the wrong people to do the right thing. Unless it is politically profitable for the wrong people to do the right thing, the right people will not do the right thing either, or if they try, they will shortly be out of office.

Hall, rather than trying to get politicians to do the right thing, wants to cut the power of politicians to do any thing.

How? She admits she does not know, but says, “we cannot rely on the current system to be the genesis of these changes” and “our battleground is one of ideas, not politics.”

Success in the world of ideas–what does it look like?

Hall is after more than nodding agreement among the tenured faculty, so the question arises: what would success in the world of ideas look like?

I think it looks like the growth of so-called “lifestyle libertarians” (a derisive term coined by book-based libertarians for the hemp-oil loving, rainbow wearing, do-your-own-thing kids in campus clubs who won’t read anything written before Mark Zuckerberg dropped out of Harvard). I think it looks like Glenn Beck donning glasses and calling himself libertarian, even if …. It could even include Donald Trump speaking at a freedom festival in Las Vegas (just kidding, that could never ever ever ever happen… d’oh!). I think progress in the world of ideas means a thousand badly argued articles in opposition to libertarianism and perhaps as many bad articles in support. I think it means politicians claiming to fight for liberty, even if they are not consistent, because they think the claim buys support.

Success in the world of ideas would, unavoidably, produce millions of liberty-supporters who can only defend their views badly. Not everyone persuaded of liberty will refine their beliefs by exploring Rothbard or Friedman (or Bastiat or Spooner or Wilder Lane). Success in the world of ideas will result in voters more likely to support politicians who say libertarian-ish things. Further success will result in voters more discerning in their support for politicians who say libertarian-ish things.

Hall is right to warn that institutions matter, and political institutions by their nature tend to disappoint. Some of the best work in political economy gives us good reason to think so. But part of what success in the world of ideas looks like is an uptake of the ideas among politicians and campaigns and voters. It is not clear that progress requires squelching these political outgrowths.

Should we make it politically profitable for policymakers to do the right thing, or should we make it less profitable for policymakers to do anything? We do not need a once-for-all answer — the returns are likely positive for efforts on both margins.

Note: Also see David Henderson’s “Abigail Hall’s Case Against Supporting Politicians” at EconLog.

Cass Sunstein on regulatory analysis and the knowledge problem

Cass Sunstein begins:

With respect to the past and future of regulation, there are two truly indispensable ideas. Unfortunately, they are in serious tension with one another. Potential solutions lie in three reforms, all connected with democracy itself – but perhaps not quite in the way that most people think.

The first indispensable idea is that it is immensely important to measure, both in advance and on a continuing basis, the effects of regulation on social welfare. As an empirical matter, what are the human consequences of regulatory requirements? That is the right question to ask, but inside and outside of government, it is tempting to focus on other things. […]

At the present time, the right way to answer the right question is to try to identify the costs and benefits of regulations, in order to catalogue and to compare the various consequences, and to help make sensible tradeoffs. To be sure, cost-benefit analysis can create serious challenges, and at the present time, it is hardly a perfect tool. Nonetheless, it is the best one we now have. Some people do not love cost-benefit analysis, but they should. If we want to know about the real-world effects of regulation, that form of analysis deserves a lot of love.

The second idea, attributable above all to Friedrich Hayek, is that knowledge is widely dispersed in society. As Hayek and his followers emphasize, government planners cannot possibly know what individuals know, simply because they lack that dispersed knowledge. The multiple failures of plans, and the omnipresence of unintended consequences, can be attributed, in large part, to the absence of relevant information. Hayek was particularly concerned about socialist-style planning. He contended that even if socialist planners are well-motivated and the public interest is their true concern, they will fail, because they will not know enough to succeed. Hayek celebrated the price system as a “marvel,” not for any mystical reason, but because it can aggregate dispersed information, and do so in a way that permits rapid adjustment to changing circumstances, values, and tastes.

In sum, while it is immensely important to measure the effects of regulation, we may lack the necessary information.

Sunstein notes, pragmatically, that regulators have access to substantial amounts of information and methods of cost-benefit analysis are well-established and improving. He wrote, “In the day-to-day life of cost-benefit analysis, regulators are hardly making a stab in the dark.” He continues:

Nonetheless, modern followers of Hayek are correct to emphasize what they call “the knowledge problem,” understood as the government’s potential ignorance, which can be a problem for contemporary regulators of all kinds […]

The tension, in short, is that regulators have to focus on costs and benefits (the first indispensable idea), but they will sometimes lack the information that would enable them to make accurate assessments (the second indispensable idea). … In light of the knowledge problem, can they produce reliable cost-benefit analyses, or any other kind of projection of the human consequences of what they seek to do, and of potential alternatives?

Sunstein identified three reforms he said respond to the first indispensable idea while helping overcome or mitigate the limits imposed by the knowledge problem: first, modern “notice-and-comment” rulemaking processes; second, retrospective analysis of regulations; and third, careful experiments, especially randomized control trials. As pragmatic responses to knowledge problems, each of the three has something to contribute.

Is it enough?

What would Hayek say? Sunstein responded to this question in a footnote:

I am not suggesting that Hayek himself would be satisfied. Consider this remarkable passage:

This is, perhaps, also the point where I should briefly mention the fact that the sort of knowledge with which I have been concerned is knowledge of the kind which by its nature cannot enter into statistics and therefore cannot be conveyed to any central authority in statistical form. The statistics which such a central authority would have to use would have to be arrived at precisely by abstracting from minor differences between the things, by lumping together, as resources of one kind, items which differ as regards location, quality, and other particulars, in a way which may be very significant for the specific decision. It follows from this that central planning based on statistical information by its nature cannot take direct account of these circumstances of time and place and that the central planner will have to find some way or other in which the decisions depending on them can be left to the “man on the spot.”

Hayek [“The Use of Knowledge in Society,” AER, 1945]. In my view, the claim here is a mystification, at least as applied to the regulatory context. Statistical information “by its nature” can indeed “take direct account of these circumstances of time and place.” Of course it is true that for some purposes and activities, statistical knowledge is inadequate.

It is an odd note.

Sunstein quoted Hayek, said Hayek’s point is a mystification, then admitted, “Of course it is true….” I guess Sunstein’s point is that it is a true mystery?

In any case, we should note a few things about Sunstein’s reforms. First, his focus is not so much on the doing of regulation itself, but rather on regulatory oversight. His three reforms are not to be applied in day-to-day regulation, but rather serve as external correctives. A related and perhaps more fundamental issue concerns the manner in which regulatory operations themselves can facilitate the production and coordination of knowledge in ways that promote better outcomes.

Second, extending this last point, as we compare government and market processes, we note the relative power of feedback processes in the private sector and the weakness of feedback processes in the public sector. If you are dissatisfied with your service in a restaurant, you can choose to eat elsewhere next time. If you are dissatisfied with your service at the Department of Motor Vehicles (or the Veterans Administration or the local Zoning Board, etc.), you have many fewer options. Feedback processes are essential to the production and coordination of knowledge. How can regulatory agencies learn in an environment in which feedback processes are so significantly attenuated?

Third, Sunstein is operating with a rather flat view of knowledge. That is to say, in his explanation knowledge and information and data are various names for more or less identical things. But if we take seriously Sunstein’s remark “of course it is true that for some purposes and activities, statistical knowledge is inadequate,” then further questions arise. Sunstein does not explore the point, but it is exactly here, for Hayek, that the force of the knowledge problem emerges.

There is a research agenda here.

Obviously Sunstein endorses further research and development of benefit-cost analysis, expansion of notice-and-comment rulemaking processes, retrospective regulatory analysis, and experimental tests of regulation. Benefit-cost analysis has a dedicated academic society, a journal, and book-length treatments. Sunstein discusses efforts to broaden understanding of regulatory proposals and encourage public engagement (for example, the government’s www.regulations.gov and the university-based outreach project regulationroom.org). Retrospective regulatory analysis and experimental tests get less attention, but a number of academic programs do at least some retrospective work (for example: the Mercatus Center Regulatory Studies Program at GMU, the Penn Program on Regulation, the George Washington University Regulatory Studies Center). As Sunstein notes, a number of federal agencies have committed to using experiments to help understand the impact of regulation.

How far can these efforts get us in the face of the knowledge problem? For which regulatory “purposes and activities” is it the case that “statistical knowledge is inadequate”? Are there patterns in this inadequacy that bias or undermine regulatory action? Assuming Sunstein’s reforms are fully implemented, what residual knowledge problems would continue to trouble regulation?

A good place to start is Lynne Kiesling’s article “Knowledge Problem” (obviously!) which appeared in the 2014 volume Oxford Handbook of Austrian Economics. Mark Pennington’s book Robust Political Economy examines how knowledge problems and incentive problems can frustrate political activity. Obviously, too, Hayek’s own “The Use of Knowledge in Society” and “Economics and Knowledge” are relevant, as is the later “Competition as a Discovery Process.” Don Lavoie’s “The Market as a Procedure for Discovery and Conveyance of Inarticulate Knowledge” condenses Hayek’s statements on the knowledge problem and further explains why the problem cannot be overcome merely by further developments in computer technology.

Going deeper, one might explore James Scott’s book Seeing Like a State, which emphasizes how the top-down processes of measurement and aggregation can strip meaning from knowledge and result in destruction of value. Then perhaps a work like Hans Christian Garmann Johnsen’s new book The New Natural Resource: Knowledge Development, Society and Economics might have something to say. (I’ve only just begun looking at Johnsen’s ambitious book, so it is too soon to judge.) Complementary to work on institutional frameworks and knowledge would be close studies of government agencies, like the Pressman and Wildavsky classic Implementation: How Great Expectations in Washington are Dashed in Oakland… and broader surveys of policy history such as Michael Graetz’s The End of Energy or Peter Grossman’s U.S. Energy Policy and the Pursuit of Failure.

And more.

Here I have focused just on those parts of the Sunstein article where he bumps up against the knowledge problem most explicitly. He explores each of the three reforms in more detail, providing much more to think about.

Recommended reading for regulatory analysts.


Sunstein’s paper is “Cost-Benefit Analysis and the Knowledge Problem.” Regulatory Policy Program Working Paper RPP-2015-03. Cambridge, MA: Mossavar-Rahmani Center for Business and Government, Harvard Kennedy School, Harvard University.

Will raising the minimum bid in federal oil and gas lease auctions boost auction revenue?

The short answer to the title question is probably not.

In the Department of the Interior’s rulemaking docket concerning the financial terms governing oil and gas leasing on federal lands, a handful of comments endorsed the idea of raising the minimum bid in lease auctions as a way to increase federal revenues. But, as explained below, under current law raising the minimum bid could increase or decrease total government revenue generated.

Raising the minimum bid can have the effect of increasing the winning bid on some properties, thus generating a higher bonus payment from the successful bidder. Presumably this is the direct effect that proponents of the idea have in mind. At the same time, a higher minimum bid means that some properties will not attract bids in the auction and instead be offered for non-competitive leasing after the auction. Parties securing non-competitive leases do not pay a bonus, so these properties yield lower federal revenues. Under existing regulations, royalty rates and other terms are mostly identical whether the lease is sold at auction or the property is leased non-competitively. Depending on whether the increased revenues from higher bonus bids are greater than the lost revenues from more properties being leased non-competitively, total revenue will either rise or fall.

In any case, relatively few properties likely would be affected by an increased minimum bid. Good prospects already sell at auction for more than the current minimum bid or any likely raised minimum bid—often much, much higher—so bonus payments on those properties would be unchanged. Marginal properties already fail to sell at auction at the existing minimum bid and are offered non-competitively; revenue from those properties would also be unchanged. Only in the narrow range between “good” and “marginal” as described will the higher minimum have an effect, sometimes increasing revenue and other times decreasing it.

The narrowness of the opportunity to increase federal revenues by raising the minimum bid can be emphasized by noting that revenues only increase for cases in which the minimum bid is increased to a point higher than the second highest willingness to pay among bidders but lower than the highest willingness to pay. In cases in which the existing minimum bid is below the highest willingness to pay but the increased minimum bid is higher than that value, revenue will fall. In other cases revenue is unaffected by the change in minimum bid.

We can pin down the various effects on revenue analytically.

Call the existing minimum bid “MB” and the proposed higher minimum bid “MB+”. Assume each potential bidder comes to the auction with a maximum willingness to pay in mind for each property. Call the bidder with the highest maximum willingness to pay bidder A and the bidder with the second highest willingness to pay bidder B. Assume non-strategic bidding (i.e., bidders A and B are willing to bid up to their maximum willingness to pay in any auction in which that maximum is greater than the minimum bid). Leases are offered in an ascending price auction.

Therefore, so long as bidder A values the lease at a value greater than or equal to the minimum bid, bidder A wins the auction at a bid fractionally higher than bidder B’s maximum willingness to pay or at the minimum bid level, whichever is higher. If bidder A wins the auction, then bidder A pays the winning bid (termed the “bonus payment”). If bidder A’s value is below the minimum bid level, the lease is not sold at auction and becomes available for non-competitive leasing and no bonus payment is made.

We can describe the possibilities in the following way (simplifying by using A to stand for the maximum willingness to pay of bidder A, and similarly for bidder B).

  1. If B < A < MB < MB+, then the lease is not sold by auction and becomes available for non-competitive leasing both under the existing minimum bid and under the higher minimum. Revenue is unaffected.
  2. If MB < MB+ < B < A, then the lease is sold at auction at fractionally above B, both with the existing minimum and with the higher minimum. Again, revenue is unaffected.
  3. If MB < B < A < MB+, then the lease would have sold at auction at fractionally above B under the existing minimum bid, but would be leased non-competitively at the higher minimum bid. These properties will yield less revenue.
  4. If MB < B < MB+ < A, then the lease would be sold to bidder A at just above B under the existing minimum bid, but will be sold to bidder A at the higher MB+ under the raised minimum bid. These properties yield higher revenue.
  5. If B < MB < A < MB+, then the lease would have sold to bidder A at MB but now will be leased noncompetitively. These properties will yield less revenue.
  6. If B < MB < MB+ < A, then the lease would be sold to bidder A at MB under the lower minimum and at MB+ under the higher minimum. These properties yield higher revenue.

These six cases exhaust the possibilities. We can conclude that revenue will increase if added revenues from cases 4 and 6 are greater than the reduction in revenues from cases 3 and 5.

That is to say, revenue will increase in cases in which MB+ is greater than B but less than A, but revenue will decrease in cases in which MB is less than A but MB+ is greater than A. Or, restating the “narrowness” conclusion from above, revenues increase only in cases in which the minimum is raised to a level that is higher than B but below A—in all other cases revenues are unchanged or reduced.
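The six cases are mechanical enough to check in code. Here is a minimal sketch (all dollar values are hypothetical, and the function names are mine, for illustration only) of the bonus revenue from a single lease under the auction rules described above, approximating “fractionally above B” as exactly B:

```python
def auction_revenue(a, b, mb):
    """Bonus payment for one lease in the ascending auction: bidder A wins at
    max(B, mb) if A's willingness to pay is at least the minimum bid mb;
    otherwise the lease goes to non-competitive leasing and pays no bonus."""
    if a < mb:
        return 0            # not sold at auction; leased non-competitively
    return max(b, mb)       # A outbids B (treating "fractionally above B" as B)

def revenue_change(a, b, mb_old, mb_new):
    """Net change in bonus revenue from raising the minimum bid."""
    return auction_revenue(a, b, mb_new) - auction_revenue(a, b, mb_old)

# The six cases from the text, with MB = 10 and MB+ = 20 (hypothetical values):
examples = [
    (8,  5,  "1: B < A < MB < MB+  -> unchanged"),
    (40, 30, "2: MB < MB+ < B < A  -> unchanged"),
    (18, 15, "3: MB < B < A < MB+  -> revenue falls"),
    (25, 15, "4: MB < B < MB+ < A  -> revenue rises"),
    (18, 5,  "5: B < MB < A < MB+  -> revenue falls"),
    (30, 5,  "6: B < MB < MB+ < A  -> revenue rises"),
]
for a, b, label in examples:
    print(label, revenue_change(a, b, mb_old=10, mb_new=20))
```

Running the sketch reproduces the pattern above: cases 1 and 2 show no change, cases 3 and 5 show losses (the lease drops out of the auction entirely), and cases 4 and 6 show gains (the bonus is pushed up to the new minimum).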

We can estimate the net revenue effects of a higher minimum bid.

Or rather, if we had detailed auction data from recent lease auctions, then we would be in a position to estimate the net revenue consequences of a higher minimum bid. With sufficient information from auctions, values for A and B could be estimated for various properties, and the effects of higher minimum bids could then be simulated.


Related earlier posts

Information on the rulemaking process

ABACCUS report highlights benefits of retail electric markets

On Tuesday the Distributed Energy Financial Group released its 2015 report, Annual Baseline Assessment of Choice in Canada and the United States (ABACCUS). The report provides an excellent overview of the current state of retail electricity markets in the 18 jurisdictions in the U.S. and Canada that permit at least some degree of retail competition. The overall result will not surprise anyone who follows the electricity industry or is a KP reader: for the eighth year in a row, Texas tops the rankings by a wide margin, with Alberta second and Pennsylvania third. And the general trend is promising, both in terms of market experimentation and regulatory institutional change to reduce barriers:

In nearly every jurisdiction in North America, REPs continue to expand their presence, increase the number of offerings, and increase the variety of offerings in the residential marketplace. These positive developments are primarily in response to market opportunities, but the activities of regulators to facilitate retail choices should not be glossed over. Regulators have reduced barriers to entry, facilitated the speed and ease of market transactions, and raised public awareness about the opportunities for retail choice. Numerous states have invested in advanced metering infrastructure, providing lower-cost access to detailed consumption data. These data are essential to offer time-differentiated prices, to track the costs to serve an individual consumer (rather than relying on an estimated load profile) and to offer new products and services, including prepaid service, high-usage alerts, or targeted price-risk management offers. Combining advanced mobile communications with advanced metering data has also facilitated new products and services.

The report is also extremely clear and well-written, so if you are interested in learning more about retail electricity markets and regulatory policy, and what the current trends are in the distribution and retail segments of the industry, read this report. Its appendices also provide state-by-state (province-by-province) summaries with extensive detail.

The report’s policy recommendations are in keeping with the idea that market processes provide opportunities for producers and consumers to benefit through experimentation and trial-and-error learning, and that product differentiation through innovation is the most potent form of dynamic competition for creating meaningful consumer benefits. Note in particular that their recommendations focus on default service, suggesting to

Reform default service in the near term … Allow competitive suppliers to provide default service instead of the incumbent utilities … Limit residential default service pricing to basic (“plain vanilla”) service and let the market offer other choices … Adopt a plan to phase out default service. The plan must reflect the realities of each jurisdiction. No two plans would be the same as each jurisdiction must be mindful of past decisions.

I am thrilled to see these recommendations, because incumbent default service can be a costly obstacle and entry barrier for small, new potential entrants. In fact, my Independent Review article from last fall lays out the precise economic argument for how incumbent default service can be an entry barrier and why regulatory policy should “quarantine the monopoly”:

Incumbent vertical market power in deregulating markets can be anticompetitive, as seen in the current process of retail electricity restructuring. This paper uses the AT&T antitrust case’s Bell Doctrine precedent of “quarantine the monopoly” as a case study in incumbent vertical market power in a regulated industry. It then extends the Bell Doctrine by presenting an experimentation-based theory of competition, and applies this extended framework to analyzing the changing retail electricity industry. The general failure to quarantine the monopoly wires segment and its regulated monopolist from the potentially competitive downstream retail market contributes to the slow pace and lackluster performance of retail electricity markets for residential customers.

In the case of Texas, default service was indeed transitional, as intended, and was not provided by the incumbent.

The issue of incumbent default service as an entry barrier may be part of the upcoming “utility of the future” discussion that will take place in Illinois, according to this Retail Energy X story:

If the Illinois Commerce Commission opens a “utility of the future” proceeding, the structure of default service, including its potential elimination, would likely be discussed in such proceeding, ICC Commissioner Ann McCabe said during a media call discussing the Annual Baseline Assessment of Choice in Canada and the United States.

Asked about the ABACCUS recommendation to end default service, McCabe said, “That’s subject to discussion. If we pursue some kind of ‘utility of the future’ initiative, that will be one of the questions likely to be addressed.”