Cass Sunstein on regulatory analysis and the knowledge problem

Cass Sunstein begins:

With respect to the past and future of regulation, there are two truly indispensable ideas. Unfortunately, they are in serious tension with one another. Potential solutions lie in three reforms, all connected with democracy itself – but perhaps not quite in the way that most people think.

The first indispensable idea is that it is immensely important to measure, both in advance and on a continuing basis, the effects of regulation on social welfare. As an empirical matter, what are the human consequences of regulatory requirements? That is the right question to ask, but inside and outside of government, it is tempting to focus on other things. […]

At the present time, the right way to answer the right question is to try to identify the costs and benefits of regulations, in order to catalogue and to compare the various consequences, and to help make sensible tradeoffs. To be sure, cost-benefit analysis can create serious challenges, and at the present time, it is hardly a perfect tool. Nonetheless, it is the best one we now have. Some people do not love cost-benefit analysis, but they should. If we want to know about the real-world effects of regulation, that form of analysis deserves a lot of love.

The second idea, attributable above all to Friedrich Hayek, is that knowledge is widely dispersed in society. As Hayek and his followers emphasize, government planners cannot possibly know what individuals know, simply because they lack that dispersed knowledge. The multiple failures of plans, and the omnipresence of unintended consequences, can be attributed, in large part, to the absence of relevant information. Hayek was particularly concerned about socialist-style planning. He contended that even if socialist planners are well-motivated and if the public interest is their true concern, they will fail, because they will not know enough to succeed. Hayek celebrated the price system as a “marvel,” not for any mystical reason, but because it can aggregate dispersed information, and do so in a way that permits rapid adjustment to changing circumstances, values, and tastes.

In sum, while it is immensely important to measure the effects of regulation, we may lack the necessary information.

Sunstein notes, pragmatically, that regulators have access to substantial amounts of information and that methods of cost-benefit analysis are well established and improving. He writes, “In the day-to-day life of cost-benefit analysis, regulators are hardly making a stab in the dark.” He continues:

Nonetheless, modern followers of Hayek are correct to emphasize what they call “the knowledge problem,” understood as the government’s potential ignorance, which can be a problem for contemporary regulators of all kinds […]

The tension, in short, is that regulators have to focus on costs and benefits (the first indispensable idea), but they will sometimes lack the information that would enable them to make accurate assessments (the second indispensable idea). … In light of the knowledge problem, can they produce reliable cost-benefit analyses, or any other kind of projection of the human consequences of what they seek to do, and of potential alternatives?

Sunstein identifies three reforms that he says respond to the first indispensable idea while helping to overcome or mitigate the limits imposed by the knowledge problem: first, modern “notice-and-comment” rulemaking processes; second, retrospective analysis of regulations; and third, careful experiments, especially randomized controlled trials. As pragmatic responses to knowledge problems, each of the three has something to contribute.

Is it enough?

What would Hayek say? Sunstein responded to this question in a footnote:

I am not suggesting that Hayek himself would be satisfied. Consider this remarkable passage:

This is, perhaps, also the point where I should briefly mention the fact that the sort of knowledge with which I have been concerned is knowledge of the kind which by its nature cannot enter into statistics and therefore cannot be conveyed to any central authority in statistical form. The statistics which such a central authority would have to use would have to be arrived at precisely by abstracting from minor differences between the things, by lumping together, as resources of one kind, items which differ as regards location, quality, and other particulars, in a way which may be very significant for the specific decision. It follows from this that central planning based on statistical information by its nature cannot take direct account of these circumstances of time and place and that the central planner will have to find some way or other in which the decisions depending on them can be left to the “man on the spot.”

Hayek [“The Use of Knowledge in Society,” AER, 1945]. In my view, the claim here is a mystification, at least as applied to the regulatory context. Statistical information “by its nature” can indeed “take direct account of these circumstances of time and place.” Of course it is true that for some purposes and activities, statistical knowledge is inadequate.

It is an odd note.

Sunstein quoted Hayek, said Hayek’s point is a mystification, then admitted, “Of course it is true….” I guess Sunstein’s point is that it is a true mystification?

In any case, we should note a few things about Sunstein’s reforms. First, his focus is not so much on the doing of regulation itself as on regulatory oversight. His three reforms are not to be applied in day-to-day regulation, but rather serve as external correctives. A related and perhaps more fundamental issue is how day-to-day regulatory operations themselves could facilitate the production and coordination of knowledge in ways that promote better outcomes.

Second, extending this last point, as we compare government and market processes, we note the relative power of feedback processes in the private sector and the weakness of feedback processes in the public sector. If you are dissatisfied with your service in a restaurant, you can choose to eat elsewhere next time. If you are dissatisfied with your service at the Department of Motor Vehicles (or the Veterans Administration or the local Zoning Board, etc.), you have far fewer options. Feedback processes are essential to the production and coordination of knowledge. How can regulatory agencies learn in an environment in which feedback processes are so significantly attenuated?

Third, Sunstein is operating with a rather flat view of knowledge. That is to say, in his explanation knowledge and information and data are various names for more or less identical things. But if we take seriously Sunstein’s remark “of course it is true that for some purposes and activities, statistical knowledge is inadequate,” then further questions arise. Sunstein does not explore the point, but it is exactly here, for Hayek, that the force of the knowledge problem emerges.

There is a research agenda here.

Obviously Sunstein endorses further research and development of benefit-cost analysis, expansion of notice-and-comment rulemaking processes, retrospective regulatory analysis, and experimental tests of regulation. Benefit-cost analysis has a dedicated academic society, a journal, and book-length treatments. Sunstein discusses efforts to broaden understanding of regulatory proposals and encourage public engagement (for example, the government’s online rulemaking portal and a university-based outreach project). Retrospective regulatory analysis and experimental tests get less attention, but a number of academic programs do at least some retrospective work (for example: the Mercatus Center Regulatory Studies Program at GMU, the Penn Program on Regulation, and the George Washington University Regulatory Studies Center). As Sunstein notes, a number of federal agencies have committed to using experiments to help understand the impact of regulation.

How far can these efforts get us in the face of the knowledge problem? For which regulatory “purposes and activities” is it the case that “statistical knowledge is inadequate”? Are there patterns in this inadequacy that bias or undermine regulatory action? Assuming Sunstein’s reforms are fully implemented, what residual knowledge problems would continue to trouble regulation?

A good place to start is Lynne Kiesling’s article “Knowledge Problem” (obviously!) which appeared in the 2014 volume Oxford Handbook of Austrian Economics. Mark Pennington’s book Robust Political Economy examines how knowledge problems and incentive problems can frustrate political activity. Obviously, too, Hayek’s own “The Use of Knowledge in Society” and “Economics and Knowledge” are relevant, as is the later “Competition as a Discovery Process.” Don Lavoie’s “The Market as a Procedure for Discovery and Conveyance of Inarticulate Knowledge” condenses Hayek’s statements on the knowledge problem and further explains why the problem cannot be overcome merely by further developments in computer technology.

Going deeper, one might explore James Scott’s book Seeing Like a State, which emphasizes how the top-down processes of measurement and aggregation can strip meaning from knowledge and result in destruction of value. Then perhaps a work like Hans Christian Garmann Johnsen’s new book The New Natural Resource: Knowledge Development, Society and Economics might have something to say. (I’ve only just begun looking at Johnsen’s ambitious book, so it is too soon to judge.) Complementary to work on institutional frameworks and knowledge would be close studies of government agencies, like the Pressman and Wildavsky classic Implementation: How Great Expectations in Washington are Dashed in Oakland… and broader surveys of policy history such as Michael Graetz’s The End of Energy or Peter Grossman’s U.S. Energy Policy and the Pursuit of Failure.

And more.

Here I have focused just on those parts of the Sunstein article where he bumps up against the knowledge problem most explicitly. He explores each of the three reforms in more detail, providing much more to think about.

Recommended reading for regulatory analysts.


Sunstein’s paper is “Cost-Benefit Analysis and the Knowledge Problem.” Regulatory Policy Program Working Paper RPP-2015-03. Cambridge, MA: Mossavar-Rahmani Center for Business and Government, Harvard Kennedy School, Harvard University.

Technological change, culture, and a “social license to operate”

Technological change is disruptive, and in the long sweep of human history, that disruption is one of the fundamental sources of economic growth and what Deirdre McCloskey calls the Great Enrichment:

In 1800 the average income per person…all over the planet was…an average of $3 a day. Imagine living in present-day Rio or Athens or Johannesburg on $3 a day…That’s three-fourths of a cappuccino at Starbucks. It was and is appalling. (Now)… the average person makes and consumes over $100 a day…And that doesn’t take account of the great improvement in the quality of many things, from electric lights to antibiotics.

McCloskey credits a culture that embraces change and regards commercial activity as having moral weight as well as yielding material improvement. Joseph Schumpeter himself characterized this process of creative destruction:

The fundamental impulse that sets and keeps the capitalist engine in motion comes from the new consumers’ goods, the new methods of production or transportation, the new markets, the new forms of industrial organization that capitalist enterprise creates. […] This process of Creative Destruction is the essential fact about capitalism. It is what capitalism consists in and what every capitalist concern has got to live in.

Much of the support for this perspective comes from the dramatic increase in consumer well-being, whether through material consumption or better health or more available enriching experiences. Producers create new products and services, make old ones obsolete, and create and destroy profits and industries in the process, all to the better on average over time.

Through those two lenses, the creative destruction in progress because of the disruptive transportation platform Uber is a microcosm of the McCloskeyian-Schumpeterian process in action. New York Times economics columnist Eduardo Porter observed in January that

Customers have flocked to its service. In the final three months of last year, its so-called driver-partners made $656.8 million, according to an analysis of Uber data released last week by the Princeton economist Alan B. Krueger, who served as President Obama’s chief economic adviser during his first term, and Uber’s Jonathan V. Hall.

Drivers like it, too. By the end of last year, the service had grown to over 160,000 active drivers offering at least four drives a month, from near zero in mid-2012. And the analysis by Mr. Krueger and Mr. Hall suggests they make at least as much as regular taxi drivers and chauffeurs, on flexible hours. Often, they make more.

This kind of exponential growth confirms what every New Yorker and cab riders in many other cities have long suspected: Taxi service is woefully inefficient.

Consumers and drivers like Uber, despite a few bad events and missteps. The parties who dislike Uber are, of course, incumbent taxi drivers who are invested in the regulatory status quo; as I observed last July,

The more popular Uber becomes with more people, the harder it will be for existing taxi interests to succeed in shutting them down.

The ease, the transparency, the convenience, the lower transaction costs, the ability to see and submit driver ratings, the consumer assessment of whether Uber’s reputation and driver certification provides him/her with enough expectation of safety — all of these are things that consumers can now assess for themselves, without a regulator’s judgment substituting for their own judgment. The technology, the business model, and the reputation mechanism diminish the public safety justification for taxi regulation.

Uber creates value for consumers and for non-taxi drivers (who are not, repeat not, Uber employees, despite California’s wishes to the contrary). But its fairly abrupt erosion of the regulatory rents of taxi drivers leads them to use a variety of means to stop Uber from facilitating mutually beneficial interaction between consumers and drivers.

In France, one of those means is violence, which erupted last week when taxi drivers protested, lit tires on fire, and overturned cars (including ambushing musician Courtney Love’s car and egging it). A second form of violence came last week with the French government’s arrest of Uber executives for operating “an illegal taxi service” (as analyzed by Matthew Feeney at Forbes). Feeney suggests that

The technology that allows Uber to operate is not going anywhere. No matter how many cars French taxi drivers set on fire or how many regulations French lawmakers pass, demand for Uber’s technology will remain high.

If French taxi drivers want to survive in the long term perhaps they should consider developing an app to rival Uber’s or changing their business model. The absurd and embarrassing Luddite behavior on French streets last week and the arrest of Uber executives ought to prompt French lawmakers to consider a policy of taxi deregulation that will allow taxis to compete more easily with Uber. Unfortunately, French regulators and officials have a history of preferring protectionism over promoting innovation.

Does anyone think that France will succeed in standing athwart this McCloskeyian-Schumpeterian process? The culture has broadly changed along the lines McCloskey outlines — many, many consumers and drivers demonstrably value Uber’s facilitation platform, itself a Schumpeterian disruptive innovation. The Wall Street Journal opines similarly that

France isn’t the first place to have failed what might be called the Uber Test: namely, whether governments are willing to embrace disruptive innovations such as Uber or act as enforcers for local cartels. … But the French are failing the test at a particularly bad time for their economy, which foreign investors are fleeing at a faster rate than from almost any other developed country.

Taxi drivers are not the only people who do not accept these cultural and technological evolutions. Writing last week at Bloomberg View, the Berlin-based writer Leonid Bershidsky argued that the French are correct not to trust Uber:

The company is not doing enough to convince governments or the European public that it isn’t a scam. … Uber is not just a victim; it has invited much of the trouble. Katherine Teh-White, managing director of management consulting firm Futureye, says new businesses need to build up what she calls a “social license to operate”

He then goes on to list several reasons why he believes that Uber has not built a “social license to operate”, or what we might more generally call social capital. In his critique he fails to hold taxi companies to the same standards of safety, privacy, and fiduciary responsibility that he wants to impose on Uber.

But rather than offer a point-by-point refutation of his critique, I want to disagree most vigorously with his argument for a “social license to operate”. He quotes Teh-White’s definition of the concept:

This is the agreement by society or a community that an organization’s practices and products are acceptable and aligned with society’s values. If society begins to feel that an industry or company’s actions are no longer acceptable, then it can withdraw its agreement, demand new and costly dimensions, or simply ‘cancel’ the license. And that’s basically what you’re seeing in Europe and other parts of the world with Uber.

Bershidsky assumes that the government is the entity with the authority to “cancel” the “social license to operate”. Wrong. This is the McCloskey point: in a successful, dynamic society that is open to the capacity for commercial activity to enable widespread individual well-being, the social license to operate is distributed and informal, and it shows up in commercial activity patterns as well as social norms.

If French people, along with their bureaucrats, cede to their government the authority to revoke a social license to operate, then Matthew Feeney’s comments above are even more apt. By centralizing that social license to operate they maintain barriers to precisely the kinds of innovation that improve well-being, health, and happiness in a widespread manner over time. And they do so to protect a government-granted cartel. Feeney calls it embarrassing; I call it pathetic.

Geoff Manne in Wired on FCC Title II

Friend of Knowledge Problem Geoff Manne had a thorough opinion piece in Wired yesterday on the FCC’s Title II Internet designation. Well worth reading. From the “be careful what you wish for” department:

Title II (which, recall, is the basis for the catch-all) applies to all “telecommunications services”—not just ISPs. Now, every time an internet service might be deemed to transmit a communication (think WhatsApp, Snapchat, Twitter…), it either has to take its chances or ask the FCC in advance to advise it on its likely regulatory treatment.

That’s right—this new regime, which credits itself with preserving “permissionless innovation,” just put a bullet in its head. It puts innovators on notice, and ensures that the FCC has the authority (if it holds up in court) to enforce its vague rule against whatever it finds objectionable.

And that’s even at the much-vaunted edge of the network that such regulation purports to protect.

Asking in advance. Nope, that’s not gonna slow innovation, not one bit …

FCC Title II and raising rivals’ costs

As the consequences of the FCC vote to classify the Internet as a Title II service start to sink in, here are a couple of good commentaries you may not have seen. Jeffrey Tucker’s political economy analysis of the Title II vote as a power grab is one of the best overall analyses of the situation that I’ve seen.

The incumbent rulers of the world’s most exciting technology have decided to lock down the prevailing market conditions to protect themselves against rising upstarts in a fast-changing market. To impose a new rule against throttling content or using the market price system to allocate bandwidth resources protects against innovations that would disrupt the status quo.

What’s being sold as economic fairness and a wonderful favor to consumers is actually a sop to industrial giants who are seeking untrammeled access to your wallet and an end to competitive threats to market power.

What’s being sold as keeping the Internet neutral for innovation at the edge of the network substantively does so by encasing the existing Internet structure and institutions in amber, which yields rents for its large incumbents. Some of those incumbents, like Comcast and Time Warner, achieved their current market power (market power often resented by their customers) not through rivalrous market competition, but through receiving municipal monopoly cable franchises. Yes, these restrictions raise their costs too, but as large incumbents they are better positioned to absorb those costs than smaller ISPs or other entrants would be. It’s naive to believe that regulations of this form will do much other than soften rivalry in the Internet itself.

But is there really that much scope for innovation and dynamism within the Internet? Yes. Not only with technologies, but also with institutions, such as interconnection agreements and peering agreements, which can affect packet delivery speeds. And, as Julian Sanchez noted, the Title II designation takes these kinds of innovations off the table.

But there’s another kind of permissionless innovation that the FCC’s decision is designed to preclude: innovation in business models and routing policies. As Neutralites love to point out, the neutral or “end-to-end” model has served the Internet pretty well over the past two decades. But is the model that worked for moving static, text-heavy webpages over phone lines also the optimal model for streaming video wirelessly to mobile devices? Are we sure it’s the best possible model, not just now but for all time? Are there different ways of routing traffic, or of dividing up the cost of moving packets from content providers, that might lower costs or improve quality of service? Again, I’m not certain—but I am certain we’re unlikely to find out if providers don’t get to run the experiment.

Why does a theory of competition matter for electricity regulation?

For the firms in regulated industries, for the regulators, for their customers, does the theory underlying the applied regulation matter? I think it matters a lot, even down in the real-world trenches of doing regulation, because regulation’s theoretical foundation influences what regulators and firms do and how they do it. Think about a traditional regulated industry like electricity — vertically integrated because of initial technological constraints, with technologies that enable production of standard electric power service at a particular voltage range with economies of scale over the relevant range of demand.

When these technologies were new and the industry was young, the economic theory of competition underlying the form that regulation took was what we now think of as a static efficiency/allocation-focused model. In this model, production is represented by a known cost function with a given capital-labor ratio; that function is the representation of the firm and of its technology (note here how the organization of the firm fades into the background, to be re-illuminated starting in the mid-20th century by Coase and other organizational and new institutional economists). In the case of a high fixed cost industry with economies of scale, that cost function’s relevant characteristic is declining long-run average cost as output produced increases. On the demand side, consumers have stable preferences for this well-defined, standard good (electric power service at a particular voltage range).

In this model, the question is how to maximize total surplus given the technology, cost function, and preferences. This is the allocation question, and it’s a static question, because the technology, cost function, and preferences are given. The follow-on question in an industry with economies of scale is whether competition, rivalry among firms, will yield the best possible allocation, with the largest total surplus. The answer from this model is no: with declining average cost, marginal cost lies below average cost, so the efficient benchmark in which firms compete price down to marginal cost would leave the firm unable to cover its fixed costs, and a “natural monopoly” cost structure cannot sustain P=MC. Price equal to average cost (where economic profits are “normal”) would cover costs, but it is not a stable equilibrium either; left unregulated, the stable outcome is the monopoly price, with its associated deadweight loss. Yet the P=AC point yields the highest feasible total surplus consistent with the firm breaking even. Thus this static allocative efficiency model is the justification for regulation of prices and quantities in this market: to make the quantity at which P=AC a stable outcome.
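A small numerical sketch may help make the comparison concrete. The demand and cost parameters below are purely illustrative assumptions (linear demand, constant marginal cost plus a large fixed cost, so average cost declines over the whole relevant range); the point is only to show that P=MC yields the largest surplus but leaves the firm unable to cover its fixed cost, the monopoly price is sustainable but carries the largest deadweight loss, and P=AC is the best of the break-even outcomes:

```python
# Toy natural-monopoly example (illustrative numbers, not calibrated to any real utility).
# Demand: P = a - b*Q.  Cost: C(Q) = F + c*Q, so AC declines in Q and MC = c < AC everywhere.
import math

a, b = 100.0, 1.0    # demand intercept and slope (assumed)
c, F = 20.0, 1000.0  # marginal cost and fixed cost (assumed)

def total_surplus(Q):
    """Area under the demand curve minus total cost."""
    return a * Q - b * Q**2 / 2 - (F + c * Q)

def profit(P, Q):
    return P * Q - (F + c * Q)

# 1. First-best benchmark, P = MC: maximizes surplus but the firm loses F.
Q_mc = (a - c) / b
print("P=MC :", dict(Q=Q_mc, P=c, surplus=total_surplus(Q_mc), profit=profit(c, Q_mc)))

# 2. Unregulated monopoly, MR = MC: sustainable but leaves the largest deadweight loss.
Q_m = (a - c) / (2 * b)
P_m = a - b * Q_m
print("monop:", dict(Q=Q_m, P=P_m, surplus=total_surplus(Q_m), profit=profit(P_m, Q_m)))

# 3. Regulated breakeven, P = AC: the larger root of demand = average cost,
#    the best surplus attainable subject to zero economic profit.
Q_ac = ((a - c) + math.sqrt((a - c)**2 - 4 * b * F)) / (2 * b)
P_ac = a - b * Q_ac
print("P=AC :", dict(Q=Q_ac, P=P_ac, surplus=total_surplus(Q_ac), profit=profit(P_ac, Q_ac)))
```

With these assumed numbers the surplus ranking comes out as the model predicts: roughly 2,200 at P=MC (with the firm losing 1,000), about 2,080 at P=AC with zero economic profit, and 1,400 at the monopoly price.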

The theory of competition underlying this regulatory model is the static efficiency model: competition is beneficial because it enables rival firms to bid prices down to P=MC, with each firm earning normal profits, consumers capturing the largest feasible surplus, and all the output that’s worth producing getting produced. Based on this model, legislators, regulators, and industry all influenced the design of regulation’s institutional details — rate-of-return regulation to target firm profits at “normal” levels, retail prices derived from that revenue target, and an entry barrier to exclude rivals while requiring the firm to serve all customers.

So what? I’ve just argued that regulatory institutional design is grounded in a theory of competition. If institutional designers hold a particular theory about what competition does and how it does it, that theory will inform their design to achieve their policy objectives. Institutional design is a function of the theory of competition, the policy objectives, and the ability/interest of industry to influence the design. If your theory of competition is the static allocative efficiency theory, you will design institutions to target the static efficient outcome in your model (in this case, P=AC). You start with a policy objective or a question to explore and a theory of competition, and out of that you derive an institutional design.

But what if competition is beneficial for other reasons, in other ways? What if the static allocative efficiency benefits of competition are just a single case in a larger set of possible outcomes? What if the phenomena we want to understand, the question to explore, the policy objective, would be better served by a different model? What if the world is not static, so the incumbent model becomes less useful because our questions and policy objectives have changed? Would we design different regulatory institutions if we use a different theory of competition? I want to try to treat that as a non-rhetorical question, even though my visceral reaction is “of course”.

These questions don’t get asked in legislative and regulatory proceedings, but given the pace and nature of dynamism, they should.

The “utility death spiral”: The utility as a regulatory creation

Unless you follow the electricity industry you may not be aware of the past year’s discussion of the impending “utility death spiral”, ably summarized in this Clean Energy Group post:

There have been several reports out recently predicting that solar + storage systems will soon reach cost parity with grid-purchased electricity, thus presenting the first serious challenge to the centralized utility model.  Customers, the theory goes, will soon be able to cut the cord that has bound them to traditional utilities, opting instead to self-generate using cheap PV, with batteries to regulate the intermittent output and carry them through cloudy spells.  The plummeting cost of solar panels, plus the imminent increased production and decreased cost of electric vehicle batteries that can be used in stationary applications, have combined to create a technological perfect storm. As grid power costs rise and self-generation costs fall, a tipping point will arrive – within a decade, some analysts are predicting – at which time, it will become economically advantageous for millions of Americans to generate their own power.  The “death spiral” for utilities occurs because the more people self-generate, the more utilities will be forced to seek rate increases on a shrinking rate base… thus driving even more customers off the grid.
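The feedback loop described in that passage can be sketched as a toy simulation. Every number below is an illustrative assumption rather than an estimate: a fixed block of network costs is recovered from a shrinking volume of grid sales, the resulting rate increase pushes more load past the assumed self-generation breakeven, and the loop repeats:

```python
# Toy "death spiral" loop: fixed costs recovered over shrinking grid sales.
# All parameter values are illustrative assumptions, not forecasts.

fixed_cost = 100_000_000.0       # annual network/legacy cost to recover ($)
sales = 1_000_000_000.0          # initial grid sales (kWh/year), i.e. 1 TWh
defection_sensitivity = 5e9      # kWh of load lost per $/kWh that the rate exceeds self-generation cost (assumed)
self_gen_cost = 0.09             # assumed all-in cost of solar + storage ($/kWh)

rate = fixed_cost / sales        # starting rate needed to recover the fixed costs
for year in range(1, 11):
    gap = rate - self_gen_cost
    # Load that finds self-generation cheaper than the grid rate leaves this year.
    defection = min(max(0.0, defection_sensitivity * gap), sales)
    sales -= defection
    if sales <= 0:
        print(f"year {year}: remaining load has left the grid")
        break
    # Regulator resets the rate so the (unchanged) fixed costs are recovered from remaining sales.
    rate = fixed_cost / sales
    print(f"year {year}: grid sales {sales/1e9:5.2f} TWh, cost-recovery rate ${rate:.3f}/kWh")
```

With these made-up parameters the rate drifts up slowly for a few years and then the process accelerates sharply, which is the qualitative pattern the “death spiral” label is pointing at.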

A January 2013 analysis from the Edison Electric Institute, Disruptive Challenges: Financial Implications and Strategic Responses to a Changing Retail Electric Business, precipitated this conversation. Focusing on the financial market implications for regulated utilities of distributed energy resources (DER) and technology-enabled demand-side management, or DSM (an archaic term that I dislike intensely), the report notes that:

The financial risks created by disruptive challenges include declining utility revenues, increasing costs, and lower profitability potential, particularly over the long term. As DER and DSM programs continue to capture “market share,” for example, utility revenues will be reduced. Adding the higher costs to integrate DER, increasing subsidies for DSM and direct metering of DER will result in the potential for a squeeze on profitability and, thus, credit metrics. While the regulatory process is expected to allow for recovery of lost revenues in future rate cases, tariff structures in most states call for non-DER customers to pay for (or absorb) lost revenues. As DER penetration increases, this is a cost recovery structure that will lead to political pressure to undo these cross subsidies and may result in utility stranded cost exposure.

I think the apocalyptic “death spiral” rhetoric is overblown and exaggerated, but this is a worthwhile, and perhaps overdue, conversation to have. As it has unfolded over the past year, though, I do think that some of the more essential questions on the topic are not being asked. Over the next few weeks I’m going to explore some of those questions, as I dive into a related new research project.

The theoretical argument for the possibility of a death spiral is straightforward. The vertically-integrated, regulated distribution utility is a regulatory creation, intended to enable a financially sustainable business model for providing reliable basic electricity service to the largest possible number of customers for the least feasible cost, taking account of the economies of scale and scope resulting from the electro-mechanical generation and wires technologies implemented in the early 20th century. From a theoretical/benevolent social planner perspective, the objective is, given a market demand for a specific good/service, to minimize the total cost of providing that good/service subject to a zero economic profit constraint for the firm; this will lead to the highest feasible output and total surplus combination (and lowest deadweight loss) consistent with the financial sustainability of the firm.

The regulatory mechanism for implementing this model to achieve this objective is to erect a legal entry barrier into the market for that specific good/service, and to assure the regulated monopolist cost recovery, including its opportunity cost of capital, otherwise known as rate-of-return regulation. In return, the regulated monopolist commits to serve all customers reliably through its vertically-integrated generation, transmission, distribution, and retail functions. The monopolist’s costs and opportunity cost of capital determine its revenue requirement, out of which we can derive flat, averaged retail prices that forecasts suggest will enable the monopolist to earn that amount of revenue.
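The arithmetic of that last step, under rate-of-return regulation, looks roughly like the following sketch; the cost categories and figures are illustrative assumptions, not any actual utility’s rate filing:

```python
# Stylized rate-of-return revenue requirement and flat averaged retail rate.
# All figures are illustrative assumptions.

rate_base = 5_000_000_000.0       # net invested capital the regulator allows into rates ($)
allowed_return = 0.08             # approved rate of return (opportunity cost of capital)
operating_costs = 900_000_000.0   # fuel, O&M, wages, etc. ($/year)
depreciation = 250_000_000.0      # annual depreciation on long-lived assets ($/year)
taxes = 150_000_000.0             # ($/year)

# Revenue requirement: recover costs plus the allowed return on the rate base.
revenue_requirement = operating_costs + depreciation + taxes + allowed_return * rate_base

forecast_sales_kwh = 20_000_000_000.0   # forecast annual retail sales (kWh)

# Flat, averaged retail price intended to yield exactly the revenue requirement.
average_rate = revenue_requirement / forecast_sales_kwh
print(f"revenue requirement: ${revenue_requirement/1e9:.2f}B/year")
print(f"averaged retail rate: ${average_rate:.4f}/kWh")
```

Dividing a total revenue requirement by forecast sales is what produces the flat, averaged price — and it is also why lower-than-forecast sales show up directly as under-recovery for the regulated firm.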

That’s the regulatory model + business model that has existed with little substantive evolution since the early 20th century, and it did achieve the social policy objectives of the 20th century — widespread electrification and low, stable prices, which have enabled follow-on economic growth and well-distributed increased living standards. It’s a regulatory+business model, though, that is premised on a few things:

  1. Defining a market by defining the characteristics of the product/service sold in that market, in this case electricity with a particular physical (volts, amps, hertz) definition and a particular reliability level (paraphrasing Fred Kahn …)
  2. The economies of scale (those big central generators and big wires) and economies of scope (lower total cost when producing two or more products compared to producing those products separately) that exist due to large-scale electro-mechanical technologies
  3. The architectural implications of connecting large-scale electro-mechanical technologies together in a network via a set of centralized control nodes — technology -> architecture -> market environment, and in this case large-scale electro-mechanical technologies -> distributed wires network with centralized control points rather than distributed control points throughout the network, including the edge of the network (paraphrasing Larry Lessig …)
  4. The financial implications of having invested so many resources in long-lived physical assets to create that network and its control nodes — if demand is growing at a stable rate, and regulators can assure cost recovery, then the regulated monopolist can arrange financing for investments at attractive interest rates, as long as this arrangement is likely to be stable for the 30-to-40-year life of the assets

As long as those conditions are stable, regulatory cost recovery will sustain this business model. And that’s precisely the effect of smart grid technologies, distributed generation technologies, microgrid technologies — they violate one or more of those four premises, and can make it not just feasible, but actually beneficial for customers to change their behavior in ways that reduce the regulation-supported revenue of the regulated monopolist.

Digital technologies that enable greater consumer control and more choice of products and services break down the regulatory market boundaries that are required to regulate product quality. Generation innovations, from the combined-cycle gas turbine of the 1980s to small-scale Stirling engines, reduce the economies of scale that have driven the regulation of and investment in the industry for over a century. Wires networks with centralized control built to capitalize on those large-scale technologies may have less value in an environment with smaller-scale generation and digital, automated detection, response, and control. But those generation and wires assets are long-lived, and in a cost-recovery-based business model, have to be paid for even if they become the destruction in creative destruction. We saw that happen in the restructuring that occurred in the 1990s, with the liberalization of wholesale power markets and the unbundling of generation from the vertically-integrated monopolists in those states; part of the political bargain in restructuring was to compensate them for the “stranded costs” associated with having made those investments based on a regulatory commitment that they would receive cost recovery on them.

Thus the death spiral rhetoric, and the concern that the existing utility business model will not survive. But if my framing of the situation is accurate, then what we should be examining in more detail is the regulatory model, since the utility business model is itself a regulatory creation. This relationship between digital innovation (encompassing smart grid, distributed resources, and microgrids) and regulation is what I’m exploring. How should the regulatory model and the associated utility business model change in light of digital innovation?

The political economy of Uber’s multi-dimensional creative destruction

Over the past week it’s been hard to keep up with the news about Uber. Uber’s creative destruction is rapid, and occurring on multiple dimensions in different places. And while the focus right now is on Uber’s disruption in the shared transportation market, I suspect that more disruption will arise in other markets too.

Start with two facts from this Wired article from last week by Marcus Wohlsen: Uber has just completed a funding round that raised an additional $1.2 billion, and last week it announced lower UberX fares in San Francisco, New York, and Chicago (the Chicago reduction was not mentioned in the article, but I am an Uber Chicago customer, so I received a notification of it). This second fact is interesting, especially once one digs in a little deeper:

With not just success but survival on the line, Uber has even more incentive to expand as rapidly as possible. If it gets big enough quickly enough, the political price could become too high for any elected official who tries to pull Uber to the curb.

Yesterday, Uber announced it was lowering UberX fares by 20 percent in New York City, claiming the cuts would make its cheapest service cheaper than a regular yellow taxi. That follows a 25 percent decrease in the San Francisco Bay Area announced last week, and a similar drop in Los Angeles UberX prices revealed earlier last month. The company says UberX drivers in California (though apparently not in New York) will still get paid their standard 80 percent portion of what the fare would have been before the discount. As Forbes’ Ellen Huet points out, the arrangement means a San Francisco ride that once cost $15 will now cost passengers $11.25, but the driver still gets paid $12.

So one thing they’re doing with their cash is essentially topping off payments to drivers while lowering prices to customers for the UberX service. Note that Uber is a multi-service firm, with rides at different quality/price combinations. I think Wohlsen’s Wired argument is right, and that they are pursuing a strategy of “grow the base quickly”, even if it means that the UberX prices are loss leaders for now (while their other service prices remain unchanged). In a recent (highly recommended!) EconTalk podcast, Russ Roberts and Mike Munger also make this point.
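Using the figures quoted in the Wired excerpt above, the per-ride arithmetic of that top-off is easy to sketch (a toy calculation, not Uber’s actual accounting):

```python
# Per-ride arithmetic of the UberX discount with driver pay held at 80% of the
# undiscounted fare, using the illustrative $15 San Francisco fare quoted above.

original_fare = 15.00
discount = 0.25          # 25% UberX fare cut in the SF Bay Area
driver_share = 0.80      # drivers keep 80% of what the fare would have been pre-discount

rider_pays = original_fare * (1 - discount)   # $11.25
driver_gets = original_fare * driver_share    # $12.00
uber_margin = rider_pays - driver_gets        # -$0.75: Uber tops off the difference

print(f"rider pays ${rider_pays:.2f}, driver receives ${driver_gets:.2f}, "
      f"Uber's take per ride: ${uber_margin:.2f}")
```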

This “grow the base” strategy is common in tech industries, and we’ve seen it repeatedly over the past 15 years with Amazon and others. But, as Wohlsen notes, this strategy has an additional benefit of making regulatory inertia and status quo protection more costly. The more popular Uber becomes with more people, the harder it will be for existing taxi interests to succeed in shutting them down.

The ease, the transparency, the convenience, the lower transaction costs, the ability to see and submit driver ratings, the consumer assessment of whether Uber’s reputation and driver certification provides him/her with enough expectation of safety — all of these are things that consumers can now assess for themselves, without a regulator’s judgment substituting for their own judgment. The technology, the business model, and the reputation mechanism diminish the public safety justification for taxi regulation. Creative destruction and freedom to innovate are the core of improvements in living standards. But regulated taxi interests, having paid for medallions with the expectation of perpetual entry barriers, are seeing the value of the government-created entry barrier wither, and are lobbying to stem the losses in the value of their medallions. Note here the similarity between this situation and the one in the 1990s, when regulated electric utilities argued, largely successfully, that they should be compensated for “stranded costs” when they were required to divest their generation capacity at lower prices due to the anticipation of competitive wholesale markets. One consequence of regulation is the expectation of the right to a profitable business model, an expectation that flies in the face of economic growth and dynamic change.

Another move that I think represents a political compromise while giving Uber a PR opportunity was last week’s agreement with the New York Attorney General to cap “surge pricing” during citywide emergencies, a policy that Uber appears to be extending nationally. As Megan McArdle notes, this does indeed make economists sad, since Uber’s surge pricing is a wonderful example of how dynamic pricing induces more drivers to supply rides when demand is high, rather than leaving potential passengers with fewer taxis in the face of a fixed, regulated price.

Sadly, no one else loves surge pricing as much as economists do. Instead of getting all excited about the subtle, elegant machinery of price discovery, people get all outraged about “price gouging.” No matter how earnestly economists and their fellow travelers explain that this is irrational madness — that price gouging actually makes everyone better off by ensuring greater supply and allocating the supply to (approximately) those with the greatest demand — the rest of the country continues to view marking up generators after a hurricane, or similar maneuvers, as a pretty serious moral crime.
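The supply response that makes economists love surge pricing can be illustrated with a toy linear market; the curves and the size of the demand spike below are purely hypothetical, chosen only to show that a capped price leaves riders unserved while a surge price draws out more drivers and clears the market:

```python
# Toy linear ride market: quantity demanded falls in price, quantity supplied rises in price.
# All curves and the size of the demand spike are illustrative assumptions.

def demanded(price, shift=0.0):
    return max(0.0, 100 - 4 * price + shift)   # rides requested per hour

def supplied(price):
    return max(0.0, -20 + 8 * price)           # rides drivers are willing to supply per hour

normal_price = 10.0    # market-clearing price in normal conditions (demand = supply = 60)
spike = 60.0           # extra demand during a storm or emergency (assumed)

# With the price held at its normal level, demand outstrips supply: a shortage.
shortage = demanded(normal_price, spike) - supplied(normal_price)
print(f"at fixed ${normal_price:.0f}: shortage of {shortage:.0f} rides/hour")

# Surge pricing: find the price where supply meets the spiked demand.
surge_price = (100 + spike + 20) / 12          # solve 100 - 4p + spike = -20 + 8p
print(f"surge price ${surge_price:.2f} clears the market at "
      f"{supplied(surge_price):.0f} rides/hour")
```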

Back in April Mike wrote here about how likely this was to happen in NY, and in commenting on the agreement with the NY AG last week, Regulation editor Peter Van Doren gave a great shout-out to Mike’s lead article in the Spring 2011 issue on price gouging regulations and their ethical and welfare effects.

Even though the surge pricing cap during emergencies is, in Megan’s words, economically harmful but politically predictable, I think the real effects of Uber will transcend the shared-ride market. It’s a flexible piece of software — an app, a menu of contracts with drivers and riders, transparency, a reputation mechanism. Much as Amazon started by disrupting the retail book market and then expanded because of the flexibility of its software, I expect Uber to do something similar, in some form.