Technology market experimentation in regulated industries: Are administrative pilot projects bad for retail markets?

Since 2008, multiple smart grid pilot projects have been under way in the US, funded jointly through regulated utility investments and taxpayer-funded Department of Energy cost sharing. In this bureaucratic market environment, market experimentation takes the form of the large-scale, multi-year pilot project. The regulated utility (after approval from the state public utility commission) publishes a request for proposals (RFP) inviting smart grid technology vendors to sell devices and systems that provide a pre-determined range of services specified in the RFP. The regulated utility, not the end user, is thus the vendor’s primary customer.

When regulated incumbent distribution monopolists provide in-home technology to residential customers in states where retail markets are nominally competitive but the incumbent is the default service provider, does that involvement of the regulated incumbent have an anti-competitive effect? Does it reduce experimentation and innovation?

In markets with low entry and exit barriers, entrepreneurship drives new product creation and product differentiation. Market experimentation reveals whether consumers value such innovations. In regulated markets like electricity, however, this experimentation occurs in a top-down, procurement-oriented manner, without the organic evolution of market boundaries as entrants generate new products and services. Innovations succeed or fail not on their ability to attract end-use customers, but on their ability to persuade the regulated monopolist that the product is cost-reducing to the firm rather than value-creating for the consumer (and, similarly, on their ability to persuade regulators).

The stated goal of many projects is installing digital technologies that increase the performance and reliability of basic wires distribution service. For that reason, the projects emphasize technologies in the distribution wires network (distribution automation) and the digital meter at each home. From the regulated utility’s perspective, the digital meter is the edge of the wires network, and in restructured states it is the edge of its business, the edge of the regulated footprint. A secondary goal is to explore how customers actually use technology to control and manage their own energy use; a longer-run consequence of this exploration may be consumer learning about electricity consumption, now that digital technology can enable customers to reduce consumption and save money by automating their actions.

In these cases, consumer technology choices are made at the firm level by the regulated monopolist, not at the consumer level by consumers. This narrowed path to market for in-home technology changes the nature of market experimentation. On one hand, larger-volume purchases by regulated utilities may attract vendors and investors and increase rivalry and experimentation; on the other hand, the margin at which technology rivalry occurs is not the end user as decision-maker, but the regulated utility. The objective functions of the utility and its heterogeneous residential customers differ substantially, and this more bureaucratic, narrowed experimentation path reduces the role of those consumers’ differing preferences and knowledge. In that sense, placing the in-home technology choice in the hands of the regulated utility stifles market experimentation with respect to the preferences of heterogeneous consumers, even as it increases experimentation with respect to the features that the regulated monopolist thinks its customers want.

Focusing any burgeoning consumer demand on a specific technology, specific vendor, and specific firm, while creating critical mass for some technology entrepreneurs, rigidifies and channels experimentation into vendors and technologies chosen by the regulated monopolist, not by end-use consumers. Ask yourself this counterfactual: would the innovation and the increase in features and value of mobile technologies have been as great if, instead of competing for the end user’s business, Apple and Google had had to pitch their offerings to a large, regulated utility?

These regulated incumbent technology choices may have anti-competitive downstream effects. They reduce the set of experimentation and commercialization opportunities available to retail entrants for product differentiation, product bundling, or other innovative value propositions beyond the scope of those being tested by the incumbent monopolist. Bundling and product differentiation are the dominant forms that dynamic competition takes, and in this industry such retail bundling and product differentiation would probably include in-home devices. The regulated incumbent providing in-home technology to default customers participating in pilot projects reduces the scope for competing retail providers to engage in either product differentiation or bundling. That limitation undercuts their business models and is potentially anti-competitive.

The regulated incumbent’s default service provision and its designation of in-home technology reduce consumers’ motive to search for other providers and competing products and services. While the incumbent may argue that it is providing a convenience to its customers, it is substituting its judgment of what it thinks its customers want for the individual judgments of those customers.

By offering a competing regulated retail service and leveraging it into the provision of in-home devices for pilot projects, the incumbent reduces the set of potentially valuable profit opportunities facing potential retail competitors, thus reducing entry. Entrants have to be that much more innovative to get a foothold in this market against the incumbent, in the face of consumer switching costs and inertia, when incumbent provision of in-home devices reduces the potential demand facing them. Even if the customer pays for and owns the device, the anti-competitive effect can arise from the monopolist offering the device as a complement to its regulated default service product.

Leaving in-home technology choice to retailers and consumers contributes to healthy retail competition. Allowing the upstream regulated incumbent to provide in-home technology hampers it, to the detriment of both entrepreneurs and the residential customers who would have gotten more value out of a different device than the one provided by the regulated incumbent. By increasing the number of default service customers with in-home smart grid devices, these projects decrease the potential demand facing independent retailers by removing or diluting one of the service dimensions on which they could compete. The incumbents’ forays into in-home technology may not have anti-competitive intent, but they may still have anti-competitive consequences.

The sharing economy and the electricity industry

In a recent essay, the Rocky Mountain Institute’s Matthew Crosby asks, “will there ever be an Airbnb or Uber for the electricity grid?” It’s a good question, a complicated one, and one that I have pondered myself a few times. He correctly identifies the characteristics that have made such platforms attractive and successful, and relates them to distributed energy resources (DERs):

What’s been missing so far is a trusted, open peer-to-peer (P2P) platform that will allow DERs to “play” in a shared economy. An independent platform underlies the success of many shared economy businesses. At its core, the platform monetizes trust and interconnection among market actors — a driver and a passenger, a homeowner and a visitor, and soon, a power producer and consumer — and allows users to both bypass the central incumbent (such as a taxi service, hotel, or electric utility) and go through a new service provider (Uber, Airbnb, or in the power sector, Google).

Now, as millions gain experience and trust with Airbnb, Uber and Lyft, they may likely begin to ask, “Why couldn’t I share, sell or buy the energy services of consumer-owned and -sited DERs like rooftop solar panels or smart thermostats?” The answer may lie in emerging business models that enable both peer-to-peer sharing of the benefits of DERs and the increased utilization of the electric system and DERs.

A P2P platform very explicitly reduces the transaction costs that prevent exchanges between buyer and seller, earning revenue via a commission per transaction (and this is why Uber has in its sights such things as running your errands for you (video)). That reduction allows owners of underutilized assets (cars, apartments, solar panels, and who knows what else will evolve) to make someone else better off by selling them the use of that asset. Saying it that way makes the static welfare gain to the two parties obvious, but think also about the dynamic welfare gain: you are more likely, all other things equal, to invest in such an asset, or in a bigger/nicer one, if you can increase its capacity utilization. Deregulation catalyzed this process in the airline industry, and digital technology is catalyzing it now in rides and rooms. This prospect is exciting for those interested in accelerating the growth of DERs.
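
To make that dynamic gain concrete, here is a minimal back-of-the-envelope investment condition (the notation is mine, not Crosby’s): an owner buys an asset costing $C$ when the expected value of its use exceeds that cost,

$$ u \cdot v \cdot H > C, $$

where $u$ is the capacity utilization rate, $v$ is the net value per hour of use (after the platform’s commission), and $H$ is the asset’s usable lifetime in hours. A P2P platform raises $u$ by matching idle capacity with other users, so assets, or bigger/nicer versions of them, that previously failed this test can now clear it. That is the dynamic investment effect, layered on top of the static gain from each individual exchange.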

Note also that Crosby makes an insightful observation when he says that such P2P networks are more beneficial if they have access to a central backbone, which in this case would be the existing electricity distribution grid. Technologically, the edge of the network (where all of the cool distributed stuff is getting created) and the core of the network are complements, not substitutes. That complementarity has not been realized in the electricity network, in large part because regulation has largely prevented “innovation at the edge of the network” since approximately the early 20th century and the creation of a standard plug for lights and appliances!

The standard static and dynamic welfare gain arguments, though, are not a deep enough analysis; we need to layer on a political economy analysis of the process of getting from here to there. As the controversies over Uber have shown, this process is often contentious and not straightforward, particularly in industries like rides and electricity, where incumbents have had regulatory entry barriers to create and protect regulatory rents. The incumbents may be in a transitional gains trap, where the rents are capitalized into their asset values; thus, to avoid economic losses to themselves and/or their shareholders, they must argue for the maintenance of the regulatory entry barrier even if overall social welfare is higher without it (i.e., if a Kaldor-Hicks improvement is possible). The concentration of benefits from maintaining the entry barrier may make this regulation persist, even if in aggregate the diffuse benefits across the non-incumbents are larger than the costs.
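
A stylized numeric illustration of that Kaldor-Hicks condition (the numbers are invented purely for exposition): suppose removing the entry barrier creates diffuse gains of $G = 100$ spread across many entrants and consumers, while imposing capitalized losses of $L = 40$ on a few incumbents. Then

$$ \Delta W = G - L = 100 - 40 = 60 > 0, $$

so the winners could in principle compensate the losers and still come out ahead, which is all a Kaldor-Hicks improvement requires. But the $40$ loss is concentrated while the $100$ gain is dispersed, so the incumbents have the sharper incentive to organize and lobby, and the barrier can persist anyway.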

That’s one way to frame the current institutional design challenge in electricity. Given that the incumbent utility business model is a regulatory construct, what’s a useful and feasible way to adapt the regulatory environment to the new value propositions that new digital and distributed energy technologies have made possible? If it is likely that the diffuse economic and environmental benefits of P2P electricity exchange are larger than the costs, what does a regulatory environment look like that would enable P2P networks and the distribution grid to be complements and not substitutes? And how would we transfer the resources to the incumbents to get them out of the transitional gains trap, to get them to agree that they will serve as the intelligent digital platform for such innovation?

I think this is the question at the guts of all of the debate over the utility “death spiral”, the future utility business model, and other such innovation-induced dynamism in this industry. I’ve long argued that my vision of a technology-enabled value-creating electricity industry would have such P2P characteristics, with plug-level sensors that enable transactive automated control within the home, and with meshed connections that enable neighbors with electric vehicles and/or rooftop solar to exchange with each other (one place I made that argument was in my 2009 Beesley lecture at the IEA, captured in this 2010 Economic Affairs article). Crosby’s analysis here is consistent with that vision, and that future.

Critiquing the theory of disruptive innovation

Jill Lepore, a professor of history at Harvard and a writer for the New Yorker, has written a critique of Clayton Christensen’s theory of disruptive innovation that is worth thinking through. Christensen’s The Innovator’s Dilemma (the dilemma being that firms continue making the same decisions that made them successful, which eventually leads to their downfall) has been incredibly influential since its 1997 publication, and has moved the concept of disruptive innovation from its arcane Schumpeterian origins into modern business practice in a fast-changing technological environment. Disrupt or be disrupted, innovate or die: these have become corporate strategy maxims under the theory of disruptive innovation.

Lepore’s critique highlights the weaknesses of Christensen’s model (and it does have weaknesses, despite its success and prevalence in business culture). His historical analysis, his case study methodology, and the decisions he made regarding cutoff points in time all leave the support for his model unsatisfyingly unsystematic, yet he argues that the theory of disruptive innovation is predictive and can be used with foresight to identify how firms can avoid failure. Lepore’s critique here is apt and worth considering.

Josh Gans weighs in on the Lepore article, and on the theory of disruptive innovation more generally, noting that at its core lies a new technology, and the appeal of that technology (or of what it enables) to consumers:

But for every theory that reaches too far, there is a nugget of truth lurking at the centre. For Christensen, it was always clearer when we broke it down to its constituent parts as an economic theorist might (by the way, Christensen doesn’t like us economists but that is another matter). At the heart of the theory is a type of technology — a disruptive technology. In my mind, this is a technology that satisfies two criteria. First, it initially performs worse than existing technologies on precisely the dimensions that set the leading, for want of a better word, ‘metrics’ of the industry. So for disk drives, it might be capacity or performance even as new entrants promoted lower energy drives that were useful for laptops.

But that isn’t enough. You can’t actually ‘disrupt’ an industry with a technology that most consumers don’t like. There are many of those. To distinguish a disruptive technology from a mere bad idea or dead-end, you need a second criteria — the technology has a fast path of improvement on precisely those metrics the industry currently values. So your low powered drives get better performance and capacity. It is only then that the incumbents say ‘uh oh’ and are facing disruption that may be too late to deal with.

Herein lies the contradiction that Christensen has always faced. It is easy to tell if a technology is ‘potentially disruptive’ as it only has to satisfy criteria 1 — that it performs well on one thing but not on the ‘standard’ stuff. However, that is all you have to go on to make a prediction. Because the second criteria will only be determined in the future. And what is more, there has to be uncertainty over that prediction.

Josh has hit upon one of the most important dilemmas in innovation: for the new technology to succeed against the old, it must satisfy the established value propositions of the incumbent technology while also improving upon them in speed, quality, or differentiation. And whether it will is inherently unknown in advance; the incumbent can innovate too soon and suffer losses, or innovate too late and suffer losses. At this level, the theory does not help us distinguish and identify the factors that associate innovation with continued success of the firm.
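
One way to see the epistemic asymmetry between the two criteria is as a classification rule with an input that only the future supplies. Here is a minimal sketch in Python (the class and field names are mine, invented for illustration; this is not Christensen’s or Gans’s formalism):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Technology:
    # Criterion 1: observable at decision time
    underperforms_on_industry_metrics: bool
    has_niche_advantage: bool  # e.g., low power draw, useful for laptops
    # Criterion 2: rapid improvement on the metrics the industry values;
    # only revealed ex post, hence Optional at decision time
    fast_improvement_realized: Optional[bool] = None

def is_potentially_disruptive(tech: Technology) -> bool:
    """Criterion 1 alone: cheap to check today, weak as a prediction."""
    return tech.underperforms_on_industry_metrics and tech.has_niche_advantage

def was_disruptive(tech: Technology) -> bool:
    """Both criteria: only answerable in hindsight."""
    if tech.fast_improvement_realized is None:
        raise ValueError("criterion 2 not yet observable; this is the dilemma")
    return is_potentially_disruptive(tech) and tech.fast_improvement_realized
```

The asymmetry is the point: `is_potentially_disruptive` can be evaluated today, while `was_disruptive` requires an input that does not exist yet, which is why the theory classifies well in hindsight but predicts poorly.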

Both Lepore and Gans highlight Christensen’s desire for his theory to be predictive when it cannot be. Lepore summarizes the circularity that indicates this lack of a predictive hypothesis:

If an established company doesn’t disrupt, it will fail, and if it fails it must be because it didn’t disrupt. When a startup fails, that’s a success, since epidemic failure is a hallmark of disruptive innovation. … When an established company succeeds, that’s only because it hasn’t yet failed. And, when any of these things happen, all of them are only further evidence of disruption.

What Lepore brings to the party, in addition to a sharp mind and good analytical writing, is her background and sensibilities as a historian. A historical perspective on innovation helps balance some of the breathless enthusiasm for novelty often found in technology or business strategy writing. Her essay includes a discussion of how the concept of “innovation” has changed over several centuries (its connotations were largely negative pre-Schumpeter), and of how the Enlightenment’s theory of history as human progress has since morphed into different theories of history:

The eighteenth century embraced the idea of progress; the nineteenth century had evolution; the twentieth century had growth and then innovation. Our era has disruption, which, despite its futurism, is atavistic. It’s a theory of history founded on a profound anxiety about financial collapse, an apocalyptic fear of global devastation, and shaky evidence. …

The idea of innovation is the idea of progress stripped of the aspirations of the Enlightenment, scrubbed clean of the horrors of the twentieth century, and relieved of its critics. Disruptive innovation goes further, holding out the hope of salvation against the very damnation it describes: disrupt, and you will be saved.

I think there’s a lot to her interpretation (and I say that wearing both my historian hat and my technologist hat). But I think that both the Lepore and Gans critiques, and indeed Christensen’s theory of disruptive innovation itself, would benefit from (for lack of a catchier name) a Smithian-Austrian perspective on creativity, uncertainty, and innovation.

The Lepore and Gans critiques indicate, correctly, that supporting the disruptive innovation theory requires hindsight and historical analysis, because we have to observe realized outcomes to identify the relationship between innovation and the success or failure of the firm. That concept of an unknown future rests mostly in the category of risk: if we identify that past relationship, we can generate a probability distribution, or a Bayesian prior, for the factors likely to lead to innovation yielding success.

But the genesis of innovation is in uncertainty, not risk; if truly disruptive, innovation may break those historical relationships (pace the Gans observation about having to satisfy the incumbent value propositions). And we won’t know whether that’s the case until after the innovators have unleashed the process. Some aspects of what leads to success or failure will indeed be unknowable. My epistemic/knowledge-problem take on the innovator’s dilemma is that both risk and uncertainty are at play in the dynamics of innovation, and they are hard to disentangle, both epistemologically and as a matter of strategy. Successful innovation arises from combining awareness of profit opportunities with action, along with the disruption itself (the Schumpeter-Knight-Kirzner synthesis).
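
To illustrate the risk half of that distinction, and only the risk half, here is a minimal sketch of forming a Bayesian prior from realized outcomes, using a standard beta-binomial update (the data are invented):

```python
# Beta-binomial updating: treat "innovation leads to firm success" as a
# repeatable event with unknown probability theta, and learn theta from
# realized outcomes. Start from a uniform Beta(1, 1) prior.
alpha, beta = 1.0, 1.0

# Invented historical record: 1 = innovating firm succeeded, 0 = it failed
outcomes = [1, 0, 1, 1, 0, 1, 0, 1]

for y in outcomes:
    alpha += y        # count successes
    beta += 1 - y     # count failures

posterior_mean = alpha / (alpha + beta)
print(f"estimated P(success | history) = {posterior_mean:.2f}")

# This machinery captures risk: a quantifiable distribution estimated from
# the past. Knightian uncertainty is exactly what it cannot capture: a truly
# disruptive innovation may change the process that generated `outcomes`,
# so the posterior summarizes a relationship the innovation itself can break.
```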

The genesis of innovation is also in our innate human creativity, and our channeling of that creativity into this thing we call innovation. I’d go back to the 18th century (and that Enlightenment notion of progress) and invoke both Adam Smith and David Hume to argue that innovation as an expression of human creativity is a natural consequence of our individual striving to make ourselves better off. Good market institutions using the signals of prices, profits, and losses align that individual striving with an incentive for creators to create goods and services that will benefit others, as indicated by their willingness to buy them rather than do other things with their resources.

By this model, we are inherent innovators, and successful innovation involves the combination of awareness, action, and disruption in the face of epistemic reality. Identifying that combination ex ante may be impossible. This is not a strategy model of why firms fail, but it does suggest that such strategy models should consider more than just disruption when trying to understand (or dare I say predict) future success or failure.

Joel Mokyr on growth, stagnation, and technological progress

My friend and colleague Joel Mokyr talked recently with Russ Roberts in an EconTalk podcast that I cannot recommend highly enough (and the links on the show notes are great too). The general topic is this back-and-forth that’s been going on over the past year involving Joel, Bob Gordon, Tyler Cowen, and Erik Brynjolfsson, among others, regarding diminishing returns to technological change and whether we’ve reached “the end of innovation”. Joel summarizes his argument in this Vox EU essay.

Joel is an optimist, and does not believe that technological dynamism is running out of steam (to make a 19th-century joke …). He argues that technological change and the economic growth it generates are punctuated, in part because conceptual breakthroughs are essential but unforeseeable. Economic growth also occurs because of the perpetual nature of innovation: the fact that others are innovating (here he uses country-level examples) means that everyone has to innovate as a form of running to stand still. I agree, and I think that as long as the human mind, human creativity, and human striving to achieve and accomplish exist, there will be technological dynamism. A separate question is whether the institutional framework in which we interact in society is conducive to technological dynamism and to channeling our creativity and striving into such constructive application.

Adam Thierer on regulating media platforms

The Mercatus Center’s Adam Thierer analyzes communications technologies and the policies influencing the development and use of them, and I’ve always found his work extremely valuable in my own thinking. Adam and Brent Skorup have a new Mercatus study on lobbying in the information technology sector, A History of Cronyism and Capture in the Information Technology Sector.

One area where Adam and I have common cause is the interaction of regulation and technological change, and the extent to which regulation may fail to yield the desired outcomes when it dilutes incentives to innovate and stifles change, due to some static definition of “public interest” that is inconsistent with dynamism and experimentation.

I recommend this Surprisingly Free podcast, in which Adam discusses proposals to regulate Facebook and other social media platform companies as public utilities; the podcast page also has links to some of Adam’s written work. In particular, if you want to explore these ideas I suggest Adam’s Mercatus paper on the perils of classifying social media companies as public utilities, in which he argues:

Social media aren’t public utilities for two key reasons:

  1. Social Media do not possess the potential to become natural monopolies. There are virtually no costs to consumers, and competitors have the ability to duplicate such platforms. The hottest networks are changing every year, and there is no way for the government to determine which platform is going to become popular next. Remember MySpace or CompuServe?
  2. Social Media are not essential facilities. Those who claim that Facebook is a “social utility” or “social commons” must admit that such sites are not essential to survival, economic success, or online life. Unlike water and electricity, life can go on without social networking services.

Public utility regulation would instead stifle digital innovation and raise prices of these services for users. Not only are social media sites largely free and universally available, but they are also constantly innovating.

I am going to be digging into a new research project later this summer using some of Adam’s arguments, so I am particularly interested in your comments and thoughts.

The Criminal N.S.A.

From law professors Jennifer Stisa Granick and Christopher Jon Sprigman, in today’s New York Times:

“We may never know all the details of the mass surveillance programs, but we know this: The administration has justified them through abuse of language, intentional evasion of statutory protections, secret, unreviewable investigative procedures and constitutional arguments that make a mockery of the government’s professed concern with protecting Americans’ privacy. It’s time to call the N.S.A.’s mass surveillance programs what they are: criminal.”

UPDATE: Here’s a good article in the Atlantic riffing off of the Granick & Sprigman piece, and filling in some background beyond what they could do within their word count limit.

Economist debate on technological progress

The Economist recently did one of their periodic debates, this time on the pace and effects of technological progress. Moderator Ryan Avent framed the debate thus:

This leads some scholars to conclude that accelerating technical change is an illusion. Autonomous vehicles and 3D printers are flashy but lack the transformative power of electricity or the jet engine, some argue. Indeed, the contribution of technology to growth may be weakening rather than strengthening. Others strongly disagree, noting that even in the thick of the Industrial Revolution there were periodic slowdowns in growth. Major new innovations do not generate immediate economic results, they reckon, but provide a boost over decades as firms and households learn how to use them to make life easier and better. The impressive inventions of the past decade—including remarkable growth in social networking—have hardly had time to make themselves felt across the economy.

Which side is right? Is technological change accelerating, or has most of the benefit from the IT revolution already been realised, leaving the rich world in the grip of continued technical stagnation?

Taking the “pro” position on technological progress is Andrew McAfee of MIT; taking the “con” position is my colleague Robert Gordon, whose recent work on technological stagnation has been widely discussed and controversial (see a recent TED talk that Bob gave on technological stagnation, and one from MIT’s Erik Brynjolfsson on the same TED panel).

McAfee starts by pointing out that stagnation arguments rely on short-run data (post-1940s is definitely the short run for technological change, as Bob also argues). A century is often the more appropriate timescale for assessing technological change and its effects, and since modern digital technology is mostly a post-1960 phenomenon, are we being premature in declaring stagnation? McAfee also points out that the nature of the changes in quality of life arising from technology makes those changes hard to capture in economic statistics. In the Industrial Revolutions of the 19th century, mechanical changes and changes in energy use led to large, quick productivity effects. But the nature of digital technology and its effects is more distributed, smaller in scale but widespread, and focused on the communication of information and the ability to control processes. That makes for different patterns both of adoption and of outcomes from adoption. It also makes for more distributed new product/service innovation at the edges of networks, which is another substantively different pattern of economic activity than seen in the 19th and early 20th centuries. Kevin Kelly made many of these observations in a January 2013 EconTalk podcast with Russ Roberts.

I am, not surprisingly, sympathetic to this argument. I also think that framing the question as “is technological change accelerating?” is not helpful. As with any other changes arising from human action and interaction, rates of technological change will ebb and flow, and it’s only really informative to look retrospectively over long time periods to understand the effects of technological change. That’s why economic history, especially the history of innovation, is valuable, and attempts at predictive forecasting with respect to technology are not useful, or at least should be taken with massive grains of salt. It’s also why this Economist debate is a bit frustrating: both parties (but especially Gordon) rely pedantically on the acceleration of the rate of change (in other words, on whether the second derivative is positive) as the question at hand. Is that really the interesting question? I don’t think so, because of the ebb and flow. What matters is how technological change affects the daily lives of the population and how, in Adam Smith’s language, it translates into “universal and widespread opulence”. There are lots of ways for that to manifest itself, and they won’t all show up in aggregate productivity statistics.
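
To put that framing complaint in symbols (the notation is mine): let $T(t)$ be some index of technological capability at time $t$. The debate as posed asks whether

$$ \frac{d^2 T}{dt^2} > 0, $$

i.e., whether the rate of change $dT/dt$ is itself increasing right now. The more informative question is the cumulative long-horizon change $T(t_1) - T(t_0)$ and how it shows up in daily life; that cumulative gain can be large even across stretches where the second derivative is negative, which is exactly the ebb and flow.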

Gordon’s stagnation argument seems to have the most purchase when he makes this claim in his first debate post:

A sobering picture emerges from a look at the official data on personal consumption expenditures. Only about 7% of spending has anything to do with audio, video, or computer-related goods and services, ranging from purchases of equipment to paying the bills for cable TV or mobile-phone subscriptions. Fully 70% of consumer spending is on services, and what are the largest categories? Housing rent, water supply, electricity and gas, doctor and dentist bills, hospitals, auto repair, public transport, membership clubs, theatres, museums, spending in restaurants and bars, bank and financial services fees, higher and secondary education, barber shops and nail salons, religious activities, air fares and hotel fees—none of which are being altered appreciably by recent high-tech innovation.

He’s right that some of these categories are in industries that are less prone to change in quantity, quality, or cost due to innovation, although it’s important to bear in mind with respect to electricity, medical care, and financial service fees that much of the apparent stagnation arises from regulatory institutions and the innovation-reducing (or stifling) effects of regulation, not from technological stagnation per se.

McAfee rebuts by elaborating on the slow unfolding of innovation’s effects in the past. He then offers some examples (including fracking, very familiar to KP readers!) to illustrate the demonstrable productivity impacts of technology. He doesn’t fully engage with what I see as the Achilles heel of the stagnation-productivity argument: the extent to which small-scale, distributed effects on product differentiation, product quality, and transaction costs are not going to be reflected in aggregate economic statistics.

At the end, the readers find for McAfee. But in important ways the question is both pedantic and unanswerable. I think a better way of framing it is to ask the comparative institutional question: what types of social institutions (culture, norms, law, statute, regulation) best facilitate thriving human creativity and the ability to turn innovation into new and different products and services, and into transaction-cost reductions that change organizational and industry structures, leading to economic growth even in ways that don’t show up in labor productivity statistics?