Technological change, culture, and a “social license to operate”

Technological change is disruptive, and in the long sweep of human history, that disruption is one of the fundamental sources of economic growth and what Deirdre McCloskey calls the Great Enrichment:

In 1800 the average income per person…all over the planet was…an average of $3 a day. Imagine living in present-day Rio or Athens or Johannesburg on $3 a day…That’s three-fourths of a cappuccino at Starbucks. It was and is appalling. (Now)… the average person makes and consumes over $100 a day…And that doesn’t take account of the great improvement in the quality of many things, from electric lights to antibiotics.

McCloskey credits a culture that came to embrace change and to regard commercial activity as having moral weight as well as yielding material improvement. Joseph Schumpeter himself famously characterized this process of creative destruction:

The fundamental impulse that sets and keeps the capitalist engine in motion comes from the new consumers’ goods, the new methods of production or transportation, the new markets, the new forms of industrial organization that capitalist enterprise creates. […] This process of Creative Destruction is the essential fact about capitalism. It is what capitalism consists in and what every capitalist concern has got to live in.

Much of the support for this perspective comes from the dramatic increase in consumer well-being, whether through material consumption or better health or more available enriching experiences. Producers create new products and services, make old ones obsolete, and create and destroy profits and industries in the process, all to the better on average over time.

Through those two lenses, the creative destruction now underway around the disruptive transportation platform Uber is a microcosm of the McCloskeyian-Schumpeterian process in action. Economist Eduardo Porter observed in the New York Times in January that

Customers have flocked to its service. In the final three months of last year, its so-called driver-partners made $656.8 million, according to an analysis of Uber data released last week by the Princeton economist Alan B. Krueger, who served as President Obama’s chief economic adviser during his first term, and Uber’s Jonathan V. Hall.

Drivers like it, too. By the end of last year, the service had grown to over 160,000 active drivers offering at least four drives a month, from near zero in mid-2012. And the analysis by Mr. Krueger and Mr. Hall suggests they make at least as much as regular taxi drivers and chauffeurs, on flexible hours. Often, they make more.

This kind of exponential growth confirms what every New Yorker and cab riders in many other cities have long suspected: Taxi service is woefully inefficient.

Consumers and drivers like Uber, despite a few bad events and missteps. The parties who dislike Uber are, of course, incumbent taxi drivers who are invested in the regulatory status quo; as I observed last July,

The more popular Uber becomes with more people, the harder it will be for existing taxi interests to succeed in shutting them down.

The ease, the transparency, the convenience, the lower transaction costs, the ability to see and submit driver ratings, the consumer’s own assessment of whether Uber’s reputation and driver certification provide enough expectation of safety — all of these are things that consumers can now evaluate for themselves, without a regulator substituting its judgment for theirs. The technology, the business model, and the reputation mechanism diminish the public safety justification for taxi regulation.

Uber creates value for consumers and for non-taxi drivers (who are not, repeat not, Uber employees, despite California’s wishes to the contrary). But its fairly abrupt erosion of the regulatory rents of taxi drivers leads them to use a variety of means to stop Uber from facilitating mutually beneficial interaction between consumers and drivers.

In France, one of those means is violence, which erupted last week when taxi drivers protested, lit tires on fire, and overturned cars (including ambushing musician Courtney Love’s car and egging it). Coercion took a second form last week when the French government arrested Uber executives for operating “an illegal taxi service” (as analyzed by Matthew Feeney at Forbes). Feeney suggests that

The technology that allows Uber to operate is not going anywhere. No matter how many cars French taxi drivers set on fire or how many regulations French lawmakers pass, demand for Uber’s technology will remain high.

If French taxi drivers want to survive in the long term perhaps they should consider developing an app to rival Uber’s or changing their business model. The absurd and embarrassing Luddite behavior on French streets last week and the arrest of Uber executives ought to prompt French lawmakers to consider a policy of taxi deregulation that will allow taxis to compete more easily with Uber. Unfortunately, French regulators and officials have a history of preferring protectionism over promoting innovation.

Does anyone think that France will succeed in standing athwart this McCloskeyian-Schumpeterian process? The culture has broadly changed along the lines McCloskey outlines — many, many consumers and drivers demonstrably value Uber’s facilitation platform, itself a Schumpeterian disruptive innovation. The Wall Street Journal opines similarly that

France isn’t the first place to have failed what might be called the Uber Test: namely, whether governments are willing to embrace disruptive innovations such as Uber or act as enforcers for local cartels. … But the French are failing the test at a particularly bad time for their economy, which foreign investors are fleeing at a faster rate than from almost any other developed country.

Taxi drivers are not the only people who do not accept these cultural and technological evolutions. Writing last week at Bloomberg View, the Berlin-based writer Leonid Bershidsky argued that the French are correct not to trust Uber:

The company is not doing enough to convince governments or the European public that it isn’t a scam. … Uber is not just a victim; it has invited much of the trouble. Katherine Teh-White, managing director of management consulting firm Futureye, says new businesses need to build up what she calls a “social license to operate”

He then goes on to list several reasons why he believes that Uber has not built a “social license to operate”, or what we might more generally call social capital. In his critique he fails to hold taxi companies to the same standards of safety, privacy, and fiduciary responsibility that he wants to impose on Uber.

But rather than attempt a point-by-point refutation of his critique, I want to disagree most vigorously with his argument for a “social license to operate”. He quotes Teh-White’s definition of the concept:

This is the agreement by society or a community that an organization’s practices and products are acceptable and aligned with society’s values. If society begins to feel that an industry or company’s actions are no longer acceptable, then it can withdraw its agreement, demand new and costly dimensions, or simply ‘cancel’ the license. And that’s basically what you’re seeing in Europe and other parts of the world with Uber.

Bershidsky assumes that the government is the entity with the authority to “cancel” the “social license to operate”. Wrong. This is the McCloskey point: in a successful, dynamic society that is open to the capacity of commercial activity to enable widespread individual well-being, the social license to operate is distributed and informal, and it shows up in patterns of commercial activity as well as in social norms.

If French people, along with their bureaucrats, cede to their government the authority to revoke a social license to operate, then Matthew Feeney’s comments above are even more apt. By centralizing that social license to operate they maintain barriers to precisely the kinds of innovation that improve well-being, health, and happiness in a widespread manner over time. And they do so to protect a government-granted cartel. Feeney calls it embarrassing; I call it pathetic.

Forthcoming paper: Implications of Smart Grid Innovation for Organizational Models in Electricity Distribution

Back in 2001 I participated in a year-long forum on the future of the electricity distribution model. Convened by the Center for the Advancement of Energy Markets, the DISCO of the Future Forum brought together many stakeholders to develop several scenarios and analyze their implications (and several of those folks remain friends, playmates in the intellectual sandbox, and commenters here at KP [waves at Ed]!). As noted in this 2002 Electric Light and Power article,

Among the 100 recommendations that CAEM discusses in the report, the forum gave suggestions ranging from small issues (that regulators should consider requiring a standard form, or a “consumer label,” on pricing and terms and conditions of service for small customers, to be provided at the time of the initial offer as well as upon request) to larger ones, including the suggestions that regulators should establish a standard distribution utility reporting format for all significant distribution upgrades and extensions, and that regulated DISCOs should be permitted to recover their reasonable costs for development of grid interface designs and grid interconnect application review.

“The technology exists to support a competitive retail market responsive to price signals and demand constraints,” the report concludes. “The extent to which the market is opened to competition and the extent to which these technologies are applied by suppliers, DISCOS and customers will, in large part, be determined by state legislatures and regulators.”

Now in 2015, technological dynamism has brought to a head many of the same questions, regulatory models, and business models that we “penciled out” 14 years ago.

In a new paper, forthcoming in the Wiley Handbook of Smart Grid Development, I grapple with those questions: what are the implications of this technological dynamism for the organizational form of the distribution company? What transactions in the vertically-integrated supply chain should be unbundled, what assets should the distribution company own, and what are the practical policy issues being tackled in various places around the world as they deal with these questions? I analyze these questions using a theoretical framework from the economics of organization and new institutional economics. And I start off with a historical overview of the industry’s technology, regulation, and organizational model.

Implications of Smart Grid Innovation for Organizational Models in Electricity Distribution

Abstract: Digital technologies from outside the electricity industry are prompting changes in both regulatory institutions and electric utility business models, leading to the disaggregation or unbundling of historically vertically integrated electricity firms in some jurisdictions and not others, and simultaneously opening the door for competition with the traditional electric utility business. This chapter uses the technological and organizational history of the industry, combined with the transactions cost theory of the firm and of vertical integration, to explore the implications of smart grid technologies for future distribution company business models. Smart grid technologies reduce transactions costs, changing economical firm boundaries and reducing the traditional drivers of vertical integration. Possible business models for the distribution company include an integrated utility, a network manager, or a coordinating platform provider.

The New York REV and the distribution company of the future

We live in interesting times in the electricity industry. Vibrant technological dynamism, the very dynamism that has transformed how we work, play, and live, puts increasing pressure on the early-20th-century physical network, regulatory model, and resulting business model of the vertically-integrated distribution utility.

While the utility “death spiral” rhetoric is overblown, these pressures are real. They reflect the extent to which regulatory and organizational institutions, as well as the architecture of the network, are incompatible with a general social objective of not obstructing such innovation. Reinforcing my innovation-focused claim is the addition of relatively new environmental objectives to the policy mix. Innovation, particularly innovation at the distribution edge, is an expression of human creativity that serves both the older economic policy objective of protecting consumers from concentrations of market power and the newer environmental policy objective of a cleaner, more prosperous energy future.

But institutions change slowly, especially bureaucratic institutions where decision-makers have a stake in the direction and magnitude of institutional change. Institutional change requires imagination to see a different world as possible, practical vision to see how to get from today’s reality toward that different world, and courage to exercise the leadership and navigate the tough tradeoffs that inevitably arise.

That’s the sense in which the New York Reforming the Energy Vision (REV) proceeding of the New York State Public Service Commission (Greentech) is compelling and encouraging. Launched in spring 2014 with a staff paper, REV is looking squarely at institutional change to align the regulatory framework and the business model of the distribution utility more closely with these policy objectives and with fostering innovation. As Katherine Tweed summarized the goals in the Greentech Media article linked above,

The report calls for an overhaul of the regulation of the state’s distribution utilities to achieve five policy objectives:

  • Increasing customer knowledge and providing tools that support effective management of their total energy bill
  • Market animation and leverage of ratepayer contributions
  • System-wide efficiency
  • Fuel and resource diversity
  • System reliability and resiliency

The PSC acknowledges that the current ratemaking procedure simply doesn’t work and that the distribution system is not equipped for the changes coming to the energy market. New York is already a deregulated market in which distribution is separated from generation and there is retail choice for electricity. Although that’s a step beyond many states, it is hardly enough for what’s coming in the market.

Last week the NY PSC issued its first order in the REV proceeding, directing that the incumbent distribution utilities will serve as distributed system platform providers (DSPPs) and should start planning accordingly. As noted by RTO Insider,

The framework envisions utilities serving a central role in the transition as distributed system platform (DSP) providers, responsible for integrated system planning and grid and market operations.

In most cases, however, utilities will be barred from owning distributed energy resources (DER): demand response, distributed generation, distributed storage and end-use energy efficiency.

The planning function will be reflected in the utilities’ distributed system implementation plan (DSIP), a multi-year forecast proposing capital and operating expenditures to serve the DSP functions and provide third parties the system information they need to plan for market participation.

A platform business model is not a cut-and-dried thing, though, especially in a regulated industry where the regulatory institutions reinforced and perpetuated a vertically integrated model for over a century (a model modified only when generator technological change in the 1980s led to generation unbundling). Institutional design and market design, the symbiosis of technology and institutions, will have to be front and center if the vertically-integrated, uni-directional delivery model of the 20th century is to evolve into a distribution facilitator for the 21st.

In fact, the institutional design issues at stake here have been the focus of my research during my sabbatical, so I hope to have more to add to the discussion based on some of my forthcoming work on the subject.

FCC Title II and raising rivals’ costs

As the consequences of the FCC vote to classify the Internet as a Title II service start to sink in, here are a couple of good commentaries you may not have seen. Jeffrey Tucker’s political economy analysis of the Title II vote as a power grab is one of the best overall analyses of the situation that I’ve seen.

The incumbent rulers of the world’s most exciting technology have decided to lock down the prevailing market conditions to protect themselves against rising upstarts in a fast-changing market. To impose a new rule against throttling content or using the market price system to allocate bandwidth resources protects against innovations that would disrupt the status quo.

What’s being sold as economic fairness and a wonderful favor to consumers is actually a sop to industrial giants who are seeking untrammeled access to your wallet and an end to competitive threats to market power.

What’s being sold as keeping the Internet neutral for innovation at the edge of the network substantively does so by encasing the existing Internet structure and institutions in amber, which yields rents for its large incumbents. Some of those incumbents, like Comcast and Time-Warner, achieved their current market power (often resented by their customers) not through rivalrous market competition, but through municipal monopoly cable franchises. Yes, these restrictions raise their costs too, but as large incumbents they are better positioned to absorb those costs than smaller ISPs or other entrants would be. It’s naive to believe that regulations of this form will do much other than soften rivalry in the Internet itself.

But is there really that much scope for innovation and dynamism within the Internet? Yes. Not only with technologies, but also with institutions, such as interconnection agreements and peering agreements, which can affect packet delivery speeds. And, as Julian Sanchez noted, the Title II designation takes these kinds of innovations off the table.

But there’s another kind of permissionless innovation that the FCC’s decision is designed to preclude: innovation in business models and routing policies. As Neutralites love to point out, the neutral or “end-to-end” model has served the Internet pretty well over the past two decades. But is the model that worked for moving static, text-heavy webpages over phone lines also the optimal model for streaming video wirelessly to mobile devices? Are we sure it’s the best possible model, not just now but for all time? Are there different ways of routing traffic, or of dividing up the cost of moving packets from content providers, that might lower costs or improve quality of service? Again, I’m not certain—but I am certain we’re unlikely to find out if providers don’t get to run the experiment.

Platform economics and “unscaling” the electricity industry

A few weeks ago I mused over the question of whether there would ever be an Uber or AirBnB for the electricity grid. This question is a platform question — both Uber and AirBnB have business models in which they bring together two parties for mutual benefit, and the platform provider’s revenue stream can come from charging one or both parties for facilitating the transaction (although there are other means too). I said that a “P2P platform very explicitly reduces transaction costs that prevent exchanges between buyer and seller”, and that’s really the core of a platform business model. Platform providers exist to make exchanges feasible that were not before, to make them easier, and ultimately to make them either cheaper or more valuable (or some combination of the two).

In this sense the Nobel Prize award to Jean Tirole (pdf, very good summary of his work) this week was timely, because one of the areas of economics to which he has contributed is the economics of two-sided platform markets. Alex Tabarrok wrote an excellent summary of Tirole’s platform economics work. As Alex observes,

Antitrust and regulation of two-sided markets is challenging because the two sets of prices [that the platform firm charges to the two parties] may look discriminatory or unfair even when they are welfare enhancing. … Platform markets mean that pricing at marginal cost can no longer be considered optimal in every market and pricing above marginal cost can no longer be considered as an indication of monopoly power.
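Tabarrok’s point can be made concrete with a small simulation. In the sketch below, everything (the linear participation equations, the cross-side effect parameters, the per-user cost) is my own illustrative assumption rather than a model taken from Tirole’s work; it simply shows a profit-maximizing platform choosing very different prices for two sides that cost the same to serve.

```python
# Hypothetical two-sided platform charging price p_a to side A and p_b to side B.
# Each side's participation rises with the other side's participation
# (cross-side network effects). All parameter values are illustrative assumptions.

C = 2.0                  # platform's cost per user, identical on both sides
A_A, A_B = 10.0, 10.0    # stand-alone demand intercepts
E_A, E_B = 0.2, 0.8      # cross-side effects: side B values side A's presence a lot

def participation(p_a, p_b):
    """Solve the linear fixed point n_a = A_A - p_a + E_A*n_b,
    n_b = A_B - p_b + E_B*n_a."""
    d = 1.0 - E_A * E_B
    n_a = (A_A - p_a + E_A * (A_B - p_b)) / d
    n_b = (A_B - p_b + E_B * (A_A - p_a)) / d
    return n_a, n_b

def profit(p_a, p_b):
    n_a, n_b = participation(p_a, p_b)
    return (p_a - C) * n_a + (p_b - C) * n_b

# Grid search for the profit-maximizing price pair.
grid = [round(0.1 * i, 1) for i in range(121)]   # candidate prices 0.0 .. 12.0
_, p_a_star, p_b_star = max((profit(pa, pb), pa, pb) for pa in grid for pb in grid)
print(p_a_star, p_b_star)   # prints: 3.6 8.4
```

With these assumptions the platform charges side A a price of 3.6 and side B a price of 8.4 even though each user costs 2.0 to serve: the side whose presence the other side values most gets the low markup. Neither price equals marginal cost, and, per the Tabarrok passage, the gap need not signal discrimination or monopoly power.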

One aspect of platform firms is that they connect distinct users in a network. Platform firms are network firms. Not all network firms and industries operate as platforms or think of their business models in platform terms, though. That will change.

What role does a network firm provide? It’s connection, facilitating exchange between two parties. This idea is not novel, not original in the digital age. Go back in economic history to the beginnings of canals, say, or rail networks. Transportation is a quintessential non-digital network platform industry. I think you can characterize all network infrastructure industries as having some aspects of platform or two-sided markets: rail networks bring together transportation providers and passengers/freight, postal networks bring together correspondents, pipeline networks bring together buyers and sellers of oil or natural gas, and electric wires networks bring together generators and consumers.

What’s novel in the digital age is that by changing transaction costs, the technology changes the transactional boundary of the firm and reduces the economic impetus for vertical integration. A digital platform firm, like Google or Uber, is not vertically integrated upstream or downstream in any of the value chains its platform enables (although some of Google’s acquisitions are changing that somewhat), whereas historically, railroads and gas companies and electric companies started out vertically integrated. Rail network owners were vertically integrated upstream into train ownership and transportation provision, and electric utilities were integrated upstream into generation. In network infrastructure industries, the platform is physical, and firms bundled the network service into their offering. But they have not been seen or thought of as platforms in the sense we are now coming to understand as digital platform firms and industries emerge; I suspect that’s because of the economic benefit and the historical path dependence of vertical integration.
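That transaction-cost argument fits in a single inequality: organize a transaction inside the firm when internal governance costs are below the all-in cost of using the market. Here is a deliberately tiny sketch; the function and all of its numbers are my own illustrative assumptions, not industry estimates.

```python
# Coasean make-or-buy comparison, with illustrative numbers only.

def integrate(governance_cost, market_price, transaction_cost):
    """True if in-house production is cheaper than buying through the market."""
    return governance_cost < market_price + transaction_cost

# Early 20th century: metering, monitoring, and contracting with outside
# generators are costly, so the utility integrates upstream into generation.
print(integrate(governance_cost=12.0, market_price=10.0, transaction_cost=5.0))  # True

# Digital sensing slashes the cost of transacting across the market,
# and the economic impetus for vertical integration disappears.
print(integrate(governance_cost=12.0, market_price=10.0, transaction_cost=1.0))  # False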

Another distinguishing feature of platforms and two-sided markets is that the cost-revenue relationship is not uni-directional, a point summarized well in this Harvard Business Review article overview from 2006:

Two-sided networks can be found in many industries, sharing the space with traditional product and service offerings. However, two-sided networks differ from other offerings in a fundamental way. In the traditional value chain, value moves from left to right: To the left of the company is cost; to the right is revenue. In two-sided networks, cost and revenue are both to the left and the right, because the platform has a distinct group of users on each side. The platform incurs costs in serving both groups and can collect revenue from each, although one side is often subsidized, as we’ll see.

In this sense, I still think that the electricity network and its transactions have platform characteristics — the wires firm incurs costs to deliver energy from generators to consumers, and those costs arise in serving both of these distinct groups.

As I apply these concepts to the electricity industry, I think digital technologies have two platform-related types of effects. The first is the reduction in transaction costs that were a big part of the economic drive for vertical integration in the first place — digital technologies make distributed digital sensing, monitoring, and measurement of energy flow and system status possible in ways that were inconceivable or impossibly costly before the invention of the transistor.

The second is the ability that digital technologies give the network firm to handle more diverse and heterogeneous types of agents in a two-sided market. For example, digital sensors and automated digital switches make it possible to automate rules for the interconnection of distributed generation, electric vehicles, microgrids, and other diverse users into the distribution grid in ways that can be mutually beneficial in a two-sided-market sense. The old electro-mechanical sensors could not do that.

This is the sense in which I think a lot of tech entrepreneurs talk about “unscaling the electricity industry”:

If we want secure, clean and affordable energy, we can’t continue down this path. Instead, we need to grow in a very different way, one more akin to the Silicon Valley playbook of unscaling an industry by aggregating individual users onto platforms.

Digitally-enabled distributed resources are becoming increasingly economical at smaller scales, and some of these resources, such as microgrids and electric vehicles, can be either producers or consumers, each with its own associated costs and revenues and an identity that changes depending on whether it is selling excess energy or buying it.

This is a substantive, meaningful sense in which the distribution wires firm can, and should, operate as a platform and think about platform strategies as the utility business model evolves. An electric distribution platform facilitates exchange in two-sided electricity and energy service markets, charging a fee for doing so. In the near term, much of that facilitation takes the form of distribution, of the transportation and delivery. As distributed resources proliferate, the platform firm must rethink how it creates value, and reaps revenues, by facilitating beneficial exchange in two-sided markets.

Why does a theory of competition matter for electricity regulation?

For the firms in regulated industries, for the regulators, for their customers, does the theory underlying the applied regulation matter? I think it matters a lot, even down in the real-world trenches of doing regulation, because regulation’s theoretical foundation influences what regulators and firms do and how they do it. Think about a traditional regulated industry like electricity — vertically integrated because of initial technological constraints, with technologies that enable production of standard electric power service at a particular voltage range with economies of scale over the relevant range of demand.

When these technologies were new and the industry was young, the economic theory of competition underlying the form that regulation took was what we now think of as a static efficiency/allocation-focused model. In this model, production is represented by a known cost function with a given capital-labor ratio; that function is the representation of the firm and of its technology (note here how the organization of the firm fades into the background, to be re-illuminated starting in the mid-20th century by Coase and other organizational and new institutional economists). In the case of a high fixed cost industry with economies of scale, that cost function’s relevant characteristic is declining long-run average cost as output produced increases. On the demand side, consumers have stable preferences for this well-defined, standard good (electric power service at a particular voltage range).

In this model, the question is how to maximize total surplus given the technology, cost function, and preferences. This is the allocation question, and it’s a static question, because the technology, cost function, and preferences are given. The follow-on question in an industry with economies of scale is whether or not competition, rivalry among firms, will yield the best possible allocation, with the largest total surplus. The answer from this model is no: compared to the efficient benchmark where firms compete by lowering price to marginal cost, a “natural monopoly” industry/firm/cost structure cannot sustain P=MC because of the fixed costs, but price equal to average cost (where economic profits are “normal”) is not a stable equilibrium. The model indicates that the stable equilibrium is the monopoly price, with associated deadweight loss. But that P=AC point yields the highest feasible total surplus given the nature of the cost function. Thus this static allocative efficiency model is the justification for regulation of prices and quantities in this market, to make the quantity at which P=AC a stable outcome.
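A numeric sketch makes the three outcomes in this paragraph concrete. The demand curve, fixed cost, and marginal cost below are illustrative assumptions, chosen only so that the P=MC, P=AC, and monopoly cases can be computed side by side:

```python
# Stylized natural monopoly: linear demand P(Q) = 100 - Q, cost C(Q) = 1000 + 20*Q,
# so average cost (1000/Q + 20) declines over the whole relevant output range.
# Numbers are illustrative assumptions, chosen only to make the comparison concrete.
import math

A, B = 100.0, 1.0     # inverse demand P = A - B*Q
F, MC = 1000.0, 20.0  # fixed cost and constant marginal cost

def total_surplus(q):
    # area under the demand curve minus total cost
    return A * q - B * q * q / 2 - (F + MC * q)

def firm_profit(q):
    return (A - B * q) * q - (F + MC * q)

q_mc = (A - MC) / B                   # P = MC
q_mono = (A - MC) / (2 * B)           # monopoly quantity
# P = AC: solve B*Q^2 - (A - MC)*Q + F = 0, take the larger root
q_ac = ((A - MC) + math.sqrt((A - MC) ** 2 - 4 * B * F)) / (2 * B)

for label, q in [("P=MC", q_mc), ("P=AC", q_ac), ("monopoly", q_mono)]:
    print(f"{label}: Q={q:.1f}, P={A - B*q:.1f}, "
          f"profit={firm_profit(q):.0f}, total surplus={total_surplus(q):.0f}")
```

With these numbers, marginal-cost pricing yields the largest total surplus (2200) but leaves the firm 1000 short of covering its fixed cost; average-cost pricing is the best break-even outcome (total surplus of roughly 2080); and the unregulated monopoly price sacrifices more than a third of the attainable surplus (1400). That ranking is the static-efficiency case for regulation that stabilizes the P=AC outcome.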

The theory of competition underlying this regulatory model is the static efficiency model, that competition is beneficial because it enables rival firms to bid prices down to P=MC, simultaneously maximizing firm profits, consumer surplus, and output produced (all the output that’s worth producing gets produced). Based on this model, legislators, regulators, and industry all influenced the design of regulation’s institutional details — rate-of-return regulation to target firm profits at “normal” levels, deriving retail prices from that, and erecting an entry barrier to exclude rivals while requiring the firm to serve all customers.

So what? I’ve just argued that regulatory institutional design is grounded in a theory of competition. If institutional designers hold a particular theory about what competition does and how it does it, that theory will inform their design to achieve their policy objectives. Institutional design is a function of the theory of competition, the policy objectives, and the ability/interest of industry to influence the design. If your theory of competition is the static allocative efficiency theory, you will design institutions to target the static efficient outcome in your model (in this case, P=AC). You start with a policy objective or a question to explore and a theory of competition, and out of that you derive an institutional design.

But what if competition is beneficial for other reasons, in other ways? What if the static allocative efficiency benefits of competition are just a single case in a larger set of possible outcomes? What if the phenomena we want to understand, the question to explore, the policy objective, would be better served by a different model? What if the world is not static, so the incumbent model becomes less useful because our questions and policy objectives have changed? Would we design different regulatory institutions if we use a different theory of competition? I want to try to treat that as a non-rhetorical question, even though my visceral reaction is “of course”.

These questions don’t get asked in legislative and regulatory proceedings, but given the pace and nature of dynamism, they should.

Technology market experimentation in regulated industries: Are administrative pilot projects bad for retail markets?

Since 2008, multiple smart grid pilot projects have been under way in the US, funded jointly through regulated utility investments and taxpayer-funded Department of Energy cost sharing. In this bureaucratic market environment, market experimentation takes the form of the large-scale, multi-year pilot project. The regulated utility (after approval from the state public utility commission) publishes a request for proposals from smart grid technology vendors to sell devices and systems that provide a pre-determined range of services specified in the RFP. The regulated utility, not the end user, is thus the vendor’s primary customer.

When regulated incumbent distribution monopolists provide in-home technology to residential customers in states where retail markets are nominally competitive but the incumbent is the default service provider, does that involvement of the regulated incumbent have an anti-competitive effect? Does it reduce experimentation and innovation?

In markets with low entry and exit barriers, entrepreneurship drives new product creation and product differentiation, and market experimentation reveals whether consumers value such innovations. In regulated markets like electricity, however, this experimentation occurs in a top-down, procurement-oriented manner, without the organic evolution of market boundaries as entrants generate new products and services. Innovations succeed or fail not on their ability to attract end-use customers, but on their ability to persuade the regulated monopolist (and, similarly, the regulator) that the product is cost-reducing for the firm rather than value-creating for the consumer.

The stated goal of many projects is installing digital technologies that increase the performance and reliability of basic wires distribution service. For that reason, the projects emphasize technologies in the distribution wires network (distribution automation) and the digital meter at each home. The digital meter is the edge of the wires network, from the regulated utility’s perspective, and in restructured states it is the edge of its business, the edge of the regulated footprint. A secondary goal is to explore how some customers actually use technology to control and manage their own energy use; a longer-run consequence of this exploration may be consumer learning about electricity consumption, now that digital technology exists that can enable customers to reduce consumption and save money by automating their actions.

In these cases, consumer technology choices are being made at the firm level by the regulated monopolist, not at the consumer level by consumers. This narrowed path to market for in-home technology changes the nature of the market experimentation. On one hand, the larger-volume purchases by regulated utilities may attract vendors and investors and increase rivalry and experimentation; on the other, the margin at which technology rivalry occurs is not the end user as decision-maker but the regulated utility. The objective functions of the utility and its heterogeneous residential customers differ substantially, and this more bureaucratic, narrowed experimentation path reduces the role of those consumers’ differing preferences and knowledge. In that sense, putting the in-home technology choice in the hands of the regulated utility stifles market experimentation with respect to the preferences of heterogeneous consumers, even as it increases experimentation with respect to the features the regulated monopolist thinks its customers want.
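The divergence between the two objective functions can be put in a stylized form (my notation, a sketch rather than a model from the original). Suppose the utility chooses the technology that serves its own objective, such as minimizing its cost of providing regulated service, while heterogeneous consumers value technologies differently:

```latex
\[
t^{U} = \arg\min_{t} \; C(t),
\qquad
t^{*} = \arg\max_{t} \; \sum_{i} v_i(t) - C(t),
\]
```

where $C(t)$ is the utility’s cost of deploying technology $t$ and $v_i(t)$ is consumer $i$’s value from it. Nothing guarantees $t^{U} = t^{*}$, and the more dispersed the $v_i$ across heterogeneous consumers, the larger the expected gap between the utility’s choice and the surplus-maximizing one.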

Focusing any burgeoning consumer demand on a specific technology, specific vendor, and specific firm, while creating critical mass for some technology entrepreneurs, rigidifies and channels experimentation into vendors and technologies chosen by the regulated monopolist, not by end-use consumers. Ask yourself this counterfactual: would the innovation and increase in features and value of mobile technologies have been this high if instead of competing for the end user’s business, Apple and Google had to pitch their offerings to a large, regulated utility?

These regulated incumbent technology choices may have anti-competitive downstream effects. They reduce the set of experimentation and commercialization opportunities available to retail entrants to provide product differentiation, product bundling, or other innovative value propositions beyond the scope of those being tested by the incumbent monopolist. Bundling and product differentiation are the dominant forms that dynamic competition takes, and in this industry such retail bundling and product differentiation would probably include in-home devices. The regulated incumbent providing in-home technology to default customers participating in pilot projects reduces the scope for competing retail providers to engage in either product differentiation or bundling. That limitation undercuts their business models and is potentially anti-competitive.

The regulated incumbent’s default service provision and designation of in-home technology reduce consumers’ motive to search for other providers and other competing products and services. While the incumbent may argue that it is providing a convenience to its customers, it is substituting its judgment of what it thinks its customers want for the individual judgments of those customers.

By offering a competing regulated retail service and leveraging it into the provision of in-home devices for pilot projects, the incumbent reduces the set of feasible, potentially valuable profit opportunities facing potential retail competitors, thus reducing entry. Entrants have to be that much more innovative to gain a foothold in this market against the incumbent, in the face of consumer switching costs and inertia, when incumbent provision of in-home devices reduces the potential demand facing them. Even if the customer pays for and owns the device, the anti-competitive effect can arise from the monopolist offering the device as a complement to its regulated default service product.

Leaving in-home technology choice to retailers and consumers contributes to healthy retail competition. Allowing the upstream regulated incumbent to provide in-home technology hampers it, to the detriment of both entrepreneurs and the residential customers who would have gotten more value out of a different device than the one the incumbent provided. By increasing the number of default service customers with in-home smart grid devices, these projects decrease the potential demand facing independent retailers, removing or diluting one of the service dimensions on which they could compete. The incumbents’ forays into in-home technology may not have anti-competitive intent, but they may still have anti-competitive consequences.