The “utility death spiral”: The utility as a regulatory creation

Unless you follow the electricity industry, you may not be aware of the past year’s discussion of the impending “utility death spiral”, ably summarized in this Clean Energy Group post:

There have been several reports out recently predicting that solar + storage systems will soon reach cost parity with grid-purchased electricity, thus presenting the first serious challenge to the centralized utility model.  Customers, the theory goes, will soon be able to cut the cord that has bound them to traditional utilities, opting instead to self-generate using cheap PV, with batteries to regulate the intermittent output and carry them through cloudy spells.  The plummeting cost of solar panels, plus the imminent increased production and decreased cost of electric vehicle batteries that can be used in stationary applications, have combined to create a technological perfect storm. As grid power costs rise and self-generation costs fall, a tipping point will arrive – within a decade, some analysts are predicting – at which time, it will become economically advantageous for millions of Americans to generate their own power.  The “death spiral” for utilities occurs because the more people self-generate, the more utilities will be forced to seek rate increases on a shrinking rate base… thus driving even more customers off the grid.

A January 2013 analysis from the Edison Electric Institute, Disruptive Challenges: Financial Implications and Strategic Responses to a Changing Retail Electric Business, precipitated this conversation. Focusing on the financial market implications for regulated utilities of distributed energy resources (DER) and technology-enabled demand-side management, or DSM (an archaic term that I dislike intensely), the report notes that:

The financial risks created by disruptive challenges include declining utility revenues, increasing costs, and lower profitability potential, particularly over the long term. As DER and DSM programs continue to capture “market share,” for example, utility revenues will be reduced. Adding the higher costs to integrate DER, increasing subsidies for DSM and direct metering of DER will result in the potential for a squeeze on profitability and, thus, credit metrics. While the regulatory process is expected to allow for recovery of lost revenues in future rate cases, tariff structures in most states call for non-DER customers to pay for (or absorb) lost revenues. As DER penetration increases, this is a cost recovery structure that will lead to political pressure to undo these cross subsidies and may result in utility stranded cost exposure.

I think the apocalyptic “death spiral” rhetoric is overblown, but this is a worthwhile, and perhaps overdue, conversation to have. As it has unfolded over the past year, though, I do think that some of the more essential questions on the topic are not being asked. Over the next few weeks I’m going to explore some of those questions, as I dive into a related new research project.

The theoretical argument for the possibility of a death spiral is straightforward. The vertically-integrated, regulated distribution utility is a regulatory creation, intended to enable a financially sustainable business model for providing reliable basic electricity service to the largest possible number of customers for the least feasible cost, taking account of the economies of scale and scope resulting from the electro-mechanical generation and wires technologies implemented in the early 20th century. From a theoretical/benevolent social planner perspective, the objective is, given a market demand for a specific good/service, to minimize the total cost of providing that good/service subject to a zero economic profit constraint for the firm; this will lead to the highest feasible combination of output and total surplus (and the lowest deadweight loss) consistent with the financial sustainability of the firm.

The regulatory mechanism for implementing this model to achieve this objective is to erect a legal entry barrier into the market for that specific good/service, and to assure the regulated monopolist cost recovery, including its opportunity cost of capital, otherwise known as rate-of-return regulation. In return, the regulated monopolist commits to serve all customers reliably through its vertically-integrated generation, transmission, distribution, and retail functions. The monopolist’s costs and opportunity cost of capital determine its revenue requirement, out of which we can derive flat, averaged retail prices that forecasts suggest will enable the monopolist to earn that amount of revenue.
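Stated a bit more formally, the planner’s problem and the resulting flat rate look roughly like this (a textbook sketch in my own notation; nothing here is drawn from the EEI report or any particular rate case):

    % Benevolent-planner sketch: choose price p to maximize total surplus
    % subject to the regulated firm breaking even (zero economic profit).
    \[
    \max_{p}\; \int_{0}^{Q(p)} P(q)\,dq - C\big(Q(p)\big)
    \qquad \text{s.t.} \qquad p\,Q(p) - C\big(Q(p)\big) \ge 0 .
    \]
    % With economies of scale (declining average cost) the constraint binds,
    % giving average-cost pricing,
    \[
    p^{*} = \frac{C\big(Q(p^{*})\big)}{Q(p^{*})} = AC\big(Q(p^{*})\big),
    \]
    % which rate-of-return practice translates into the flat, averaged rate
    \[
    \text{rate} = \frac{\text{revenue requirement (costs + allowed return on rate base)}}{\text{forecast sales (kWh)}} .
    \]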

That’s the regulatory model + business model that has existed with little substantive evolution since the early 20th century, and it did achieve the social policy objectives of the 20th century — widespread electrification and low, stable prices, which have enabled follow-on economic growth and well-distributed increased living standards. It’s a regulatory+business model, though, that is premised on a few things:

  1. Defining a market by defining the characteristics of the product/service sold in that market, in this case electricity with a particular physical (volts, amps, hertz) definition and a particular reliability level (paraphrasing Fred Kahn …)
  2. The economies of scale (those big central generators and big wires) and economies of scope (lower total cost when producing two or more products compared to producing those products separately) that exist due to large-scale electro-mechanical technologies
  3. The architectural implications of connecting large-scale electro-mechanical technologies together in a network via a set of centralized control nodes — technology -> architecture -> market environment, and in this case large-scale electro-mechanical technologies -> distributed wires network with centralized control points rather than distributed control points throughout the network, including the edge of the network (paraphrasing Larry Lessig …)
  4. The financial implications of having invested so many resources in long-lived physical assets to create that network and its control nodes — if demand is growing at a stable rate, and regulators can assure cost recovery, then the regulated monopolist can arrange financing for investments at attractive interest rates, as long as this arrangement is likely to be stable for the 30-to-40-year life of the assets

As long as those conditions are stable, regulatory cost recovery will sustain this business model. And that’s precisely what smart grid technologies, distributed generation technologies, and microgrid technologies destabilize — they violate one or more of those four premises, and can make it not just feasible, but actually beneficial, for customers to change their behavior in ways that reduce the regulation-supported revenue of the regulated monopolist.
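To see the mechanics of the feared feedback loop, here is a toy simulation of the “death spiral” dynamic (all of the numbers are illustrative assumptions, not estimates or forecasts): the regulator resets the flat rate each year to recover a fixed revenue requirement from remaining sales, and customers defect whenever the rate exceeds their falling cost of self-generation.

    # Toy "death spiral" loop: fixed revenue requirement, shrinking rate base.
    # All numbers are illustrative assumptions, not data.
    revenue_requirement = 1_000_000_000   # $/year of network and legacy costs
    customers = 1_000_000
    usage_per_customer = 10_000           # kWh/year
    self_gen_cost = 0.12                  # $/kWh for solar + storage (assumed)

    for year in range(1, 11):
        sales = customers * usage_per_customer
        rate = revenue_requirement / sales            # flat averaged rate, $/kWh
        # Customers defect when grid power costs more than self-generation;
        # assume 20% of the remaining customers leave in any year that happens.
        if rate > self_gen_cost:
            customers = int(customers * 0.80)
        self_gen_cost *= 0.95                         # assumed DER cost decline
        print(f"year {year}: rate ${rate:.3f}/kWh, customers {customers:,}")

Once the rate crosses the self-generation cost, each round of defections raises the rate charged to the customers who remain, which is exactly the loop the EEI report worries about.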

Digital technologies that enable greater consumer control and more choice of products and services break down the regulatory market boundaries that are required to regulate product quality. Generation innovations, from the combined-cycle gas turbine of the 1980s to small-scale Stirling engines, reduce the economies of scale that have driven the regulation of and investment in the industry for over a century. Wires networks with centralized control built to capitalize on those large-scale technologies may have less value in an environment with smaller-scale generation and digital, automated detection, response, and control. But those generation and wires assets are long-lived, and in a cost-recovery-based business model, have to be paid for even if they become the destruction in creative destruction. We saw that happen in the restructuring that occurred in the 1990s, with the liberalization of wholesale power markets and the unbundling of generation from the vertically-integrated monopolists in those states; part of the political bargain in restructuring was to compensate them for the “stranded costs” associated with having made those investments based on a regulatory commitment that they would receive cost recovery on them.

Thus the death spiral rhetoric, and the concern that the existing utility business model will not survive. But if my framing of the situation is accurate, then what we should be examining in more detail is the regulatory model, since the utility business model is itself a regulatory creation. This relationship between digital innovation (encompassing smart grid, distributed resources, and microgrids) and regulation is what I’m exploring. How should the regulatory model and the associated utility business model change in light of digital innovation?

The political economy of Uber’s multi-dimensional creative destruction

Over the past week it’s been hard to keep up with the news about Uber. Uber’s creative destruction is rapid, and occurring on multiple dimensions in different places. And while the focus right now is on Uber’s disruption in the shared transportation market, I suspect that more disruption will arise in other markets too.

Start with two facts from this Wired article from last week by Marcus Wohlsen: Uber has just completed a funding round that raised an additional $1.2 billion, and last week it announced lower UberX fares in San Francisco, New York, and Chicago (the Chicago reduction was not mentioned in the article, but I am an Uber Chicago customer, so I received a notification of it). This second fact is interesting, especially once one digs in a little deeper:

With not just success but survival on the line, Uber has even more incentive to expand as rapidly as possible. If it gets big enough quickly enough, the political price could become too high for any elected official who tries to pull Uber to the curb.

Yesterday, Uber announced it was lowering UberX fares by 20 percent in New York City, claiming the cuts would make its cheapest service cheaper than a regular yellow taxi. That follows a 25 percent decrease in the San Francisco Bay Area announced last week, and a similar drop in Los Angeles UberX prices revealed earlier last month. The company says UberX drivers in California (though apparently not in New York) will still get paid their standard 80 percent portion of what the fare would have been before the discount. As Forbes’ Ellen Huet points out, the arrangement means a San Francisco ride that once cost $15 will now cost passengers $11.25, but the driver still gets paid $12.

So one thing they’re doing with their cash is essentially topping off payments to drivers while lowering prices to customers for the UberX service. Note that Uber is a multi-service firm, with rides at different quality/price combinations. I think Wohlsen’s Wired argument is right, and that they are pursuing a strategy of “grow the base quickly”, even if it means that the UberX prices are loss leaders for now (while their other service prices remain unchanged). In a recent (highly recommended!) EconTalk podcast, Russ Roberts and Mike Munger also make this point.
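Using the figures in the quoted passage, the per-ride arithmetic of that top-off looks roughly like this (a back-of-the-envelope sketch that ignores booking fees and any other adjustments):

    # Back-of-the-envelope per-ride economics of the San Francisco UberX cut,
    # using the figures quoted above: old fare $15, 25% price cut, driver keeps
    # 80% of the pre-discount fare.
    old_fare = 15.00
    discount = 0.25
    driver_share = 0.80

    new_fare = old_fare * (1 - discount)          # $11.25 paid by the passenger
    driver_pay = old_fare * driver_share          # $12.00 still paid to the driver

    margin_before = old_fare - driver_pay         # $3.00 per ride before the cut
    margin_after = new_fare - driver_pay          # -$0.75 per ride after the cut

    print(f"passenger pays ${new_fare:.2f}, driver gets ${driver_pay:.2f}")
    print(f"Uber's per-ride margin: ${margin_before:.2f} -> ${margin_after:.2f}")

On these numbers Uber goes from earning about $3 per UberX ride to paying out about $0.75 per ride, which is exactly the kind of loss-leader arithmetic that a fresh $1.2 billion funding round can support.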

This “grow the base” strategy is common in tech industries, and we’ve seen it repeatedly over the past 15 years with Amazon and others. But, as Wohlsen notes, this strategy has an additional benefit of making regulatory inertia and status quo protection more costly. The more popular Uber becomes with more people, the harder it will be for existing taxi interests to succeed in shutting them down.

The ease, the transparency, the convenience, the lower transaction costs, the ability to see and submit driver ratings, the consumer’s assessment of whether Uber’s reputation and driver certification provide enough expectation of safety — all of these are things that consumers can now assess for themselves, without a regulator substituting its judgment for theirs. The technology, the business model, and the reputation mechanism diminish the public safety justification for taxi regulation. Creative destruction and freedom to innovate are the core of improvements in living standards. But regulated taxi incumbents, having paid for medallions with the expectation of perpetual entry barriers, are seeing the value of that government-created entry barrier wither, and are lobbying to stem the losses in the value of their medallions. Note here the similarity between this situation and the one in the 1990s, when regulated electric utilities argued, largely successfully, that they should be compensated for “stranded costs” when they were required to divest their generation capacity at lower prices due to the anticipation of competitive wholesale markets. One consequence of regulation is the expectation of the right to a profitable business model, an expectation that flies in the face of economic growth and dynamic change.

Another move that I think represents a political compromise while giving Uber a PR opportunity was last week’s agreement with the New York Attorney General to cap “surge pricing” during citywide emergencies, a policy that Uber appears to be extending nationally. As Megan McArdle notes, this does indeed make economists sad, since Uber’s surge pricing is a wonderful example of how dynamic pricing induces more drivers to supply rides when demand is high, rather than leaving potential passengers with fewer taxis in the face of a fixed, regulated price.

Sadly, no one else loves surge pricing as much as economists do. Instead of getting all excited about the subtle, elegant machinery of price discovery, people get all outraged about “price gouging.” No matter how earnestly economists and their fellow travelers explain that this is irrational madness — that price gouging actually makes everyone better off by ensuring greater supply and allocating the supply to (approximately) those with the greatest demand — the rest of the country continues to view marking up generators after a hurricane, or similar maneuvers, as a pretty serious moral crime.
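The economists’ point is easy to see in a toy linear supply-and-demand model of a surge event (the parameters are purely illustrative and have nothing to do with Uber’s actual pricing algorithm):

    # Toy linear model of a demand spike with and without a cap on the surge
    # multiplier. Parameters are illustrative only, not Uber's pricing model.
    def rides_supplied(multiplier):
        return 100 + 400 * (multiplier - 1.0)   # more drivers log on as fares rise

    def rides_demanded(multiplier):
        return 600 - 200 * multiplier           # demand during the emergency

    m_clear = 1.5   # solves 100 + 400(m - 1) = 600 - 200m
    print("uncapped:", rides_supplied(m_clear), "supplied,",
          rides_demanded(m_clear), "demanded")   # 300 vs 300: market clears

    m_cap = 1.0     # surge capped at normal fares during the emergency
    print("capped:  ", rides_supplied(m_cap), "supplied,",
          rides_demanded(m_cap), "demanded")     # 100 vs 400: a shortage

Capping the multiplier doesn’t make the extra demand go away; it just means fewer drivers show up to meet it.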

Back in April Mike wrote here about how likely this was to happen in NY, and in commenting on the agreement with the NY AG last week, Regulation editor Peter Van Doren gave a great shout-out to Mike’s lead article in the Spring 2011 issue on price gouging regulations and their ethical and welfare effects.

Even though the surge pricing cap during emergencies is, in Megan’s words, economically harmful but politically predictable, I think the real effects of Uber will transcend the shared-ride market. It’s a flexible piece of software — an app, a menu of contracts with drivers and riders, transparency, a reputation mechanism. Much as Amazon started by disrupting the retail book market and then expanded because of the flexibility of its software, I expect Uber to do something similar, in some form.

Critiquing the theory of disruptive innovation

Jill Lepore, a professor of history at Harvard and writer for the New Yorker, has written a critique of Clayton Christensen’s theory of disruptive innovation that is worth thinking through. Christensen’s The Innovator’s Dilemma (the dilemma being that firms continue making the same decisions that made them successful, decisions that will eventually lead to their downfall) has been incredibly influential since its 1997 publication, and has moved the concept of disruptive innovation from its arcane Schumpeterian origins into modern business practice in a fast-changing technological environment. “Disrupt or be disrupted” and “innovate or die” have become corporate strategy maxims under the theory of disruptive innovation.

Lepore’s critique highlights the weaknesses of Christensen’s model (and it does have weaknesses, despite its success and prevalence in business culture). His historical analysis, his case study methodology, and the decisions he made regarding cutoff points in time all provide unsatisfyingly unsystematic support for his model, yet he argues that the theory of disruptive innovation is predictive and can be used with foresight to identify how firms can avoid failure. Lepore’s critique here is apt and worth considering.

Josh Gans weighs in on the Lepore article, and the theory of disruptive innovation more generally, by noting that at the core of the theory of disruptive innovation lies a new technology, and the appeal of that technology (or what it enables) to consumers:

But for every theory that reaches too far, there is a nugget of truth lurking at the centre. For Christensen, it was always clearer when we broke it down to its constituent parts as an economic theorist might (by the way, Christensen doesn’t like us economists but that is another matter). At the heart of the theory is a type of technology — a disruptive technology. In my mind, this is a technology that satisfies two criteria. First, it initially performs worse than existing technologies on precisely the dimensions that set the leading, for want of a better word, ‘metrics’ of the industry. So for disk drives, it might be capacity or performance even as new entrants promoted lower energy drives that were useful for laptops.

But that isn’t enough. You can’t actually ‘disrupt’ an industry with a technology that most consumers don’t like. There are many of those. To distinguish a disruptive technology from a mere bad idea or dead-end, you need a second criteria — the technology has a fast path of improvement on precisely those metrics the industry currently values. So your low powered drives get better performance and capacity. It is only then that the incumbents say ‘uh oh’ and are facing disruption that may be too late to deal with.

Herein lies the contradiction that Christensen has always faced. It is easy to tell if a technology is ‘potentially disruptive’ as it only has to satisfy criteria 1 — that it performs well on one thing but not on the ‘standard’ stuff. However, that is all you have to go on to make a prediction. Because the second criteria will only be determined in the future. And what is more, there has to be uncertainty over that prediction.

Josh has hit upon one of the most important dilemmas in innovation — if the new technology is likely to succeed against the old, it must satisfy the established value propositions of the incumbent technology while also improving upon them in speed, quality, or differentiation. And that’s inherently unknown in advance; the incumbent can either innovate too soon and suffer losses, or innovate too late and suffer losses. At this level, the theory does not help us distinguish and identify the factors that associate innovation with the continued success of the firm.

Both Lepore and Gans highlight Christensen’s desire for his theory to be predictive when it cannot be. Lepore summarizes the circularity that indicates this lack of a predictive hypothesis:

If an established company doesn’t disrupt, it will fail, and if it fails it must be because it didn’t disrupt. When a startup fails, that’s a success, since epidemic failure is a hallmark of disruptive innovation. … When an established company succeeds, that’s only because it hasn’t yet failed. And, when any of these things happen, all of them are only further evidence of disruption.

What Lepore brings to the party, in addition to a sharp mind and good analytical writing, is her background and sensibilities as a historian. A historical perspective on innovation helps balance some of the breathless enthusiasm for novelty often found in technology or business strategy writing. Her essay includes a discussion of the concept of “innovation” and how it has changed over several centuries (its connotation having been largely negative pre-Schumpeter), as well as of the Enlightenment’s theory of history as one of human progress, which has since morphed into different theories of history:

The eighteenth century embraced the idea of progress; the nineteenth century had evolution; the twentieth century had growth and then innovation. Our era has disruption, which, despite its futurism, is atavistic. It’s a theory of history founded on a profound anxiety about financial collapse, an apocalyptic fear of global devastation, and shaky evidence. …

The idea of innovation is the idea of progress stripped of the aspirations of the Enlightenment, scrubbed clean of the horrors of the twentieth century, and relieved of its critics. Disruptive innovation goes further, holding out the hope of salvation against the very damnation it describes: disrupt, and you will be saved.

I think there’s a lot to her interpretation (and I say that wearing both my historian hat and my technologist hat). But I think that both the Lepore and Gans critiques, and indeed Christensen’s theory of disruptive innovation itself, would benefit from (for lack of a catchier name) a Smithian-Austrian perspective on creativity, uncertainty, and innovation.

The Lepore and Gans critiques indicate, correctly, that supporting the disruptive innovation theory requires hindsight and historical analysis because we have to observe realized outcomes to identify the relationship between innovation and the success/failure of the firm. That concept of an unknown future rests mostly in the category of risk — if we identify that past relationship, we can generate a probability distribution or a Bayesian prior for the factors likely to lead to innovation yielding success.

But the genesis of innovation is in uncertainty, not risk; if truly disruptive, innovation may break those historical relationships (pace the Gans observation about having to satisfy the incumbent value propositions). And we won’t know if that’s the case until after the innovators have unleashed the process. Some aspects of what leads to success or failure will indeed be unknowable. My epistemic/knowledge problem take on the innovator’s dilemma is that both risk and uncertainty are at play in the dynamics of innovation, and they are hard to disentangle, both epistemologically and as a matter of strategy. Successful innovation will arise from combining awareness of profit opportunities and taking action along with the disruption (the Schumpeter-Knight-Kirzner synthesis).

The genesis of innovation is also in our innate human creativity, and our channeling of that creativity into this thing we call innovation. I’d go back to the 18th century (and that Enlightenment notion of progress) and invoke both Adam Smith and David Hume to argue that innovation as an expression of human creativity is a natural consequence of our individual striving to make ourselves better off. Good market institutions using the signals of prices, profits, and losses align that individual striving with an incentive for creators to create goods and services that will benefit others, as indicated by their willingness to buy them rather than do other things with their resources.

By this model, we are inherent innovators, and successful innovation involves the combination of awareness, action, and disruption in the face of epistemic reality. Identifying that combination ex ante may be impossible. This is not a strategy model of why firms fail, but it does suggest that such strategy models should consider more than just disruption when trying to understand (or dare I say predict) future success or failure.

Permissionless innovation in electricity: the benefits of experimentation

Last Monday I was scheduled to participate in the Utility Industry of the Future Symposium at the NYU Law School. Risk aversion about getting back for Tuesday classes in the face of a forecast 7″ snowfall in New York kept me from attending (and the snow never materialized, which makes the cost even more bitter!), so I missed out on the great talks and panels. But I’ve edited my remarks into the essay below, with helpful comments and critical readings from Mark Silberg and Jim Speta. Happy thinking!

If you look through the lens of an economist, especially an economic historian, the modern world looks marvelous – innovation enables us to live very different lives than even 20 years ago, lives that are richer in experience and value in many ways. We are surrounded by dynamism, by the change arising from creativity, experimentation, and new ideas. The benefits of such dynamism are cumulative and compound upon each other. Economic history teaches us that well-being emerges from the compounding of incremental changes over time, until two decades later you look at your old, say, computer and you wonder that you ever accomplished anything that way at all.

The digital technology that allows us to flourish in unanticipated ways, large and small, is an expression of human creativity in an environment in which experimentation is rife and entry barriers are low. That combination of experimentation and low entry barriers is what has made the Internet such a rich, interesting, useful platform for us to use to make ourselves better off, in the different ways and meanings we each have.

And yet, very little (if any) of this dynamism has originated in the electricity industry, and little of it has affected how most people transact in and engage with electricity. Digital technologies now exist that consumers could use to observe and manage their electricity consumption in a more timely way than after the fact, at the end of the month, and to transact for services they value – different pricing, different fuel sources, and automation of their consumption responses to changes in those. From the service convergence in telecom (“triple play”) we have experimented with and learned the value of bundling. Bundling retail electricity service with home entertainment, home security, and similar offerings is something companies like ADT and Verizon are exploring, but such bundles have been extremely slow to develop and have not yet been commercialized, due to regulatory entry barriers that both restrict producers and reinforce customer inertia. All of these examples, of technologies, of pricing, of bundling, are examples of stalled innovation, of foregone innovation in this space.

Although we do not observe it directly, the cost of foregone innovation is high. Today residential consumers still generally have low-cost, plain-vanilla commodity electricity service, with untapped potential to create new value beyond basic service. Producers earn guaranteed, regulation-constrained profits by providing these services, and the persistence of regulated “default service contracts” in nominally competitive states is an entry barrier facing producers that might otherwise experiment with new services, pricing, and bundles. If producers don’t experiment, consumers can’t experiment, and thus both parties suffer the cost of foregone innovation – consumers lose the opportunity to choose services they may value more, and producers lose the opportunity to profit by providing them. By (imperfect) analogy, think about what your life would be like if Apple had not been allowed to set up retail stores that enable consumers to engage in learning while shopping. It would be poorer (and that’s true even if you don’t own any Apple devices, because the experimentation, learning, and low entry barriers benefit you by encouraging new products and entry).

This process of producer and consumer experimentation and learning is the essence of how we create value through exchange and market processes. What Internet pioneer Vint Cerf calls permissionless innovation, what writer Matt Ridley calls ideas having sex — these are the processes by which we humans create, strive, learn, adapt, and thrive.

But regulation is a permission-based system, and regulation slows or stifles innovation in electricity by cutting off this permissionless innovation. Legal entry barriers, the bureaucratic procedures for cost recovery, the risk aversion of both regulator and regulated, all undermine precisely the processes that enable innovation to yield consumer benefits and producer profits. In this way regulation that dictates business models and entry barriers discourages activities that benefit society, that are in the public interest.

The question of public interest is of course central to any analysis of electricity regulation’s effects. Our current model of utility regulation has been built on the late 19th century idea that cost-based regulation and restricting entry would make reliable electric service ubiquitous and as cheap as is feasible. Up through the 1960s, while the industry was exploiting the economies of scale and scope in conventional mechanical technologies, that concept of the public interest was generally beneficial. But in doing so, utility regulation entrenched “iron in the ground” technologies in the bureaucratic process. It also entrenched an attitude and a culture of prudential preference for those conventional technologies on the part of both regulator and regulated.

This entrenchment becomes a problem because the substance of what constitutes the public interest is not static. It has changed since the late 19th century, as has so much in our lives, and it has changed to incorporate the dimension of environmental quality as we have learned of the environmental effects of fossil fuel consumption. But the concept of the public interest of central generation and low prices that is fossilized in regulatory rules does not reflect that change. I argue that the “Rube Goldberg” machine accretion of renewable portfolio standards (RPS), tax credits, and energy efficiency mandates to regulated utilities reflects just how poorly suited the traditional regulated environment is to adapting to the largely unforeseeable changes arising from the combination of dynamic economic and environmental considerations. Traditional regulation is not flexible enough to be adaptive.

The other entrenchment that we observe with regulation is the entrenchment of interests. Even if regulation was initiated as a mechanism for protecting consumer interests, in the administrative and legal process it creates entrenched interests in maintaining the legal and technological status quo. What we learn from public choice theory, and what we observe in regulated industries including electricity, is that regulation becomes industry-protecting regulation. Industry-protecting regulation cultivates constituency interests, and those constituency interests generally prefer to thwart innovation and retain entry barriers to restrict interconnection and third-party and consumer experimentation. This political economy dynamic contributes to the stifling of innovation.

As I’ve been thinking through this aloud with you, you’ve probably been thinking “but what about reliability and permissionless innovation – doesn’t the physical nature of our interconnected network necessitate permission to innovate?” In the centralized electro-mechanical T&D network that is more true, and in such an environment regulation provides stability of investments and returns. But again we see the cost of foregone innovation staring us in the face. Digital switches, open interconnection and interoperability standards (that haven’t been compromised by the NSA), and more economical small-scale generation are innovations that make high reliability in a resilient distributed system more possible (for example, a “system of systems” of microgrids and rooftop solar and EVs). Those are the types of conditions that hold in the Internet – digital switches, traffic rules, TCP/IP and other open data protocols — and as long as innovators abide by those physical rules, they can enter, enabling experimentation, trial and error, and learning.
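To make the analogy concrete, interconnection under such physical rules looks less like case-by-case permission and more like a protocol check, something like this sketch (the interface, rule values, and device examples are entirely hypothetical; the point is the shape of the rule, not the engineering):

    # Hypothetical sketch of rules-based, "permissionless" interconnection: any
    # device that satisfies published physical rules may connect, no case-by-case
    # approval required. Rule values and devices are made up for illustration.
    GRID_RULES = {
        "frequency_hz": (59.5, 60.5),   # stay within this band
        "voltage_pu": (0.95, 1.05),     # per-unit voltage at the connection point
        "anti_islanding": True,         # must disconnect on loss of grid
    }

    def may_connect(device):
        f_lo, f_hi = GRID_RULES["frequency_hz"]
        v_lo, v_hi = GRID_RULES["voltage_pu"]
        return (f_lo <= device["frequency_hz"] <= f_hi
                and v_lo <= device["voltage_pu"] <= v_hi
                and device["anti_islanding"] == GRID_RULES["anti_islanding"])

    rooftop_solar = {"frequency_hz": 60.0, "voltage_pu": 1.01, "anti_islanding": True}
    bad_inverter = {"frequency_hz": 61.2, "voltage_pu": 1.01, "anti_islanding": True}

    print(may_connect(rooftop_solar))   # True: abides by the rules, so it can enter
    print(may_connect(bad_inverter))    # False: violates the frequency rule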

Thus I conclude that for electricity policy to focus on facilitating what is socially beneficial, it should focus on clear, transparent, and just physical rules for the operation of the grid, on reducing entry barriers that prevent producer and consumer experimentation and learning, and on enabling a legal and technological environment in which consumers can use competition and technology to protect themselves.

Joel Mokyr on growth, stagnation, and technological progress

My friend and colleague Joel Mokyr talked recently with Russ Roberts in an EconTalk podcast that I cannot recommend highly enough (and the links on the show notes are great too). The general topic is this back-and-forth that’s been going on over the past year involving Joel, Bob Gordon, Tyler Cowen, and Erik Brynjolfsson, among others, regarding diminishing returns to technological change and whether we’ve reached “the end of innovation”. Joel summarizes his argument in this Vox EU essay.

Joel is an optimist, and does not believe that technological dynamism is running out of steam (to make a 19th-century joke …). He argues that technological change and its ensuing economic growth are punctuated, and one reason for that is that conceptual breakthroughs are essential but unforeseeable. Economic growth also occurs because of the perpetual nature of innovation — the fact that others are innovating (here he uses country-level examples) means that everyone has to innovate as a form of running to stand still. I agree, and I think as long as the human mind, human creativity, and human striving to achieve and accomplish exist, there will be technological dynamism. A separate question is whether the institutional framework in which we interact in society is conducive to technological dynamism and to channeling our creativity and striving into such constructive application.

The peanut butter Pop-Tart is not an innovation

Today’s Wall Street Journal has an article about the use, overuse, and misuse of the word “innovation” in modern business, particularly with respect to consumer products. The number of instances of S&P 500 CEOs using the word in their earnings calls has doubled since 2007. Sadly, this misuse and overuse threatens to remove all meaning from the word. Witness the example offered in the article’s title: Kellogg’s new peanut butter Pop-Tart, which Kellogg executives tout as one of the most important innovations of 2013. Peanut butter filling instead of cherry or strawberry or chocolate, an innovation? Really?

Next time your boss starts droning on about innovation, it might be helpful to stop and analyze: Is she talking about building the next iPod or the next Pop-Tart? Does “innovate” mean just “stay competitive”? And if so, where is the innovation in that? …

In this context, to innovate can often mean falling short of the word’s Latin roots (of “new creation”). It’s more modest: simply keeping pace with rivals.

They used to call it competitiveness—a word fraught with the implication that others might win. Now it has been elevated to innovation, a more regal way to describe what business has always done: Adapt.

That’s a great point, and it’s a point that Schumpeter and Austrian economists have made for over a century — there are many different ways that firms adapt to the effects of rivalry in markets, and one of them is innovation. But, you might reply, Schumpeter emphasized the role of product differentiation in lessening the effects of rivalry, by making your new product less substitutable for the existing competitor products, and isn’t a peanut butter Pop-Tart an example of product differentiation? (Technically speaking, my answer to that question is no, but that may be me being pedantic, which is what I do …)

That’s where an old post from Roger Pielke Jr. is helpful:

In recent comments I was asked about what I mean when I use the term “innovation.”  I use the term as Peter Drucker did:

Innovation is change that creates a new dimension of performance.

Roger tweeted the link to that old post in response to the WSJ peanut butter Pop-Tart article today. Does Drucker’s definition help; is it “operationalizable”? Only if you define “sell more peanut butter Pop-Tarts” as the new dimension of performance!

Joel Mokyr: Technopessimism is bunk

My department is currently a focal point in the debates over the future of innovation and economic growth. Technopessimist arguments from my colleague Bob Gordon (as profiled in this New York Magazine article from the weekend) join those in Tyler Cowen’s The Great Stagnation to suggest that the increase in living standards and the growth rates experienced over the past 200 years may be anomalous and not repeatable.

In the PBS Newshour Business Desk, my colleague (and former dissertation adviser) Joel Mokyr offers a different, more optimistic perspective. Joel emphasizes the dynamic aspects of new idea generation and the ensuing technological change and its effects on people and societies. Technology is never static, humans and our curiosity and our efforts to strive are never static, and that means that there’s not likely to be an “end of innovation” along the lines of an “end of history”:

Technology has not finished its work; it has barely started. Some lessons from history may show why. For one thing, technological progress has an unusual dynamic: it solves problems, but in doing so it, more often than not, creates new ones as unintended side-effects of the previous breakthroughs, and these in turn have to be solved, and so on. …

As we know more, we can push back against the pushback. And so on. The history of technology is to a large extent the history of unintended consequences. …

What will a future generation think of our technological efforts? During the Middle Ages, nobody knew they were living in the Middle Ages (the term emerged centuries later), and they would have resented a notion that it was an age of unbridled barbarism (it was not). During the early stages of the Industrial Revolution in the 18th century, few had a notion that a new technological dawn was breaking. So it is hard for someone alive today to imagine what future generations will make of our age. But to judge from progress in the past decades, it seems that the Digital Age may become to the Analog Age what the Iron Age was to the Stone Age. It will not last as long, and there is no way of knowing what will come after. But experience suggests that the metaphor of low-hanging fruit is misleading. Technology creates taller and taller ladders, and the higher-hanging fruits are within reach and may be just as juicy.

None of this is guaranteed. Lots of things can go wrong. Human history is always the result of a combination of deep impersonal forces, accidents and contingencies. Unintended consequences, stupidity, fear and selfishness often get in the way of making life better for more and more people. Technology alone cannot provide material progress; it’s just that without it, all the other ways of economic progress soon tend to fizzle out. Technological progress is perhaps not the cure-all for all human ills, but it beats the alternative.

Joel’s essay is well worth reading in its entirety. His argument highlights the decentralized, curiosity-driven process of technological change that does not proceed linearly, but is impossible to quash. These processes contribute to economic well-being in societies with good institutional and cultural contexts that facilitate and reward innovation when it generates value for others.

Economist debate on technological progress

Lynne Kiesling

The Economist recently did one of their periodic debates, this time on the pace and effects of technological progress. Moderator Ryan Avent framed the debate thus:

This leads some scholars to conclude that accelerating technical change is an illusion. Autonomous vehicles and 3D printers are flashy but lack the transformative power of electricity or the jet engine, some argue. Indeed, the contribution of technology to growth may be weakening rather than strengthening. Others strongly disagree, noting that even in the thick of the Industrial Revolution there were periodic slowdowns in growth. Major new innovations do not generate immediate economic results, they reckon, but provide a boost over decades as firms and households learn how to use them to make life easier and better. The impressive inventions of the past decade—including remarkable growth in social networking—have hardly had time to make themselves felt across the economy.

Which side is right? Is technological change accelerating, or has most of the benefit from the IT revolution already been realised, leaving the rich world in the grip of continued technical stagnation?

Taking the “pro” position on technological progress is Andrew McAfee of MIT; taking the “con” position is my colleague Robert Gordon, whose recent work on technological stagnation has been widely discussed and controversial (see here a recent TED talk that Bob gave on technological stagnation and one from MIT’s Erik Brynjolfsson on the same TED panel).

McAfee starts by pointing out that stagnation arguments rely on short-run data (post-1940s is definitely short run for technological change, as Bob also argues). A century is often a more appropriate timescale for looking at technological change and its effects, and since modern digital technology is mostly a post-1960 phenomenon, are we being premature in declaring stagnation? McAfee also points out that the nature of the changes in quality of life arising from technology makes those changes hard to capture in economic statistics. In the Industrial Revolutions of the 19th century, mechanical changes and changes in energy use led to large, quick productivity effects. But the nature of digital technology and its effects is more distributed, smaller scale but widespread, and focused on the communication of information and the ability to control processes. That makes for different patterns of both adoption and outcomes from the adoption of digital technology. It also makes for more distributed new product/service innovation at the edges of networks, which is another substantively different pattern in economic activity from that seen in the 19th and early 20th centuries. Kevin Kelly also made many of these observations in a January 2013 EconTalk podcast with Russ Roberts.

I am, not surprisingly, sympathetic to this argument. I also think that framing the question as “is technological change accelerating?” is not helpful. As with any other changes arising from human action and interaction, rates of technological change will ebb and flow, and it’s only really informative to look retrospectively at long time periods to understand the effects of technological change. That’s why economic history, especially the history of innovation, is valuable, and attempts at predictive forecasting with respect to technology are not useful, or at least should be taken with massive grains of salt. It’s also why this Economist debate is a bit frustrating, because both parties (but especially Gordon) rely pedantically on the acceleration of the rate of change (in other words, the second derivative being positive) as the question at hand. Is that really the interesting question? I don’t think so, because of the ebb and flow. It’s how technological change affects the daily lives of the population that matters, and how, in Adam Smith’s language, it translates into “universal and widespread opulence”. There are lots of ways for that to manifest itself, and they won’t all show up in aggregate productivity statistics.
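To be precise about what that framing asks (my notation, not the debaters’):

    % Let A(t) be an index of technology (or total factor productivity).
    % The growth rate of technology is
    \[
    g(t) = \frac{\dot{A}(t)}{A(t)},
    \]
    % and the debate's question "is technological change accelerating?" asks whether
    \[
    \frac{dg(t)}{dt} > 0 ,
    \]
    % whereas the welfare question is about the level and distribution of the gains
    % that A(t) delivers, not the sign of this derivative.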

Gordon’s stagnation argument seems to have the most purchase when he makes this claim in his first debate post:

A sobering picture emerges from a look at the official data on personal consumption expenditures. Only about 7% of spending has anything to do with audio, video, or computer-related goods and services, including everything from purchases of equipment to paying the bills for cable TV or mobile-phone subscriptions. Fully 70% of consumer spending is on services, and what are the largest categories? Housing rent, water supply, electricity and gas, doctor and dentist bills, hospitals, auto repair, public transport, membership clubs, theatres, museums, spending in restaurants and bars, bank and financial services fees, higher and secondary education, barber shops and nail salons, religious activities, air fares and hotel fees—none of which are being altered appreciably by recent high-tech innovation.

He’s right that some of these categories are in industries that are less prone to change in quantity, quality, or cost due to innovation, although it’s important to bear in mind with respect to electricity, medical care, and financial service fees that much of the apparent stagnation arises from regulatory institutions and the innovation-reducing (or stifling) effects of regulation, not from technological stagnation per se.

McAfee rebuts by elaborating on the slow unfolding of innovation’s effects in the past. He then offers some examples (including fracking, very familiar to KP readers!) to illustrate the demonstrable productivity impacts of technology. But he doesn’t fully engage with what I see as the Achilles heel of the stagnation-productivity argument — the extent to which small-scale, distributed effects on product differentiation, product quality, and transaction costs are not going to be reflected in aggregate economic statistics.

At the end, the readers find for McAfee. But in important ways the question is both pedantic and unanswerable. A better framing is the comparative institutional question: what types of social institutions (culture, norms, law, statute, regulation) best facilitate thriving human creativity and the ability to turn innovation into new and different products and services, and into transaction cost reductions that change organizational and industry structures and lead to economic growth, even if in ways that don’t show up in labor productivity statistics?

The ephemeral Schumpeterian monopoly

Lynne Kiesling

The Atlantic’s Derek Thompson parses Mary Meeker’s annual state of the Internet presentation, which includes some nifty and insightful analyses of data. Here’s my favorite:

[Figure from Mary Meeker’s presentation: operating system market share over time]

Note that this is in percentage terms, so it doesn’t show the overall increase in the number and variety of digital devices used — the number of devices using Windows OS hasn’t necessarily declined, but the growth in the past five years of mobile devices using Apple and Android OS is truly striking in terms of its effect on the WinOS overall market share.

The decade-long (1995-2005) Windows OS dominance and its subsequent decline is interesting to those of us who study the economic history of technology. To me it illustrates Schumpeter’s point about the ephemeral nature of monopoly and how innovation is the process that generates the new products and platforms that compete with the existing ones.

Perennial gale of creative destruction indeed.

Ford’s MyEnergi Lifestyle

Lynne Kiesling

You may know that the annual Consumer Electronics Show has been going on this week in Las Vegas (CES2013). CES is the venue for displaying the latest, greatest, wonderful electronic gadgets that will enrich your life, improve your productivity, reduce your stress, and make your breath minty fresh.

And, increasingly, ways to save energy and reduce energy waste. The most ambitious proposition to come out of CES2013 is Ford’s MyEnergi Lifestyle, as described in a Wired magazine article from the show:

Here at CES 2013, the automaker announced MyEnergi Lifestyle, a sweeping collaboration with appliance giant Whirlpool, smart-meter supplier Infineon, Internet-connected thermostat company Nest Labs and, for a green-energy slant, solar-tech provider SunPower. The goal is to help people understand how the “time-flexible” EV charging model can more cheaply power home appliances, and how combining an EV, connected appliances and the data they generate can help them better manage their energy consumption and avoid paying for power at high rates. …

Appliances are getting smarter, too. Some of the most power-hungry appliances, such as a water heater and the ice maker in your freezer, can now schedule their most energy-intensive activities at night. Nest’s Internet-connected thermostat can help homeowners save energy while their [sic] away. While some of the appliances and devices within MyEnergi Lifestyle launch early this year, others are available now, Tinskey said.

One reason why I think this initiative is promising is its involvement of Whirlpool and Nest, two very different companies that are both focused on ways to combine digital technology and elegant design to make energy efficiency in the home appealing, attractive, and easy to implement.

The value proposition is largely a cloud-based data one — gather data on the home’s electricity use in real time, program in some consumer-focused triggers, such as price thresholds, and manage that use with the objective of minimizing cost and emissions. Gee, I think I’ve heard that one here before …
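As a sketch of what that trigger-driven logic amounts to (a hypothetical illustration, not Ford’s, Nest’s, or anyone’s actual software; the prices, devices, and threshold are all assumptions):

    # Hypothetical sketch of price-threshold load management: run flexible loads
    # (EV charging, water heating) only in hours when the price is below a
    # consumer-set trigger. Prices, devices, and the trigger are made-up numbers.
    hourly_prices = [0.06]*6 + [0.12]*10 + [0.22]*5 + [0.08]*3   # $/kWh, 24 hours
    price_trigger = 0.10                                          # $/kWh threshold

    flexible_loads = {
        "ev_charging": {"kw": 4.0, "hours_needed": 5},
        "water_heater": {"kw": 3.0, "hours_needed": 2},
    }

    cheap_hours = [h for h in range(24) if hourly_prices[h] <= price_trigger]

    for name, load in flexible_loads.items():
        # Fill the load's required hours from the cheapest qualifying hours.
        hours = sorted(cheap_hours, key=lambda h: hourly_prices[h])[:load["hours_needed"]]
        cost = sum(hourly_prices[h] * load["kw"] for h in hours)
        print(f"{name}: run during hours {sorted(hours)}, energy cost ${cost:.2f}")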