Critiquing the theory of disruptive innovation

Jill Lepore, a professor of history at Harvard and a writer for the New Yorker, has written a critique of Clayton Christensen’s theory of disruptive innovation that is worth thinking through. Christensen’s The Innovator’s Dilemma (the dilemma being that firms keep making the same decisions that made them successful, which eventually leads to their downfall) has been incredibly influential since its 1997 publication, and has moved the concept of disruptive innovation from its arcane Schumpeterian origins into modern business practice in a fast-changing technological environment. Disrupt or be disrupted, innovate or die: these have become corporate strategy maxims under the theory of disruptive innovation.

Lepore’s critique highlights the weaknesses of Christensen’s model (and it does have weaknesses, despite its success and prevalence in business culture). His historical analysis, his case study methodology, and the decisions he made about cutoff points in time provide unsatisfyingly unsystematic support for his model, yet he argues that the theory of disruptive innovation is predictive and can be used with foresight to identify how firms can avoid failure. Lepore’s critique here is apt and worth considering.

Josh Gans weighs in on the Lepore article, and the theory of disruptive innovation more generally, by noting that at the core of the theory of disruptive innovation lies a new technology, and the appeal of that technology (or what it enables) to consumers:

But for every theory that reaches too far, there is a nugget of truth lurking at the centre. For Christensen, it was always clearer when we broke it down to its constituent parts as an economic theorist might (by the way, Christensen doesn’t like us economists but that is another matter). At the heart of the theory is a type of technology — a disruptive technology. In my mind, this is a technology that satisfies two criteria. First, it initially performs worse than existing technologies on precisely the dimensions that set the leading, for want of a better word, ‘metrics’ of the industry. So for disk drives, it might be capacity or performance even as new entrants promoted lower energy drives that were useful for laptops.

But that isn’t enough. You can’t actually ‘disrupt’ an industry with a technology that most consumers don’t like. There are many of those. To distinguish a disruptive technology from a mere bad idea or dead-end, you need a second criteria — the technology has a fast path of improvement on precisely those metrics the industry currently values. So your low powered drives get better performance and capacity. It is only then that the incumbents say ‘uh oh’ and are facing disruption that may be too late to deal with.

Herein lies the contradiction that Christensen has always faced. It is easy to tell if a technology is ‘potentially disruptive’ as it only has to satisfy criteria 1 — that it performs well on one thing but not on the ‘standard’ stuff. However, that is all you have to go on to make a prediction. Because the second criteria will only be determined in the future. And what is more, there has to be uncertainty over that prediction.

Josh has hit upon one of the most important dilemmas in innovation — if the new technology is to succeed against the old, it must satisfy the established value propositions of the incumbent technology as well as improve upon them in speed, quality, or differentiation. And that’s inherently unknowable in advance; the incumbent can innovate too soon and suffer losses, or innovate too late and suffer losses. At this level, the theory does not help us identify the factors that associate innovation with the continued success of the firm.

Both Lepore and Gans highlight Christensen’s desire for his theory to be predictive when it cannot be. Lepore summarizes the circularity that indicates this lack of a predictive hypothesis:

If an established company doesn’t disrupt, it will fail, and if it fails it must be because it didn’t disrupt. When a startup fails, that’s a success, since epidemic failure is a hallmark of disruptive innovation. … When an established company succeeds, that’s only because it hasn’t yet failed. And, when any of these things happen, all of them are only further evidence of disruption.

What Lepore brings to the party, in addition to a sharp mind and good analytical writing, is her background and sensibilities as an historian. A historical perspective on innovation helps balance some of the breathless enthusiasm for novelty often found in technology or business strategy writing. Her essay discusses how the concept of “innovation” has changed over several centuries (its connotation was largely negative pre-Schumpeter), and how the Enlightenment’s theory of history as one of human progress has since morphed into different theories of history:

The eighteenth century embraced the idea of progress; the nineteenth century had evolution; the twentieth century had growth and then innovation. Our era has disruption, which, despite its futurism, is atavistic. It’s a theory of history founded on a profound anxiety about financial collapse, an apocalyptic fear of global devastation, and shaky evidence. …

The idea of innovation is the idea of progress stripped of the aspirations of the Enlightenment, scrubbed clean of the horrors of the twentieth century, and relieved of its critics. Disruptive innovation goes further, holding out the hope of salvation against the very damnation it describes: disrupt, and you will be saved.

I think there’s a lot to her interpretation (and I say that wearing both my historian hat and my technologist hat). But I think that both the Lepore and Gans critiques, and indeed Christensen’s theory of disruptive innovation itself, would benefit from (for lack of a catchier name) a Smithian-Austrian perspective on creativity, uncertainty, and innovation.

The Lepore and Gans critiques indicate, correctly, that supporting the disruptive innovation theory requires hindsight and historical analysis because we have to observe realized outcomes to identify the relationship between innovation and the success/failure of the firm. That concept of an unknown future rests mostly in the category of risk — if we identify that past relationship, we can generate a probability distribution or a Bayesian prior for the factors likely to lead to innovation yielding success.

But the genesis of innovation is in uncertainty, not risk; if truly disruptive, innovation may break those historical relationships (pace the Gans observation about having to satisfy the incumbent value propositions). And we won’t know if that’s the case until after the innovators have unleashed the process. Some aspects of what leads to success or failure will indeed be unknowable. My epistemic/knowledge problem take on the innovator’s dilemma is that both risk and uncertainty are at play in the dynamics of innovation, and they are hard to disentangle, both epistemologically and as a matter of strategy. Successful innovation will arise from combining awareness of profit opportunities and taking action along with the disruption (the Schumpeter-Knight-Kirzner synthesis).

The genesis of innovation is also in our innate human creativity, and our channeling of that creativity into this thing we call innovation. I’d go back to the 18th century (and that Enlightenment notion of progress) and invoke both Adam Smith and David Hume to argue that innovation as an expression of human creativity is a natural consequence of our individual striving to make ourselves better off. Good market institutions using the signals of prices, profits, and losses align that individual striving with an incentive for creators to create goods and services that will benefit others, as indicated by their willingness to buy them rather than do other things with their resources.

By this model, we are inherent innovators, and successful innovation involves the combination of awareness, action, and disruption in the face of epistemic reality. Identifying that combination ex ante may be impossible. This is not a strategy model of why firms fail, but it does suggest that such strategy models should consider more than just disruption when trying to understand (or dare I say predict) future success or failure.

Joel Mokyr on growth, stagnation, and technological progress

My friend and colleague Joel Mokyr talked recently with Russ Roberts in an EconTalk podcast that I cannot recommend highly enough (and the links on the show notes are great too). The general topic is this back-and-forth that’s been going on over the past year involving Joel, Bob Gordon, Tyler Cowen, and Erik Brynjolfsson, among others, regarding diminishing returns to technological change and whether we’ve reached “the end of innovation”. Joel summarizes his argument in this Vox EU essay.

Joel is an optimist, and does not believe that technological dynamism is running out of steam (to make a 19th-century joke …). He argues that technological change and its ensuing economic growth are punctuated, and one reason for that is that conceptual breakthroughs are essential but unforeseeable. Economic growth also occurs because of the perpetual nature of innovation — the fact that others are innovating (here he uses country-level examples) means that everyone has to innovate as a form of running to stand still. I agree, and I think that as long as the human mind, human creativity, and human striving to achieve and accomplish exist, there will be technological dynamism. A separate question is whether the institutional framework in which we interact in society is conducive to technological dynamism and to channeling our creativity and striving into such constructive applications.

Adam Thierer on regulating media platforms

The Mercatus Center’s Adam Thierer analyzes communications technologies and the policies influencing their development and use, and I’ve always found his work extremely valuable in my own thinking. Adam and Brent Skorup have a new Mercatus study on lobbying in the information technology sector, A History of Cronyism and Capture in the Information Technology Sector.

One area where Adam and I make common cause is the interaction of regulation and technological change, and the extent to which regulation may not yield the desired outcomes when it dilutes incentives to innovate and stifles change because of a static definition of “public interest” that is inconsistent with dynamism and experimentation.

I recommend this Surprisingly Free podcast, in which Adam discusses proposals to regulate Facebook and other social media platform companies as public utilities; the podcast page also has links to some of Adam’s written work. In particular, if you want to explore these ideas I suggest Adam’s Mercatus paper on the perils of classifying social media companies as public utilities, in which he argues:

Social media aren’t public utilities for two key reasons:

  1. Social Media do not possess the potential to become natural monopolies. There are virtually no costs to consumers, and competitors have the ability to duplicate such platforms. The hottest networks are changing every year, and there is no way for the government to determine which platform is going to become popular next. Remember MySpace or CompuServe?
  2. Social Media are not essential facilities. Those who claim that Facebook is a “social utility” or “social commons” must admit that such sites are not essential to survival, economic success, or online life. Unlike water and electricity, life can go on without social networking services.

Public utility regulation would instead stifle digital innovation and raise prices of these services for users. Not only are social media sites largely free and universally available, but they are also constantly innovating.

I am going to be digging into a new research project later this summer using some of Adam’s arguments, so I am particularly interested in your comments and thoughts.

The Criminal N.S.A.

From law professors Jennifer Stisa Granick and Christopher Jon Sprigman, in today’s New York Times:

“We may never know all the details of the mass surveillance programs, but we know this: The administration has justified them through abuse of language, intentional evasion of statutory protections, secret, unreviewable investigative procedures and constitutional arguments that make a mockery of the government’s professed concern with protecting Americans’ privacy. It’s time to call the N.S.A.’s mass surveillance programs what they are: criminal.”

UPDATE: Here’s a good article in the Atlantic riffing off of the Granick & Sprigman piece, and filling in some background beyond what they could do within their word count limit.

Economist debate on technological progress

Lynne Kiesling

The Economist recently did one of their periodic debates, this time on the pace and effects of technological progress. Moderator Ryan Avent framed the debate thus:

This leads some scholars to conclude that accelerating technical change is an illusion. Autonomous vehicles and 3D printers are flashy but lack the transformative power of electricity or the jet engine, some argue. Indeed, the contribution of technology to growth may be weakening rather than strengthening. Others strongly disagree, noting that even in the thick of the Industrial Revolution there were periodic slowdowns in growth. Major new innovations do not generate immediate economic results, they reckon, but provide a boost over decades as firms and households learn how to use them to make life easier and better. The impressive inventions of the past decade—including remarkable growth in social networking—have hardly had time to make themselves felt across the economy.

Which side is right? Is technological change accelerating, or has most of the benefit from the IT revolution already been realised, leaving the rich world in the grip of continued technical stagnation?

Taking the “pro” position on technological progress is Andrew McAfee of MIT; taking the “con” position is my colleague Robert Gordon, whose recent work on technological stagnation has been widely discussed and controversial (see a recent TED talk that Bob gave on technological stagnation, and one from MIT’s Erik Brynjolfsson on the same TED panel).

McAfee starts by pointing out that stagnation arguments rely on short-run data (post-1940s is definitely short run for technological change, as Bob also argues). A century is often a more appropriate timescale for looking at technological change and its effects, and since modern digital technology is mostly a post-1960 phenomenon, are we being premature in declaring stagnation? McAfee also points out that the nature of the changes in quality of life arising from technology makes those changes hard to capture in economic statistics. In the Industrial Revolutions of the 19th century, mechanical changes and changes in energy use led to large, quick productivity effects. But the effects of digital technology are more distributed, smaller in scale but widespread, and focused on the communication of information and the ability to control processes. That makes for different patterns both of adoption and of outcomes. It also makes for more distributed new product/service innovation at the edges of networks, another substantively different pattern in economic activity from that seen in the 19th and early 20th centuries. Kevin Kelly also made many of these observations in a January 2013 EconTalk podcast with Russ Roberts.

I am, not surprisingly, sympathetic to this argument. I also think that framing the question as “is technological change accelerating?” is not helpful. As with any other changes arising from human action and interaction, rates of technological change will ebb and flow, and it’s only really informative to look retrospectively at long time periods to understand the effects of technological change. That’s why economic history, especially the history of innovation, is valuable, and attempts at predictive forecasting with respect to technology are not useful, or at least should be taken with massive grains of salt. It’s also why this Economist debate is a bit frustrating, because both parties (but especially Gordon) rely pedantically on the acceleration of the rate of change (in other words, the second derivative being positive) as the question at hand. Is that really the interesting question? I don’t think so, because of the ebb and flow. It’s how technological change affects the daily lives of the population that matters, and how, in Adam Smith’s language, it translates into “universal and widespread opulence”. There are lots of ways for that to manifest itself, and they won’t all show up in aggregate productivity statistics.
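To make the second-derivative point concrete, here is a minimal gloss in notation of my own choosing; A(t) is just a stand-in index of technological capability, not a variable from the debate itself:

```latex
% A(t): an illustrative index of technological capability at time t.
% Ongoing technological change only requires that the index keeps improving;
% "accelerating" change, the question as the debate frames it, requires more.
\[
  \underbrace{\frac{dA}{dt} > 0}_{\text{ongoing change}}
  \qquad \text{versus} \qquad
  \underbrace{\frac{d^{2}A}{dt^{2}} > 0}_{\text{accelerating change}}
\]
```

The second condition can switch sign repeatedly as change ebbs and flows; it is the first condition, and how those gains diffuse into daily life, that bears on opulence.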

Gordon’s stagnation argument seems to have the most purchase when he makes this claim in his first debate post:

A sobering picture emerges from a look at the official data on personal consumption expenditures. Only about 7% of spending has anything to do with audio, video, or computer-related goods and services, including purchases of equipment to paying the bills for cable TV or mobile-phone subscriptions. Fully 70% of consumer spending is on services, and what are the largest categories? Housing rent, water supply, electricity and gas, doctor and dentist bills, hospitals, auto repair, public transport, membership clubs, theatres, museums, spending in restaurants and bars, bank and financial services fees, higher and secondary education, barber shops and nail salons, religious activities, air fares and hotel fees—none of which are being altered appreciably by recent high-tech innovation.

He’s right that some of these categories are in industries that are less prone to change in quantity, quality, or cost due to innovation, although it’s important to bear in mind with respect to electricity, medical care, and financial service fees that much of the apparent stagnation arises from regulatory institutions and the innovation-reducing (or stifling) effects of regulation, not from technological stagnation per se.

McAfee rebuts by elaborating on the slow unfolding of innovation’s effects in the past. He then offers some examples (including fracking, very familiar to KP readers!) to illustrate the demonstrable productivity impacts of technology. He doesn’t fully engage what I see as the Achilles heel of the stagnation-productivity argument — the extent to which small-scale, distributed effects on product differentiation, product quality, and transaction costs are not going to be reflected in aggregate economic statistics.

At the end, the readers find for McAfee. But in important ways the question is both pedantic and unanswerable. I think a better way of framing the question is to ask the comparative institutional question: what types of social institutions (culture, norms, law, statute, regulation) best facilitate thriving human creativity and the ability to turn innovation into new and different products and services, into transaction cost reductions that change organizational and industry structures, and lead to economic growth, even if it’s in ways that don’t show up in labor productivity statistics?

The ephemeral Schumpeterian monopoly

Lynne Kiesling

The Atlantic’s Derek Thompson parses Mary Meeker’s annual state of the Internet presentation, which includes some nifty and insightful analyses of data. Here’s my favorite:

[Chart from Meeker’s presentation: operating system market share over time]

Note that this is in percentage terms, so it doesn’t show the overall increase in the number and variety of digital devices used — the number of devices using Windows OS hasn’t necessarily declined, but the growth in the past five years of mobile devices using Apple and Android operating systems is truly striking in its effect on Windows’ overall market share.

The decade-long (1995-2005) Windows OS dominance and its subsequent decline are interesting to those of us who study the economic history of technology. To me it illustrates Schumpeter’s point about the ephemeral nature of monopoly and how innovation is the process that generates the new products and platforms that compete with existing ones.

Perennial gale of creative destruction indeed.

Nest and technology-service bundling

Lynne Kiesling

[Image: Nest “Rush Hour” alert notification]

Nest’s recent business developments are refreshing and promising. Building on the popularity of its elegant and easy-to-use learning thermostat in its first couple of years, Nest is introducing new Nest-enabled services to automate changes in settings and energy use in the home. Of the new services, called Rush Hour Rewards and Seasonal Savings, Nest claims:

Rush Hour Rewards could help you earn anywhere from $20-$60 this summer—it takes advantage of energy company incentives that pay you to use less energy when everyone else is using more. Seasonal Savings takes everything Nest has learned about you and automatically fine-tunes Nest’s schedule to save energy, without sacrificing comfort. Field trials have been impressive: Nest owners have used 5-10% less heating and cooling with Seasonal Savings and 80% said they’d keep their tuned-up schedules after Seasonal Savings ended.

The ever-incisive Katie Fehrenbacher calls the move a bundling of Nest’s “smart thermostat with data-driven services,” which sounds about right to me.

Behind these new services are the cloud-based big-data algorithms that are Nest’s secret sauce, and which Nest has now named Auto-Tune. Now that Nest has gotten hundreds of thousands of thermostats out in the market, and has done two years of field trials, it has been able to collect a large amount of data about how customers use, and react to, temperature and cooling changes. Nest uses this data about behavioral changes to inform its services and how its algorithms work.

She also remarks on something I noticed — in marketing its new services Nest assiduously avoids the phrase “demand response”, instead saying “New features save energy & make money. Automatically.” Once you get beyond the elegant interface, the thoughtful network and device connectivity, and the “secret sauce” algorithms, Rush Hour Rewards is little more than standard, administered, regulator-approved direct load control. But Nest’s elegance, marketing, and social-media-savvy outreach may make it more widespread and appealing than any number of regulator-approved bill inserts about AC cycling have managed over the decades.

In a very good Wired story on Nest Energy Services, Steven Levy draws an analogy between the technology-digital service bundle in energy and the one in music; quoting Nest CEO Tony Fadell, Levy notes that:

This pivot is in the best tradition of companies like Apple and even Amazon, whose hardware devices have evolved to become front ends for services like iTunes or Amazon Prime Instant Movies. Explaining how this model works in the thermostat world, Fadell compares power utilities to record labels. Just as Apple provided services to help customers link with the labels to get music, Nest is building digital services to help customers save money. Unlike the case with record labels, however, Nest isn’t eroding the utility business model, but fulfilling a long-term need–getting customers to change their behavior during periods of energy scarcity.

“Until now, if utilities wanted customers to change their behavior to use less electricity at those times, they instituted what was called unilateral demand response—they wouldn’t automate the process, they’d turn off the air-conditioning whenever they wanted. It was like DRM during the iPod days—where companies like Sony said, ‘I am the guardian, and I’m going to tell you what to do’.”

Fadell (and Levy and Fehrenbacher) articulates the value potential of technology-service bundles that automate energy consumption decisions in ways that save energy and money without reducing comfort. While the guts of Nest’s services are still direct load control and are not dynamic in any way that would make meaningful use of such a potentially transactive technology, I do think it’s a promising evolution beyond the monolithic, administrative, regulatory demand response approach.
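To make the distinction concrete, here is a minimal sketch of what a comfort-bounded response to a peak-period event could look like. It is not Nest’s Auto-Tune algorithm, which is proprietary; the event format, preference fields, and numbers are all hypothetical, chosen only to contrast a bounded, automated adjustment with unilateral load control that simply switches the compressor off.

```python
# Hypothetical sketch of a comfort-bounded response to a utility peak event.
# This is NOT Nest's Auto-Tune; the data structures and numbers are invented
# purely to illustrate the contrast with unilateral direct load control.

from dataclasses import dataclass


@dataclass
class RushHourEvent:
    start_hour: int            # e.g., 14 for 2 pm
    end_hour: int              # e.g., 18 for 6 pm
    requested_offset_f: float  # setback the utility asks for, in degrees F


@dataclass
class ComfortPreferences:
    normal_setpoint_f: float   # the household's usual cooling setpoint
    max_setpoint_f: float      # the warmest temperature the household accepts


def setpoint_for_hour(hour: int, event: RushHourEvent,
                      prefs: ComfortPreferences) -> float:
    """Return the cooling setpoint for a given hour of the day.

    Unilateral direct load control would simply shut the air conditioner off
    during the event window. Here the setback is applied automatically but is
    capped by the household's own comfort bound, so the response never
    overrides the customer's stated preferences.
    """
    if event.start_hour <= hour < event.end_hour:
        proposed = prefs.normal_setpoint_f + event.requested_offset_f
        return min(proposed, prefs.max_setpoint_f)
    return prefs.normal_setpoint_f


# Example: a 5-degree setback requested from 2 to 6 pm, capped at 78 F.
event = RushHourEvent(start_hour=14, end_hour=18, requested_offset_f=5.0)
prefs = ComfortPreferences(normal_setpoint_f=74.0, max_setpoint_f=78.0)
print(setpoint_for_hour(15, event, prefs))  # 78.0: capped at the comfort bound
print(setpoint_for_hour(20, event, prefs))  # 74.0: back to the normal setpoint
```

The particular rule is beside the point; what matters is that the customer’s preferences, not the utility’s switch, set the bounds on the automated response.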

The LIFX lightbulb: Bringing the Internet of things to electricity

Lynne Kiesling

The LIFX lightbulb is one of the most exciting things I’ve seen in a while, even in a period of substantial innovation affecting many areas of our lives. It’s a Kickstarter project, not coming from an established company like GE or Philips, not coming from within the electricity industry. Go watch the intro video, and then come back … you back? So how cool is that? Wifi enabled for automation and remote control from your smartphone. Automation of electricity consumption at the bulb level. You can set your nightstand bulb to dim and brighten according to your sleep cycle. It’s an LED bulb, so it can change colors, any combination in the Pantone scale, from your phone, anywhere. And, as an LED bulb, you get all of these automation and aesthetic features in a low-energy, low-carbon package.

This discussion of their project provides insight into the entrepreneurial future of consumer-facing energy technology — it’s not about the hardware, it’s about the software:

The LIFX app is one of our favorite aspects of the entire project, and we’ve spent countless hours thinking about how you can interact with your lights. We have mapped out a very smooth configuration UX from the app to the LIFX master bulb. In essence you place your LIFX smartbulb into a light socket, turn the switch on and then launch the app. You will be guided through a process of choosing your home network from a list and then entering your password. The LIFX master bulb will then auto configure itself to your router and all the slave bulbs will auto connect to the master. If you add more slave bulbs down the track  these will also auto connect.

Regarding security: LIFX will be as secure as your WiFi network. eg. without the WiFi network password you can’t control the smartbulbs.

We’re aware that while the hardware is the most visible and interesting part of this project our software is the soul.

This. This is the right thing to do, from both economic and environmental perspectives. And while I think Kevin Tofel at Greentech is right that there’s a network architecture issue here (separate control systems vs. a single server capturing and implementing your automation decisions throughout the house), a system like LIFX’s seems to me flexible enough to be incorporated into a whole-house energy management setup. And, given how enthusiastically consumers have adopted wireless mobile technologies, that seems a good place to start getting consumers comfortable with this degree of automation and functionality. Transactive capabilities and dynamic pricing are next! Unless our electricity network is transactive it’s not smart, and intelligent end-use devices (and the connectivity to network them for automation) create value for consumers from that intelligence.

Note also the implications of software like LIFX’s for having electricity enter the Internet of Things. As sensors and the connectivity among them become ubiquitous, we can automate our consumption decisions much more deeply, at a much more granular level (down to the bulb, here), in ways that do not inconvenience us. We can use the technology to make ourselves better off by automating our choices in response to variables we care about, which eventually will include variables like the retail price of electricity and the carbon content of the fuel used to generate it. The Internet of Things reflects Alfred North Whitehead’s observation that “civilization advances by extending the number of important operations which we can perform without thinking of them.”
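As a toy illustration of that kind of granular, per-device automation, one could imagine a rule along the lines of the sketch below. The price and carbon-intensity signals, thresholds, and brightness mapping are all invented for illustration; nothing here is part of LIFX’s actual software or API.

```python
# Hypothetical per-bulb automation rule responding to variables the user
# cares about. Signal sources, thresholds, and the brightness mapping are
# invented for illustration; they are not part of LIFX's actual software.


def choose_brightness(price_per_kwh: float, carbon_g_per_kwh: float,
                      price_threshold: float = 0.20,
                      carbon_threshold: float = 500.0) -> float:
    """Map current price and carbon-intensity signals to a brightness fraction.

    The bulb dims when electricity is expensive or carbon-intensive and runs
    at full brightness otherwise, with a floor so the room stays usable.
    """
    level = 1.0
    if price_per_kwh > price_threshold:
        level -= 0.3   # dim when the retail price is high
    if carbon_g_per_kwh > carbon_threshold:
        level -= 0.2   # dim further when generation is carbon-intensive
    return max(level, 0.2)


# Example: an expensive, carbon-heavy hour settles the bulb at 50% brightness.
print(choose_brightness(price_per_kwh=0.28, carbon_g_per_kwh=650.0))  # 0.5
```

The point is not this particular rule but that the decision runs automatically, device by device, against whatever variables the consumer chooses to care about.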

The Internet of Things enables mass customization and the ability of each individual to choose a bundle, a set of features, a price contract that they expect to bring them the most net benefits. This is a dramatic technological and cultural break from the century-long custom and regulatory practice of uniform products, uniform quality, uniform pricing as a matter of social policy. The public interest ethic of uniformity ties us to mediocrity, to the extent that it constrains what features and pricing people can bundle and consume with technologies like these.

Another Internet of Things implication here is that, with each bulb having a unique sensor and identifier, we will generate very detailed, granular data about how the connected, sensing devices operate. Such “big data” can help us use less energy, save money, do more with less, and do lots of other things I can’t imagine but that some other entrepreneur will imagine and bring to market, provided regulation doesn’t stifle it and there are clear stipulations of consumer privacy and property rights in their data.

You can also tell that this is an interesting topic when I am not the first economist to write about it! I love seeing my colleagues interested in electricity-related technologies. Mark Perry shares my enthusiasm about the application of human creativity to generate such a product. Josh Gans shares my enthusiasm for the networking, the interoperability, and the open architecture. And Felix Salmon offers a worthy note of caution about the ability of LIFX to deliver on its promised features and timeline, given the time delays experienced in other Kickstarter projects.

A fashion coda on wireless charging

Lynne Kiesling

As a coda to my previous post on wireless induction charging, there’s a Kickstarter out right now for Everpurse, a wireless-charging purse for the iPhone 4. The battery pack in the purse has to be charged from an AC adapter, but it will charge a phone as long as the pack itself holds a charge.

Is wireless charging finally going to take off?

Lynne Kiesling

Since the pioneering research of Nikola Tesla (have you contributed to his museum yet?) we’ve dreamed of wireless transmission of electricity, including wireless charging of devices. Tesla’s magnetic induction experiments gave us proof of concept more than a century ago, so where are the wireless chargers? We were promised wireless charging!

Jessica Leber at Technology Review suggests that it’s not a lack of supply but rather slow consumer adoption that explains why we don’t have ubiquitous wireless charging.

Why hasn’t cord-free charging—where a device gets charged when you place it on a charging surface—caught on? It’s not due to a shortage of products, nor from a shortage of companies that want to sell them. More than 125 businesses have joined the Wireless Power Consortium, formed in late 2008 to create a global charging standard. While the consortium hopes the technology will one day become as common as Bluetooth in most devices and, like Wi-Fi, available in many public spaces, wireless charging has been slow to take off.

I think a common, open standard, like we have for Bluetooth and USB, will reduce adoption hurdles, although she does discuss toward the end of the article the two competing standards currently in play. History tells us, though, from rail gauges to DVD formats, that convergence eventually occurs.

The article is full of valuable and interesting information about the technical hurdles of getting induction receivers into devices, investing in charging mats for public places, and so on, but it’s frustratingly oblique on the question in the quote above. Leber doesn’t offer any specific answers to why consumers have been slow to adopt wireless charging, although she implies two reasons: power requirements that don’t yet strain battery capacity, and the coordination-complementarity bottleneck between devices and charging pads. As more consumers bump up against the constraints of battery capacity, wireless charging will become more attractive. The article also mentions the potential inclusion of charging mats in cars, and automobile manufacturers’ investments in wireless charging companies.

Note here the interaction of three different innovation processes — device, battery, and charging pad. Devices are becoming more functional, requiring more power, draining batteries faster. Batteries have become more robust and longer-lived, but have been outpaced by the power demands of the increased functionality of mobile devices. Layer that on top of the innovation process in charging pads and you have quite a moving target, technologically and economically.