The political economy of Uber’s multi-dimensional creative destruction

Over the past week it’s been hard to keep up with the news about Uber. Uber’s creative destruction is rapid, and occurring on multiple dimensions in different places. And while the focus right now is on Uber’s disruption in the shared transportation market, I suspect that more disruption will arise in other markets too.

Start with two facts from this Wired article from last week by Marcus Wohlsen: Uber has just completed a funding round that raised an additional $1.2 billion, and last week it announced lower UberX fares in San Francisco, New York, and Chicago (the Chicago reduction was not mentioned in the article, but I am an Uber Chicago customer, so I received a notification of it). This second fact is interesting, especially once one digs in a little deeper:

With not just success but survival on the line, Uber has even more incentive to expand as rapidly as possible. If it gets big enough quickly enough, the political price could become too high for any elected official who tries to pull Uber to the curb.

Yesterday, Uber announced it was lowering UberX fares by 20 percent in New York City, claiming the cuts would make its cheapest service cheaper than a regular yellow taxi. That follows a 25 percent decrease in the San Francisco Bay Area announced last week, and a similar drop in Los Angeles UberX prices revealed earlier last month. The company says UberX drivers in California (though apparently not in New York) will still get paid their standard 80 percent portion of what the fare would have been before the discount. As Forbes’ Ellen Huet points out, the arrangement means a San Francisco ride that once cost $15 will now cost passengers $11.25, but the driver still gets paid $12.

So one thing they’re doing with their cash is essentially topping off payments to drivers while lowering prices to customers for the UberX service. Note that Uber is a multi-service firm, with rides at different quality/price combinations. I think Wohlsen’s Wired argument is right, and that they are pursuing a strategy of “grow the base quickly”, even if it means that the UberX prices are loss leaders for now (while their other service prices remain unchanged). In a recent (highly recommended!) EconTalk podcast, Russ Roberts and Mike Munger also make this point.
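To make the arithmetic of that top-off concrete, here is a minimal sketch, assuming only the numbers in the quoted passage (a 25 percent fare cut and an 80 percent driver share of the pre-discount fare); the function and its framing are mine, not Uber’s actual accounting.

```python
def uberx_fare_split(original_fare, discount, driver_share=0.80):
    """Illustrative arithmetic for the fare cuts described above; not Uber's actual accounting.

    The rider pays the discounted fare, while (per the Forbes example quoted earlier)
    the driver is still paid the standard share of the pre-discount fare, so Uber
    absorbs the difference on each ride.
    """
    rider_pays = original_fare * (1 - discount)
    driver_gets = original_fare * driver_share
    uber_margin = rider_pays - driver_gets  # negative => Uber is subsidizing the ride
    return rider_pays, driver_gets, uber_margin

# The San Francisco example from the excerpt: a $15 ride after a 25 percent cut.
rider, driver, margin = uberx_fare_split(15.00, discount=0.25)
print(f"rider pays ${rider:.2f}, driver gets ${driver:.2f}, Uber margin ${margin:+.2f}")
# rider pays $11.25, driver gets $12.00, Uber margin $-0.75
```

On those numbers each discounted ride runs at a small loss, which is what makes the “loss leader to grow the base” interpretation plausible.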

This “grow the base” strategy is common in tech industries, and we’ve seen it repeatedly over the past 15 years with Amazon and others. But, as Wohlsen notes, this strategy has an additional benefit of making regulatory inertia and status quo protection more costly. The more popular Uber becomes with more people, the harder it will be for existing taxi interests to succeed in shutting them down.

The ease, the transparency, the convenience, the lower transaction costs, the ability to see and submit driver ratings, the consumer’s own assessment of whether Uber’s reputation and driver certification provide enough expectation of safety — all of these are things that consumers can now evaluate for themselves, without a regulator substituting its judgment for theirs. The technology, the business model, and the reputation mechanism diminish the public safety justification for taxi regulation. Creative destruction and freedom to innovate are the core of improvements in living standards.

But the regulated taxi industry, having paid for medallions with the expectation of perpetual entry barriers, is seeing the value of the government-created entry barrier wither, and is lobbying to stem the losses in the value of those medallions. Note here the similarity between this situation and the one in the 1990s, when regulated electric utilities argued, largely successfully, that they should be compensated for “stranded costs” when they were required to divest their generation capacity at lower prices in anticipation of competitive wholesale markets. One consequence of regulation is the expectation of a right to a profitable business model, an expectation that flies in the face of economic growth and dynamic change.

Another move that I think represents a political compromise while giving Uber a PR opportunity was last week’s agreement with the New York Attorney General to cap “surge pricing” during citywide emergencies, a policy that Uber appears to be extending nationally. As Megan McArdle notes, this does indeed make economists sad, since Uber’s surge pricing is a wonderful example of how dynamic pricing induces more drivers to supply rides when demand is high, rather than leaving potential passengers with fewer taxis in the face of a fixed, regulated price.

Sadly, no one else loves surge pricing as much as economists do. Instead of getting all excited about the subtle, elegant machinery of price discovery, people get all outraged about “price gouging.” No matter how earnestly economists and their fellow travelers explain that this is irrational madness — that price gouging actually makes everyone better off by ensuring greater supply and allocating the supply to (approximately) those with the greatest demand — the rest of the country continues to view marking up generators after a hurricane, or similar maneuvers, as a pretty serious moral crime.
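The economic logic behind that sadness can be made concrete with a stylized model. The sketch below is not Uber’s algorithm; the supply and demand curves (and their elasticities) are invented purely to illustrate the argument that a higher multiplier calls out more drivers, while a capped price leaves a shortage.

```python
def drivers_available(multiplier, base_drivers=100):
    """Stylized supply response: more drivers log on as the surge multiplier rises."""
    return base_drivers * multiplier ** 0.7   # invented elasticity, illustration only

def riders_requesting(multiplier, base_demand=300):
    """Stylized demand response: some riders drop off as the price rises."""
    return base_demand * multiplier ** -0.5   # invented elasticity, illustration only

def clearing_multiplier(step=0.05):
    """Search for the multiplier at which driver supply roughly meets rider demand."""
    m = 1.0
    while drivers_available(m) < riders_requesting(m):
        m += step
    return m

m = clearing_multiplier()
print(f"surge ~{m:.2f}x clears the market: {drivers_available(m):.0f} drivers, "
      f"{riders_requesting(m):.0f} requests")
print(f"capped at 1.0x: {drivers_available(1.0):.0f} drivers, "
      f"{riders_requesting(1.0):.0f} requests (a shortage)")
```

In this toy market the uncapped multiplier roughly triples the number of drivers willing to work the emergency, which is the supply response a price cap forgoes.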

Back in April Mike wrote here about how likely this was to happen in NY, and in commenting on the agreement with the NY AG last week, Regulation editor Peter Van Doren gave a great shout-out to Mike’s lead article in the Spring 2011 issue on price gouging regulations and their ethical and welfare effects.

Even though the surge pricing cap during emergencies is economically harmful but politically predictable (in Megan’s words), I think the real effects of Uber will transcend the shared ride market. It’s a flexible piece of software — an app, a menu of contracts with drivers and riders, transparency, a reputation mechanism. Much as Amazon started by disrupting the retail book market and then expanded because of the flexibility of its software, I expect Uber to do something similar, in some form.

Building, and commercializing, a better nuclear reactor

A couple of years ago, I was transfixed by the research from Leslie Dewan and Mark Massie highlighted in their TEDx video on the future of nuclear power.

A recent IEEE Spectrum article highlights what Dewan and Massie have been up to since then, which is founding a startup called Transatomic Power in partnership with investor Russ Wilcox. The description of the reactor from the article indicates its potential benefits:

The design they came up with is a variant on the molten salt reactors first demonstrated in the 1950s. This type of reactor uses fuel dissolved in a liquid salt at a temperature of around 650 °C instead of the solid fuel rods found in today’s conventional reactors. Improving on the 1950s design, Dewan and Massie’s reactor could run on spent nuclear fuel, thus reducing the industry’s nuclear waste problem. What’s more, Dewan says, their reactor would be “walk-away safe,” a key selling point in a post-Fukushima world. “If you don’t have electric power, or if you don’t have any operators on site, the reactor will just coast to a stop, and the salt will freeze solid in the course of a few hours,” she says.

The article goes on to discuss raising funds for lab experiments and a subsequent demonstration project, and it ends on a skeptical note, with an indication that existing industrial nuclear manufacturers in the US and Europe are unlikely to be interested in commercializing such an advanced reactor technology. Perhaps the best prospects for such a technology are in Asia.

Another thing I found striking in reading this article, and that I find in general when reading about advanced nuclear reactor technology, is how dismissive some people are of such innovation — why not go for thorium, or why even bother with this when the “real” answer is to harness the sun’s nuclear fusion via solar power? Such criticisms of innovations like this are misguided, and show a misunderstanding of both the economics of innovation and the process of innovation itself. One of the clear benefits of this innovation is that it uses a known, proven reactor technology in a novel way and uses spent fuel rod waste as fuel. This incremental “killing two birds with one stone” approach may be an economical way to generate clean electricity, reduce waste, and fill a technology gap while more basic science research continues on other generation technologies.

Arguing that nuclear is a waste of time is the equivalent of a “swing for the fences” energy innovation strategy. Transatomic’s reactor represents a “get guys on base” energy innovation strategy. We certainly should do basic research and swing for the fences, but that’s no substitute for the incremental benefits of getting new technologies on base that create value in multiple energy and environmental dimensions.

Critiquing the theory of disruptive innovation

Jill Lepore, a professor of history at Harvard and writer for the New Yorker, has written a critique of Clayton Christensen’s theory of disruptive innovation that is worth thinking through. Christensen’s The Innovator’s Dilemma (the dilemma being that firms keep making the same decisions that made them successful, and those decisions eventually lead to their downfall) has been incredibly influential since its 1997 publication, and has moved the concept of disruptive innovation from its arcane Schumpeterian origins into modern business practice in a fast-changing technological environment. “Disrupt or be disrupted” and “innovate or die” have become corporate strategy maxims under the theory of disruptive innovation.

Lepore’s critique highlights the weaknesses of Christensen’s model (and it does have weaknesses, despite its success and prevalence in business culture). His historical analysis, his case-study methodology, and the decisions he made regarding cutoff points in time all provide unsatisfyingly unsystematic support for his model, yet he argues that the theory of disruptive innovation is predictive and can be used with foresight to identify how firms can avoid failure. Lepore’s critique here is apt and worth considering.

Josh Gans weighs in on the Lepore article, and the theory of disruptive innovation more generally, by noting that at the core of the theory of disruptive innovation lies a new technology, and the appeal of that technology (or what it enables) to consumers:

But for every theory that reaches too far, there is a nugget of truth lurking at the centre. For Christensen, it was always clearer when we broke it down to its constituent parts as an economic theorist might (by the way, Christensen doesn’t like us economists but that is another matter). At the heart of the theory is a type of technology — a disruptive technology. In my mind, this is a technology that satisfies two criteria. First, it initially performs worse than existing technologies on precisely the dimensions that set the leading, for want of a better word, ‘metrics’ of the industry. So for disk drives, it might be capacity or performance even as new entrants promoted lower energy drives that were useful for laptops.

But that isn’t enough. You can’t actually ‘disrupt’ an industry with a technology that most consumers don’t like. There are many of those. To distinguish a disruptive technology from a mere bad idea or dead-end, you need a second criteria — the technology has a fast path of improvement on precisely those metrics the industry currently values. So your low powered drives get better performance and capacity. It is only then that the incumbents say ‘uh oh’ and are facing disruption that may be too late to deal with.

Herein lies the contradiction that Christensen has always faced. It is easy to tell if a technology is ‘potentially disruptive’ as it only has to satisfy criteria 1 — that it performs well on one thing but not on the ‘standard’ stuff. However, that is all you have to go on to make a prediction. Because the second criteria will only be determined in the future. And what is more, there has to be uncertainty over that prediction.

Josh has hit upon one of the most important dilemmas in innovation — if the new technology is likely to succeed against the old, it must satisfy the established value propositions of the incumbent technology as well as improve upon them in speed, quality, or differentiation. And that is inherently unknown in advance; the incumbent can innovate too soon and suffer losses, or innovate too late and suffer losses. At this level, the theory does not help us identify the factors that link innovation to the continued success of the firm.

Both Lepore and Gans highlight Christensen’s desire for his theory to be predictive when it cannot be. Lepore summarizes the circularity that indicates this lack of a predictive hypothesis:

If an established company doesn’t disrupt, it will fail, and if it fails it must be because it didn’t disrupt. When a startup fails, that’s a success, since epidemic failure is a hallmark of disruptive innovation. … When an established company succeeds, that’s only because it hasn’t yet failed. And, when any of these things happen, all of them are only further evidence of disruption.

What Lepore brings to the party, in addition to a sharp mind and good analytical writing, is her background and sensibilities as an historian. A historical perspective on innovation helps balance some of the breathless enthusiasm for novelty often found in technology or business strategy writing. Her essay includes a discussion of how the concept of “innovation” has changed over several centuries (it was a largely negative term pre-Schumpeter), and of how the Enlightenment’s theory of history as human progress has since morphed into different theories of history:

The eighteenth century embraced the idea of progress; the nineteenth century had evolution; the twentieth century had growth and then innovation. Our era has disruption, which, despite its futurism, is atavistic. It’s a theory of history founded on a profound anxiety about financial collapse, an apocalyptic fear of global devastation, and shaky evidence. …

The idea of innovation is the idea of progress stripped of the aspirations of the Enlightenment, scrubbed clean of the horrors of the twentieth century, and relieved of its critics. Disruptive innovation goes further, holding out the hope of salvation against the very damnation it describes: disrupt, and you will be saved.

I think there’s a lot to her interpretation (and I say that wearing both my historian hat and my technologist hat). But I think that both the Lepore and Gans critiques, and indeed Christensen’s theory of disruptive innovation itself, would benefit from (for lack of a catchier name) a Smithian-Austrian perspective on creativity, uncertainty, and innovation.

The Lepore and Gans critiques indicate, correctly, that supporting the disruptive innovation theory requires hindsight and historical analysis because we have to observe realized outcomes to identify the relationship between innovation and the success/failure of the firm. That concept of an unknown future rests mostly in the category of risk — if we identify that past relationship, we can generate a probability distribution or a Bayesian prior for the factors likely to lead to innovation yielding success.
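To see what that risk framing amounts to in miniature: if the historical relationship were stable, one could treat disruptive entries as draws from a known process and simply update a prior as outcomes are observed. The sketch below does that with a Beta-Binomial update; the case counts are invented for illustration.

```python
# A minimal Beta-Binomial update, assuming (contrary to the argument that follows) that
# "disruptive entry leads to firm success" is a stable, repeatable process.
def update_beta(prior_a, prior_b, successes, failures):
    """Update a Beta(prior_a, prior_b) prior with observed successes and failures."""
    return prior_a + successes, prior_b + failures

a, b = 1, 1                                           # flat prior on the success probability
a, b = update_beta(a, b, successes=7, failures=13)    # invented tally of historical case studies
print(f"posterior mean P(success | disruptive entry) = {a / (a + b):.2f}")   # 0.36
# That is risk: a known distribution refined by data.
```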

But the genesis of innovation is in uncertainty, not risk; if truly disruptive, innovation may break those historical relationships (pace the Gans observation about having to satisfy the incumbent value propositions). And we won’t know if that’s the case until after the innovators have unleashed the process. Some aspects of what leads to success or failure will indeed be unknowable. My epistemic/knowledge problem take on the innovator’s dilemma is that both risk and uncertainty are at play in the dynamics of innovation, and they are hard to disentangle, both epistemologically and as a matter of strategy. Successful innovation will arise from combining awareness of profit opportunities and taking action along with the disruption (the Schumpeter-Knight-Kirzner synthesis).

The genesis of innovation is also in our innate human creativity, and our channeling of that creativity into this thing we call innovation. I’d go back to the 18th century (and that Enlightenment notion of progress) and invoke both Adam Smith and David Hume to argue that innovation as an expression of human creativity is a natural consequence of our individual striving to make ourselves better off. Good market institutions using the signals of prices, profits, and losses align that individual striving with an incentive for creators to create goods and services that will benefit others, as indicated by their willingness to buy them rather than do other things with their resources.

By this model, we are inherent innovators, and successful innovation involves the combination of awareness, action, and disruption in the face of epistemic reality. Identifying that combination ex ante may be impossible. This is not a strategy model of why firms fail, but it does suggest that such strategy models should consider more than just disruption when trying to understand (or dare I say predict) future success or failure.

Permissionless innovation in electricity: the benefits of experimentation

Last Monday I was scheduled to participate in the Utility Industry of the Future Symposium at the NYU Law School. Risk aversion about getting back for Tuesday classes in the face of a forecast 7″ snowfall in New York kept me from attending (and the snow never materialized, which makes the cost even more bitter!), so I missed out on the great talks and panels. But I’ve edited my remarks into the essay below, with helpful comments and critical readings from Mark Silberg and Jim Speta. Happy thinking!

If you look through the lens of an economist, especially an economic historian, the modern world looks marvelous – innovation enables us to live very different lives than even 20 years ago, lives that are richer in experience and value in many ways. We are surrounded by dynamism, by the change arising from creativity, experimentation, and new ideas. The benefits of such dynamism are cumulative and compound upon each other. Economic history teaches us that well-being emerges from the compounding of incremental changes over time, until two decades later you look at your old, say, computer and you wonder that you ever accomplished anything that way at all.

The digital technology that allows us to flourish in unanticipated ways, large and small, is an expression of human creativity in an environment in which experimentation is rife and entry barriers are low. That combination of experimentation and low entry barriers is what has made the Internet such a rich, interesting, useful platform for us to use to make ourselves better off, in the different ways and meanings we each have.

And yet, very little (if any) of this dynamism has originated in the electricity industry, and little of it has affected how most people transact in and engage with electricity. Digital technologies now exist that consumers could use to observe and manage their electricity consumption in a more timely way than after the fact, at the end of the month, and to transact for services they value – different pricing, different fuel sources, and automated adjustment of their consumption in response to changes in those. From the service convergence in telecom (“triple play”) we have experimented with and learned the value of bundling. Bundles of retail electricity service with home entertainment, home security, and so on are services that companies like ADT and Verizon are exploring, but they have been extremely slow to develop and have not yet been commercialized, due to the combination of regulatory entry barriers that restrict producers and reinforce customer inertia. All of these technologies, pricing structures, and bundles are examples of stalled, foregone innovation in this space.

Although we do not observe it directly, the cost of foregone innovation is high. Today residential consumers still generally have low-cost, plain-vanilla commodity electricity service, with untapped potential to create new value beyond basic service. Producers earn guaranteed, regulation-constrained profits by providing these services, and the persistence of regulated “default service contracts” in nominally competitive states is an entry barrier facing producers that might otherwise experiment with new services, pricing, and bundles. If producers don’t experiment, consumers can’t experiment, and thus both parties suffer the cost of foregone innovation – consumers lose the opportunity to choose services they may value more, and producers lose the opportunity to profit by providing them. By (imperfect) analogy, think about what your life would be like if Apple had not been allowed to set up retail stores that enable consumers to engage in learning while shopping. It would be poorer (and that’s true even if you don’t own any Apple devices, because the experimentation, learning, and low entry barriers benefit you too by encouraging new products and entry).

This process of producer and consumer experimentation and learning is the essence of how we create value through exchange and market processes. What Internet pioneer Vint Cerf calls permissionless innovation, what writer Matt Ridley calls ideas having sex — these are the processes by which we humans create, strive, learn, adapt, and thrive.

But regulation is a permission-based system, and regulation slows or stifles innovation in electricity by cutting off this permissionless innovation. Legal entry barriers, the bureaucratic procedures for cost recovery, the risk aversion of both regulator and regulated, all undermine precisely the processes that enable innovation to yield consumer benefits and producer profits. In this way regulation that dictates business models and entry barriers discourages activities that benefit society, that are in the public interest.

The question of public interest is of course central to any analysis of electricity regulation’s effects. Our current model of utility regulation has been built on the late 19th century idea that cost-based regulation and restricting entry would make reliable electric service ubiquitous and as cheap as is feasible. Up through the 1960s, while utilities were exploiting the economies of scale and scope in conventional mechanical technologies, that concept of the public interest was generally beneficial. But in the process, utility regulation entrenched “iron in the ground” technologies in the bureaucratic process. It also entrenched an attitude and a culture of prudential preference for those conventional technologies on the part of both regulator and regulated.

This entrenchment becomes a problem because the substance of what constitutes the public interest is not static. It has changed since the late 19th century, as has so much in our lives, and it has changed to incorporate the dimension of environmental quality as we have learned of the environmental effects of fossil fuel consumption. But the concept of the public interest fossilized in regulatory rules, one of central generation and low prices, does not reflect that change. I argue that the “Rube Goldberg machine” accretion of renewable portfolio standards, tax credits, and energy efficiency mandates layered onto regulated utilities reflects just how poorly suited the traditional regulated environment is to adapting to the largely unforeseeable changes arising from the combination of dynamic economic and environmental considerations. Traditional regulation is not flexible enough to be adaptive.

The other entrenchment that we observe with regulation is the entrenchment of interests. Even if regulation was initiated as a mechanism for protecting consumer interests, in the administrative and legal process it creates entrenched interests in maintaining the legal and technological status quo. What we learn from public choice theory, and what we observe in regulated industries including electricity, is that regulation becomes industry-protecting regulation. Industry-protecting regulation cultivates constituency interests, and those constituency interests generally prefer to thwart innovation and retain entry barriers to restrict interconnection and third-party and consumer experimentation. This political economy dynamic contributes to the stifling of innovation.

As I’ve been thinking through this aloud with you, you’ve probably been thinking “but what about reliability and permissionless innovation – doesn’t the physical nature of our interconnected network necessitate permission to innovate?” In the centralized electro-mechanical T&D network, that is more true, and in such an environment regulation provides stability of investments and returns. But again we see the cost of foregone innovation staring us in the face. Digital switches, open interconnection and interoperability standards (ones that haven’t been compromised by the NSA), and more economical small-scale generation are innovations that make high reliability in a resilient distributed system more possible (for example, a “system of systems” of microgrids, rooftop solar, and EVs). Those are the types of conditions that hold in the Internet – digital switches, traffic rules, TCP/IP and other open data protocols – and as long as innovators abide by those physical rules, they can enter, enabling experimentation, trial and error, and learning.

Thus I conclude that for electricity policy to focus on facilitating what is socially beneficial, it should focus on clear, transparent, and just physical rules for the operation of the grid, on reducing entry barriers that prevent producer and consumer experimentation and learning, and on enabling a legal and technological environment in which consumers can use competition and technology to protect themselves.

Interpreting Google’s purchase of Nest

Were you surprised to hear of Google’s acquisition of Nest? Probably not; nor was I. Google has long been interested in energy monitoring technologies and the effect that access to energy information can have on individual consumption decisions. In 2009 it introduced PowerMeter, an energy monitoring and visualization tool; I wrote about it a few times, including it on my list of devices for creating intelligence at the edge of the electric power network. Google discontinued it in 2011 (and I think Martin LaMonica is right that its demise showed the difficulty of competition and innovation in residential retail electricity), but it pointed the way toward transactive energy and what we have come to know as the Internet of things.

In his usual trenchant manner, Alexis Madrigal at the Atlantic gets at what I think is the real value opportunity that Google sees in Nest: automation and machine-to-machine communication to carry out our desires. He couches it in terms of robotics:

Nest always thought of itself as a robotics company; the robot is just hidden inside this sleek Appleish case.

Look at who the company brought in as its VP of technology: Yoky Matsuoka, a roboticist and artificial intelligence expert from the University of Washington.

In an interview I did with her in 2012, Matsuoka explained why that made sense. She saw Nest positioned right in a place where it could help machine and human intelligence work together: “The intersection of neuroscience and robotics is about how the human brain learns to do things and how machine learning comes in to augment that.”

I agree that it is an acquisition to expand their capabilities to do distributed sensing and automation. Thus far Nest’s concept of sensing has been behavioral — when do you use your space and how do you use it — and not transactive. Perhaps that can be a next step.
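To illustrate the distinction with a toy example (the functions and numbers below are hypothetical, not Nest’s API): the first rule is the occupancy-driven, behavioral automation a Nest-style thermostat already performs, and the second adds a price-responsive layer of the kind a transactive step would enable.

```python
def behavioral_setpoint(occupied, home_temp_f=70, away_temp_f=62):
    """Occupancy-based automation: the 'behavioral' sensing these devices do today."""
    return home_temp_f if occupied else away_temp_f

def transactive_setpoint(occupied, price_per_kwh, price_threshold=0.20, setback_f=3):
    """Hypothetical transactive rule: ease off the setpoint when the real-time price spikes."""
    base = behavioral_setpoint(occupied)
    if occupied and price_per_kwh > price_threshold:
        return base - setback_f
    return base

print(transactive_setpoint(occupied=True, price_per_kwh=0.12))   # 70: normal comfort setting
print(transactive_setpoint(occupied=True, price_per_kwh=0.35))   # 67: responds to a price spike
print(transactive_setpoint(occupied=False, price_per_kwh=0.35))  # 62: away setback either way
```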

The Economist also writes this week about the acquisition, and compares Google’s acquisitions and evolution to GE’s in the 20th century. The Economist article touches on the three most important aspects of this acquisition: the robotics that Alexis analyzed, the data generated and accessible to Google for advertising purposes, and the design talent at Nest to contribute to the growing interest in the Internet-of-things technologies that make the connected home increasingly feasible and attractive to consumers (and that some of us have been waiting, and waiting, and waiting to see develop):

Packed with sensors and software that can, say, detect that the house is empty and turn down the heating, Nest’s connected thermostats generate plenty of data, which the firm captures. Tony Fadell, Nest’s boss, has often talked about how Nest is well-positioned to profit from “the internet of things”—a world in which all kinds of devices use a combination of software, sensors and wireless connectivity to talk to their owners and one another.

Other big technology firms are also joining the battle to dominate the connected home. This month Samsung announced a new smart-home computing platform that will let people control washing machines, televisions and other devices it makes from a single app. Microsoft, Apple and Amazon were also tipped to take a lead there, but Google was until now seen as something of a laggard. “I don’t think Google realised how fast the internet of things would develop,” says Tim Bajarin of Creative Strategies, a consultancy.

Buying Nest will allow it to leapfrog much of the opposition. It also brings Google some stellar talent. Mr Fadell, who led the team that created the iPod while at Apple, has a knack for breathing new life into stale products. His skills and those of fellow Apple alumni at Nest could be helpful in other Google hardware businesses, such as Motorola Mobility.

Are we finally about to enter a period of energy consumption automation and transactive energy? This acquisition is a step in that direction.

Adapting to technological change: solar power and fire

Here’s an important tradeoff I never really considered until reading this article: rooftop solar panels can be hazardous for firefighters. It’s an interesting example of how wide and varied the adaptations to innovation are. In this case the potential for lethal electrocution through the traditional means of venting a roof on a burning building (cutting holes in the roof with an axe) has meant that both firefighters and the solar industry have had to think about fire risk and how solar installations change firefighting and the expected cost of a fire. I wonder how many benefit-cost analyses of solar take into account the higher expected cost of a fire, and the logically associated higher fire insurance premium.
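As a back-of-the-envelope illustration of that last question, a benefit-cost comparison would net the annual energy savings against the increase in expected fire losses and the insurance premium that capitalizes them; every number below is an invented placeholder, not an estimate.

```python
def solar_net_annual_benefit(energy_savings, fire_probability,
                             extra_loss_if_fire, extra_insurance_premium):
    """Toy benefit-cost arithmetic: net the fire-related expected costs out of the savings.

    All inputs are annual figures; the values used below are placeholders, not estimates.
    """
    expected_fire_cost = fire_probability * extra_loss_if_fire
    return energy_savings - expected_fire_cost - extra_insurance_premium

# Hypothetical household: $900/yr in savings, a 0.3% annual fire probability,
# $20,000 of additional loss if a fire does occur, and a $50/yr premium increase.
print(solar_net_annual_benefit(900, 0.003, 20_000, 50))   # 790.0
```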

Joel Mokyr on growth, stagnation, and technological progress

My friend and colleague Joel Mokyr talked recently with Russ Roberts in an EconTalk podcast that I cannot recommend highly enough (and the links on the show notes are great too). The general topic is this back-and-forth that’s been going on over the past year involving Joel, Bob Gordon, Tyler Cowen, and Erik Brynjolfsson, among others, regarding diminishing returns to technological change and whether we’ve reached “the end of innovation”. Joel summarizes his argument in this Vox EU essay.

Joel is an optimist, and does not believe that technological dynamism is running out of steam (to make a 19th-century joke …). He argues that technological change and its ensuing economic growth are punctuated, and one reason for that is that conceptual breakthroughs are essential but unforeseeable. Economic growth also occurs because of the perpetual nature of innovation — the fact that others are innovating (here he uses country-level examples) means that everyone has to innovate as a form of running to stand still. I agree, and I think that as long as the human mind, human creativity, and human striving to achieve and accomplish exist, there will be technological dynamism. A separate question is whether the institutional framework in which we interact in society is conducive to technological dynamism and to channeling our creativity and striving into such constructive application.

“If your toilet’s so smart, how come I can hack it?”

Thus reads the headline on David Meyer’s Gigaom post on news that the Satis toilet, manufactured by the Japanese firm Lixil, comes with a smartphone app that can be used to control any Satis toilet (see also this BBC news article). You may wonder why a toilet needs an app, which is a valid question; this one allows recording of one’s activity (if you so choose …), remote flushing, remote air freshener spray, and remote bidet operation. Subjective utility being what it is, I’ll treat Lixil as an entrepreneur responding to what it perceives as an undersatisfied preference in the market; the extent of its subsequent profits will indicate whether it was right.

Although the story is scatologically humorous, Meyer’s closing observation hits upon exactly the same point I made recently in my post about the hackability of home management systems:

Of course, it’s not like someone will be exploiting this vulnerability to prank someone a continent away — Bluetooth is a pretty short-range wireless technology. However, it’s the kind of thing that should be borne in mind by manufacturers who are starting to jazz up previously low-tech appliances with new-fangled connectivity.

Because when it comes to security, as Trustwave SpiderLabs and others have warned, the home is the last place you want to be caught with your pants down.

NSA surveillance imperils the Internet as an economic platform

Today’s new revelations from Edward Snowden’s whistleblowing show that the NSA can, and does, use a program that surveils our Internet behavior in a general, blanket way (much in the nature of the “general warrants” that were the whole reason the authors of the Bill of Rights put the Fourth Amendment in there in the first place!).

Make no mistake: this deep and broad US government surveillance diminishes trust not just in the federal government (as if there is any general trust in the federal government any more), but also in Internet companies — communications companies, ISPs, Apple, Google, Yahoo, Amazon, and so on. The economic implications of the deep and broad US government surveillance are profound. How much economic activity on the Internet will leave those companies? Will government surveillance be able to access substitutes for these companies in other countries, if substitutes come into being? Isn’t this going to cause the commercial Internet to shrink?

The federal government may not have intended to stifle the role of the Internet as an economic value-creating commercial platform, but that consequence is almost certain.

UPDATE, 1 August, 3:19 CDT: Welcome readers from reddit, and thanks for the link! Since some commenters wanted more original analysis of this issue than I intended to provide here, I’ve written a follow-up post that provides a deeper evaluation of the potential effects on the Internet as a commercial platform.

Joel Mokyr: Technopessimism is bunk

My department is currently a focal point in the debates over the future of innovation and economic growth. Technopessimist arguments from my colleague Bob Gordon (as profiled in this New York Magazine article from the weekend) join those in Tyler Cowen’s The Great Stagnation to suggest that the increase in living standards and the growth rates experienced over the past 200 years may be anomalous and not repeatable.

In the PBS Newshour Business Desk, my colleague (and former dissertation adviser) Joel Mokyr offers a different, more optimistic perspective. Joel emphasizes the dynamic aspects of new idea generation and the ensuing technological change and its effects on people and societies. Technology is never static, humans and our curiosity and our efforts to strive are never static, and that means that there’s not likely to be an “end of innovation” along the lines of an “end of history”:

Technology has not finished its work; it has barely started. Some lessons from history may show why. For one thing, technological progress has an unusual dynamic: it solves problems, but in doing so it, more often than not, creates new ones as unintended side-effects of the previous breakthroughs, and these in turn have to be solved, and so on. …

As we know more, we can push back against the pushback. And so on. The history of technology is to a large extent the history of unintended consequences. …

What will a future generation think of our technological efforts? During the Middle Ages, nobody knew they were living in the Middle Ages (the term emerged centuries later), and they would have resented a notion that it was an age of unbridled barbarism (it was not). During the early stages of the Industrial Revolution in the 18th century, few had a notion that a new technological dawn was breaking. So it is hard for someone alive today to imagine what future generations will make of our age. But to judge from progress in the past decades, it seems that the Digital Age may become to the Analog Age what the Iron Age was to the Stone Age. It will not last as long, and there is no way of knowing what will come after. But experience suggests that the metaphor of low-hanging fruit is misleading. Technology creates taller and taller ladders, and the higher-hanging fruits are within reach and may be just as juicy.

None of this is guaranteed. Lots of things can go wrong. Human history is always the result of a combination of deep impersonal forces, accidents and contingencies. Unintended consequences, stupidity, fear and selfishness often get in the way of making life better for more and more people. Technology alone cannot provide material progress; it’s just that without it, all the other ways of economic progress soon tend to fizzle out. Technological progress is perhaps not the cure-all for all human ills, but it beats the alternative.

Joel’s essay is well worth reading in its entirety. His argument highlights the decentralized, curiosity-driven process of technological change that does not proceed linearly, but is impossible to quash. These processes contribute to economic well-being in societies with good institutional and cultural contexts that facilitate and reward innovation when it generates value for others.