Online Library of Liberty forum on McCloskey’s Bourgeois Era

At its Online Library of Liberty, Liberty Fund hosts a monthly “Liberty Matters” forum in which a group of scholars discusses a particular set of ideas. This month’s forum features Deirdre McCloskey’s Bourgeois Era series of books, two of which have been published (The Bourgeois Virtues, Bourgeois Dignity). McCloskey’s main argument is that the various material and institutional factors we’ve hypothesized as the causes of industrialization and the dramatic increase in living standards are insufficient to explain why it happened when, where, and how it did — in northern Europe, particularly Britain and the Netherlands, accelerating in the 18th century from earlier foundations there. The most important factor, according to McCloskey, was ideas, particularly the cultural acceptance of commerce, trade, and mercantile activity as honorable.

The forum features a lead essay from Don Boudreaux, commentary essays from Joel Mokyr and John Nye, and responses from McCloskey and the other authors. The forum will continue for the rest of the month, with further commentary certain to follow.

If you want an opportunity to think about one of the most important intellectual questions of economics, here it is. The essays, responses, and interactions are an encapsulation of a lively and important debate in economic history over the past two decades. And if you want to dig more deeply, the bibliography and the references in each essay are a reading list for a solid course in economic history. These ideas affect not only our understanding of economic history and the history of industrialization, but also how ideas and attitudes affect economic activity and living standards today. Well worth your time and consideration.

The political economy of Uber’s multi-dimensional creative destruction

Over the past week it’s been hard to keep up with the news about Uber. Uber’s creative destruction is rapid, and occurring on multiple dimensions in different places. And while the focus right now is on Uber’s disruption in the shared transportation market, I suspect that more disruption will arise in other markets too.

Start with two facts from this Wired article from last week by Marcus Wohlsen: Uber has just completed a funding round that raised an additional $1.2 billion, and last week it announced lower UberX fares in San Francisco, New York, and Chicago (the Chicago reduction was not mentioned in the article, but I am an Uber Chicago customer, so I received a notification of it). This second fact is interesting, especially once one digs in a little deeper:

With not just success but survival on the line, Uber has even more incentive to expand as rapidly as possible. If it gets big enough quickly enough, the political price could become too high for any elected official who tries to pull Uber to the curb.

Yesterday, Uber announced it was lowering UberX fares by 20 percent in New York City, claiming the cuts would make its cheapest service cheaper than a regular yellow taxi. That follows a 25 percent decrease in the San Francisco Bay Area announced last week, and a similar drop in Los Angeles UberX prices revealed earlier last month. The company says UberX drivers in California (though apparently not in New York) will still get paid their standard 80 percent portion of what the fare would have been before the discount. As Forbes’ Ellen Huet points out, the arrangement means a San Francisco ride that once cost $15 will now cost passengers $11.25, but the driver still gets paid $12.

So one thing they’re doing with their cash is essentially topping off payments to drivers while lowering prices to customers for the UberX service. Note that Uber is a multi-service firm, with rides at different quality/price combinations. I think Wohlsen’s Wired argument is right, and that they are pursuing a strategy of “grow the base quickly”, even if it means that the UberX prices are loss leaders for now (while their other service prices remain unchanged). In a recent (highly recommended!) EconTalk podcast, Russ Roberts and Mike Munger also make this point.
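To make the top-off arithmetic concrete, here is a minimal sketch using the figures from the Forbes example quoted above (the $15 fare, the 25 percent cut, and the 80 percent driver share come from the quote; everything else is illustration, not Uber’s actual accounting):

    # Fare-cut arithmetic from the quoted San Francisco example.
    base_fare = 15.00       # what the ride cost before the UberX price cut
    fare_cut = 0.25         # 25 percent reduction announced for San Francisco
    driver_share = 0.80     # drivers keep 80 percent of the pre-cut fare

    rider_pays = base_fare * (1 - fare_cut)    # $11.25
    driver_gets = base_fare * driver_share     # $12.00
    uber_net = rider_pays - driver_gets        # -$0.75 per ride

    print(f"Rider pays ${rider_pays:.2f}, driver receives ${driver_gets:.2f}, "
          f"Uber nets ${uber_net:.2f} per ride")

Before the cut, Uber’s 20 percent commission on a $15 ride was $3; after the cut it pays out 75 cents per ride, which is the sense in which the discounted UberX fares are loss leaders funded out of that $1.2 billion round.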

This “grow the base” strategy is common in tech industries, and we’ve seen it repeatedly over the past 15 years with Amazon and others. But, as Wohlsen notes, this strategy has an additional benefit of making regulatory inertia and status quo protection more costly. The more popular Uber becomes with more people, the harder it will be for existing taxi interests to succeed in shutting them down.

The ease, the transparency, the convenience, the lower transaction costs, the ability to see and submit driver ratings, the consumer’s own assessment of whether Uber’s reputation and driver certification provide enough expectation of safety — all of these are things that consumers can now evaluate for themselves, without a regulator substituting its judgment for theirs. The technology, the business model, and the reputation mechanism diminish the public safety justification for taxi regulation. Creative destruction and freedom to innovate are the core of improvements in living standards.

But the regulated taxi industry, having paid for medallions with the expectation of perpetual entry barriers, is watching the value of that government-created entry barrier wither, and is lobbying to stem the losses in the value of its medallions. Note the similarity between this situation and the one in the 1990s, when regulated electric utilities argued, largely successfully, that they should be compensated for “stranded costs” when they were required to divest their generation capacity at lower prices in anticipation of competitive wholesale markets. One consequence of regulation is the expectation of a right to a profitable business model, an expectation that flies in the face of economic growth and dynamic change.

Another move that I think represents a political compromise while giving Uber a PR opportunity was last week’s agreement with the New York Attorney General to cap “surge pricing” during citywide emergencies, a policy that Uber appears to be extending nationally. As Megan McArdle notes, this does indeed make economists sad, since Uber’s surge pricing is a wonderful example of how dynamic pricing induces more drivers to supply rides when demand is high, rather than leaving potential passengers with fewer taxis in the face of a fixed, regulated price.

Sadly, no one else loves surge pricing as much as economists do. Instead of getting all excited about the subtle, elegant machinery of price discovery, people get all outraged about “price gouging.” No matter how earnestly economists and their fellow travelers explain that this is irrational madness — that price gouging actually makes everyone better off by ensuring greater supply and allocating the supply to (approximately) those with the greatest demand — the rest of the country continues to view marking up generators after a hurricane, or similar maneuvers, as a pretty serious moral crime.
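To see why economists get excited about that “subtle, elegant machinery,” here is a toy model with made-up linear supply and demand schedules (the numbers are purely illustrative assumptions, not estimates of any real market):

    # Toy model: surge pricing vs. a capped fare during a demand spike.
    # The schedules below are invented for illustration only.

    def drivers_available(price):
        # hypothetical supply curve: higher fares pull more drivers onto the road
        return 40 * price - 200

    def riders_requesting(price):
        # hypothetical demand on a stormy, high-demand night
        return 2000 - 60 * price

    # Market-clearing ("surge") price: 40p - 200 = 2000 - 60p  =>  p = 22
    surge_price = 22.0
    print(f"Surge price ${surge_price:.2f} clears the market at "
          f"{drivers_available(surge_price):.0f} rides")

    # Hold the fare at the everyday $10 instead and a shortage appears
    capped_price = 10.0
    shortage = riders_requesting(capped_price) - drivers_available(capped_price)
    print(f"At a capped ${capped_price:.2f} fare, {shortage:.0f} would-be riders go unserved")

The cap does not create any additional drivers; it simply converts the high price into a queue of people who cannot find a ride, which is exactly the outcome a fixed, regulated taxi fare produces.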

Back in April Mike wrote here about how likely this was to happen in NY, and in commenting on the agreement with the NY AG last week, Regulation editor Peter Van Doren gave a great shout-out to Mike’s lead article in the Spring 2011 issue on price gouging regulations and their ethical and welfare effects.

The surge pricing cap during emergencies may be, in Megan’s words, economically harmful but politically predictable; even so, I think the real effects of Uber will transcend the shared-ride market. It’s a flexible piece of software — an app, a menu of contracts with drivers and riders, transparency, a reputation mechanism. Much as Amazon started by disrupting the retail book market and then expanded because of the flexibility of its software, I expect Uber to do something similar, in some form.

Pauline Maier on colonial radicalism

With Independence Day upon us, my bedtime reading for the past couple of weeks has become timely. Pauline Maier, the MIT historian who unfortunately passed away last year, published From Resistance to Revolution in 1972. It’s a carefully researched and well-written account, weaving together reports from contemporaneous sources, of the increasing radicalization of American colonists from 1765 to 1776. How did the beliefs of so many colonists evolve from being loyal British subjects to supporting revolution and independence from Britain — why this radicalization?

Maier’s ultimate conclusion attributes the radicalization to overreach and misinterpretation on the part of the British government, which is consistent with the “generally received” historical narrative. But what I have found most interesting and novel in her argument is Chapter 2: An Ideology of Resistance and Restraint. Maier grounds the intellectual origins of revolution in the 17th- and 18th-century English revolutionary writers — John Locke is the best known among Americans, but also John Milton and Algernon Sidney (see here my summary of Sidney on illegitimate political power) and Francis Hutcheson. She describes a category of political belief called the “Real Whigs”, and argues that the Real Whig beliefs in both the people as the ultimate source of legitimate political power and the value of social order meant that the colonists were inclined to resist the illegitimate exercise of authority, but not to jump quickly to a radical revolutionary position. For example:

Spokesmen for this English revolutionary tradition were distinguished in the eighteenth century above all by their outspoken defense of the people’s right to rise up against their rulers, which they supported in traditional contractural [sic] terms. Government was created by the people to promote the public welfare. If magistrates failed to honor that trust, they automatically forfeited their powers back to the people, who were free and even obliged [as per Sidney's argument -- ed.] to reclaim political authority. The people could do so, moreover, in acts of limited resistance, intended to nullify only isolated wrongful acts of the magistrates, or ultimately in revolution, which denied the continued legitimacy of the established government as a whole. …

The fundamental values of the Radical Whigs were realized most fully in a well-ordered free society, such that obedience to the law was stressed as much or more than occasional resistance to it. (pp. 27-28)

This chapter really resonated with me as a clear explanation of the primacy of individual liberty combined with a society ordered by universally applied general legal principles (otherwise known as the “rule of law”). This combination of resistance and restraint is the key to understanding the political philosophy underlying the American republic, and Maier’s chapter is the best articulation of it that I’ve read.

Given the fractious and polarized political climate we inhabit today, I think a refresher on these ideas and their foundations is a good idea. We should be having a larger conversation about what constitutes legitimate and illegitimate political authority, particularly in the wake of the Snowden disclosures, the expansion of federal executive branch assertion of authority over the past 14 years, the expansion of administrative regulation (which is a sub-category of executive assertion), and the ability of business interests with political power to influence that regulation’s form and scope. There are a lot of arguments from all parts of the political spectrum that mischaracterize or misunderstand the ideas that Maier lays out here so clearly. We’d still be likely to have a fractious and polarized political climate, but we’d have better-informed public debate.

Building, and commercializing, a better nuclear reactor

A couple of years ago, I was transfixed by the research from Leslie Dewan and Mark Massie highlighted in their TEDx video on the future of nuclear power.

A recent IEEE Spectrum article highlights what Dewan and Massie have been up to since then, which is founding a startup called Transatomic Power in partnership with investor Russ Wilcox. The description of the reactor from the article indicates its potential benefits:

The design they came up with is a variant on the molten salt reactors first demonstrated in the 1950s. This type of reactor uses fuel dissolved in a liquid salt at a temperature of around 650 °C instead of the solid fuel rods found in today’s conventional reactors. Improving on the 1950s design, Dewan and Massie’s reactor could run on spent nuclear fuel, thus reducing the industry’s nuclear waste problem. What’s more, Dewan says, their reactor would be “walk-away safe,” a key selling point in a post-Fukushima world. “If you don’t have electric power, or if you don’t have any operators on site, the reactor will just coast to a stop, and the salt will freeze solid in the course of a few hours,” she says.

The article goes on to discuss raising funds for lab experiments and a subsequent demonstration project, and it ends on a skeptical note, with an indication that existing industrial nuclear manufacturers in the US and Europe are unlikely to be interested in commercializing such an advanced reactor technology. Perhaps the best prospects for such a technology are in Asia.

Another thing I found striking in reading this article, and that I find in general when reading about advanced nuclear reactor technology, is how dismissive some people are of such innovation — why not go for thorium, or why even bother with this when the “real” answer is to harness the sun’s fusion energy through solar power? Such criticisms of innovations like this are misguided, and show a misunderstanding of both the economics of innovation and the process of innovation itself. One of the clear benefits of this innovation is its use of a known, proven reactor technology in a novel way, with spent fuel rod waste as its fuel. This incremental, “killing two birds with one stone” approach may be an economical way to generate clean electricity, reduce waste, and fill a technology gap while more basic science research continues on other generation technologies.

Arguing that nuclear is a waste of time is the equivalent of a “swing for the fences” energy innovation strategy. Transatomic’s reactor represents a “get guys on base” energy innovation strategy. We certainly should do basic research and swing for the fences, but that’s no substitute for the incremental benefits of getting new technologies on base that create value in multiple energy and environmental dimensions.

Ben Powell on drought and water pricing

Ben Powell at Texas Tech has an essay on water scarcity at Huffington Post in which he channels David Zetland:

But water shortages in Lubbock and elsewhere are not meteorological phenomena. The shortages are a man-made result of bad economic policy.

Droughts make water scarcer, but by themselves they cannot cause shortages. To have a shortage and a risk of depletion, a resource must be mispriced.

With the freedom to choose, consumers can demonstrate whether it’s worth the cost to them to water their lawn an extra day or hose dust off of their house. Realistic pricing also incentivizes them to take account of water’s scarcity when they consume it in ways that aren’t currently prohibited. Have your long shower if you want . . . but pay the real price of it instead of the current subsidized rate.

Of course Ben is correct in his analysis and his policy recommendation, although I would nuance it with David’s “some for free, pay for more” to address some of the income distribution/regressivity aspects of municipal water pricing. Water is almost universally mispriced and wasted, exacerbating the distress and economic costs of drought.
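For readers unfamiliar with Zetland’s proposal, here is a minimal sketch of what a “some for free, pay for more” tariff might look like (all block sizes and rates are invented for illustration; a real schedule would be set by the utility and reflect local scarcity):

    # Hypothetical "some for free, pay for more" water tariff.
    # A basic allotment is free, addressing the regressivity concern;
    # use beyond it is priced at increasing, scarcity-reflecting rates.

    FREE_ALLOTMENT = 3000            # gallons per household per month, free
    BLOCKS = [                       # (upper bound in gallons, price per 1,000 gallons)
        (10_000, 4.00),              # the next 7,000 gallons at a modest rate
        (float("inf"), 12.00),       # everything above that at a scarcity price
    ]

    def monthly_bill(gallons):
        bill, lower = 0.0, FREE_ALLOTMENT
        for upper, rate in BLOCKS:
            in_block = max(0.0, min(gallons, upper) - lower)
            bill += rate * in_block / 1000
            lower = upper
        return bill

    for use in (2500, 8000, 25000):
        print(f"{use:>6} gallons -> ${monthly_bill(use):.2f}")

The free block shields low-volume (and typically lower-income) households, while the long shower and the extra day of lawn watering face something closer to water’s real scarcity price.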

Critiquing the theory of disruptive innovation

Jill Lepore, a professor of history at Harvard and writer for the New Yorker, has written a critique of Clayton Christensen’s theory of disruptive innovation that is worth thinking through. Christensen’s The Innovator’s Dilemma (the dilemma being that firms continue making the same decisions that made them successful, which will lead to their downfall) has been incredibly influential since its 1997 publication, and has moved the concept of disruptive innovation from its arcane Schumpeterian origins into modern business practice in a fast-changing technological environment. “Disrupt or be disrupted” and “innovate or die” have become corporate strategy maxims under the theory of disruptive innovation.

Lepore’s critique highlights the weaknesses of Christensen’s model (and it does have weaknesses, despite its success and prevalence in business culture). His historical analysis, his case study methodology, and the decisions he made regarding cutoff points in time all provide unsystematic, and therefore unsatisfying, support for his model, yet he argues that the theory of disruptive innovation is predictive and can be used with foresight to identify how firms can avoid failure. Lepore’s critique here is apt and worth considering.

Josh Gans weighs in on the Lepore article, and the theory of disruptive innovation more generally, by noting that at the core of the theory of disruptive innovation lies a new technology, and the appeal of that technology (or what it enables) to consumers:

But for every theory that reaches too far, there is a nugget of truth lurking at the centre. For Christensen, it was always clearer when we broke it down to its constituent parts as an economic theorist might (by the way, Christensen doesn’t like us economists but that is another matter). At the heart of the theory is a type of technology — a disruptive technology. In my mind, this is a technology that satisfies two criteria. First, it initially performs worse than existing technologies on precisely the dimensions that set the leading, for want of a better word, ‘metrics’ of the industry. So for disk drives, it might be capacity or performance even as new entrants promoted lower energy drives that were useful for laptops.

But that isn’t enough. You can’t actually ‘disrupt’ an industry with a technology that most consumers don’t like. There are many of those. To distinguish a disruptive technology from a mere bad idea or dead-end, you need a second criteria — the technology has a fast path of improvement on precisely those metrics the industry currently values. So your low powered drives get better performance and capacity. It is only then that the incumbents say ‘uh oh’ and are facing disruption that may be too late to deal with.

Herein lies the contradiction that Christensen has always faced. It is easy to tell if a technology is ‘potentially disruptive’ as it only has to satisfy criteria 1 — that it performs well on one thing but not on the ‘standard’ stuff. However, that is all you have to go on to make a prediction. Because the second criteria will only be determined in the future. And what is more, there has to be uncertainty over that prediction.

Josh has hit upon one of the most important dilemmas in innovation — if the new technology is likely to succeed against the old, it must satisfy the established value propositions of the incumbent technology as well as improve upon them in speed, quality, or differentiation. And that is inherently unknowable in advance; the incumbent can either innovate too soon and suffer losses, or innovate too late and suffer losses. At this level, the theory does not help us identify the factors that link innovation to the continued success of the firm.

Both Lepore and Gans highlight Christensen’s desire for his theory to be predictive when it cannot be. Lepore summarizes the circularity that indicates this lack of a predictive hypothesis:

If an established company doesn’t disrupt, it will fail, and if it fails it must be because it didn’t disrupt. When a startup fails, that’s a success, since epidemic failure is a hallmark of disruptive innovation. … When an established company succeeds, that’s only because it hasn’t yet failed. And, when any of these things happen, all of them are only further evidence of disruption.

What Lepore brings to the party, in addition to a sharp mind and good analytical writing, is her background and sensibilities as an historian. A historical perspective on innovation helps balance some of the breathless enthusiasm for novelty often found in technology or business strategy writing. Her essay includes a discussion of how the concept of “innovation” has changed over several centuries (its connotations were largely negative pre-Schumpeter), and of how the Enlightenment’s theory of history as one of human progress has since morphed into different theories of history:

The eighteenth century embraced the idea of progress; the nineteenth century had evolution; the twentieth century had growth and then innovation. Our era has disruption, which, despite its futurism, is atavistic. It’s a theory of history founded on a profound anxiety about financial collapse, an apocalyptic fear of global devastation, and shaky evidence. …

The idea of innovation is the idea of progress stripped of the aspirations of the Enlightenment, scrubbed clean of the horrors of the twentieth century, and relieved of its critics. Disruptive innovation goes further, holding out the hope of salvation against the very damnation it describes: disrupt, and you will be saved.

I think there’s a lot to her interpretation (and I say that wearing both my historian hat and my technologist hat). But I think that both the Lepore and Gans critiques, and indeed Christensen’s theory of disruptive innovation itself, would benefit from (for lack of a catchier name) a Smithian-Austrian perspective on creativity, uncertainty, and innovation.

The Lepore and Gans critiques indicate, correctly, that supporting the disruptive innovation theory requires hindsight and historical analysis because we have to observe realized outcomes to identify the relationship between innovation and the success/failure of the firm. That concept of an unknown future rests mostly in the category of risk — if we identify that past relationship, we can generate a probability distribution or a Bayesian prior for the factors likely to lead to innovation yielding success.

But the genesis of innovation is in uncertainty, not risk; if truly disruptive, innovation may break those historical relationships (pace the Gans observation about having to satisfy the incumbent value propositions). And we won’t know if that’s the case until after the innovators have unleashed the process. Some aspects of what leads to success or failure will indeed be unknowable. My epistemic/knowledge problem take on the innovator’s dilemma is that both risk and uncertainty are at play in the dynamics of innovation, and they are hard to disentangle, both epistemologically and as a matter of strategy. Successful innovation will arise from combining awareness of profit opportunities and taking action along with the disruption (the Schumpeter-Knight-Kirzner synthesis).

The genesis of innovation is also in our innate human creativity, and our channeling of that creativity into this thing we call innovation. I’d go back to the 18th century (and that Enlightenment notion of progress) and invoke both Adam Smith and David Hume to argue that innovation as an expression of human creativity is a natural consequence of our individual striving to make ourselves better off. Good market institutions using the signals of prices, profits, and losses align that individual striving with an incentive for creators to create goods and services that will benefit others, as indicated by their willingness to buy them rather than do other things with their resources.

By this model, we are inherent innovators, and successful innovation involves the combination of awareness, action, and disruption in the face of epistemic reality. Identifying that combination ex ante may be impossible. This is not a strategy model of why firms fail, but it does suggest that such strategy models should consider more than just disruption when trying to understand (or dare I say predict) future success or failure.

Price gouging in a second language

Research on differences between decisions made in a person’s native tongue and decisions made in a second language reminded me of an unexplored idea in the social dynamics surrounding price gouging.

I’ve devoted a few posts to the question of whether or not price gouging laws get applied in a discriminatory fashion against “outsiders,” primarily thinking of immigrants or cultural minorities. My evidence is slim, mostly the casual reading of a handful of news stories, but consider these prior posts and possible examples from Mississippi, New Jersey, and West Virginia.

In the New Jersey article I speculated it was possible that “outsiders” were more likely to engage in price gouging behaviors, and observed, “Social distance between buyers and sellers can work both ways.”

Some support for my speculation comes from communication research by Boaz Keysar of the University of Chicago, who has documented the view that, as the subtitle of an article in the journal Psychological Science puts it, “thinking in a foreign tongue reduces decision biases.” Part of the explanation Keysar and his coauthors offer is that a “foreign language provides greater cognitive and emotional distance than a native tongue does.” (The work was mentioned in a recent Freakonomics podcast.) An immigrant hotelier or retailer may not connect as emotionally as a native does with laws expressed in the native’s language, or with customers when transacting in that language. When exchange is seen as impersonal rather than personal, price-setters are less constrained in their pricing decisions.

Interestingly, Keysar is also a coauthor of a study concluding that moral judgments are markedly more utilitarian when the problems and responses are presented in a second language. Economic analysis tends to support the view that “price gouging” in response to sudden shifts in demand is the correct utilitarian response (as flexible prices help goods and services move toward those who value them most).