Critiquing the theory of disruptive innovation

Jill Lepore, a professor of history at Harvard and writer for the New Yorker, has written a critique of Clayton Christensen’s theory of disruptive innovation that is worth thinking through. Christensen’s The Innovator’s Dilemma (the dilemma: the very decisions that made a firm successful can, if continued, lead to its downfall) has been incredibly influential since its 1997 publication, and has moved the concept of disruptive innovation from its arcane Schumpeterian origins into modern business practice in a fast-changing technological environment. “Disrupt or be disrupted” and “innovate or die” have become corporate strategy maxims under the theory of disruptive innovation.

Lepore’s critique highlights the weaknesses of Christensen’s model (and it does have weaknesses, despite its success and prevalence in business culture). His historical analysis, his case study methodology, and his choices about where to set cutoff points in time leave the model with unsatisfyingly unsystematic support, yet he argues that the theory of disruptive innovation is predictive and can be used with foresight to identify how firms can avoid failure. Lepore’s critique here is apt and worth considering.

Josh Gans weighs in on the Lepore article, and the theory of disruptive innovation more generally, by noting that at the core of the theory lies a new technology, and the appeal of that technology (or what it enables) to consumers:

But for every theory that reaches too far, there is a nugget of truth lurking at the centre. For Christensen, it was always clearer when we broke it down to its constituent parts as an economic theorist might (by the way, Christensen doesn’t like us economists but that is another matter). At the heart of the theory is a type of technology — a disruptive technology. In my mind, this is a technology that satisfies two criteria. First, it initially performs worse than existing technologies on precisely the dimensions that set the leading, for want of a better word, ‘metrics’ of the industry. So for disk drives, it might be capacity or performance even as new entrants promoted lower energy drives that were useful for laptops.

But that isn’t enough. You can’t actually ‘disrupt’ an industry with a technology that most consumers don’t like. There are many of those. To distinguish a disruptive technology from a mere bad idea or dead-end, you need a second criteria — the technology has a fast path of improvement on precisely those metrics the industry currently values. So your low powered drives get better performance and capacity. It is only then that the incumbents say ‘uh oh’ and are facing disruption that may be too late to deal with.

Herein lies the contradiction that Christensen has always faced. It is easy to tell if a technology is ‘potentially disruptive’ as it only has to satisfy criteria 1 — that it performs well on one thing but not on the ‘standard’ stuff. However, that is all you have to go on to make a prediction. Because the second criteria will only be determined in the future. And what is more, there has to be uncertainty over that prediction.

Josh has hit upon one of the most important dilemmas in innovation: if the new technology is to succeed against the old, it must satisfy the established value propositions of the incumbent technology as well as improve upon them in speed, quality, or differentiation. Whether it will is inherently unknown; the incumbent can either innovate too soon and suffer losses, or innovate too late and suffer losses. At this level, the theory does not help us identify the factors that associate innovation with continued success of the firm.

Both Lepore and Gans highlight Christensen’s desire for his theory to be predictive when it cannot be. Lepore summarizes the circularity that indicates this lack of a predictive hypothesis:

If an established company doesn’t disrupt, it will fail, and if it fails it must be because it didn’t disrupt. When a startup fails, that’s a success, since epidemic failure is a hallmark of disruptive innovation. … When an established company succeeds, that’s only because it hasn’t yet failed. And, when any of these things happen, all of them are only further evidence of disruption.

What Lepore brings to the party, in addition to a sharp mind and good analytical writing, is her background and sensibilities as an historian. A historical perspective on innovation helps balance some of the breathless enthusiasm for novelty often found in technology or business strategy writing. Her essay discusses how the concept of “innovation” has changed over several centuries (the word carried largely negative connotations before Schumpeter), and how the Enlightenment’s theory of history as human progress has since morphed into different theories of history:

The eighteenth century embraced the idea of progress; the nineteenth century had evolution; the twentieth century had growth and then innovation. Our era has disruption, which, despite its futurism, is atavistic. It’s a theory of history founded on a profound anxiety about financial collapse, an apocalyptic fear of global devastation, and shaky evidence. …

The idea of innovation is the idea of progress stripped of the aspirations of the Enlightenment, scrubbed clean of the horrors of the twentieth century, and relieved of its critics. Disruptive innovation goes further, holding out the hope of salvation against the very damnation it describes: disrupt, and you will be saved.

I think there’s a lot to her interpretation (and I say that wearing both my historian hat and my technologist hat). But I think that both the Lepore and Gans critiques, and indeed Christensen’s theory of disruptive innovation itself, would benefit from (for lack of a catchier name) a Smithian-Austrian perspective on creativity, uncertainty, and innovation.

The Lepore and Gans critiques indicate, correctly, that supporting the disruptive innovation theory requires hindsight and historical analysis because we have to observe realized outcomes to identify the relationship between innovation and the success/failure of the firm. That concept of an unknown future rests mostly in the category of risk — if we identify that past relationship, we can generate a probability distribution or a Bayesian prior for the factors likely to lead to innovation yielding success.

But the genesis of innovation is in uncertainty, not risk; if truly disruptive, innovation may break those historical relationships (pace the Gans observation about having to satisfy the incumbent value propositions). And we won’t know if that’s the case until after the innovators have unleashed the process. Some aspects of what leads to success or failure will indeed be unknowable. My epistemic/knowledge problem take on the innovator’s dilemma is that both risk and uncertainty are at play in the dynamics of innovation, and they are hard to disentangle, both epistemologically and as a matter of strategy. Successful innovation arises from combining awareness of profit opportunities with action, alongside the disruption itself (the Schumpeter-Knight-Kirzner synthesis).

The genesis of innovation is also in our innate human creativity, and our channeling of that creativity into this thing we call innovation. I’d go back to the 18th century (and that Enlightenment notion of progress) and invoke both Adam Smith and David Hume to argue that innovation as an expression of human creativity is a natural consequence of our individual striving to make ourselves better off. Good market institutions using the signals of prices, profits, and losses align that individual striving with an incentive for creators to create goods and services that will benefit others, as indicated by their willingness to buy them rather than do other things with their resources.

By this model, we are inherent innovators, and successful innovation involves the combination of awareness, action, and disruption in the face of epistemic reality. Identifying that combination ex ante may be impossible. This is not a strategy model of why firms fail, but it does suggest that such strategy models should consider more than just disruption when trying to understand (or dare I say predict) future success or failure.

Price gouging in a second language

Research on differences between decisions made in a person’s native tongue and decisions made in a second language reminded me of an unexplored idea in the social dynamics surrounding price gouging.

I’ve devoted a few posts to the question of whether or not price gouging laws get applied in a discriminatory fashion against “outsiders,” primarily thinking of immigrants or cultural minorities. My evidence is slim, mostly the casual reading of a handful of news stories, but consider these prior posts and possible examples from Mississippi, New Jersey, and West Virginia.

In the New Jersey article I speculated it was possible that “outsiders” were more likely to engage in price gouging behaviors, and observed, “Social distance between buyers and sellers can work both ways.”

Some support for my speculation comes from communication research by Boaz Keysar of the University of Chicago, who has documented that, as the subtitle of an article in the journal Psychological Science puts it, “thinking in a foreign tongue reduces decision biases.” Part of the explanation Keysar and his coauthors offer is that a “foreign language provides greater cognitive and emotional distance than a native tongue does.” (The work was mentioned in a recent Freakonomics podcast.) An immigrant hotelier or retailer may not connect as emotionally as a native does with laws expressed in the native’s language or with customers when transacting in that language. When exchange is seen as impersonal rather than personal, price-setters are less constrained in their pricing decisions.

Interestingly, Keysar is also coauthor on a study concluding that moral judgments are markedly more utilitarian when problems and responses are conducted in a second language. Economic analysis tends to support the view that “price gouging” in response to sudden shifts in demand is the correct utilitarian response (as flexible prices help goods and services move toward those who value them most).

Did ERCOT’s shift from zonal to nodal market design reduce electric power prices?

Jay Zarnikau, C.K. Woo, and Ross Baldick have examined whether the shift from a zonal to a nodal market design in the ERCOT power market had a noticeable effect on electric energy prices. The resulting article, published in the Journal of Regulatory Economics, is a bit geekier than we usually get around here, and so is this post. I’ll try to tone it down and explain the ERCOT change and the effect on prices as clearly as I can.

The topic is important because the shift from zonal to nodal market structure was controversial, complicated, expensive, and took longer than expected. Problems had emerged shortly after launch of the initial zonal-based market and the nodal approach was offered as a solution. Some market participants had their doubts, but rather quickly ERCOT began the move to a nodal design. Note that phrasing: “rather quickly ERCOT began the move.” It took several years for ERCOT to actually complete the process.

In part the shift was promoted as a more efficient way to run the market. Zarnikau, Woo, and Baldick looked at the effect on prices as one way to assess whether or not the resulting market has worked more efficiently. They conclude energy prices are about 2 percent lower because of the nodal market design.

Don’t get hung up on the 2 percent number itself, but think of the shift as having a modest downward pressure on prices.

The result is consistent with an understanding one would gain from the study of power systems engineering as well as with what power system simulations showed. The point of the Zarnikau et al. study was to investigate whether data analysis after the fact supported expectations offered by theory and simulation. Because there is no better empirical study (so far as I am aware) and because their results are consistent with well-founded expectations, I have no reason to doubt their result. I will contest one interpretation they offer concerning the current resource adequacy debate in Texas.

Some background (which beginners should read and others can skip).

The delivery of electric energy to consumers is a joint effort between the generators that create the power and the wires that bring it to the consumer. The wires part of the system is not a set of simple links between generators and consumers, but rather a complicated network in which consumers and generators are connected in multiple ways. The added flexibility that comes with networking helps the system work more reliably and at lower cost.

The network comes with a big coordination problem, too. Power flows on the network are not individually controllable. With many generators producing power for many consumers, parts of the power grid may become overloaded. One key job of the power system operator is to watch the power flows on the electric grid and intervene as needed to prevent a transmission line from being overloaded. The intervention generally takes the form of directing a generator (or generators) contributing to the potential overload to reduce output and directing other generators to increase output. In areas without regional system operators, this function is done on a piecemeal basis as problems arise. A significant benefit coming from full-scale regional power markets integrated with system operations (such as ERCOT in Texas after the switch to a nodal market and in other similar ISO/RTO markets) is that such coordination can be done in advance, with more information, mostly automatically, and more efficiently than piecemeal adjustments.
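
To make the coordination problem concrete, here is a minimal, purely illustrative sketch of the least-cost dispatch problem an operator or a nodal market solves when a transmission limit binds. The generators, costs, and line limit are made-up numbers, not anything from ERCOT.

```python
# Two generators serve one load; a line limit caps deliveries from the cheap,
# remote unit, so the more expensive local unit must make up the difference.
from scipy.optimize import linprog

costs = [20.0, 35.0]          # $/MWh offers: remote generator A, local generator B
load = 100.0                  # MW of demand at the load node
line_limit = 60.0             # MW limit on the path from generator A to the load

res = linprog(
    c=costs,                               # minimize total dispatch cost
    A_ub=[[1.0, 0.0]], b_ub=[line_limit],  # deliveries from A capped by the line
    A_eq=[[1.0, 1.0]], b_eq=[load],        # total generation must equal load
    bounds=[(0, 120), (0, 120)],           # unit capacities in MW
    method="highs",
)
print(res.x)    # -> [60. 40.]: the cheap unit is backed down to the line limit
print(res.fun)  # -> 2600.0 total cost; the binding line is what creates price differences across locations
```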

Described in simpler terms, the regional power system operator helps generators and consumers coordinate use of the power grid in the effort to efficiently satisfy consumer demands for electric energy. A zonal market design, like the one ERCOT started with, did minimal advance coordination. The nodal market design and related changes implemented by ERCOT allowed the market to do more sophisticated and efficient coordination of grid use.

About data challenges.

In order to assess the effects on prices, the authors couldn’t simply average prices before and after the December 1, 2010 change in the market. The power system is a dynamic thing, and many other factors known to affect electric power prices changed between the two periods. Most significantly, natural gas prices were much lower on average after the market change than during the years before. Other changes include growing consumer load, higher offer caps, and increasing amounts of wind energy capacity. In addition, the way the system generates prices changed, making simple before-and-after comparisons insufficient. For example, rather than four zonal prices produced every 15 minutes, the nodal market yields thousands of prices every 5 minutes.

One potentially significant data-related decision was a choice to omit “outliers,” prices that were substantially higher or lower than usual. The authors explain that extreme price spikes were much more frequent in 2011, after the change, but largely due to the summer of 2011 being among the hottest on record. At the same time the offer caps had been increased, so that prices spiked higher than they could have before, but not because of the zonal-to-nodal market shift. Omitting outliers reduces the impact of these otherwise confounding changes and should produce a better sense of the effect of the market change during more normal conditions.
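
For readers who want a feel for the kind of estimation involved, here is a rough sketch of a before-and-after price regression that trims outliers and controls for the main confounders mentioned above. It is not the authors’ actual specification; the DataFrame and column names (price, nodal_dummy, gas_price, load) are assumptions made for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def estimate_nodal_effect(df: pd.DataFrame) -> float:
    """Return the proportional price change attributed to the nodal design."""
    # Trim outliers so scarcity spikes (e.g., the hot summer of 2011 under
    # higher offer caps) don't masquerade as effects of the market redesign.
    lo, hi = df["price"].quantile([0.01, 0.99])
    d = df[(df["price"] > max(lo, 0)) & (df["price"] < hi)]

    # Regress log price on a nodal-era dummy, controlling for confounders.
    X = sm.add_constant(d[["nodal_dummy", "gas_price", "load"]])
    fit = sm.OLS(np.log(d["price"]), X).fit()

    # The dummy's coefficient approximates the percentage change after the switch.
    return float(np.expm1(fit.params["nodal_dummy"]))
```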

Their conclusion and a mistaken interpretation.

Zarnikau, Woo, and Baldick conducted their price analysis on four ERCOT sub-regions separately to see whether the changeover had differing impacts across regions. The West zone stood out in the analysis, largely because that zone has seen the most significant other changes in the power system. The two main changes: continued sizable wind energy capacity additions in the zone and, more substantially, dramatic electrical load growth due to the recent oil and gas drilling boom in west Texas. Because the West results were a bit flaky, they based their conclusions on results from the other three zones. Across a number of minor variations in specifications, the authors found a price suppression effect ranging from 1.3 to 3.3 percent, the load-weighted average of which is right around 2 percent.
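
A load-weighted average is simply the zone-level estimates weighted by each zone’s share of consumption. A tiny illustration, with hypothetical zone effects and weights chosen only to fall inside the reported 1.3 to 3.3 percent range (these are not the paper’s numbers):

```python
effects = {"North": -0.021, "Houston": -0.019, "South": -0.017}  # proportional price change, hypothetical
weights = {"North": 0.40, "Houston": 0.35, "South": 0.25}        # each zone's share of load, hypothetical

avg = sum(effects[z] * weights[z] for z in effects)
print(round(avg, 4))  # -> -0.0193, i.e., roughly a 2 percent price reduction
```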

So far, so good.

But next they offered what is surely a misinterpretation of their results. They wrote:

[T]he reduction in wholesale prices from the implementation of the nodal market might be viewed by some as a concern. In recent years, low natural gas prices and increased wind farm generation have also reduced electricity prices in ERCOT which has, in turn, impaired the economics of power plant construction. … It appears as though the nodal market’s design may have contributed to the drop in prices that the PUCT has now sought to reverse.

Strictly speaking, the goal of the Public Utility Commission of Texas hasn’t been to reverse the drop in prices, but to ensure sufficient investment in supply resources to reliably meet projected future demand. Lower prices appear to offer smaller investment incentives than higher prices, but there is a subtle factor in play.

The real incentive to investment isn’t higher prices, it is higher profits. Remember, one of the most important reasons to make the switch from a zonal to a nodal market is that the nodal market is supposed to operate more efficiently. Zarnikau, Woo, and Baldick notice that marginal heat rates declined after the shift, evidence consistent with more efficient operations. The efficiency gain suggests generators are operating at an overall lower cost, which means even with lower prices generator profits could be higher now than they would have been. It all depends on whether the drop in cost was larger or smaller than the drop in prices.
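
A toy calculation with assumed numbers (not anything from the study) shows the logic for a gas-fired unit: whether profit rises or falls depends on whether costs dropped by more or less than the price.

```python
# Illustrative only: a lower price can still mean a higher operating margin if
# costs fall by more, e.g., because the unit runs closer to its efficient point.
def margin(price, heat_rate, gas_price):
    """$/MWh margin: price ($/MWh) minus heat rate (MMBtu/MWh) times gas price ($/MMBtu)."""
    return price - heat_rate * gas_price

gas = 4.0  # $/MMBtu, assumed
zonal_era = margin(price=50.0, heat_rate=7.5, gas_price=gas)  # -> 20.0 $/MWh
nodal_era = margin(price=49.0, heat_rate=7.0, gas_price=gas)  # -> 21.0 $/MWh
print(zonal_era, nodal_era)  # price fell about 2 percent, yet the margin rose
```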

The cost and profit changes will differ somewhat across generators depending on where they are located, what fuel they use, and how they typically operate. I’ll hazard the guess that relatively efficient natural gas plants have seen their profits increase a bit, whereas less efficient gas plants, nuclear plants, and coal plants have likely seen profits fall a little.

FULL CITE: Zarnikau, J., C. K. Woo, and R. Baldick. “Did the introduction of a nodal market structure impact wholesale electricity prices in the Texas (ERCOT) market?” Journal of Regulatory Economics 45.2 (2014): 194-208.

Here is a link to a non-gated preliminary version if you don’t have direct access to the Journal of Regulatory Economics.

AN ASIDE: One modest irony out of Texas–the multi-billion dollar CREZ transmission line expansion, mostly intended to support delivery of wind energy from West Texas into the rest of the state, has turned out to be used more to support the import of power from elsewhere in the state to meet the demands of a rapidly growing Permian Basin-based oil and gas industry.

Texans should pay higher taxes

From Breitbart, “Drumbeat to raise gas tax extends to conservative event”:

Texans should pay higher gasoline taxes, a Texas Tech University professor advocated at a policy conference organized by the conservative Texas Public Policy Foundation in Austin on April 16. He acknowledged that how transportation dollars are spent must also be carefully considered.

Generally, I’m a “starve the beast” proponent, but I endorse the view expressed above. In fact, I said it.

“Fuel taxes serve as a road ‘user fee’,” said Michael Giberson, who serves on the faculty at Texas Tech’s College of Business. “Those who use the roads, pay for them.”

Giberson told the TPPF conference attendees that the tax should be increased to a level that brings in the same revenues as in 1991–when the tax was last increased.

Texans currently pay 20 cents per gallon, but to meet the 1991 spending power Giberson said the rate would need to be 33.7 percent. He also recommended tying the gas tax to inflation, so that it would increase automatically.

Giberson acknowledged that more fuel efficient engines and electric-powered cars mean the gas tax will continue to be a declining revenue source. He said other options, such as charging Texans on the basis of their miles-driven, should be considered even as he acknowledged concerns about privacy and practical implementation.

I’d quibble just a bit with the characterization of my presentation. I didn’t recommend a 33.7 cents-per-gallon tax, but rather was illustrating the toll that inflation had taken since the state gasoline tax was last raised. I did suggest tying the tax to inflation, but commented that the current method allows the tax to diminish over time and forces the legislature into direct action to raise it. I like that latter idea better the more I think about it.
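
The inflation adjustment behind that illustration is straightforward arithmetic: scale the 20-cent rate by the growth in a price index since 1991. A minimal sketch, with placeholder index values; the exact answer depends on which index and end date you use.

```python
rate_1991 = 0.20                    # $/gallon, Texas gasoline tax, unchanged since 1991
cpi_1991, cpi_now = 136.2, 230.0    # illustrative CPI-U levels, not the figures used in the talk

adjusted = rate_1991 * cpi_now / cpi_1991
print(round(adjusted, 3))           # -> 0.338, i.e., roughly 34 cents per gallon
```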

In Texas two things stand between the fuel taxes and the user fee concept. First, about half of the gasoline tax is federal, 18.4 cents per gallon, and Texas gets only about 80 percent of the Texas-sourced, federally collected fuel taxes back from Washington DC. The money comes back with some federal strings attached, and some of it is diverted from projects that benefit fuel taxpayers. Second, the feds’ 20 percent cut off the top is actually better for Texas fuel taxpayers than the state’s cut. By law, 25 percent of fuel taxes collected in Texas go to state educational funding, so Texas road users only get about 75 percent of the Texas-sourced, state-collected fuel taxes back from Austin. The 25 percent cut of fuel taxes for education is enshrined in the state’s constitution (a holdover, I suspect, from when fuel taxes were paid primarily by the wealthy).
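
Putting those shares together gives a rough sense of how much of each taxed gallon actually comes back as a road user fee. The percentages are the approximations cited above, not precise budget figures.

```python
federal_rate, state_rate = 0.184, 0.20   # $/gallon
fed_return, state_return = 0.80, 0.75    # approximate share returned for Texas road use

to_roads = federal_rate * fed_return + state_rate * state_return
total_tax = federal_rate + state_rate
print(round(to_roads, 3), round(to_roads / total_tax, 2))
# -> 0.297 0.77: roughly 30 cents of the 38.4 cents per gallon reaches Texas roads
```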

In response I favored proposals circulating in Congress to radically cut the federal fuel tax and related spending, and shift the responsibility for revenue collection and spending to the states. Congress has a duty to protect interstate commerce, but that need not involve a massive federal overhead to manage. I’d like to claw back the 25 percent fuel tax take from state educational funding, too. We amend the state constitution in Texas just about every other year, so that is no big deal, but because the amendment would appear anti-education I see it as a hard sell.

I also urged more use of toll roads, which have become much more efficient these days, and congestion-based tolls on roads where congestion is a frequent issue. (Nothing annoys me more than some denizen of an east coast metropolitan area saying federal gasoline taxes ought to be higher because they will reduce congestion. No amount of taxing my cross-Texas drives is going to speed up your east coast metropolitan commute.)

In the Breitbart article TPPF Vice President Chuck DeVore pushed back against my tax-raising views. He hasn’t changed his views, but recently in response to President Obama’s transportation spending proposal, DeVore’s views and mine seem pretty close: cut the federal role dramatically and let the states decide the mix of taxes and tolls needed to fund transportation infrastructure for themselves.

The Texas Public Policy Foundation put together a great event, with a program organized largely by TPPF staff economist and recent Texas Tech econ PhD Vance Ginn. Happy to be part of it.

Links to video from the conference and presentations are posted, along with links to other media coverage of the event (mostly focused on the Dallas Fed chairman’s lunchtime remarks, not the “gasoline tax controversy”, but I tried). My presentation is second in the panel 1 video.

ADDED: After my presentation I had two promising suggestions from conference attendees. One is that, given that almost all of the actual wear and tear on Texas roads comes from heavy trucks rather than cars and light trucks, we should tax large commercial vehicles more (probably on a vehicle-miles-traveled basis), and the “user fee” for personal vehicles would likely fall to something reflecting the modest consequences of driving relatively lightweight vehicles. Trucking companies would complain, and the political prospects of the idea are probably not good, but otherwise it makes a lot of sense to me. The other suggestion was to apply certain oil and gas drilling fees currently in surplus to road work, at least for the road improvements needed in the parts of the state experiencing significant increases in commercial traffic due to the oil and gas drilling boom. The suggestion seems a bit kludge-y to me, but comes with enough symmetry between the payers and the beneficiaries to be plausible. Good enough for government work, as is said.

New York Attorney General grapples to regulate new web-based businesses in old ways

The New York Attorney General (AG) had an op-ed in the New York Times presenting a curious mix of resistance to change, insistence on regulating new things in old ways, and acknowledgement that web-based businesses create some value and that regulators can’t always enforce rules intelligently, sprinkled now and again with the barely disguised threat that regulators will not be refused in their efforts to assert dominance over the upstarts. Actually, the threat is not even barely disguised:

Just because a company has an app instead of a storefront doesn’t mean consumer protection laws don’t apply. The cold shoulder that regulators like me get from self-proclaimed cyberlibertarians deprives us of powerful partners in protecting the public interest online. While this may shield companies in the short run, authorities will ultimately be forced to use the blunt tools of traditional law enforcement. Cooperation is a better path.

Ah, yes, the “blunt tools of traditional law enforcement.”

The two targets of the piece are room-sharing service Airbnb, with which the AG’s office has already clashed in court, and car-finder Uber, which the AG may or may not charge with price gouging for the company’s surge pricing policy.

Another example is Uber, a company valued at more than $3 billion that has revolutionized the old-fashioned act of standing in the street to hail a cab. Uber has been an agent for change in an industry that has long been controlled by small groups of taxi owners. The regulations and bureaucracies that protect these entrenched incumbents do not, by and large, serve the public interest.

But Uber may also have run afoul of New York State laws against price gouging, which do serve the public interest. In the last year, in bad weather, Uber charged New Yorkers as much as eight times the company’s base price. We are investigating whether this is prohibited by the same laws under which I’ve sued gas stations that gouged motorists during Hurricane Sandy. Uber makes some persuasive arguments for its pricing model, but the ability to pay truly exorbitant prices shouldn’t determine someone’s ability to get critical goods and services when they’re in short supply in an emergency. I’m hopeful that the company will collaborate with us to address the problem thoughtfully.

You know the Seinfeld/Uber story, right? Last December, during heavy snows in Manhattan, Jessica Seinfeld used Uber to get her children to Saturday evening social obligations and, due to the company’s surge pricing policy, was charged $415. Even though the app notifies you of the price up front, before you call a car, Ms. Seinfeld felt compelled to complain on Instagram with a picture of her $415 charge and the caption, “UBER charge, during a snowstorm (to drop one at Bar Mitzvah and one child at a sleepover.) #OMG #neverforget #neveragain #real”

Uber, the AG’s office is giving you time to think it over, so what will it be: thoughtful collaboration or “the blunt tools of traditional law enforcement”?

But I’m not sure what kind of thoughtful collaboration with the AG’s office is going to help Uber get the children of the rich and famous through the snow to their social obligations in a timely fashion. We can cap the amount that the much, much poorer private car drivers of New York City can charge to drive the offspring of the rich and famous through the snow, but that will probably lead those much, much poorer private car drivers to head home instead, and force the rich and famous to send their doormen out into the streets to compete for access to the limited supply of well-regulated taxis.

 

Price gouging: moral insights from economics

Dwight Lee, in the current issue of Regulation magazine, offers “The Two Moralities of Outlawing Price Gouging.” In the article Lee endorsed economists’ traditional arguments against laws prohibiting price gouging, but argued that efficiency claims aren’t persuasive to most people because they fail to address the moral issues surrounding the treatment of disaster victims.

Lee wrote, “Economists’ best hope for making an effective case against anti-price-gouging laws requires considering two moralities—one intention-based, the other outcome-based—that work together to improve human behavior when each is applied within its proper sphere of human activity.”

Intention-based morality, that realm of neighbors helping neighbors and the outpouring of charitable donations from near and far, is good and useful and honorable, said Lee, who terms it “magnanimous morality.” Such morality works great in helping family and friends because the helper, thanks to the close relationship, naturally has a good idea of just what help may be needed, and when and where.

When large-scale disasters overwhelm the limited capabilities of the friends and families of victims, large-scale charity kicks in. Charity is the extended version of magnanimous morality, but it comes with a knowledge problem: how does the charity identify who needs help, and what kind, and when, and where?

The second morality that Lee’s title referenced is the morality of “respecting the rights of others and abiding by general rules such as those necessary for impersonal market exchange.” This “mundane morality” of merely respecting rules does not strike most people as too compelling, Lee observed, but economists know how powerful a little self-interest and local knowledge can be in a world in which rights are respected. Indeed, the vast successes of the modern world–extreme poverty declining, billions fed well enough, life-expectancy and literacy rising, disease rates dropping–can be attributed primarily to the social cooperation enabled by local knowledge and voluntary interaction guided by prices and profits. The value of mundane morality after a disaster is that it puts this same vast power to work in aid of recovery.

The two moralities work together, Lee said. Even as friends and families reach out in magnanimous morality, perhaps each making significant sacrifices to aid those in need, the price changes produced by mundane morality engage millions more people in making small adjustments that similarly aid the victims. A gasoline price increase in New Jersey after Sandy’s flooding could trickle outward and lead gasoline consumers in Pittsburgh or Chicago to cut back consumption just a little so New Jerseyans could get a little more. Similarly for gallons of water or loaves of bread or flashlights or hundreds of other goods. Millions of people beyond the magnanimous responders get pulled into helping out, even if unknowingly.

Or they would have, had prices been free to adjust. New Jersey laws prohibit significant price increases after a disaster, and post-Sandy the state has persecuted merchants it judged to be running afoul of the price gouging law.

Surely victims of a disaster appreciate the help that comes from people who care, but they just as surely appreciate the unintended bounty that comes from that system of voluntary social interaction guided by prices and profits called the market. Laws against post-disaster price increases obstruct the workings of mundane morality, increase the burden faced by the magnanimous, and reduce the flow of resources into disaster-struck regions.

Perhaps you think that government can fill the gap? Lee noted that restricting the workings of mundane morality increases the importance of political influence and social connections, but added that the shift is unlikely to benefit the poor. On this point a few New Jersey anecdotes may inform; see these stories on public assistance in the state.

We often honor the magnanimous, but we need not honor the mundane-morality-inspired benefactors of disaster victims. While the mundanely moral millions may provide more help in the aggregate than the magnanimous few, the millions didn’t sacrifice intentionally. They just did the locally sensible thing given their local knowledge and normal self-interest; doing the locally sensible thing is its own reward.

We need not honor the mundanely moral, but we also ought not block them from helping.

Better red than dead, but not red yet (on solar power)

In her New York Times Economix column Nancy Folbre recently said (“The Red Faces of the Solar Skeptics,” March 10, 2014):

If the faces of renewable energy critics are not red yet, they soon will be. For years, these critics — of solar photovoltaics in particular — have called renewable energy a boutique fantasy. A recent Wall Street Journal blog post continues the trend, asserting that solar subsidies take money from the poor to benefit the rich.

But solar-generated electricity is turning into a powerful environmental and economic success story. It’s also threatening the balance sheets of electric utility companies that continue to rely heavily on fossil fuels and nuclear energy.

I don’t count myself a renewable energy critic, but I am a critic of most renewable energy policies, and so I feel a bit like Folbre is addressing her points to me. In response I’ll say my face isn’t red yet, and I’m not expecting it to turn red anytime soon.

Folbre is a distinguished economist at the Univ. of Massachusetts, but she isn’t a specialist in environmental or energy economics, and I think her thinking here is a little muddled. (In this muddling through she has similarly distinguished company–consider this response to a Nobel prize winner.)

So a sample of my complaints: She trumpets the fast declining price of solar panels by picking a factoid out of a story in ComputerWorld: “declined an estimated 60 percent since the beginning of 2011!” ComputerWorld? Maybe the work of the U.S. Department of Energy or other more traditional information sources wasn’t sensational enough (claiming as it does, merely that “U.S. solar industry is more than 60 percent of the way to achieving cost-competitive utility-scale solar photovoltaic electricity”).

An investment company would have to acknowledge that cherry-picked past results are no guarantee of future performance, but it isn’t even clear that she is firm on the idea of “cost.” Folbre declares that generous subsidies and feed-in tariffs have “allowed solar photovoltaics to achieve vastly lower unit costs.” Really? Well maybe if we subsidize it a little harder, it will become free for everyone!

C’mon professor, get serious! Perhaps it is true that generous subsidies and feed-in tariffs have allowed owners of solar PV systems to experience lower out-of-pocket expenses, but it is a little embarrassing to see a distinguished economist make this mistake about costs. Should we conclude congressional junkets overseas don’t cost anything because the government foots the bill?

Not until the penultimate paragraph does Folbre get back on firm ground, talking about renewable energy policy rather than technology:

Subsidies are not the ideal public policy for promoting clean energy. As a recent analysis by the Carbon Tax Center points out, a carbon tax devised to protect low-income households from bearing a disproportionate share of higher energy prices would yield more efficient overall results, as well as encouraging solar power.

But in our subsidy-encrusted energy economy, some subsidies are better than others. As farmers say, make hay while the sun shines.

Yes, as any economist ought to say, “subsidies are not the ideal public policy for promoting clean energy.” In fact, it’s been said here a time or two.

[HT to Environmental Economics.]