Building, and commercializing, a better nuclear reactor

A couple of years ago, I was transfixed by the research from Leslie Dewan and Mark Massie highlighted in their TEDx video on the future of nuclear power.


A recent IEEE Spectrum article highlights what Dewan and Massie have been up to since then, which is founding a startup called Transatomic Power in partnership with investor Russ Wilcox. The description of the reactor from the article indicates its potential benefits:

The design they came up with is a variant on the molten salt reactors first demonstrated in the 1950s. This type of reactor uses fuel dissolved in a liquid salt at a temperature of around 650 °C instead of the solid fuel rods found in today’s conventional reactors. Improving on the 1950s design, Dewan and Massie’s reactor could run on spent nuclear fuel, thus reducing the industry’s nuclear waste problem. What’s more, Dewan says, their reactor would be “walk-away safe,” a key selling point in a post-Fukushima world. “If you don’t have electric power, or if you don’t have any operators on site, the reactor will just coast to a stop, and the salt will freeze solid in the course of a few hours,” she says.

The article goes on to discuss raising funds for lab experiments and a subsequent demonstration project, and it ends on a skeptical note, with an indication that existing industrial nuclear manufacturers in the US and Europe are unlikely to be interested in commercializing such an advanced reactor technology. Perhaps the best prospects for such a technology are in Asia.

Another thing I found striking in reading this article, and that I find in general when reading about advanced nuclear reactor technology, is how dismissive some people are of such innovation: why not go for thorium instead, or why even bother with this at all when the “real” answer is to harness the sun’s nuclear fusion through solar power? Such criticisms are misguided, and they reflect a misunderstanding of both the economics of innovation and the process of innovation itself. One clear benefit of this innovation is its use of a known, proven reactor technology in a novel way while consuming spent fuel rod waste as fuel. This incremental, “killing two birds with one stone” approach may be an economical way to generate clean electricity, reduce waste, and fill a technology gap while more basic science research continues on other generation technologies.

Arguing that nuclear is a waste of time is the equivalent of a “swing for the fences” energy innovation strategy. Transatomic’s reactor represents a “get guys on base” energy innovation strategy. We certainly should do basic research and swing for the fences, but that’s no substitute for the incremental benefits of getting new technologies on base that create value in multiple energy and environmental dimensions.

Ben Powell on drought and water pricing

Ben Powell at Texas Tech has an essay on water scarcity at Huffington Post in which he channels David Zetland:

But water shortages in Lubbock and elsewhere are not meteorological phenomena. The shortages are a man-made result of bad economic policy.

Droughts make water scarcer, but by themselves they cannot cause shortages. To have a shortage and a risk of depletion, a resource must be mispriced.

With the freedom to choose, consumers can demonstrate whether it’s worth the cost to them to water their lawn an extra day or hose dust off of their house. Realistic pricing also incentivizes them to take account of water’s scarcity when they consume it in ways that aren’t currently prohibited. Have your long shower if you want . . . but pay the real price of it instead of the current subsidized rate.

Of course Ben is correct in his analysis and his policy recommendation, although I would nuance it with David’s “some for free, pay for more” to address some of the income distribution/regressivity aspects of municipal water pricing. Water is almost universally mispriced and wasted, exacerbating the distress and economic costs of drought.
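
To make the “some for free, pay for more” idea concrete, here is a minimal sketch in Python of an increasing-block water tariff; the free allotment, tier sizes, and rates are invented for illustration and are not drawn from any actual utility.

```python
# A minimal sketch of an increasing-block ("some for free, pay for more")
# water tariff. The free allotment, tier sizes, and rates are invented.
def monthly_water_bill(gallons, free_allotment=3000,
                       tiers=((5000, 0.004), (float("inf"), 0.012))):
    """Bill in dollars: the first `free_allotment` gallons are free,
    then each (block_size, $/gallon rate) tier applies in order."""
    bill = 0.0
    remaining = max(gallons - free_allotment, 0)
    for block_size, rate in tiers:
        used = min(remaining, block_size)
        bill += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return bill

print(monthly_water_bill(2500))    # 0.0: essential use stays within the free allotment
print(monthly_water_bill(12000))   # 68.0: heavy discretionary use pays the scarcity price
```

The structure addresses the regressivity concern: a low-income household’s basic use is unaffected, while the price signal still bites on lawn watering and other discretionary consumption.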

Critiquing the theory of disruptive innovation

Jill Lepore, a professor of history at Harvard and writer for the New Yorker, has written a critique of Clayton Christensen’s theory of disruptive innovation that is worth thinking through. Christensen’s The Innovator’s Dilemma (the dilemma being that firms keep making the same decisions that made them successful, which eventually leads to their downfall) has been incredibly influential since its 1997 publication, and it has moved the concept of disruptive innovation from its arcane Schumpeterian origins into modern business practice in a fast-changing technological environment. “Disrupt or be disrupted” and “innovate or die” have become corporate strategy maxims under the theory of disruptive innovation.

Lepore’s critique highlights the weaknesses of Christensen’s model (and it does have weaknesses, despite its success and prevalence in business culture). His historical analysis, his case-study methodology, and his choices of temporal cutoff points provide unsatisfyingly unsystematic support for his model, yet he argues that the theory of disruptive innovation is predictive and can be used with foresight to identify how firms can avoid failure. Lepore’s critique here is apt and worth considering.

Josh Gans weighs in on the Lepore article, and the theory of disruptive innovation more generally, by noting that at the core of the theory of disruptive innovation lies a new technology, and the appeal of that technology (or what it enables) to consumers:

But for every theory that reaches too far, there is a nugget of truth lurking at the centre. For Christensen, it was always clearer when we broke it down to its constituent parts as an economic theorist might (by the way, Christensen doesn’t like us economists but that is another matter). At the heart of the theory is a type of technology — a disruptive technology. In my mind, this is a technology that satisfies two criteria. First, it initially performs worse than existing technologies on precisely the dimensions that set the leading, for want of a better word, ‘metrics’ of the industry. So for disk drives, it might be capacity or performance even as new entrants promoted lower energy drives that were useful for laptops.

But that isn’t enough. You can’t actually ‘disrupt’ an industry with a technology that most consumers don’t like. There are many of those. To distinguish a disruptive technology from a mere bad idea or dead-end, you need a second criteria — the technology has a fast path of improvement on precisely those metrics the industry currently values. So your low powered drives get better performance and capacity. It is only then that the incumbents say ‘uh oh’ and are facing disruption that may be too late to deal with.

Herein lies the contradiction that Christensen has always faced. It is easy to tell if a technology is ‘potentially disruptive’ as it only has to satisfy criteria 1 — that it performs well on one thing but not on the ‘standard’ stuff. However, that is all you have to go on to make a prediction. Because the second criteria will only be determined in the future. And what is more, there has to be uncertainty over that prediction.

Josh has hit upon one of the most important dilemmas in innovation: if the new technology is to succeed against the old, it must satisfy the established value propositions of the incumbent technology as well as improve upon them in speed, quality, or differentiation. And that’s inherently unknown in advance; the incumbent can innovate too soon and suffer losses, or innovate too late and suffer losses. At this level, the theory does not help us distinguish and identify the factors that associate innovation with the continued success of the firm.

Both Lepore and Gans highlight Christensen’s desire for his theory to be predictive when it cannot be. Lepore summarizes the circularity that indicates this lack of a predictive hypothesis:

If an established company doesn’t disrupt, it will fail, and if it fails it must be because it didn’t disrupt. When a startup fails, that’s a success, since epidemic failure is a hallmark of disruptive innovation. … When an established company succeeds, that’s only because it hasn’t yet failed. And, when any of these things happen, all of them are only further evidence of disruption.

What Lepore brings to the party, in addition to a sharp mind and good analytical writing, is her background and sensibilities as an historian. A historical perspective on innovation helps balance some of the breathless enthusiasm for novelty often found in technology or business strategy writing. Her essay includes a discussion of how the concept of “innovation” has changed over several centuries (its connotations were largely negative before Schumpeter), and of how the Enlightenment’s theory of history as one of human progress has since morphed into different theories of history:

The eighteenth century embraced the idea of progress; the nineteenth century had evolution; the twentieth century had growth and then innovation. Our era has disruption, which, despite its futurism, is atavistic. It’s a theory of history founded on a profound anxiety about financial collapse, an apocalyptic fear of global devastation, and shaky evidence. …

The idea of innovation is the idea of progress stripped of the aspirations of the Enlightenment, scrubbed clean of the horrors of the twentieth century, and relieved of its critics. Disruptive innovation goes further, holding out the hope of salvation against the very damnation it describes: disrupt, and you will be saved.

I think there’s a lot to her interpretation (and I say that wearing both my historian hat and my technologist hat). But I think that both the Lepore and Gans critiques, and indeed Christensen’s theory of disruptive innovation itself, would benefit from (for lack of a catchier name) a Smithian-Austrian perspective on creativity, uncertainty, and innovation.

The Lepore and Gans critiques indicate, correctly, that supporting the disruptive innovation theory requires hindsight and historical analysis because we have to observe realized outcomes to identify the relationship between innovation and the success/failure of the firm. That concept of an unknown future rests mostly in the category of risk — if we identify that past relationship, we can generate a probability distribution or a Bayesian prior for the factors likely to lead to innovation yielding success.

But the genesis of innovation is in uncertainty, not risk; if truly disruptive, innovation may break those historical relationships (pace the Gans observation about having to satisfy the incumbent value propositions). And we won’t know whether that’s the case until after the innovators have unleashed the process. Some aspects of what leads to success or failure will indeed be unknowable. My epistemic/knowledge-problem take on the innovator’s dilemma is that both risk and uncertainty are at play in the dynamics of innovation, and they are hard to disentangle, both epistemologically and as a matter of strategy. Successful innovation arises from combining awareness of profit opportunities with action taken on them, alongside the disruption itself (the Schumpeter-Knight-Kirzner synthesis).

The genesis of innovation is also in our innate human creativity, and our channeling of that creativity into this thing we call innovation. I’d go back to the 18th century (and that Enlightenment notion of progress) and invoke both Adam Smith and David Hume to argue that innovation as an expression of human creativity is a natural consequence of our individual striving to make ourselves better off. Good market institutions using the signals of prices, profits, and losses align that individual striving with an incentive for creators to create goods and services that will benefit others, as indicated by their willingness to buy them rather than do other things with their resources.

By this model, we are inherent innovators, and successful innovation involves the combination of awareness, action, and disruption in the face of epistemic reality. Identifying that combination ex ante may be impossible. This is not a strategy model of why firms fail, but it does suggest that such strategy models should consider more than just disruption when trying to understand (or dare I say predict) future success or failure.

Price gouging in a second language

Research on differences between decisions made in a person’s native tongue and decisions made in a second language reminded me of an unexplored idea in the social dynamics surrounding price gouging.

I’ve devoted a few posts to the question of whether or not price gouging laws get applied in a discriminatory fashion against “outsiders,” primarily thinking of immigrants or cultural minorities. My evidence is slim, mostly the casual reading of a handful of news stories, but consider these prior posts and possible examples from Mississippi, New Jersey, and West Virginia.

In the New Jersey article I speculated it was possible that “outsiders” were more likely to engage in price gouging behaviors, and observed, “Social distance between buyers and sellers can work both ways.”

Some support for my speculation comes from communication research by Boaz Keysar of the University of Chicago, who has documented the view that, as the subtitle of an article in the journal Psychological Science puts it, “thinking in a foreign tongue reduces decision biases.” Part of the explanation Keysar and his coauthors offer is that a “foreign language provides greater cognitive and emotional distance than a native tongue does.” (The work was mentioned in a recent Freakonomics podcast.) An immigrant hotelier or retailer may not connect as emotionally as a native does with laws expressed in the native’s language, or with customers when transacting in that language. When exchange is seen as impersonal rather than personal, price-setters are less constrained in their pricing decisions.

Interestingly, Keysar is also coauthor on a study concluding that moral judgments are markedly more utilitarian when problems and responses are conducted in a second language. Economic analysis tends to support the view that “price gouging” in response to sudden shifts in demand is the correct utilitarian response (as flexible prices help goods and services move toward those who value them most).

The spin on wind, or, an example of bullshit in the field of energy policy

The Wall Street Journal recently opined against President Obama’s nominee for Federal Energy Regulatory Commission chairman, Norman Bay, and in the process took a modest swipe at subsidies for wind energy.

The context here is Bay’s record while leading FERC’s enforcement division, and in particular his prosecution of electric power market participants who managed to run afoul of FERC’s vague definition of market manipulation even though their trading behavior complied with all laws, regulations, and market rules.

So here the WSJ‘s editorial board pokes a little at subsidized wind in the process of making a point about reckless prosecutions:

As a thought experiment, consider the production tax credit for wind energy. In certain places at certain times, the subsidy is lucrative enough that wind generators make bids at negative prices: Instead of selling their product, they pay the market to drive prices below zero or “buy” electricity that would otherwise go unsold to qualify for the credit.

That strategy harms unsubsidized energy sources, distorts competition and may be an offense against taxpayers. But it isn’t a crime in the conventional legal sense because wind outfits are merely exploiting the subsidy in the open. The rational solution would be to end the subsidies that create negative bids, not to indict the wind farms. But for Mr. Bay, the same logic doesn’t apply to FERC.

The first quoted paragraph seems descriptive of reality and doesn’t cast wind energy in any negative light. The second quoted paragraph suggests the subsidy harms unsubsidized competitors, also plainly true, and that it “distorts competition” and “may be an offense against taxpayers.” These last two characterizations also strike me as fair descriptions of current public policy, and perhaps as mildly negative in tone.
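
To see the arithmetic behind those negative bids, here is a toy calculation in Python. The roughly $23 per MWh credit value is approximate, the example ignores a wind turbine’s small operating cost, and it is not an actual market bid; the point is simply that a subsidized generator comes out ahead whenever the market price stays above the negative of the credit.

```python
# A stylized illustration of why a subsidized wind generator can rationally
# offer at negative prices. The credit value is approximate, and the small
# marginal operating cost of a wind turbine is ignored.
PTC = 23.0  # $/MWh production tax credit, earned only on MWh actually generated and sold

def wind_net_revenue_per_mwh(market_price):
    """Net revenue per MWh generated: the (possibly negative) market price plus the credit."""
    return market_price + PTC

print(wind_net_revenue_per_mwh(-10.0))   # 13.0: still profitable to sell at -$10/MWh
print(wind_net_revenue_per_mwh(-25.0))   # -2.0: only below about -$23/MWh does generating lose money
```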

Of course folks at the wind industry’s lobby shop are eager to challenge any little perceived slight, so the AWEA’s Michael Goggin sent a letter to the editor:

Your editorial “Electric Prosecutor Acid Test” (May 19) ignores wind energy’s real consumer benefits by mentioning the red herring of negative electricity prices. Negative prices are extremely rare and are usually highly localized in remote areas where they have little to no impact on other power plants, are caused by inflexible nuclear power plants much of the time, and are being eliminated as long-needed grid upgrades are completed.

Wind energy’s real impact is saving consumers money by displacing more expensive forms of energy, which is precisely why utilities bought wind in the first place. This impact is entirely market-driven, occurs with or without the tax credit, and applies to all low-fuel-cost sources of energy, including nuclear.

The tax relief provided to wind energy more than pays for itself by enabling economic development that generates additional tax revenue and represents a small fraction of the cumulative incentives given to other energy sources.

Michael Goggin
American Wind Energy Association
Washington, DC

Let’s just say I’ll believe that the “impact is entirely market-driven” when someone produces a convincing study showing that the exact same wind energy capacity build-out would have happened over the last 20 years in the absence of the U.S. federal Production Tax Credit and state renewable energy purchase mandates. Without the tax credit, the wind energy industry likely would be (I’m guessing) less than one-tenth of its current size, and without a big tax credit it wouldn’t be the target of much public policy debate.

Of course, without much public policy debate, the wind energy industry wouldn’t need to hire so many lobbyists. Hence the AWEA’s urge to jump on any perceived slight, stir the pot, and keep debate going.

MORE on the lobbying against the Bay nomination. See also this WSJ op-ed.


Did ERCOT’s shift from zonal to nodal market design reduce electric power prices?

Jay Zarnikau, C.K. Woo, and Ross Baldick have examined whether the shift from a zonal to nodal market design in the ERCOT power market had a noticeable effect on electric energy prices. The resulting article, published in the Journal of Regulatory Economics, and this post may be a bit geekier than we usually get around here. I’ll try to tone it down and explain the ERCOT change and the effect on prices as clearly as I can.

The topic is important because the shift from zonal to nodal market structure was controversial, complicated, expensive, and took longer than expected. Problems had emerged shortly after launch of the initial zonal market, and the nodal approach was offered as a solution. Some market participants had their doubts, but rather quickly ERCOT began the move to a nodal design. Note that phrasing: “rather quickly ERCOT began the move.” It took several years for ERCOT to actually complete the process.

In part the shift was promoted as a more efficient way to run the market. Zarnikau, Woo, and Baldick looked at the effect on prices as one way to assess whether or not the resulting market has worked more efficiently. They conclude energy prices are about 2 percent lower because of the nodal market design.

Don’t get hung up on the 2 percent number itself, but think of the shift as having a modest downward pressure on prices.

The result is consistent with an understanding one would gain from the study of power systems engineering as well as with what power system simulations showed. The point of the Zarnikau et al. study was to investigate whether data analysis after the fact supported expectations offered by theory and simulation. Because there is no better empirical study (so far as I am aware) and because their results are consistent with well-founded expectations, I have no reason to doubt their result. I will contest one interpretation they offer concerning the current resource adequacy debate in Texas.

Some background (which beginners should read and others can skip).

The delivery of electric energy to consumers is a joint effort between the generators that create the power and the wires that bring it to the consumer. The wires portion of the system is not a set of simple links between generators and consumers, but rather a complicated network in which consumers and generators are connected in multiple ways. The added flexibility that comes with networking helps the system work more reliably and at lower cost.

The network comes with a big coordination problem, too. Power flows on the network are not individually controllable. With many generators producing power for many consumers, parts of the power grid may become overloaded. One key job of the power system operator is to watch the power flows on the electric grid and intervene as needed to prevent a transmission line from being overloaded. The intervention generally takes the form of directing a generator (or generators) contributing to the potential overload to reduce output and directing other generators to increase output. In areas outside the footprints of regional system operators, this function is performed on a piecemeal basis as problems arise. A significant benefit of full-scale regional power markets integrated with system operations (such as ERCOT in Texas after the switch to a nodal market, and other similar ISO/RTO markets) is that such coordination can be done in advance, with more information, mostly automatically, and more efficiently than piecemeal adjustments.

Described in simpler terms, the regional power system operator helps generators and consumers coordinate use of the power grid in the effort to efficiently satisfy consumer demands for electric energy. A zonal market design, like ERCOT started with, did minimal advance coordination. The nodal market design and related changes implemented by ERCOT allowed the market to do more sophisticated and efficient coordination of grid use.
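
As a concrete, heavily simplified illustration of the coordination a nodal market automates, here is a sketch in Python of least-cost dispatch on a made-up two-bus system with a single transmission limit. The generator offers, load, and line limit are invented, and this is nothing like ERCOT’s actual market model; it only shows how a binding line constraint forces redispatch and separates prices by location.

```python
# Least-cost dispatch on a toy two-bus system with one transmission limit.
# All numbers are invented for illustration.
from scipy.optimize import linprog

offers = [20.0, 50.0]    # $/MWh: cheap generator at bus 1, pricier generator at bus 2
load_at_bus2 = 100.0     # MW of demand, all located at bus 2
line_limit = 80.0        # MW limit on the single line from bus 1 to bus 2

# Decision variables x = [g1, g2], the MW output of each generator.
result = linprog(
    c=offers,                                   # minimize total offer cost
    A_ub=[[1.0, 0.0]], b_ub=[line_limit],       # g1 <= line limit (all of g1 flows over the line)
    A_eq=[[1.0, 1.0]], b_eq=[load_at_bus2],     # g1 + g2 = load (energy balance)
    bounds=[(0, None), (0, None)],
)

g1, g2 = result.x
print(f"g1 = {g1:.0f} MW, g2 = {g2:.0f} MW")    # 80 MW from the cheap unit, 20 MW from the expensive one
# With the line at its limit, serving one more MW at bus 2 costs $50 (the
# expensive offer) while the marginal cost at bus 1 stays at $20: the locational
# price separation that a nodal design makes explicit and a single zonal price hides.
```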

About data challenges.

In order to assess the effects on prices, the authors couldn’t simply average prices before and after the December 1, 2010 change in the market. The power system is a dynamic thing, and many other factors known to affect electric power prices changed between the two periods. Most significantly, natural gas prices were much lower on average after the market change than during the years before. Other changes include growing consumer load, higher offer caps, and increasing amounts of wind energy capacity. In addition, the way the system generates prices has itself changed, making simple before-and-after comparisons insufficient. For example, rather than four zonal prices produced every 15 minutes, the nodal market yields thousands of prices every 5 minutes.

One potentially significant data-related decision was the choice to omit “outliers,” prices that were substantially higher or lower than usual. The authors explain that extreme price spikes were much more frequent in 2011, after the change, but largely because the summer of 2011 was among the hottest on record. At the same time, the offer caps had been increased, so prices spiked higher than they could have before, but not because of the zonal-to-nodal market shift. Omitting outliers reduces the impact of these otherwise confounding changes and should produce a better sense of the effect of the market change during more normal conditions.
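
For readers who want to see the mechanics, here is a stylized sketch in Python of this kind of before/after regression with control variables and outlier trimming. It runs on synthetic data and is not the authors’ actual specification; the variable names and coefficient values are invented for illustration.

```python
# A stylized before/after price regression with controls, on synthetic data.
# Not the authors' specification; all numbers are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
gas_price = rng.uniform(2.0, 6.0, n)              # $/MMBtu
load = rng.uniform(25_000, 65_000, n)             # MW
wind = rng.uniform(0, 9_000, n)                   # MW
nodal = (np.arange(n) > n // 2).astype(int)       # 1 after the December 1, 2010 switch
# Build a synthetic price series with a 2% nodal effect baked in.
price = 10 * np.exp(0.9 * np.log(gas_price) + 0.00002 * load - 0.00001 * wind
                    - 0.02 * nodal + rng.normal(0, 0.1, n))
df = pd.DataFrame(dict(price=price, gas_price=gas_price, load=load,
                       wind=wind, nodal=nodal))

# Trim extreme "outlier" prices so that scarcity spikes (e.g., the hot summer
# of 2011 under higher offer caps) do not swamp the estimate of the design change.
lo, hi = df["price"].quantile([0.01, 0.99])
trimmed = df[(df["price"] >= lo) & (df["price"] <= hi)]

# Regress log price on log gas price, load, wind, and the nodal dummy; the
# dummy's coefficient approximates the percentage effect of the market switch.
model = smf.ols("np.log(price) ~ np.log(gas_price) + load + wind + nodal",
                data=trimmed).fit()
print(model.params["nodal"])   # close to -0.02, i.e., roughly 2 percent lower prices
```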

Their conclusion and a mistaken interpretation.

Zarnikau, Woo, and Baldick conducted their price analysis on four ERCOT sub-regions separately, to see whether the changeover had differing impacts across regions. The West zone stood out in the analysis, largely because that zone has seen the most significant other changes in the power system. The two main changes: continued sizable wind energy capacity additions in the zone and, more substantially, dramatic electrical load growth due to the recent oil and gas drilling boom in west Texas. Because the West results were a bit flaky, they based their conclusions on results from the other three zones. Across a number of minor variations in specifications, the authors found a price suppression effect ranging from 1.3 to 3.3 percent, the load-weighted average of which is right around 2 percent.

So far, so good.

But next they offered what is surely a misinterpretation of their results. They wrote:

[T]he reduction in wholesale prices from the implementation of the nodal market might be viewed by some as a concern. In recent years, low natural gas prices and increased wind farm generation have also reduced electricity prices in ERCOT which has, in turn, impaired the economics of power plant construction. … It appears as though the nodal market’s design may have contributed to the drop in prices that the PUCT has now sought to reverse.

Strictly speaking, the goal of the Public Utility Commission of Texas hasn’t been to reverse the drop in prices, but to ensure sufficient investment in supply resources to reliably meet projected future demand. Lower prices appear to offer smaller investment incentives than higher prices, but there is a subtle factor in play.

The real incentive for investment isn’t higher prices, it is higher profits. Remember, one of the most important reasons to make the switch from a zonal to a nodal market is that the nodal market is supposed to operate more efficiently. Zarnikau, Woo, and Baldick notice that marginal heat rates declined after the shift, evidence consistent with more efficient operations. The efficiency gain suggests generators are operating at an overall lower cost, which means that even with lower prices, generator profits could be higher now than they otherwise would have been. It all depends on whether the drop in cost was larger or smaller than the drop in prices.
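
A back-of-the-envelope illustration, with invented numbers, of how a lower energy price and a lower realized cost (represented crudely here as a better average heat rate) can net out to a higher margin:

```python
# Invented numbers: a gas plant's per-MWh margin can rise even as energy
# prices fall, if more efficient dispatch also lowers its realized fuel cost.
GAS_PRICE = 4.0  # $/MMBtu

def gross_margin(power_price, heat_rate):
    """Gross margin in $/MWh: energy price minus fuel cost (heat rate x gas price)."""
    return power_price - heat_rate * GAS_PRICE

zonal_era = gross_margin(power_price=40.0, heat_rate=9.0)   # 40 - 36 = $4/MWh
nodal_era = gross_margin(power_price=39.2, heat_rate=8.5)   # about 2% lower price, better heat rate
print(zonal_era, nodal_era)   # 4.0 vs. 5.2: a higher margin despite the lower price
```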

The cost and profit changes will differ somewhat across generators depending on where they are located, what fuel they use, and how they typically operate. I’ll hazard the guess that relatively efficient natural gas plants have seen their profits increase a bit, whereas less efficient gas plants, nuclear plants, and coal plants have likely seen profits fall a little.

FULL CITE: Zarnikau, J., C. K. Woo, and R. Baldick. “Did the introduction of a nodal market structure impact wholesale electricity prices in the Texas (ERCOT) market?” Journal of Regulatory Economics 45.2 (2014): 194-208.

Here is a link to a non-gated preliminary version if you don’t have direct access to the Journal of Regulatory Economics.

AN ASIDE: One modest irony out of Texas is that the multi-billion-dollar CREZ transmission line expansion, mostly intended to support delivery of wind energy from West Texas into the rest of the state, has turned out to be used more to support the import of power from elsewhere in the state to meet the demands of a rapidly growing Permian Basin-based oil and gas industry.

Court says no to FERC’s negawatt payment rule

Jeremy Jacobs and Hannah Northey at Greenwire report “Appeals court throws out FERC’s demand-response order“:

A federal appeals court today threw out a high-profile Federal Energy Regulatory Commission order that provided incentives for electricity users to consume less power, a practice dubbed demand response.

In a divided ruling, the U.S. Court of Appeals for the District of Columbia Circuit struck a blow to the Obama administration’s energy efficiency efforts, vacating a 2011 FERC order requiring grid operators to pay customers and demand-response providers the market value of unused electricity.

Among environmentalists, this demand-response-enabled “unused electricity” is sometimes described as negawatts. FERC’s rule required FERC-regulated wholesale electric power markets to pay demand-response providers the full market price of electricity. That is, of course, economic nonsense, pursued in the effort to boost demand response programs in FERC-regulated markets.

The court held that FERC significantly overstepped the commission’s authority under the Federal Power Act.

The Federal Power Act assigns most regulatory authority over retail electricity prices to the states, and the court said FERC’s demand response pricing rule interfered with state regulators’ authority.

Personally, I would have dinged FERC’s rule for economic stupidity, but maybe that isn’t the court’s job. Actually, I did ding FERC’s rule for its economic stupidity: I was one of twenty economists joining in an amicus brief in the case arguing that the FERC pricing rule didn’t make sense. The court’s decision gave our brief a nod:

Although we need not delve now into the dispute among experts, see, e.g., Br. of Leading Economists as Amicus Curiae in Support of Pet’rs, the potential windfall  to demand response resources seems troubling, and the Commissioner’s concerns are certainly valid.  Indeed, “overcompensation cannot be just and reasonable,” Order 745-A, 2011 WL 6523756, at *38 (Moeller, dissenting), and the Commission has not adequately explained how their system results in just compensation.
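
To put rough, invented numbers on that overcompensation point (a stylized sketch of the pay-the-market-price-minus-the-retail-rate argument many economists have made, not a restatement of the brief):

```python
# A customer who curtails already avoids paying its retail energy rate, so
# paying the full wholesale price on top of that saving rewards curtailment
# by more than the curtailed energy is worth. All numbers are invented.
wholesale_price = 100.0   # $/MWh market price at the time of curtailment
retail_rate = 60.0        # $/MWh the customer avoids paying by not consuming

avoided_purchase_value = wholesale_price            # what the system no longer has to buy
order_745_payment = wholesale_price                 # the rule: pay demand response the full market price
customer_gain = order_745_payment + retail_rate     # payment plus the avoided retail bill
print(customer_gain - avoided_purchase_value)       # 60.0: $60/MWh of overcompensation
```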

But if this negawatt-market-price idea survives the appeals court rejection and takes off in the energy policy arena, I have the following idea: I’d really like a Tesla automobile, but the current price indicates that Teslas are in high demand, so I’m not going to buy one today. Okay, now who is going to pay me $90,000 for the nega-Tesla I just made?
