Energy poverty and clean technology

For the past three years, I’ve team-taught a class that’s part of our Institute for Sustainability and Energy at Northwestern (ISEN) curriculum. It’s an introductory class, primarily focused on ethics and philosophy. One of my earth science colleagues kicks us off with the carbon cycle, the evidence for anthropogenic global warming, and interpretations of that evidence. Then one of my philosophy colleagues presents moral theories that we can use to think about the morality of our relationship with nature, environmental ethics, moral obligations to future generations, and so on. Consequentialism, Kantian ethics, virtue ethics. I learn so much from my colleagues every time!

Then I, the social scientist, come in and throw cold water on everyone’s utopias and dystopias — “no, really, this is how people are actually going to behave, and these are the likely outcomes we’ll see from political processes.” I cover basic economic principles (scarcity, opportunity cost, tradeoffs, incentives, property rights, intertemporal substitution, discounting), tie them in with their philosophical foundations, and then use them to generate an economic analysis of politics (i.e., public choice). We finish up with a discussion of technological dynamism and the role that human creativity and innovation can play in making economic well-being and environmental sustainability more aligned and harmonious.

Energy poverty emerges as an overarching theme in the course — long-term environmental sustainability is an important issue to bear in mind when we think about consumption, investment, and innovation actions we take in the near term … but so are living standards, human health, and longevity. If people in developing countries have the basic human right to the liberty to flourish and to improve their living standards, then energy use is part of that process.

Thus when I saw this post from Bill Gates on the Gates Foundation blog, it caught my attention, particularly where he says succinctly that

But even as we push to get serious about confronting climate change, we should not try to solve the problem on the backs of the poor. For one thing, poor countries represent a small part of the carbon-emissions problem. And they desperately need cheap sources of energy now to fuel the economic growth that lifts families out of poverty. They can’t afford today’s expensive clean energy solutions, and we can’t expect them to wait for the technology to get cheaper.

Instead of putting constraints on poor countries that will hold back their ability to fight poverty, we should be investing dramatically more money in R&D to make fossil fuels cleaner and make clean energy cheaper than any fossil fuel.


In it Gates highlights two short videos from Bjorn Lomborg that emphasize two things: enabling people to get out of poverty using inexpensive natural gas rather than expensive renewables will improve the lives of many millions more people, and innovation and new ideas are the processes through which we will drive down the costs of currently expensive clean energy. The first video makes the R&D claim and offers some useful data for contextualizing the extent of energy poverty in Africa. The second video points out that 3 billion people burn dung and twigs inside their homes as fuel sources, and that access to modern energy (i.e., electricity) would improve their health conditions.

The post and videos are worth your time. I would add one logical step in the chain to make the economics-sustainability alignment point even more explicit — the argument that environmental quality is a normal good, and that as people leave poverty and their incomes rise, at the margin they will shift toward consumption bundles that include more environmental quality. At lower income levels, rising incomes may still bring incrementally more emissions (offset by the reduction in emissions from dung fires in the home), but if environmental quality is a normal good, consumption bundles will shift as incomes continue to rise. If you know the economics literature on the environmental Kuznets curve (EKC), this argument sounds familiar. One of the best summary articles on the EKC is David Stern (2004), who shows that there is little statistical evidence for a simple EKC, although with more nuanced models and better statistical techniques we may be able to decompose the separate effects.
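To make the shape of that argument concrete, the reduced-form specification commonly estimated in the EKC literature looks like the following. This is a generic textbook form, not Stern’s exact model, and the notation is mine:

```latex
% Reduced-form EKC specification (a common textbook form, not Stern's exact model).
% e_{it}: per-capita emissions, y_{it}: per-capita income, for country i in year t;
% \alpha_i and \gamma_t are country and time effects.
\begin{equation*}
  \ln e_{it} = \alpha_i + \gamma_t + \beta_1 \ln y_{it} + \beta_2 \left(\ln y_{it}\right)^2 + \varepsilon_{it}
\end{equation*}
% The inverted-U ("emissions rise, then fall, as incomes grow") requires \beta_1 > 0 and
% \beta_2 < 0, with a turning point at \ln y^* = -\beta_1 / (2\beta_2).
```

Stern’s point, as I read him, is that estimates of this simple form are statistically fragile, which is why the more nuanced decompositions matter.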

Gates is paying more attention to energy because he thinks the anti-poverty agenda should include a focus on affordable energy, and energy that’s cleaner than what’s currently being used indoors for cooking in many places.

“Grid defection” and the regulated utility business model

The conversations about the “utility death spiral” to which I alluded in my recent post have included discussion of the potential for “grid defection”. Grid defection is an important phenomenon in any network industry — what happens when you use scarce resources to build a network that provides value for consumers, and then over time, with innovation and dynamism, they find alternative ways of capturing that value (and/or more or different value)? Whether it’s a public transportation network, a wired telecommunications network, a water and sewer network, or a wired electricity distribution network, consumers can and do exit when they perceive the alternatives available to them as more valuable than the network alternative. Of course, those four cases differ because of differences in transaction costs and regulatory institutions — making exit from a public transportation network illegal (i.e., making private transportation illegal) is much less likely, and less valuable, than making private water supply in a municipality illegal. But two of the common elements across these four infrastructure industries are interesting: the high-fixed-cost nature of the network infrastructure and the resulting economies of scale, and the potential for innovation and technological change to alter the relative value of the network.

The first common element in network industries is the high fixed costs associated with constructing and maintaining the network, and the associated economies of scale typically found in such industries. This cost structure has long been the justification for either economic regulation or municipal supply in the industry — the cheapest per-unit way to provide large quantities is to have one provider and not to build duplicate networks, and to stipulate product quality and degrees of infrastructure redundancy to provide reliable service at the lowest feasible cost.

What does that entail? Cost-based regulation: spreading those fixed costs out over as many consumers as possible to keep the product’s regulated price as low as feasible. If consumers can be categorized into different customer classes, and if for economic or political reasons the utility and/or the regulator has an incentive to keep prices low for one class (say, residential customers), then other classes may bear a larger share of the fixed costs than they would if, for example, the fixed costs were allocated according to each class’s share of the volume of network use (this is called cross-subsidization). Cost-based regulation has been the typical regulatory approach in these industries, and cross-subsidization has been a characteristic of regulated rate structures. The classic reference for this analysis is Faulhaber’s 1975 American Economic Review article.
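A minimal numerical sketch may make the cross-subsidization mechanics concrete. All of the numbers, class names, and allocation shares below are invented for illustration; they do not come from any actual rate case:

```python
# Hypothetical illustration of fixed-cost allocation and cross-subsidization.
# All numbers are invented; they are not drawn from any actual utility or rate case.

fixed_cost = 1_000_000          # annual network fixed cost to recover ($)
variable_cost_per_mwh = 30      # per-unit cost of delivered energy ($/MWh)

# Two customer classes and their annual volumes (MWh)
volumes = {"residential": 40_000, "commercial": 60_000}

def prices(fixed_share):
    """Per-MWh price for each class, given each class's share of the fixed cost."""
    return {
        c: variable_cost_per_mwh + fixed_share[c] * fixed_cost / volumes[c]
        for c in volumes
    }

# Allocation 1: fixed costs allocated in proportion to volume (40% / 60%)
by_volume = prices({"residential": 0.4, "commercial": 0.6})

# Allocation 2: regulator keeps residential prices low; commercial bears 80% of fixed costs
skewed = prices({"residential": 0.2, "commercial": 0.8})

print(by_volume)   # both classes pay $40/MWh
print(skewed)      # residential $35/MWh, commercial ~$43.33/MWh: a cross-subsidy
```

The skewed allocation is exactly the kind of rate structure Faulhaber’s analysis examines: one class pays less than it would under a volume-based allocation because another class pays more.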

Both in theory and in practice these institutions can work as long as the technological environment is static. But the technological environment is anything but static; it has had periods of stability but has always been dynamic, and that dynamism is the foundation of the increase in living standards over the past three centuries. Technological dynamism creates new alternatives to the existing network industry. We have seen this happen in the past two decades with mobile communications eroding the value of wired communications at a rapid rate, and that history animates the concern in electricity that distributed generation will make the distribution network less valuable and will disintermediate the regulated distribution utility, the wires owner, which relies on the distribution transaction for its revenue. The utility also traditionally relies on the ability to cross-subsidize across different types of customers, by charging different portions of those fixed costs to different types of customers — a pricing practice that mobile telephony also made obsolete in the communications market.

Alternatives to the network grid may have higher value to consumers in their estimation (never forget that value is subjective), and they may be willing to pay more to achieve that value. This is why most of us now pay more per month for communications services than we did pre-1984 in our monthly phone bill. As customers leave the traditional network to capture that value, though, those network fixed costs are now spread over fewer network customers. That’s the Achilles heel of cost-based regulation. And that’s a big part of what drives the “death spiral” concern — if customers increasingly self-generate and leave the network, who will pay the fixed costs? This question has traditionally been the justification for regulators approving utility standby charges, so that if a customer self-generates and has a failure, that customer can connect to the grid and get electricity. Set those rates too high, and distributed generation’s economic value falls; set those rates too low, and the distribution utility may not cover the incremental costs of serving that customer. That range can be large.
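The feedback loop behind that “who will pay the fixed costs?” question is easy to sketch. The following is a deliberately stylized illustration, not a forecast; the fixed cost, the defection threshold, and the behavioral rule are all assumptions I’ve made up for the example:

```python
# Stylized "death spiral" feedback: as customers leave, the per-customer fixed charge
# rises, which induces more customers to leave. All numbers are invented for illustration.

fixed_cost = 200_000_000        # annual network fixed cost to recover ($)
customers = 1_000_000           # customers currently on the grid
defection_threshold = 150.0     # assumed annual fixed charge ($) at which defection starts
charge_sensitivity = 0.002      # assumed fraction of customers leaving per $1 above threshold

for year in range(1, 6):
    fixed_charge = fixed_cost / customers
    # Assumed behavioral rule: defection rises with the gap between the charge and threshold
    defection_rate = max(0.0, charge_sensitivity * (fixed_charge - defection_threshold))
    leavers = int(customers * min(defection_rate, 1.0))
    print(f"Year {year}: charge ${fixed_charge:,.2f}/customer, {leavers:,} customers defect")
    customers -= leavers
```

Running the loop shows the charge rising and the defections accelerating each year; whether anything like that happens in practice depends on how customers actually respond to rates, which is exactly what the standby-charge debate is about.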

This is not a new conversation in the industry or among policy makers and academics. In fact, here’s a 2003 Electricity Journal article arguing against standby charges by friend-of-KP Sean Casten, who works in recycled energy and combined heat and power (CHP). In 2002 I presented a paper at the International Association for Energy Economics annual meetings in which I argued that distributed generation and storage would make the distribution network contestable, and after the Northeast blackout in 2003, Reason released a version of the paper as a policy study. One typical static argument for a single, regulated wires network is to eliminate costly duplication of infrastructure in the presence of economies of scale. But my argument is dynamic: innovation and technological change that competes with the wires network need not take the form of duplicate wires, and DG+storage is an example of innovation that makes a wires network contestable.

Another older conversation that is new again was the DISCO of the Future Forum, hosted over a year or so in 2001-2002 by the Center for the Advancement of Energy Markets. I participated in this forum, in which industry, regulators, and researchers worked together to “game out” different scenarios for the distribution company business model in the context of competitive wholesale and retail markets. This 2002 Electric Light & Power article summarizes the effort and the ultimate report; note in particular this description of the forum from Jamie Wimberly, then-CAEM president (and now CEO of EcoAlign):

“The primary purpose of the forum was to thoroughly examine the issues and challenges facing distribution companies and to make consensus-based recommendations that work to ensure healthy companies and happy customers in the future,” he said. “There is no question much more needs to be discussed and debated, particularly the role of the regulated utility in the provision of new product offerings and services.”

Technological dynamism is starting to make the distribution network contestable. Now what?

The “utility death spiral”: The utility as a regulatory creation

Unless you follow the electricity industry you may not be aware of the past year’s discussion of the impending “utility death spiral”, ably summarized in this Clean Energy Group post:

There have been several reports out recently predicting that solar + storage systems will soon reach cost parity with grid-purchased electricity, thus presenting the first serious challenge to the centralized utility model.  Customers, the theory goes, will soon be able to cut the cord that has bound them to traditional utilities, opting instead to self-generate using cheap PV, with batteries to regulate the intermittent output and carry them through cloudy spells.  The plummeting cost of solar panels, plus the imminent increased production and decreased cost of electric vehicle batteries that can be used in stationary applications, have combined to create a technological perfect storm. As grid power costs rise and self-generation costs fall, a tipping point will arrive – within a decade, some analysts are predicting – at which time, it will become economically advantageous for millions of Americans to generate their own power.  The “death spiral” for utilities occurs because the more people self-generate, the more utilities will be forced to seek rate increases on a shrinking rate base… thus driving even more customers off the grid.

A January 2013 analysis from the Edison Electric Institute, Disruptive Challenges: Financial Implications and Strategic Responses to a Changing Retail Electric Business, precipitated this conversation. Focusing on the financial market implications for regulated utilities of distributed energy resources (DER) and technology-enabled demand-side management (an archaic term that I dislike intensely), or DSM, the report notes that:

The financial risks created by disruptive challenges include declining utility revenues, increasing costs, and lower profitability potential, particularly over the long term. As DER and DSM programs continue to capture “market share,” for example, utility revenues will be reduced. Adding the higher costs to integrate DER, increasing subsidies for DSM and direct metering of DER will result in the potential for a squeeze on profitability and, thus, credit metrics. While the regulatory process is expected to allow for recovery of lost revenues in future rate cases, tariff structures in most states call for non-DER customers to pay for (or absorb) lost revenues. As DER penetration increases, this is a cost recovery structure that will lead to political pressure to undo these cross subsidies and may result in utility stranded cost exposure.

I think the apocalyptic “death spiral” rhetoric is overblown and exaggerated, but this is a worthwhile, and perhaps overdue, conversation to have. As it has unfolded over the past year, though, I do think that some of the more essential questions on the topic are not being asked. Over the next few weeks I’m going to explore some of those questions, as I dive into a related new research project.

The theoretical argument for the possibility of a death spiral is straightforward. The vertically-integrated, regulated distribution utility is a regulatory creation, intended to enable a financially sustainable business model for providing reliable basic electricity service to the largest possible number of customers for the least feasible cost, taking account of the economies of scale and scope resulting from the electro-mechanical generation and wires technologies implemented in the early 20th century. From a theoretical/benevolent social planner perspective, the objective is, given a market demand for a specific good/service, to minimize the total cost of providing that good/service subject to a zero economic profit constraint for the firm; this will lead to the highest feasible combination of output and total surplus (and the lowest deadweight loss) consistent with the financial sustainability of the firm.
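Stated in standard textbook notation (my formulation, not a quote from any particular source), the planner’s problem is:

```latex
% Benevolent-planner formulation of the regulated-monopoly problem (standard textbook form).
% P(Q): inverse market demand, C(Q): total cost of serving output Q.
\begin{equation*}
  \max_{Q} \; \int_0^{Q} P(x)\,dx \; - \; C(Q)
  \qquad \text{subject to} \qquad P(Q)\,Q - C(Q) \ge 0 ,
\end{equation*}
% i.e., choose the largest output (and smallest deadweight loss) consistent with the firm
% breaking even. With economies of scale, marginal cost lies below average cost, so the
% break-even constraint binds and price is set at average cost rather than marginal cost.
```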

The regulatory mechanism for implementing this model to achieve this objective is to erect a legal entry barrier into the market for that specific good/service, and to assure the regulated monopolist cost recovery, including its opportunity cost of capital, otherwise known as rate-of-return regulation. In return, the regulated monopolist commits to serve all customers reliably through its vertically-integrated generation, transmission, distribution, and retail functions. The monopolist’s costs and opportunity cost of capital determine its revenue requirement, out of which we can derive flat, averaged retail prices that forecasts suggest will enable the monopolist to earn that amount of revenue.
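The rate-of-return arithmetic in that description can be written compactly. This is the standard textbook form; the symbols are introduced here only for illustration:

```latex
% Standard rate-of-return / revenue-requirement arithmetic (textbook form; symbols illustrative).
% RR: revenue requirement, E: operating expenses, d: depreciation, T: taxes,
% s: allowed rate of return, RB: rate base (net invested capital), \hat{Q}: forecast sales.
\begin{align*}
  RR      &= E + d + T + s \cdot RB \\[2pt]
  \bar{p} &= \frac{RR}{\hat{Q}}
\end{align*}
% \bar{p} is the flat, averaged retail price: if sales come in at the forecast \hat{Q},
% revenues just cover costs plus the allowed return on invested capital.
```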

That’s the regulatory model + business model that has existed with little substantive evolution since the early 20th century, and it did achieve the social policy objectives of the 20th century — widespread electrification and low, stable prices, which have enabled follow-on economic growth and well-distributed increased living standards. It’s a regulatory+business model, though, that is premised on a few things:

  1. Defining a market by defining the characteristics of the product/service sold in that market, in this case electricity with a particular physical (volts, amps, hertz) definition and a particular reliability level (paraphrasing Fred Kahn …)
  2. The economies of scale (those big central generators and big wires) and economies of scope (lower total cost when producing two or more products together compared to producing them separately; see the formal definitions after this list) that exist due to large-scale electro-mechanical technologies
  3. The architectural implications of connecting large-scale electro-mechanical technologies together in a network via a set of centralized control nodes — technology -> architecture -> market environment, and in this case large-scale electro-mechanical technologies -> distributed wires network with centralized control points rather than distributed control points throughout the network, including the edge of the network (paraphrasing Larry Lessig …)
  4. The financial implications of having invested so many resources in long-lived physical assets to create that network and its control nodes — if demand is growing at a stable rate, and regulators can assure cost recovery, then the regulated monopolist can arrange financing for investments at attractive interest rates, as long as this arrangement is likely to be stable for the 30-to-40-year life of the assets
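The formal definitions behind item 2, in standard textbook notation (introduced here for illustration), are:

```latex
% Standard definitions of economies of scale and scope (textbook notation, for illustration).
% C(\cdot) is the firm's cost function; q, q_1, q_2 are output quantities.
\begin{align*}
  \text{Economies of scale:} \quad & C(\lambda q) < \lambda\, C(q) \quad \text{for } \lambda > 1 \\
  \text{Economies of scope:} \quad & C(q_1, q_2) < C(q_1, 0) + C(0, q_2)
\end{align*}
% Scale: average cost falls as output expands. Scope: producing generation, transmission,
% distribution, and retail together is cheaper than producing them in separate firms.
```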

As long as those conditions are stable, regulatory cost recovery will sustain this business model. And that’s precisely the effect of smart grid technologies, distributed generation technologies, microgrid technologies — they violate one or more of those four premises, and can make it not just feasible, but actually beneficial for customers to change their behavior in ways that reduce the regulation-supported revenue of the regulated monopolist.

Digital technologies that enable greater consumer control and more choice of products and services break down the regulatory market boundaries that are required to regulate product quality. Generation innovations, from the combined-cycle gas turbine of the 1980s to small-scale Stirling engines, reduce the economies of scale that have driven the regulation of and investment in the industry for over a century. Wires networks with centralized control built to capitalize on those large-scale technologies may have less value in an environment with smaller-scale generation and digital, automated detection, response, and control. But those generation and wires assets are long-lived, and in a cost-recovery-based business model, have to be paid for even if they become the destruction in creative destruction. We saw that happen in the restructuring that occurred in the 1990s, with the liberalization of wholesale power markets and the unbundling of generation from the vertically-integrated monopolists in those states; part of the political bargain in restructuring was to compensate them for the “stranded costs” associated with having made those investments based on a regulatory commitment that they would receive cost recovery on them.

Thus the death spiral rhetoric, and the concern that the existing utility business model will not survive. But if my framing of the situation is accurate, then what we should be examining in more detail is the regulatory model, since the utility business model is itself a regulatory creation. This relationship between digital innovation (encompassing smart grid, distributed resources, and microgrids) and regulation is what I’m exploring. How should the regulatory model and the associated utility business model change in light of digital innovation?

Online Library of Liberty forum on McCloskey’s Bourgeois Era

At its Online Library of Liberty, Liberty Fund hosts a monthly “Liberty Matters” forum in which a set of scholars discusses a particular set of ideas. This month’s forum features Deirdre McCloskey‘s Bourgeois Era series of books, two of which have been published (The Bourgeois Virtues, Bourgeois Dignity). McCloskey’s main argument is that the various material and institutional factors that we’ve hypothesized as the causes of industrialization and the dramatic increase in living standards are insufficient for explaining why it happened when, where, and how it did — in northern Europe, particularly Britain and the Netherlands, accelerating in the 18th century from previous foundations there. The most important factor, according to McCloskey, was ideas, particularly the cultural acceptance of commerce, trade, and mercantile activity as honorable.

The forum features a lead essay from Don Boudreaux, commentary essays from Joel Mokyr and John Nye, and responses from McCloskey and the other authors. The forum will continue for the rest of the month, with further commentary certain to follow.

If you want an opportunity to think about one of the most important intellectual questions of economics, here it is. The essays, responses, and interactions are an encapsulation of a lively and important debate in economic history over the past two decades. And if you want to dig more deeply, the bibliography and the references in each essay are a reading list for a solid course in economic history. These ideas affect not only our understanding of economic history and the history of industrialization, but also how ideas and attitudes affect economic activity and living standards today. Well worth your time and consideration.

The political economy of Uber’s multi-dimensional creative destruction

Over the past week it’s been hard to keep up with the news about Uber. Uber’s creative destruction is rapid, and occurring on multiple dimensions in different places. And while the focus right now is on Uber’s disruption in the shared transportation market, I suspect that more disruption will arise in other markets too.

Start with two facts from this Wired article from last week by Marcus Wohlsen: Uber has just completed a funding round that raised an additional $1.2 billion, and last week it announced lower UberX fares in San Francisco, New York, and Chicago (the Chicago reduction was not mentioned in the article, but I am an Uber Chicago customer, so I received a notification of it). This second fact is interesting, especially once one digs in a little deeper:

With not just success but survival on the line, Uber has even more incentive to expand as rapidly as possible. If it gets big enough quickly enough, the political price could become too high for any elected official who tries to pull Uber to the curb.

Yesterday, Uber announced it was lowering UberX fares by 20 percent in New York City, claiming the cuts would make its cheapest service cheaper than a regular yellow taxi. That follows a 25 percent decrease in the San Francisco Bay Area announced last week, and a similar drop in Los Angeles UberX prices revealed earlier last month. The company says UberX drivers in California (though apparently not in New York) will still get paid their standard 80 percent portion of what the fare would have been before the discount. As Forbes‘ Ellen Huet points out, the arrangement means a San Francisco ride that once cost $15 will now cost passengers $11.25, but the driver still gets paid $12.

So one thing they’re doing with their cash is essentially topping off payments to drivers while lowering prices to customers for the UberX service. Note that Uber is a multi-service firm, with rides at different quality/price combinations. I think Wohlsen’s Wired argument is right, and that they are pursuing a strategy of “grow the base quickly”, even if it means that the UberX prices are loss leaders for now (while their other service prices remain unchanged). In a recent (highly recommended!) EconTalk podcast, Russ Roberts and Mike Munger also make this point.
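Huet’s example in the quoted passage makes the top-off arithmetic easy to check. Here is a minimal sketch using the figures from the quote (the $15 fare, 25 percent discount, and 80 percent driver share); the net-revenue bookkeeping at the end is my own illustrative calculation:

```python
# Working through the UberX discount arithmetic from the quoted example.
# Figures from the quote: $15 pre-discount fare, 25% fare cut, drivers keep 80% of the
# pre-discount fare. The "uber_net" line is my own illustrative bookkeeping.

pre_discount_fare = 15.00
discount = 0.25
driver_share = 0.80

rider_pays = pre_discount_fare * (1 - discount)      # $11.25
driver_gets = driver_share * pre_discount_fare       # $12.00, unchanged by the discount
uber_net = rider_pays - driver_gets                  # -$0.75 per ride in this example

print(f"Rider pays ${rider_pays:.2f}, driver gets ${driver_gets:.2f}, "
      f"Uber nets ${uber_net:.2f} per ride")
# Before the discount Uber would have netted 20% of $15 = $3.00, so the loss-leader cost
# of this ride is roughly $3.75 relative to the old pricing.
```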

This “grow the base” strategy is common in tech industries, and we’ve seen it repeatedly over the past 15 years with Amazon and others. But, as Wohlsen notes, this strategy has an additional benefit of making regulatory inertia and status quo protection more costly. The more popular Uber becomes with more people, the harder it will be for existing taxi interests to succeed in shutting them down.

The ease, the transparency, the convenience, the lower transaction costs, the ability to see and submit driver ratings, the consumer’s assessment of whether Uber’s reputation and driver certification provide enough expectation of safety — all of these are things that consumers can now assess for themselves, without a regulator substituting its judgment for theirs. The technology, the business model, and the reputation mechanism diminish the public safety justification for taxi regulation. Creative destruction and freedom to innovate are the core of improvements in living standards. But the regulated taxi industry, having paid for medallions with the expectation of perpetual entry barriers, is seeing the value of the government-created entry barrier wither, and is lobbying to stem the losses in the value of its medallions. Note here the similarity between this situation and the one in the 1990s, when regulated electric utilities argued, largely successfully, that they should be compensated for “stranded costs” when they were required to divest their generation capacity at prices lowered by the anticipation of competitive wholesale markets. One consequence of regulation is the expectation of the right to a profitable business model, an expectation that flies in the face of economic growth and dynamic change.

Another move that I think represents a political compromise while giving Uber a PR opportunity was last week’s agreement with the New York Attorney General to cap “surge pricing” during citywide emergencies, a policy that Uber appears to be extending nationally. As Megan McArdle notes, this does indeed make economists sad, since Uber’s surge pricing is a wonderful example of how dynamic pricing induces more drivers to supply rides when demand is high, rather than leaving potential passengers with fewer taxis in the face of a fixed, regulated price.

Sadly, no one else loves surge pricing as much as economists do. Instead of getting all excited about the subtle, elegant machinery of price discovery, people get all outraged about “price gouging.” No matter how earnestly economists and their fellow travelers explain that this is irrational madness — that price gouging actually makes everyone better off by ensuring greater supply and allocating the supply to (approximately) those with the greatest demand — the rest of the country continues to view marking up generators after a hurricane, or similar maneuvers, as a pretty serious moral crime.
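For readers who want to see why economists react this way, here is a deliberately stylized supply-and-demand sketch. The linear curves and every parameter are invented for illustration; this is emphatically not Uber’s actual pricing algorithm:

```python
# Stylized illustration of why a surge multiplier can beat a capped price when demand spikes.
# Linear supply and demand with invented parameters; not a model of Uber's actual pricing.

base_fare = 10.0

def riders_demanded(price, demand_shift):
    """Rides requested at a given price; demand_shift captures a demand spike (e.g., a storm)."""
    return max(0.0, 1000 + demand_shift - 40 * price)

def drivers_supplied(price):
    """Rides offered at a given price; higher fares pull more drivers onto the road."""
    return max(0.0, 50 * price - 100)

def shortage(price, demand_shift):
    return riders_demanded(price, demand_shift) - drivers_supplied(price)

spike = 800  # assumed demand spike

# Capped at the base fare: many ride requests go unserved.
print("At capped fare:", shortage(base_fare, spike), "unserved ride requests")

# Find the (approximate) market-clearing surge fare by simple search.
clearing = min((p / 10 for p in range(100, 400)),
               key=lambda p: abs(shortage(p, spike)))
print(f"Market-clearing fare ~ ${clearing:.2f} "
      f"(surge multiplier ~ {clearing / base_fare:.1f}x)")
```

In the stylized numbers, the cap leaves a large queue of unserved riders, while the higher clearing fare both rations demand and pulls more drivers onto the road, which is the allocation point McArdle is making.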

Back in April Mike wrote here about how likely this was to happen in NY, and in commenting on the agreement with the NY AG last week, Regulation editor Peter Van Doren gave a great shout-out to Mike’s lead article in the Spring 2011 issue on price gouging regulations and their ethical and welfare effects.

The surge pricing cap during emergencies is economically harmful but politically predictable (in Megan’s words); even so, I think the real effects of Uber will transcend the shared-ride market. It’s a flexible piece of software — an app, a menu of contracts with drivers and riders, transparency, a reputation mechanism. Much as Amazon started by disrupting the retail book market and then expanded because of the flexibility of its software, I expect Uber to do something similar, in some form.

Pauline Maier on colonial radicalism

With Independence Day upon us, my bedtime reading for the past couple of weeks has become timely. Pauline Maier, the MIT historian who unfortunately passed away last year, published From Resistance to Revolution in 1972. It’s a carefully researched and well-written account, weaving together reports from contemporaneous sources, of the increasing radicalization of American colonists from 1765 to 1776. How did the beliefs of so many colonists evolve from being loyal British subjects to supporting revolution and independence from Britain — why this radicalization?

Maier’s ultimate conclusion is that the radicalization resulted from overreach and misinterpretation on the part of the British government, which is consistent with the “generally received” historical narrative. But what I have found most interesting and novel in her argument is Chapter 2: An Ideology of Resistance and Restraint. Maier grounds the intellectual origins of revolution in the 17th- and 18th-century English revolutionary writers — John Locke is best known among Americans, but also John Milton and Algernon Sidney (see here my summary of Sidney on illegitimate political power) and Francis Hutcheson. She describes a category of political belief called “Real Whigs”, and argues that the Real Whig beliefs in both the people as the ultimate source of legitimate political power and the value of social order meant that the colonists were inclined to resist the illegitimate exercise of authority, but not to jump quickly to a radical revolutionary position. For example:

Spokesmen for this English revolutionary tradition were distinguished in the eighteenth century above all by their outspoken defense of the people’s right to rise up against their rulers, which they supported in traditional contractural [sic] terms. Government was created by the people to promote the public welfare. If magistrates failed to honor that trust, they automatically forfeited their powers back to the people, who were free and even obliged [as per Sidney's argument -- ed.] to reclaim political authority. The people could do so, moreover, in acts of limited resistance, intended to nullify only isolated wrongful acts of the magistrates, or ultimately in revolution, which denied the continued legitimacy of the established government as a whole. …

The fundamental values of the Radical Whigs were realized most fully in a well-ordered free society, such that obedience to the law was stressed as much or more than occasional resistance to it. (pp. 27-28)

This chapter really resonated with me as a clear explanation of the primacy of individual liberty combined with a society ordered using universally-applied general legal principles (otherwise known as the “rule of law”). This combination of resistance and restraint is the key to understanding the political philosophy underlying the American republic, and Maier’s chapter is the best articulation of it that I’ve read.

Given the fractious and polarized political climate we inhabit today, I think a refresher on these ideas and their foundations is a good idea. We should be having a larger conversation about what constitutes legitimate and illegitimate political authority, particularly in the wake of the Snowden disclosures, the expansion of federal executive branch assertion of authority over the past 14 years, the expansion of administrative regulation (which is a sub-category of executive assertion), and the ability of business interests with political power to influence that regulation’s form and scope. There are a lot of arguments from all parts of the political spectrum that mischaracterize or misunderstand the ideas that Maier lays out here so clearly. We’d still be likely to have a fractious and polarized political climate, but we’d have better-informed public debate.

Building, and commercializing, a better nuclear reactor

A couple of years ago, I was transfixed by the research from Leslie Dewan and Mark Massie highlighted in their TEDx video on the future of nuclear power.


A recent IEEE Spectrum article highlights what Dewan and Massie have been up to since then, which is founding a startup called Transatomic Power in partnership with investor Russ Wilcox. The description of the reactor from the article indicates its potential benefits:

The design they came up with is a variant on the molten salt reactors first demonstrated in the 1950s. This type of reactor uses fuel dissolved in a liquid salt at a temperature of around 650 °C instead of the solid fuel rods found in today’s conventional reactors. Improving on the 1950s design, Dewan and Massie’s reactor could run on spent nuclear fuel, thus reducing the industry’s nuclear waste problem. What’s more, Dewan says, their reactor would be “walk-away safe,” a key selling point in a post-Fukushima world. “If you don’t have electric power, or if you don’t have any operators on site, the reactor will just coast to a stop, and the salt will freeze solid in the course of a few hours,” she says.

The article goes on to discuss raising funds for lab experiments and a subsequent demonstration project, and it ends on a skeptical note, with an indication that existing industrial nuclear manufacturers in the US and Europe are unlikely to be interested in commercializing such an advanced reactor technology. Perhaps the best prospects for such a technology are in Asia.

Another thing I found striking in reading this article, and that I find in general when reading about advanced nuclear reactor technology, is how dismissive some people are of such innovation — why not go for thorium instead, or why even bother with nuclear fission at all when the “real” answer is to harness solar power? Such criticisms of innovations like this are misguided, and show a misunderstanding of both the economics of innovation and the process of innovation itself. One of the clear benefits of this innovation is its use of a known, proven reactor technology in a novel way, with spent fuel rod waste as its fuel. This incremental “killing two birds with one stone” approach may be an economical way to generate clean electricity, reduce waste, and fill a technology gap while more basic science research continues on other generation technologies.

Arguing that nuclear is a waste of time is the equivalent of a “swing for the fences” energy innovation strategy. Transatomic’s reactor represents a “get guys on base” energy innovation strategy. We certainly should do basic research and swing for the fences, but that’s no substitute for the incremental benefits of getting new technologies on base that create value in multiple energy and environmental dimensions.