Social costs of oil and gas leasing on federal lands, carefully considered

OVERVIEW: A report filed with the US Department of the Interior recommended that terms governing the leasing of federal land for oil and gas development be updated to reflect social costs associated with such development. While such costs may be policy relevant, I suggest social costs are smaller than the report indicates and the recommended policy changes are not well focused.

The U.S. Department of the Interior (“Interior”) has begun an effort to update financial terms for oil and gas leases on federal lands. These financial aspects – royalties, minimum acceptable bids, annual rental rates, bonding requirements, and penalty rates – are collectively referred to as “government take.” One issue raised in the effort concerns social costs associated with oil and gas development on federal lands. (As noted earlier, Shawn Regan and I have filed a comment with Interior on the issue.)


Social costs of such development are also among issues addressed in a report filed in the Interior rulemaking docket by Jayni Foley Hein of New York University’s Institute for Policy Integrity. The report provides an overview of the legal requirements governing government take and recommends Interior’s regulations be revised to reflect option value and social costs. Here I focus on social costs.

Hein said oil and gas activity on federal lands imposes social costs both during development and during production. She wrote:

America’s public lands offer millions of people a place to hike, camp, hunt, fish, and enjoy scenic beauty. They provide drinking water, clean air, critical habitat for wildlife, sites for renewable energy development, as well as natural resources including timber, minerals, oil, and natural gas. As soon as energy exploration begins, competing uses of federal land such as recreational enjoyment, commercial fishing, and renewable energy development are impaired, and continue to be foreclosed for the duration of production.

Hein listed the following social costs of oil and gas activity on federal lands*:

  • Loss of use values (including loss of recreational value, renewable energy development potential, timber value, scenic value, and wildlife habitat)
  • Local air pollution (local effects of methane leakage, emissions from diesel or gas-fueled pumps and other engines)
  • Global air pollution (methane leaks, carbon dioxide)
  • Induced earthquakes from disposal of hydraulic fracturing wastewater
  • Potential oil or wastewater spills and subsequent water contamination from wastewater stored in pits and tanks
  • Noise pollution
  • Increased traffic (wear and tear on roadways, traffic-related fatalities).

She recommended increasing rental rates and royalties to reflect social costs associated with development and production of oil and gas on federal lands.


Naïve application of Hein’s list would likely produce significant over-counting of social costs. Regan and I described social costs as “the sum of all future benefits foregone by one or more persons due to oil and gas development activity on federal lands.” We were imprecise. We cannot simply sum all possible foregone future benefits; rather, we should focus on the difference in benefits between two specific cases: one in which the oil and gas resources are leased for development, and one in which the land is not leased.

The social cost of oil and gas leasing is the sum of the incremental differences between the stream of future benefits from the land when leased for oil and gas development and the stream under the best alternative use. Specification of the second case is key. Assume, for example, that if the property is not leased for oil and gas development, it would instead be leased for PV solar power development. Leasing the land for PV solar power also involves some loss of timber value, wildlife habitat, recreational value, and so on. In counting the social costs of oil and gas leasing associated with, say, wildlife habitat, we need to focus on just the difference in wildlife habitat between the two cases. If recreational use is impaired equally in both cases, the loss of recreation value is not properly counted as a cost of oil and gas leasing.
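To make the differencing explicit, the social cost concept can be written as a present-value comparison. This is my stylized notation, not Hein’s or the report’s:

```latex
% Social cost of leasing as a difference in discounted benefit streams,
% not a gross sum of all foregone benefits
SC = \sum_{t=0}^{T} \frac{B_t^{\mathrm{alt}} - B_t^{\mathrm{lease}}}{(1+r)^{t}}
```

Here B_t^alt is the flow of benefits (wildlife habitat, recreation, scenery, and so on) in year t under the best alternative use, B_t^lease is the same flow with the land leased for oil and gas development, and r is the discount rate. Any benefit impaired equally in both cases nets out of the sum.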

Consequences, or rather, the differences in consequences beyond the property itself matter too. It is likely that holding a specific tract of property out of oil production has no effect on total world oil production and consumption, and therefore would make no difference to total air pollution, traffic, potential for oil leaks, and so on. Withholding a particular property from development primarily affects the location, not the total amount, of these costs. Location can matter: we likely do not want to increase traffic and local air pollution in already crowded areas. But location does not always matter: the greenhouse gas implications are the same whether a methane leak arises from development on federal land or elsewhere.


A careful identification of the social costs of oil and gas leasing associated with specific federal properties would reveal these social costs to be smaller than a naïve application of Hein’s list may suggest. Federal oil and gas policies governing the government take primarily affect the distribution of social costs, not the total amount. Most relevant social costs are highly localized to the area of development, a feature which should make them easier to manage.

Other issues arise with Hein’s proposal to increase rental rates and royalty rates to account for social costs. While charging a higher royalty rate, for example, would discourage development of federal lands at the margin, it would not encourage operators to minimize social costs on properties that are developed. Other policy levers may be more useful.

*NOTE: The list of social costs is my summary drawn from Hein’s report. We might dispute aspects of the list, but for purposes of this post I am more interested in the social cost concept than in the particular items listed.

Government failure and the California drought

Yesterday the New York Times had a story about California’s four-year drought, complete with apocalyptic imagery and despair over whether conservation would succeed. Alex Tabarrok used that article as a springboard for a very informative and link-filled post at Marginal Revolution digging into the ongoing California drought, including some useful data and comment participation from David Zetland:

California has plenty of water…just not enough to satisfy every possible use of water that people can imagine when the price is close to zero. As David Zetland points out in an excellent interview with Russ Roberts, people in San Diego county use around 150 gallons of water a day. Meanwhile in Sydney Australia, with a roughly comparable climate and standard of living, people use about half that amount. Trust me, no one in Sydney is going thirsty.

California’s drought is a failure to implement institutional change consistent with environmental and economic sustainability. One failure that Alex discusses (and that every economist who pays attention to water agrees on) is the artificially low retail price of water, both to residential consumers and agricultural consumers. And Alex combines David’s insights with some analysis from Matthew Kahn to conclude that the income effect arguments against higher water prices have no analytical or moral foundation — San Diego residents pay approximately 0.5 cents per gallon (yes, that’s half a penny per gallon) for their tap water, so even increasing that price by 50% would only decrease incomes by about 1%.
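A back-of-the-envelope check of that claim, as a minimal sketch with my assumptions labeled (a four-person household and a hypothetical round-number income; the usage and price figures are the ones cited above):

```python
# Rough household arithmetic for a 50% water price increase.
# Usage and price are the figures cited above; household size and
# income are my own illustrative assumptions.
GALLONS_PER_PERSON_PER_DAY = 150   # San Diego county figure cited above
PRICE_PER_GALLON = 0.005           # dollars; about half a penny per gallon
HOUSEHOLD_SIZE = 4                 # assumption
HOUSEHOLD_INCOME = 60_000          # dollars/year; hypothetical round number

annual_gallons = GALLONS_PER_PERSON_PER_DAY * HOUSEHOLD_SIZE * 365
current_bill = annual_gallons * PRICE_PER_GALLON   # ~$1,095/year
increase = 0.5 * current_bill                      # ~$548/year, holding use fixed

print(f"Current annual bill: ${current_bill:,.0f}")
print(f"Added cost of a 50% price increase: ${increase:,.0f}")
print(f"Increase as a share of income: {increase / HOUSEHOLD_INCOME:.1%}")
```

Even ignoring the reduction in use that a higher price would induce, the hit is on the order of 1 percent of income.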

There’s another institutional failure in California, which is the lack of water markets and the fact that transferring water across different uses has generally been illegal without state approval. Farmers have not been able to sell their agricultural allocation to other users, even when the value of the water in those other uses is higher. According to the California Water Code as summarized by the State Water Resource Board,

In recent years, temporary transfers of water from one water user to another have been used increasingly as a way of meeting statewide water demands, particularly in drought years. Temporary transfers of post 1914 water rights are initiated by petition to the State Board. If the Board finds the proposed transfer will not injure any other legal user of water and will not unreasonably affect fish, wildlife or other instream users, then the transfer is approved. If the Board cannot make the required findings within 60 days, a hearing is held prior to Board action on the proposed transfer. Temporary transfers are defined to be for a period of one year or less. A similar review and approval process applies to long-term transfers in excess of one year.

Thus in a semi-arid region like California there’s a large rice industry, represented in Sacramento by an active trade association. Think of this rule through the lens of permissionless innovation — these farmers have to ask permission before they can make temporary transfers, Board approval is not guaranteed, and they are barred from making permanent transfers of their use rights. One justification for this rule is the economic viability of small farming communities, which the water bureaucrats believe would suffer if farmers sold their water rights and exited the industry. This narrow view of economic viability assumes away the dynamism through which residents of those communities could create more valuable lives for themselves and others by using their resources and talents differently; it is a depressing but not surprising piece of bureaucratic hubris.

Not surprisingly in year 4 of a drought, these temporary water transfers are increasing in value. Just yesterday, the Metropolitan Water District of Southern California made an offer to the Western Canal Water District in Northern California at the highest prices yet.

The offer from the Metropolitan Water District of Southern California and others to buy water from the Sacramento Valley for $700 per acre-foot reflects how dire the situation is as the state suffers through its fourth year of drought. In 2010 — also a drought year — it bought water but only paid between $244 and $300 for the same amount. The district stretches from Los Angeles to San Diego County. …

The offer is a hard one to turn down for farmers like Tennis, who also sits on the Western Canal Water District Board. Farmers can make around $900 an acre, after costs, growing rice, Tennis said. But because each acre of rice takes a little more than 3 acre-feet of water, they could make around $2,100 by selling the water that would be used. …

If the deal is made, Tennis said farmers like himself will treat it as a windfall rather than a long-term enterprise.

“We’re not water sellers, we’re farmers,” he said.

And that’s the problem.
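For the record, the arithmetic in the quoted passage, using only the figures reported in the story:

```python
# The farmer's choice at the quoted offer price: grow rice or sell the water.
rice_profit_per_acre = 900   # dollars/acre after costs, as reported
water_per_acre = 3.0         # acre-feet per acre of rice ("a little more than 3")
offer_price = 700            # dollars per acre-foot offered by Metropolitan

water_revenue_per_acre = water_per_acre * offer_price
print(f"Sell the water: ${water_revenue_per_acre:,.0f}/acre "
      f"vs. grow rice: ${rice_profit_per_acre:,.0f}/acre")  # $2,100 vs. $900
```

At $700 per acre-foot, the water is worth more than twice the rice grown with it.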

Solar generation in key states

I’ve been playing around with some ownership type and fuel source data on electricity generation, using the EIA’s annual data going back to 1990. I looked at solar’s share of the total MWh of electricity generated in eight states (AZ, CA, IL, NC, NJ, NY, OH, TX) over 1990–2012, expressed as a percentage of each state’s total. Here’s what I got:

[Figure: solar share of total MWh generated since 1990, by state]
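For the curious, the share calculation is straightforward. Here’s a minimal sketch of how one might compute it from an extract of EIA annual generation data; the file name and column layout are hypothetical:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical tidy extract of EIA annual net generation:
# one row per state, year, and fuel source, with generation in MWh.
gen = pd.read_csv("eia_annual_generation.csv")  # columns: state, year, fuel, mwh

states = ["AZ", "CA", "IL", "NC", "NJ", "NY", "OH", "TX"]
gen = gen[gen["state"].isin(states) & gen["year"].between(1990, 2012)]

total = gen.groupby(["state", "year"])["mwh"].sum()
solar = gen[gen["fuel"] == "solar"].groupby(["state", "year"])["mwh"].sum()
share = (solar / total * 100).unstack("state").fillna(0)  # percent of state total

share.plot(title="Solar share of total generation, 1990-2012")
plt.ylabel("Percent of total MWh")
plt.show()
```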

In looking at the data and at this graph, a few things catch my attention. California (the green line) clearly has an active solar market throughout the entire period, much of which I attribute to the implementation of PURPA qualifying facilities regulations starting in 1978 (although I’m happy to be corrected if I’m mistaken). The other seven states here have little or no solar market until the last few years of the sample; Arizona (which starts having solar in 2001) and Texas (some solar before restructuring, then none, then an increase) are exceptions to the general pattern.

Of course the most striking pattern in these data is the large uptick in solar shares in 2011 and 2012. That uptick is driven by several factors, both economic and regulatory, and trying to disentangle them is part of what I’m working on currently. I’m interested in the development of and change in the residential solar market, and in how the extent and type of regulatory policy influences the extent and type of innovation and changing market boundaries that ensue. Another way to parse the data is by ownership type, and how that varies by state depending on the regulatory institutions in place. In a state like North Carolina (teal), still vertically-integrated, both the regulated utility and independent power producers own solar. The path to market, and indeed whether you can say that a residential solar market qua market exists at all, differs in a vertically-integrated state from, say, New Jersey (orange) or Illinois (purple, but barely visible), where thus far the residential solar market is independent and the regulated utility does not participate (again, please correct me if I’m mistaken).

It will be interesting to see what the 2013 data tell us, when EIA releases them in November. But even in California with that large uptick, solar’s share of total MWh generated does not go above 2 percent, and it is substantially smaller in the other states.

What do you see here? I know some of you will want to snark about subsidies for the uptick, but please keep it substantive :-).

The “utility death spiral”: The utility as a regulatory creation

Unless you follow the electricity industry you may not be aware of the past year’s discussion of the impending “utility death spiral”, ably summarized in this Clean Energy Group post:

There have been several reports out recently predicting that solar + storage systems will soon reach cost parity with grid-purchased electricity, thus presenting the first serious challenge to the centralized utility model.  Customers, the theory goes, will soon be able to cut the cord that has bound them to traditional utilities, opting instead to self-generate using cheap PV, with batteries to regulate the intermittent output and carry them through cloudy spells.  The plummeting cost of solar panels, plus the imminent increased production and decreased cost of electric vehicle batteries that can be used in stationary applications, have combined to create a technological perfect storm. As grid power costs rise and self-generation costs fall, a tipping point will arrive – within a decade, some analysts are predicting – at which time, it will become economically advantageous for millions of Americans to generate their own power.  The “death spiral” for utilities occurs because the more people self-generate, the more utilities will be forced to seek rate increases on a shrinking rate base… thus driving even more customers off the grid.

A January 2013 analysis from the Edison Electric Institute, Disruptive Challenges: Financial Implications and Strategic Responses to a Changing Retail Electric Business, precipitated this conversation. Focusing on the financial market implications for regulated utilities of distributed energy resources (DER) and technology-enabled demand-side management, or DSM (an archaic term that I dislike intensely), the report notes that:

The financial risks created by disruptive challenges include declining utility revenues, increasing costs, and lower profitability potential, particularly over the long term. As DER and DSM programs continue to capture “market share,” for example, utility revenues will be reduced. Adding the higher costs to integrate DER, increasing subsidies for DSM and direct metering of DER will result in the potential for a squeeze on profitability and, thus, credit metrics. While the regulatory process is expected to allow for recovery of lost revenues in future rate cases, tariff structures in most states call for non-DER customers to pay for (or absorb) lost revenues. As DER penetration increases, this is a cost recovery structure that will lead to political pressure to undo these cross subsidies and may result in utility stranded cost exposure.

I think the apocalyptic “death spiral” rhetoric is overblown and exaggerated, but this is a worthwhile, and perhaps overdue, conversation to have. As it has unfolded over the past year, though, I do think that some of the more essential questions on the topic are not being asked. Over the next few weeks I’m going to explore some of those questions, as I dive into a related new research project.

The theoretical argument for the possibility of a death spiral is straightforward. The vertically-integrated, regulated distribution utility is a regulatory creation, intended to enable a financially sustainable business model for providing reliable basic electricity service to the largest possible number of customers at the lowest feasible cost, taking account of the economies of scale and scope resulting from the electro-mechanical generation and wires technologies implemented in the early 20th century. From a theoretical/benevolent social planner perspective, the objective is, given a market demand for a specific good/service, to minimize the total cost of providing that good/service subject to a zero economic profit constraint for the firm; this yields the highest feasible combination of output and total surplus (and the lowest deadweight loss) consistent with the financial sustainability of the firm.

The regulatory mechanism for implementing this model to achieve this objective is to erect a legal entry barrier into the market for that specific good/service, and to assure the regulated monopolist cost recovery, including its opportunity cost of capital, otherwise known as rate-of-return regulation. In return, the regulated monopolist commits to serve all customers reliably through its vertically-integrated generation, transmission, distribution, and retail functions. The monopolist’s costs and opportunity cost of capital determine its revenue requirement, out of which we can derive flat, averaged retail prices that forecasts suggest will enable the monopolist to earn that amount of revenue.
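In stylized textbook form (my notation, abstracting from the details of rate design):

```latex
% Revenue requirement and the flat averaged retail price under
% rate-of-return regulation
R = E + s \cdot RB, \qquad p = \frac{R}{\hat{Q}}
```

where E is operating expense, RB is the rate base (undepreciated capital investment), s is the allowed rate of return, and Q-hat is forecast sales. The death-spiral mechanics live in the denominator: if self-generation shrinks forecast sales, the averaged price must rise to recover the same revenue requirement, which encourages still more customers to leave.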

That’s the regulatory model + business model that has existed with little substantive evolution since the early 20th century, and it did achieve the social policy objectives of the 20th century — widespread electrification and low, stable prices, which have enabled follow-on economic growth and well-distributed increased living standards. It’s a regulatory+business model, though, that is premised on a few things:

  1. Defining a market by defining the characteristics of the product/service sold in that market, in this case electricity with a particular physical (volts, amps, hertz) definition and a particular reliability level (paraphrasing Fred Kahn …)
  2. The economies of scale (those big central generators and big wires) and economies of scope (lower total cost when producing two or more products compared to producing those products separately) that exist due to large-scale electro-mechanical technologies
  3. The architectural implications of connecting large-scale electro-mechanical technologies together in a network via a set of centralized control nodes — technology -> architecture -> market environment, and in this case large-scale electro-mechanical technologies -> distributed wires network with centralized control points rather than distributed control points throughout the network, including the edge of the network (paraphrasing Larry Lessig …)
  4. The financial implications of having invested so many resources in long-lived physical assets to create that network and its control nodes — if demand is growing at a stable rate, and regulators can assure cost recovery, then the regulated monopolist can arrange financing for investments at attractive interest rates, as long as this arrangement is likely to be stable for the 30-to-40-year life of the assets

As long as those conditions are stable, regulatory cost recovery will sustain this business model. And that’s precisely the effect of smart grid technologies, distributed generation technologies, microgrid technologies — they violate one or more of those four premises, and can make it not just feasible, but actually beneficial for customers to change their behavior in ways that reduce the regulation-supported revenue of the regulated monopolist.

Digital technologies that enable greater consumer control and more choice of products and services break down the regulatory market boundaries that are required to regulate product quality. Generation innovations, from the combined-cycle gas turbine of the 1980s to small-scale Stirling engines, reduce the economies of scale that have driven the regulation of and investment in the industry for over a century. Wires networks with centralized control built to capitalize on those large-scale technologies may have less value in an environment with smaller-scale generation and digital, automated detection, response, and control. But those generation and wires assets are long-lived, and in a cost-recovery-based business model, have to be paid for even if they become the destruction in creative destruction. We saw that happen in the restructuring that occurred in the 1990s, with the liberalization of wholesale power markets and the unbundling of generation from the vertically-integrated monopolists in those states; part of the political bargain in restructuring was to compensate them for the “stranded costs” associated with having made those investments based on a regulatory commitment that they would receive cost recovery on them.

Thus the death spiral rhetoric, and the concern that the existing utility business model will not survive. But if my framing of the situation is accurate, then what we should be examining in more detail is the regulatory model, since the utility business model is itself a regulatory creation. This relationship between digital innovation (encompassing smart grid, distributed resources, and microgrids) and regulation is what I’m exploring. How should the regulatory model and the associated utility business model change in light of digital innovation?

Building, and commercializing, a better nuclear reactor

A couple of years ago, I was transfixed by the research from Leslie Dewan and Mark Massie highlighted in their TEDx video on the future of nuclear power.


A recent IEEE Spectrum article highlights what Dewan and Massie have been up to since then, which is founding a startup called Transatomic Power in partnership with investor Russ Wilcox. The description of the reactor from the article indicates its potential benefits:

The design they came up with is a variant on the molten salt reactors first demonstrated in the 1950s. This type of reactor uses fuel dissolved in a liquid salt at a temperature of around 650 °C instead of the solid fuel rods found in today’s conventional reactors. Improving on the 1950s design, Dewan and Massie’s reactor could run on spent nuclear fuel, thus reducing the industry’s nuclear waste problem. What’s more, Dewan says, their reactor would be “walk-away safe,” a key selling point in a post-Fukushima world. “If you don’t have electric power, or if you don’t have any operators on site, the reactor will just coast to a stop, and the salt will freeze solid in the course of a few hours,” she says.

The article goes on to discuss raising funds for lab experiments and a subsequent demonstration project, and it ends on a skeptical note, with an indication that existing industrial nuclear manufacturers in the US and Europe are unlikely to be interested in commercializing such an advanced reactor technology. Perhaps the best prospects for such a technology are in Asia.

Another thing I found striking in reading this article, and that I find in general when reading about advanced nuclear reactor technology, is how dismissive some people are of such innovation — why not go for thorium, or why even bother with this when the “real” answer is to harness the sun’s fusion energy via solar power? Such criticisms of innovations like this are misguided, and show a misunderstanding of both the economics of innovation and the process of innovation itself. One of the clear benefits of this innovation is its use of a known, proven reactor technology in a novel way, using spent fuel rod waste as fuel. This incremental “killing two birds with one stone” approach may be an economical way to generate clean electricity, reduce waste, and fill a technology gap while more basic science research continues on other generation technologies.

Arguing that nuclear is a waste of time is the equivalent of a “swing for the fences” energy innovation strategy. Transatomic’s reactor represents a “get guys on base” energy innovation strategy. We certainly should do basic research and swing for the fences, but that’s no substitute for the incremental benefits of getting new technologies on base that create value in multiple energy and environmental dimensions.

The spin on wind, or, an example of bullshit in the field of energy policy

The Wall Street Journal recently opined against President Obama’s nominee for Federal Energy Regulatory Commission chairman, Norman Bay, and in the process took a modest swipe at subsidies for wind energy.

The context here is Bay’s action while leading FERC’s enforcement division, and in particular his prosecution of electric power market participants who manage to run afoul of FERC’s vague definition of market manipulation even though their trading behavior complied with all laws, regulations, and market rules.

So here the WSJ‘s editorial board pokes a little at subsidized wind in the process of making a point about reckless prosecutions:

As a thought experiment, consider the production tax credit for wind energy. In certain places at certain times, the subsidy is lucrative enough that wind generators make bids at negative prices: Instead of selling their product, they pay the market to drive prices below zero or “buy” electricity that would otherwise go unsold to qualify for the credit.

That strategy harms unsubsidized energy sources, distorts competition and may be an offense against taxpayers. But it isn’t a crime in the conventional legal sense because wind outfits are merely exploiting the subsidy in the open. The rational solution would be to end the subsidies that create negative bids, not to indict the wind farms. But for Mr. Bay, the same logic doesn’t apply to FERC.

The first quoted paragraph seems descriptive of reality and doesn’t cast wind energy in any negative light. The second quoted paragraph suggests the subsidy harms unsubsidized competitors, also plainly true, and that it “distorts competition” and “may be an offense against taxpayers.” These last two characterizations also strike me as fair descriptions of current public policy, and perhaps as mildly negative in tone.
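The logic of negative bids is simple enough to spell out. A minimal sketch, assuming the production tax credit was roughly $23 per MWh (its approximate value at the time) and an illustrative near-zero marginal cost:

```python
# A generator paid a per-MWh credit only when it delivers energy will
# keep running as long as (market price + credit) covers marginal cost,
# so its breakeven bid can be negative.
ptc = 23.0           # $/MWh; approximate federal production tax credit value
marginal_cost = 2.0  # $/MWh; illustrative near-zero fuel cost for wind

breakeven_bid = marginal_cost - ptc
print(f"Willing to sell at prices down to ${breakeven_bid:.0f}/MWh")  # -$21/MWh
```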

Of course folks at the wind industry’s lobby shop are eager to challenge any little perceived slight, so the AWEA’s Michael Goggin sent a letter to the editor:

Your editorial “Electric Prosecutor Acid Test” (May 19) ignores wind energy’s real consumer benefits by mentioning the red herring of negative electricity prices. Negative prices are extremely rare and are usually highly localized in remote areas where they have little to no impact on other power plants, are caused by inflexible nuclear power plants much of the time, and are being eliminated as long-needed grid upgrades are completed.

Wind energy’s real impact is saving consumers money by displacing more expensive forms of energy, which is precisely why utilities bought wind in the first place. This impact is entirely market-driven, occurs with or without the tax credit, and applies to all low-fuel-cost sources of energy, including nuclear.

The tax relief provided to wind energy more than pays for itself by enabling economic development that generates additional tax revenue and represents a small fraction of the cumulative incentives given to other energy sources.

Michael Goggin
American Wind Energy Association
Washington, DC

Let’s just say I’ll believe the “impact is entirely market-driven” when someone produces a convincing study showing that the exact same wind energy capacity build-out would have happened over the last 20 years in the absence of the U.S. federal Production Tax Credit and state renewable energy purchase mandates. Without the tax credit, the wind energy industry likely would be (I’m guessing) less than one-tenth of its current size, and it wouldn’t be the target of much public policy debate.

Of course, without much public policy debate, the wind energy industry wouldn’t need to hire so many lobbyists. Hence the AWEA’s urge to jump on any perceived slight, stir the pot, and keep debate going.

MORE on the lobbying against the Bay nomination. See also this WSJ op-ed.


AWEA brags about wind energy’s mediocre performance

On May 2 The Hill published a column by AWEA data spinner Michael Goggin, “Wind energy protects consumers,” in which the reader is regaled by tales of great service and low, low prices provided by the wind energy industry.

Sorting through the claims led me back to the AWEA blog, where among other things Goggin applauds the industry that pays his salary for its grand performance in trying times this past January in New York. Goggin exclaimed the New York grid operator “received very high wind output when it needed it most during the last cold snap, while other forms of generation experienced a variety [of] problems.”

Following the link provided to the NYISO press release I find the claim, “On Tuesday, the NYISO had the benefit of more than 1,000 MW of wind power throughout much of the day.” The New York grid operator reported peak demand during the day (January 7, 2014) at 25,738 MW, so wind energy’s contribution was in the 4 percent range. Another way to say that is that other forms of generation, despite experiencing a variety of problems, provided about 96 percent of the energy New York consumers received when they “needed it most.”

The AWEA website indicates that New York has an installed capacity of 1,722 MW of wind power. Doing the math reveals that about 40 percent of the wind energy industry’s generating capability failed to show when New York electric power consumers “needed it most.”
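Checking that math, using only the figures reported above:

```python
# NYISO and AWEA figures as reported above, January 7, 2014.
wind_output = 1_000     # MW, "more than 1,000 MW ... throughout much of the day"
peak_demand = 25_738    # MW, NYISO reported peak demand
installed_wind = 1_722  # MW, AWEA-reported New York wind capacity

print(f"Wind share of peak demand: {wind_output / peak_demand:.1%}")  # ~3.9%
print(f"Wind capacity not showing up: {1 - wind_output / installed_wind:.0%}")  # ~42%
```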

Impressive? Not really.

To more fully consider the situation, we’d have to ask just how much non-wind electric generating capacity has been driven from the New York market by subsidized wind power. It is part of the AWEA storyline that clean, low-cost wind energy “displace[s] output from the most expensive and least efficient power plants,” and obviously over time frequently displaced units are driven from the market. One may reasonably wonder how much generation capacity was driven from the market before that cold January day when New York electric power consumers “needed it most.”

In related news, the National Renewable Energy Lab just produced an exploration of the wind energy industry’s future with and without the Production Tax Credit. In brief, if the PTC is not revived once again, the industry will likely shrink by about half over the next several years, kept in business mostly by state renewable energy purchase requirements. Indirectly, the study concedes that NREL doesn’t think wind power is cost competitive with alternative electric energy supplies except under the best possible wind resource and grid access conditions.

Please note my occasional wind energy disclaimer: I am not against wind energy (a technology which can contribute real value in the right places), just against bad policy (which takes real value created by other people and shovels it in the direction of investors in wind energy assets and people who happen to own windy plots of land with good grid access).