Social costs of oil and gas leasing on federal lands, carefully considered

OVERVIEW: A report filed with the US Department of the Interior recommended that terms governing the leasing of federal land for oil and gas development be updated to reflect social costs associated with such development. While such costs may be policy-relevant, I suggest they are smaller than the report indicates and that the recommended policy changes are not well focused.

The U.S. Department of the Interior (“Interior”) has begun an effort to update financial terms for oil and gas leases on federal lands. These financial aspects – royalties, minimum acceptable bids, annual rental rates, bonding requirements, and penalty rates – are collectively referred to as “government take.” One issue raised in the effort concerns social costs associated with oil and gas development on federal lands. (As noted earlier, Shawn Regan and I have filed a comment with Interior on the issue.)

THE HEIN REPORT

Social costs of such development are also among issues addressed in a report filed in the Interior rulemaking docket by Jayni Foley Hein of New York University’s Institute for Policy Integrity. The report provides an overview of the legal requirements governing government take and recommends Interior’s regulations be revised to reflect option value and social costs. Here I focus on social costs.

Hein said social costs are imposed by oil and gas development on federal lands both during development and during production. She wrote:

America’s public lands offer millions of people a place to hike, camp, hunt, fish, and enjoy scenic beauty. They provide drinking water, clean air, critical habitat for wildlife, sites for renewable energy development, as well as natural resources including timber, minerals, oil, and natural gas. As soon as energy exploration begins, competing uses of federal land such as recreational enjoyment, commercial fishing, and renewable energy development are impaired, and continue to be foreclosed for the duration of production.

Hein listed the following social costs of oil and gas activity on federal lands*:

  • Loss of use values (including loss of recreational value, renewable energy development potential, timber value, scenic value, and wildlife habitat)
  • Local air pollution (local effects of methane leakage, emissions from diesel or gas-fueled pumps and other engines)
  • Global air pollution (methane leaks, carbon dioxide)
  • Induced earthquakes from disposal of hydraulic fracturing wastewater
  • Potential oil or wastewater spills and subsequent water contamination from wastewater stored in pits and tanks
  • Noise pollution
  • Increased traffic (wear and tear on roadways, traffic-related fatalities).

She recommended increasing rental rates and royalties to reflect social costs associated with development and production of oil and gas on federal lands.

GETTING SOCIAL COSTS RIGHT

Naïve application of Hein’s list would likely produce significant over-counting of social costs. Regan and I described social costs as “the sum of all future benefits foregone by one or more persons due to oil and gas development activity on federal lands.” That phrasing was imprecise: we cannot simply sum all possible future foregone benefits. Instead, we should focus on the difference in benefits between two specific cases: one in which the oil and gas resources are leased for development, and a second in which the land is not leased.

The social cost of oil and gas leasing is the sum of the incremental differences in the stream of future benefits between the land leased for oil and gas development and its best alternative use. Specification of the second case is key. Assume, for example, that if the property is not leased for oil and gas development, it would instead be leased for PV solar power development. Leasing the land for PV solar power also involves some loss of timber value, wildlife habitat, recreational value, and so on. In counting the social costs of oil and gas leasing associated with, say, wildlife habitat, we need to focus on just the difference in wildlife habitat between the two cases. If recreational use is impaired equally in both cases, the loss of recreation value is not properly counted as a cost of oil and gas leasing.
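The incremental logic can be sketched in a few lines of Python. Every category name and dollar figure below is hypothetical, chosen only to show how a benefit loss that is equal across the two cases nets out of the calculation:

```python
# Incremental social cost of leasing: the difference in foregone benefits
# between the lease scenario and the best alternative use of the land.
# All categories and annual dollar losses are illustrative, not estimates.
oil_gas_losses = {"wildlife_habitat": 120, "recreation": 80, "timber": 40}
pv_solar_losses = {"wildlife_habitat": 70, "recreation": 80, "timber": 25}

def incremental_social_cost(lease_losses, alternative_losses):
    """Sum the category-by-category differences in foregone benefits."""
    categories = set(lease_losses) | set(alternative_losses)
    return sum(lease_losses.get(c, 0) - alternative_losses.get(c, 0)
               for c in categories)

# Recreation is impaired equally under both uses, so it contributes
# nothing to the incremental cost; only habitat and timber differ.
print(incremental_social_cost(oil_gas_losses, pv_solar_losses))  # 65
```

The point of the sketch is the netting: summing the oil-and-gas column alone (240 in this toy example) over-counts relative to the incremental figure (65), which is the conceptual error a naïve reading of the list invites.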

Consequences beyond the property itself matter too, or rather, the differences in those consequences do. It is likely that holding a specific tract out of oil production has no effect on total world oil production and consumption, in which case there would be no difference in total air pollution, traffic, potential for oil leaks, and so on. Withholding a particular property from development primarily would affect the location, not the total amount, of these costs. Location can matter: we likely do not want to increase traffic and local air pollution in already crowded areas. But location does not always matter: the greenhouse gas implications are the same whether a methane leak arises from development on federal land or elsewhere.

SUMMING UP

A careful identification of the social costs of oil and gas leasing associated with specific federal properties would reveal these social costs to be smaller than a naïve application of Hein’s list may suggest. Federal oil and gas policies governing the government take primarily affect the distribution of social costs, not the total amount. Most relevant social costs are highly localized to the area of development, a feature that should make them easier to manage.

Other issues arise with Hein’s proposal to increase rental rates and royalty rates to account for social costs. While charging a higher royalty rate, for example, would discourage development of federal lands at the margin, it would not encourage operators to minimize social costs on properties that are developed. Other policy levers may be more useful.

*NOTE: The list of social costs is my summary drawn from Hein’s report. We might dispute aspects of the list, but for purposes of this post I am more interested in the social cost concept rather than the particular items listed.

Elementary error misleads APPA on electricity pricing in states with retail electric choice

The American Public Power Association (APPA) recently published an analysis of retail power prices, but it makes an elementary mistake and gets the conclusion wrong.

The APPA analysis, “2014 Retail Electric Rates in Deregulated and Regulated States,” uses U.S. Energy Information Administration data to compare retail electric prices in “deregulated” and “regulated” states. The report itself presents its analysis without much in the way of evaluation, but the APPA blog post accompanying its release was clear on the message:

after nearly two decades of retail and wholesale electric market restructuring, the promise of reduced rates has failed to materialize. In fact, customers in states with retail choice programs located within RTO-operated markets are now paying more for their electricity.

In 1997, the retail electric rate in deregulated states — the ones offering retail choice and located within an RTO — was 2.8 cents per kilowatt-hour (kWh) higher than rates in the regulated states with no retail choice. The gap has increased over the last two decades. In 2014, customers in deregulated states paid, on average, 3.3 cents per kWh more than customers in regulated states.

But the APPA neglects the effects of inflation over the 17-year period of analysis. It is an elementary mistake. Merely adjusting for inflation from 1997 to 2014 reverses the conclusion.

The elementary mistake is easily corrected: inflation data can be found at the St. Louis Fed site. Using the 2014 value of the dollar, average prices per kWh in the APPA’s regulated states were 8.4 cents in 1997 and 9.4 cents in 2014. In the APPA’s deregulated states, the average prices per kWh were 12.5 cents in 1997 and 12.7 cents in 2014.

Prices were up for both groups after adjusting for inflation, but prices increased more in their regulated states (1 cent per kWh, up about 11.3 percent) than in their deregulated states (0.2 cents, up about 1.4 percent). The inflation-adjusted “gap” fell from nearly 4.1 cents in 1997 to 3.3 cents in 2014.
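The correction is a plain CPI deflator. A minimal sketch, using approximate CPI-U annual averages from the St. Louis Fed (about 160.5 for 1997 and 236.7 for 2014) and 1997 nominal prices that are illustrative back-calculations consistent with the figures above, not APPA’s underlying data:

```python
# Express 1997 nominal electricity prices (cents/kWh) in 2014 dollars.
# CPI-U annual averages are approximate (FRED, St. Louis Fed); the 1997
# nominal prices are illustrative values, not the APPA source data.
CPI_1997 = 160.5
CPI_2014 = 236.7

def to_2014_dollars(nominal_cents, cpi_then, cpi_2014=CPI_2014):
    """Scale a nominal price by the ratio of CPI levels."""
    return nominal_cents * cpi_2014 / cpi_then

nominal_1997 = {"regulated": 5.7, "deregulated": 8.5}       # illustrative
real_2014 = {"regulated": 9.4, "deregulated": 12.7}         # 2014 prices

for group, p in nominal_1997.items():
    real_1997 = to_2014_dollars(p, CPI_1997)
    change = real_2014[group] - real_1997
    print(f"{group}: {real_1997:.1f} -> {real_2014[group]} cents/kWh "
          f"(real change {change:+.1f})")
```

Deflating the 1997 figures before comparing is the whole fix: the nominal 48 percent versus 62 percent growth rates shrink to small real changes, and the regulated group’s real increase comes out larger.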

ADDENDUM

Surprisingly, the APPA knows that an inflation adjustment would change their answer. The report ignores the issue completely; the APPA blog said:

For example, a recent analysis by the Compete Coalition finds that, after accounting for inflation, rates in restructured states decreased by 1.3 percent and increased by 9.8 percent in regulated states since 1997. The data in the APPA study, which does not account for inflation, show that rates in the deregulated states grew by 48 percent compared to a 62 percent increase for the regulated states.

However, a percentage-based comparison obscures the important fact that the 1997 rates in deregulated states were much greater than those in regulated states.

The Compete Coalition report is not linked in the APPA post, but the data points mentioned are here: “Consumers Continue To Fare Better With Competitive Markets, Both at Retail and Wholesale.”

The remaining differences between my inflation-adjusted APPA values and those of the Compete Coalition likely arise because Texas is in the Compete Coalition’s restructured-states category but not in the APPA’s deregulated-states category. It seems an odd omission, given that most power in Texas is sold in a quite competitive retail power market. APPA does not say why Texas is excluded from its deregulated category.

According to EIA data [XLS], average power prices in Texas were 9 cents per kWh in 1997 but had fallen to 8.7 cents by 2013. Both numbers are adjusted for inflation using CPI-U values from the St. Louis Fed website and expressed in 2014 dollars. The 2013 numbers were the latest shown in the EIA dataset.

Forthcoming paper: Implications of Smart Grid Innovation for Organizational Models in Electricity Distribution

Back in 2001 I participated in a year-long forum on the future of the electricity distribution model. Convened by the Center for the Advancement of Energy Markets, the DISCO of the Future Forum brought together many stakeholders to develop several scenarios and analyze their implications (and several of those folks remain friends, playmates in the intellectual sandbox, and commenters here at KP [waves at Ed]!). As noted in this 2002 Electric Light and Power article,

Among the 100 recommendations that CAEM discusses in the report, the forum gave suggestions ranging from small issues (that regulators should consider requiring a standard form, or “consumer label,” on pricing and terms and conditions of service for small customers, to be provided at the time of the initial offer as well as upon request) to larger ones, including the suggestions that regulators should establish a standard distribution utility reporting format for all significant distribution upgrades and extensions, and that regulated DISCOs should be permitted to recover their reasonable costs for development of grid interface designs and grid interconnect application review.

“The technology exists to support a competitive retail market responsive to price signals and demand constraints,” the report concludes. “The extent to which the market is opened to competition and the extent to which these technologies are applied by suppliers, DISCOS and customers will, in large part, be determined by state legislatures and regulators.”

Now in 2015, technological dynamism has brought to a head many of the same questions, regulatory models, and business models that we “penciled out” 14 years ago.

In a new paper, forthcoming in the Wiley Handbook of Smart Grid Development, I grapple with that question: what are the implications of this technological dynamism for the organizational form of the distribution company? What transactions in the vertically-integrated supply chain should be unbundled, what assets should it own, and what are the practical policy issues being tackled in various places around the world as they deal with these questions? I analyze these questions using a theoretical framework from the economics of organization and new institutional economics. And I start off with a historical overview of the industry’s technology, regulation, and organizational model.

Implications of Smart Grid Innovation for Organizational Models in Electricity Distribution

Abstract: Digital technologies from outside the electricity industry are prompting changes in both regulatory institutions and electric utility business models, leading to the disaggregation or unbundling of historically vertically integrated electricity firms in some jurisdictions and not others, and simultaneously opening the door for competition with the traditional electric utility business. This chapter uses the technological and organizational history of the industry, combined with the transactions cost theory of the firm and of vertical integration, to explore the implications of smart grid technologies for future distribution company business models. Smart grid technologies reduce transactions costs, changing economical firm boundaries and reducing the traditional drivers of vertical integration. Possible business models for the distribution company include an integrated utility, a network manager, or a coordinating platform provider.

The New York REV and the distribution company of the future

We live in interesting times in the electricity industry. Vibrant technological dynamism, the very dynamism that has transformed how we work, play, and live, puts increasing pressure on the early-20th-century physical network, regulatory model, and resulting business model of the vertically-integrated distribution utility.

While the utility “death spiral” rhetoric is overblown, these pressures are real. They reflect the extent to which regulatory and organizational institutions, as well as the architecture of the network, are incompatible with a general social objective of not obstructing innovation. Reinforcing my innovation-focused claim is the incorporation of relatively new environmental objectives into the policy mix. Innovation, particularly at the distribution edge, is an expression of human creativity that advances both the older economic policy objective of protecting consumers from concentrations of market power and the newer environmental policy objective of a cleaner and prosperous energy future.

But institutions change slowly, especially bureaucratic institutions where decision-makers have a stake in the direction and magnitude of institutional change. Institutional change requires imagination to see a different world as possible, practical vision to see how to get from today’s reality toward that different world, and courage to exercise the leadership and navigate the tough tradeoffs that inevitably arise.

That’s the sense in which the New York Reforming the Energy Vision (REV) proceeding of the New York State Public Service Commission (Greentech) is compelling and encouraging. Launched in spring 2014 with a staff paper, REV is looking squarely at institutional change to align the regulatory framework and the business model of the distribution utility more with these policy objectives and with fostering innovation. As Katherine Tweed summarized the goals in the Greentech Media article linked above,

The report calls for an overhaul of the regulation of the state’s distribution utilities to achieve five policy objectives:

  • Increasing customer knowledge and providing tools that support effective management of their total energy bill
  • Market animation and leverage of ratepayer contributions
  • System-wide efficiency
  • Fuel and resource diversity
  • System reliability and resiliency

The PSC acknowledges that the current ratemaking procedure simply doesn’t work and that the distribution system is not equipped for the changes coming to the energy market. New York is already a deregulated market in which distribution is separated from generation and there is retail choice for electricity. Although that’s a step beyond many states, it is hardly enough for what’s coming in the market.

Last week the NY PSC issued its first order in the REV proceeding, that the incumbent distribution utilities will serve as distributed system platform providers (DSPPs) and should start planning accordingly. As noted by RTO Insider,

The framework envisions utilities serving a central role in the transition as distributed system platform (DSP) providers, responsible for integrated system planning and grid and market operations.

In most cases, however, utilities will be barred from owning distributed energy resources (DER): demand response, distributed generation, distributed storage and end-use energy efficiency.

The planning function will be reflected in the utilities’ distributed system implementation plan (DSIP), a multi-year forecast proposing capital and operating expenditures to serve the DSP functions and provide third parties the system information they need to plan for market participation.

A platform business model is not a cut-and-dried thing, though, especially in a regulated industry where the regulatory institutions reinforced and perpetuated a vertically integrated model for over a century (a model modified only when generator technological change in the 1980s led to generation unbundling). Institutional design and market design, the symbiosis of technology and institutions, will have to be front and center if the vertically-integrated, uni-directional delivery model of the 20th century is to evolve into a distribution facilitator for the 21st century.

In fact, the institutional design issues at stake here have been the focus of my research during my sabbatical, so I hope to have more to add to the discussion based on some of my forthcoming work on the subject.

Geoff Manne in Wired on FCC Title II

Friend of Knowledge Problem Geoff Manne had a thorough opinion piece in Wired yesterday on the FCC’s Title II Internet designation. Well worth reading. From the “be careful what you wish for” department:

Title II (which, recall, is the basis for the catch-all) applies to all “telecommunications services”—not just ISPs. Now, every time an internet service might be deemed to transmit a communication (think WhatsApp, Snapchat, Twitter…), it either has to take its chances or ask the FCC in advance to advise it on its likely regulatory treatment.

That’s right—this new regime, which credits itself with preserving “permissionless innovation,” just put a bullet in its head. It puts innovators on notice, and ensures that the FCC has the authority (if it holds up in court) to enforce its vague rule against whatever it finds objectionable.

And that’s even at the much-vaunted edge of the network that such regulation purports to protect.

Asking in advance. Nope, that’s not gonna slow innovation, not one bit …

FCC Title II and raising rivals’ costs

As the consequences of the FCC vote to classify the Internet as a Title II service start to sink in, here are a couple of good commentaries you may not have seen. Jeffrey Tucker’s political economy analysis of the Title II vote as a power grab is one of the best overall analyses of the situation that I’ve seen.

The incumbent rulers of the world’s most exciting technology have decided to lock down the prevailing market conditions to protect themselves against rising upstarts in a fast-changing market. To impose a new rule against throttling content or using the market price system to allocate bandwidth resources protects against innovations that would disrupt the status quo.

What’s being sold as economic fairness and a wonderful favor to consumers is actually a sop to industrial giants who are seeking untrammeled access to your wallet and an end to competitive threats to market power.

What’s being sold as keeping the Internet neutral for innovation at the edge of the network does so substantively by encasing the existing Internet structure and institutions in amber, which yields rents for its large incumbents. Some of those incumbents, like Comcast and Time-Warner, achieved their current market power (often resented by customers) not through rivalrous market competition but through municipal monopoly cable franchises. Yes, these restrictions raise their costs too, but as large incumbents they are better positioned to absorb those costs than smaller ISPs or other entrants would be. It is naive to believe that regulations of this form will do much other than soften rivalry in the Internet itself.

But is there really that much scope for innovation and dynamism within the Internet? Yes. Not only with technologies, but also with institutions, such as interconnection agreements and peering agreements, which can affect packet delivery speeds. And, as Julian Sanchez noted, the Title II designation takes these kinds of innovations off the table.

But there’s another kind of permissionless innovation that the FCC’s decision is designed to preclude: innovation in business models and routing policies. As Neutralites love to point out, the neutral or “end-to-end” model has served the Internet pretty well over the past two decades. But is the model that worked for moving static, text-heavy webpages over phone lines also the optimal model for streaming video wirelessly to mobile devices? Are we sure it’s the best possible model, not just now but for all time? Are there different ways of routing traffic, or of dividing up the cost of moving packets from content providers, that might lower costs or improve quality of service? Again, I’m not certain—but I am certain we’re unlikely to find out if providers don’t get to run the experiment.

When does state utility regulation distort costs?

I suspect the simplest answer to the title question is “always.” Maybe the answer depends on your definition of “distort,” but both the intended and the generally expected consequences of state utility rate regulation have always been to push costs to be something other than what would naturally emerge in the absence of rate regulation.

More substantive, though, is the analysis provided in Steve Cicala’s article in the January 2015 American Economic Review, “When Does Regulation Distort Costs? Lessons from Fuel Procurement in US Electricity Generation.” (Here is an earlier ungated version of the paper.)

Here is a summary from the University of Chicago press release:

A study in the latest issue of the American Economic Review used recent state regulatory changes in electricity markets as a laboratory to evaluate which factors can contribute to a regulation causing a bigger mess than the problem it was meant to fix….

Cicala used data on almost $1 trillion worth of fuel deliveries to power plants to look at what happens when a power plant becomes deregulated. He found that the deregulated plants combined save about $1 billion a year compared to those that remained regulated. This is because a lack of transparency, political influence and poorly designed reimbursement rates led the regulated plants to pursue inefficient strategies when purchasing coal.

The $1 billion that deregulated plants save stems from paying about 12 percent less for their coal because they shop around for the best prices. Regulated plants have no incentive to shop around because their profits do not depend on how much they pay for fuel. They also are looked upon more favorably by regulators if they purchase from mines within their state, even if those mines don’t sell the cheapest coal. To make matters worse, regulators have a difficult time figuring out if they are being overcharged because coal is typically purchased through confidential contracts.

Although power plants that burned natural gas were subject to the exact same regulations as the coal-fired plants, there was no drop in the price paid for gas after deregulation. Cicala attributed the difference to the fact that natural gas is sold on a transparent, open market. This prevents political influences from sneaking through and allows regulators to know when plants are paying too much.

What’s different about the buying strategy of deregulated coal plant operators? Cicala dove deep into two decades of detailed, restricted-access procurement data to answer this question. First, he found that deregulated plants switch to cheaper, low-sulfur coal. This not only saves them money, but also allows them to comply with environmental regulations. On the other hand, regulated plants often comply with regulations by installing expensive “scrubber” technology, which allows them to make money from the capital improvements.

“It’s ironic to hear supporters of Eastern coal complain about ‘regulation’: they’re losing business from the deregulated plants,” said Cicala, a scholar at the Harris School of Public Policy.

Deregulated plants also increase purchases from out-of-state mines by about 25 percent. As mentioned, regulated plants are looked upon more favorably if they buy from in-state mines. Finally, deregulated plants purchase their coal from more productive mines (coal seams are thicker and closer to the surface) that require about 25 percent less labor to extract from the ground and that pay 5 percent higher wages.

“Recognizing that there are failures in financial markets, health care markets, energy markets, etc., it’s critical to know what makes for ‘bad’ regulations when designing new ones to avoid making the problem worse,” Cicala said. [Emphasis added.]