Geoff Manne in Wired on FCC Title II

Friend of Knowledge Problem Geoff Manne had a thorough opinion piece in Wired yesterday on the FCC’s Title II Internet designation. Well worth reading. From the “be careful what you wish for” department:

Title II (which, recall, is the basis for the catch-all) applies to all “telecommunications services”—not just ISPs. Now, every time an internet service might be deemed to transmit a communication (think WhatsApp, Snapchat, Twitter…), it either has to take its chances or ask the FCC in advance to advise it on its likely regulatory treatment.

That’s right—this new regime, which credits itself with preserving “permissionless innovation,” just put a bullet in its head. It puts innovators on notice, and ensures that the FCC has the authority (if it holds up in court) to enforce its vague rule against whatever it finds objectionable.

And that’s even at the much-vaunted edge of the network that such regulation purports to protect.

Asking in advance. Nope, that’s not gonna slow innovation, not one bit …

FCC Title II and raising rivals’ costs

As the consequences of the FCC vote to classify the Internet as a Title II service start to sink in, here are a couple of good commentaries you may not have seen. Jeffrey Tucker’s political economy analysis of the Title II vote as a power grab is one of the best overall analyses of the situation that I’ve seen.

The incumbent rulers of the world’s most exciting technology have decided to lock down the prevailing market conditions to protect themselves against rising upstarts in a fast-changing market. To impose a new rule against throttling content or using the market price system to allocate bandwidth resources protects against innovations that would disrupt the status quo.

What’s being sold as economic fairness and a wonderful favor to consumers is actually a sop to industrial giants who are seeking untrammeled access to your wallet and an end to competitive threats to market power.

What’s being sold as keeping the Internet neutral for innovation at the edge of the network does so substantively by encasing the existing Internet structure and institutions in amber, which yields rents for its large incumbents. Some of those incumbents, like Comcast and Time-Warner, achieved their current market power (often resented by customers) not through rivalrous market competition, but through receiving municipal monopoly cable franchises. Yes, these restrictions raise their costs too, but as large incumbents they are better positioned to absorb those costs than smaller ISPs or other entrants would be. It’s naive to believe that regulations of this form will do much other than soften rivalry in the Internet itself.

But is there really that much scope for innovation and dynamism within the Internet? Yes. Not only with technologies, but also with institutions, such as interconnection agreements and peering agreements, which can affect packet delivery speeds. And, as Julian Sanchez noted, the Title II designation takes these kinds of innovations off the table.

But there’s another kind of permissionless innovation that the FCC’s decision is designed to preclude: innovation in business models and routing policies. As Neutralites love to point out, the neutral or “end-to-end” model has served the Internet pretty well over the past two decades. But is the model that worked for moving static, text-heavy webpages over phone lines also the optimal model for streaming video wirelessly to mobile devices? Are we sure it’s the best possible model, not just now but for all time? Are there different ways of routing traffic, or of dividing up the cost of moving packets from content providers, that might lower costs or improve quality of service? Again, I’m not certain—but I am certain we’re unlikely to find out if providers don’t get to run the experiment.

You should probably raise prices a bit during emergencies

At the Master Resource blog today: “In Defense of Price ‘Gouging’ (lines and shortages are uneconomic, discriminatory).”

In the essay I emphasize the unintended bias that results when consumer demand surges and supplies are tight, as for example when winter storm forecasts lead consumers to rush to the grocery store for bread and milk. Because retailers rarely raise prices in such situations, shortages are the predictable result. The burden of those shortages isn’t spread randomly, however, but rather tends to fall more heavily on certain segments of the population.

When emergencies happen (or are first forecasted) some consumers are readily able to rush to the store while other consumers are not so lucky. The early bird gets the bread and milk and eggs, the late arrival finds little or nothing available….

Households with … “high opportunity costs of shopping,” for example those households with infants or with both parents working full time, were more likely to miss out on the opportunity to buy foods before they ran out. It is easy to see that elderly and mobility-impaired consumers, too, would be more likely to be shut out by any sudden rush of consumers to the store after a disaster.

Higher prices would discourage over-buying and help ensure that useful consumer goods get distributed to more households, not just the households best able to rush to the store.

We can debate how significant the effect is, and I do not argue that raising prices solves all interesting problems, but a modest increase in consumer prices would likely be an improvement.
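To make the rationing point concrete, here is a toy sketch in Python. The numbers (a fixed stock of 100 loaves and a stylized hoarding rule that fades as the markup rises) are my own illustrative assumptions, not anything from the Master Resource post; the only point is that the same fixed stock reaches more households when price discourages over-buying.

```python
# Toy illustration with made-up numbers: a fixed stock of bread and households
# arriving one at a time. At the regular price, early arrivals hoard extra
# loaves; a price markup tempers the hoarding, so the same stock reaches more
# households.

def households_served(price_markup: float, stock: int = 100, households: int = 100) -> int:
    """Count how many households get at least one loaf during a storm rush."""
    # Stylized hoarding rule: 3 extra loaves at the regular price, falling to
    # zero extra loaves as the markup approaches 15 percent.
    extra = max(0, 3 - round(price_markup * 20))
    served = 0
    for _ in range(households):
        if stock <= 0:
            break                      # late arrivals find empty shelves
        stock -= min(stock, 1 + extra)
        served += 1
    return served

print(households_served(0.00))   # no markup: 25 of 100 households get bread
print(households_served(0.05))   # 5% markup: 34 of 100 households get bread
print(households_served(0.15))   # 15% markup: all 100 households get one loaf
```

Which 25 households get bread at the unchanged price is determined by who can rush to the store first, which is exactly the bias described above.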

How about grocery stores imposing a 5 percent storm surcharge that goes to charity, with an ability to opt out? Maybe a sign next to the bread shelves saying “Help those most hurt by the storm: We recommend you donate 25 cents to [name of charity] for every loaf of bread you buy. Our checkout staff will be happy to help you with your donation!”

While I have targeted many complaints at anti-price gouging laws here at Knowledge Problem, in the Master Resource post I broaden my focus a bit to encompass consumer sentiment against price increases during emergencies. We need a change in that sentiment to enable markets to work as well as they can in such situations. More effective markets also help by reducing the scope of the problems left for charitable work to address (as Dwight Lee argues in his essay in Regulation magazine; see the related discussion and link in the Master Resource post).

The quoted passages above draw in part on research on consumer purchases in Japan after the Great East Japan Earthquake of March 2011: Naohito Abe, Chiaki Moriguchi, and Noriko Inakura, “The Effects of Natural Disasters on Prices and Purchasing Behaviors: The Case of Great East Japan Earthquake,” RCESR Discussion Paper Series DP14-1, September 2014.

How can the market price of oil fall so far so fast?

If the oil market is reasonably efficient, then the price of a barrel of oil should reflect something like the cost of production of the highest-cost barrel of oil needed to just satisfy demand. In other words, the market price of oil should reflect the marginal cost of production.

The price of oil on the world market was about $110 per barrel in June 2014 and now sits just under $50 per barrel. Can it be possible that the marginal cost of producing oil was $110 per barrel in June 2014 and is only $50 per barrel in January 2015?

Yes.

Here is how: in the first half of June 2014, oil consumption was very high relative to the then-existing world oil production capability. In addition, existing oil production capability is always declining as producing fields deplete. Under such tight market conditions, the marginal cost of a barrel of oil has to cover the capital cost of developing new resources as well as the operating costs.

Toward the end of 2014, additions to world oil production capability exceeded growth in consumption, so further additions to production capability were no longer necessary and the marginal cost of producing the last barrel of oil no longer needed to cover that capital cost. Sure, some oil company somewhere had to make the capital investment necessary to develop the resource, but most of those costs are sunk, and competition in the market means producers cannot force consumers to cover them. The market price under today’s looser market conditions only needs to cover the operating costs of production.

Given the large sunk-cost component of investment in developing oil production capability, it is quite possible that the oil market was operating efficiently at $110 per barrel and is still operating efficiently today with prices under $50 per barrel.
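A stylized back-of-the-envelope version of that argument, with hypothetical cost figures of my own (nothing below comes from the post or from actual producer data):

```python
# The marginal barrel's price must cover capital recovery only when new
# production capacity actually needs to be developed; once capacity is in
# place, those development costs are sunk.

def marginal_price(operating_cost, capital_recovery, new_capacity_needed):
    """Competitive price of the marginal barrel under the simple story above."""
    if new_capacity_needed:
        # Tight market: the last barrel demanded comes from a resource that
        # still has to be developed, so price covers capital plus operating cost.
        return operating_cost + capital_recovery
    # Loose market: the last barrel comes from already-developed capacity,
    # so competition pushes the price down toward operating cost alone.
    return operating_cost

# Hypothetical marginal producer: $45/bbl operating cost, $65/bbl of capital
# recovery needed to justify developing a new field.
print(marginal_price(45, 65, new_capacity_needed=True))   # 110: June-2014-like conditions
print(marginal_price(45, 65, new_capacity_needed=False))  # 45: January-2015-like conditions
```

Same underlying costs in both cases; the only thing that changes is whether the marginal barrel still has development costs left to recover.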

NOTE: Related data on world oil production and consumption is available in the U.S. Department of Energy’s Short Term Energy Outlook. Commentary prompting this explainer comes from the UC-Berkeley Energy Institute at Haas blog.

When does state utility regulation distort costs?

I suspect the simplest answer to the title question is “always.” Maybe the answer depends on your definition of “distort,” but both the intended and generally expected consequences of state utility rate regulation have always been to push costs to be something other than what would naturally emerge in the absence of rate regulation.

More substantive, though, is the analysis provided in Steve Cicala’s article in the January 2015 American Economic Review, “When Does Regulation Distort Costs? Lessons from Fuel Procurement in US Electricity Generation.” (Here is an earlier ungated version of the paper.)

Here is a summary from the University of Chicago press release:

A study in the latest issue of the American Economic Review used recent state regulatory changes in electricity markets as a laboratory to evaluate which factors can contribute to a regulation causing a bigger mess than the problem it was meant to fix….

Cicala used data on almost $1 trillion worth of fuel deliveries to power plants to look at what happens when a power plant becomes deregulated. He found that the deregulated plants combined save about $1 billion a year compared to those that remained regulated. This is because a lack of transparency, political influence and poorly designed reimbursement rates led the regulated plants to pursue inefficient strategies when purchasing coal.

The $1 billion that deregulated plants save stems from paying about 12 percent less for their coal because they shop around for the best prices. Regulated plants have no incentive to shop around because their profits do not depend on how much they pay for fuel. They also are looked upon more favorably by regulators if they purchase from mines within their state, even if those mines don’t sell the cheapest coal. To make matters worse, regulators have a difficult time figuring out if they are being overcharged because coal is typically purchased through confidential contracts.

Although power plants that burned natural gas were subject to the exact same regulations as the coal-fired plants, there was no drop in the price paid for gas after deregulation. Cicala attributed the difference to the fact that natural gas is sold on a transparent, open market. This prevents political influences from sneaking through and allows regulators to know when plants are paying too much.

What’s different about the buying strategy of deregulated coal plant operators? Cicala dove deep into two decades of detailed, restricted-access procurement data to answer this question. First, he found that deregulated plants switch to cheaper, low-sulfur coal. This not only saves them money, but also allows them to comply with environmental regulations. On the other hand, regulated plants often comply with regulations by installing expensive “scrubber” technology, which allows them to make money from the capital improvements.

“It’s ironic to hear supporters of Eastern coal complain about ‘regulation’: they’re losing business from the deregulated plants,” said Cicala, a scholar at the Harris School of Public Policy.

Deregulated plants also increase purchases from out-of-state mines by about 25 percent. As mentioned, regulated plants are looked upon more favorably if they buy from in-state mines. Finally, deregulated plants purchase their coal from more productive mines (coal seams are thicker and closer to the surface) that require about 25 percent less labor to extract from the ground and that pay 5 percent higher wages.

“Recognizing that there are failures in financial markets, health care markets, energy markets, etc., it’s critical to know what makes for ‘bad’ regulations when designing new ones to avoid making the problem worse,” Cicala said. [Emphasis added.]
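A quick back-of-the-envelope check on the figures in that summary (my own arithmetic; only the 12 percent discount and the $1 billion in annual savings come from the press release, the rest is implied):

```python
# Back-of-the-envelope check on the press release's figures. The inputs below
# are the two numbers quoted above; everything derived from them is implied
# arithmetic, not taken directly from Cicala's paper.

savings_per_year = 1e9        # deregulated plants' combined annual savings ($)
price_discount = 0.12         # deregulated plants pay ~12% less for coal

# If paying 12% less saves $1B, the counterfactual (regulated-price) coal bill is:
counterfactual_spend = savings_per_year / price_discount
actual_spend = counterfactual_spend - savings_per_year

print(f"Implied coal bill at regulated prices: ${counterfactual_spend/1e9:.1f}B/yr")  # ~$8.3B
print(f"Implied coal bill actually paid:       ${actual_spend/1e9:.1f}B/yr")          # ~$7.3B
```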

Moody’s concludes: mass grid defection not yet on the horizon

Yes, solar power systems are getting cheaper and battery storage is improving. The combination has many folks worried (or elated) about the future prospects of grid-based electric utilities when consumers can get the power they want at home. (See Lynne’s post from last summer for background.)

An analysis by Moody’s concludes that battery storage costs remain an order of magnitude too high, so grid defection is not yet a demonstrable threat. Analysis of consumer power-use data leads Moody’s to project a need for a larger home system than other analysts have assumed. Moody’s further suggests that consumers will be reluctant to make the lifestyle changes involved (frequent monitoring of battery levels, forced conservation during extended periods of low solar output), so grid defection may be even slower than the simple engineering economics computation would suggest.
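For a sense of what that simple engineering economics computation looks like, here is a rough sketch. Every figure below is an assumption of mine chosen only for illustration (installed battery cost, cycle life, days of autonomy, household usage, retail rate), not anything from the Moody’s report:

```python
# Rough engineering-economics sketch of home battery storage versus grid supply.
# All figures are illustrative assumptions, not data from the Moody's analysis.

battery_cost_per_kwh = 500.0     # assumed installed storage cost, $/kWh of capacity
cycle_life = 3000                # assumed full charge/discharge cycles over battery life
days_of_autonomy = 3             # assumed storage sized to ride through cloudy days
daily_use_kwh = 30.0             # assumed household consumption per day
retail_rate = 0.13               # assumed grid price, $/kWh

# Cost of cycling energy through the battery, ignoring the solar panels themselves:
storage_cost_per_kwh_delivered = battery_cost_per_kwh / cycle_life

# Up-front battery outlay for a system sized to ride through low-solar stretches:
upfront_battery_cost = battery_cost_per_kwh * daily_use_kwh * days_of_autonomy

print(f"Storage cost per kWh delivered: ${storage_cost_per_kwh_delivered:.2f}")  # ~$0.17/kWh
print(f"Grid retail rate:               ${retail_rate:.2f}/kWh")
print(f"Up-front battery outlay:        ${upfront_battery_cost:,.0f}")           # ~$45,000
```

Even on these made-up numbers, cycling power through the battery alone costs more per kWh than buying from the grid, before counting the solar panels or the up-front outlay; that is the flavor of result that keeps mass defection off the near-term horizon.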

COMMENT: I’ll project that in a world of widespread consumer grid defection, we will see two developments to help consumers avoid forced conservation. Nobody will have to miss watching Super Bowl LXXX because it was cloudy the week before in Boston. First, plug-in hybrid vehicle hook-ups, so that home batteries can be recharged by the consumer’s gasoline or diesel engine. Second, home battery service companies offering similar mobile recharge services (or hot-swapping home battery systems, etc.). Who knows, in a world of widespread defection, maybe the local electric company will offer spot recharge services at a market-based rate?

[HT to Clean Beta]

FERC’s Clark looks to states for help with regional markets

EnergyWire reports, “FERC’s Clark looks to states for help fixing dysfunctional markets.”

It is, I guess, a reasonable impulse. Given the way regulatory authority over the electric power industry is currently divided between the feds and the states, there are limits on what the one can do without the other. The fate of FERC Order 745, on demand response, offered evidence of the conflict: the court found that FERC’s rule encroached on states’ exclusive jurisdiction. States have also seen their efforts constrained (for example, the failure of New Jersey’s effort to subsidize investment in generating capacity).

FERC Commissioner Tony Clark is no stranger to state regulatory perspectives, having served nearly a dozen years on the utility commission in North Dakota, finishing as chair, and a year as president of the National Association of Regulatory Utility Commissioners (NARUC). Still, looking to state regulators, each elected or appointed within the political environment of local state capitals, to adjudge matters of costs and benefits on a regional basis seems to me a case of looking for love in all the wrong places.

RELATED NEWS: The newest FERC Commissioner, Colette Honorable, was sworn in on January 5, 2015. Like Clark, Honorable formerly chaired a state utility regulatory commission (Arkansas) and served as president of NARUC.