Regulation’s effects on innovation in energy technologies: the experimentation connection

Lynne Kiesling

Remember the first time you bought a mobile phone (which in my case was 1995). You may have been happy with your land line phone, but this new mobile phone thing looked like it would be really handy in an emergency, so you-in-1995 said sure, I’ll get a cell phone, but I won’t really use it that much. Then the technology improved, and more of your friends and family got phones, so you used it more. Then you saw others with cool flip phones, in colors, and you did some searching to see if other phones had features you might like. Then came text messaging, and you experimented with learning a new shorthand language (or, if you’re like me, you stayed a pedant about spelling even in text messages that you had to tap out on number pad keys). You adopted text messaging, or not. Then came the touch screen, largely via the disruptive iPhone, and the cluster of smartphone innovation was upon us. Maybe you have a smartphone, maybe you don’t; maybe your smartphone is an iPhone, maybe it isn’t. But since 1995, your choice of communication technology, and the set from which you can choose, has changed dramatically.

This change didn’t happen overnight, and for most people was not a discrete move from old choice to new choice, A to B, without any other choices along the way. Similarly for technological change and the production of goods and services. For both consumers and producers, our choices in markets are the consequence of a process of experimentation, trial and error, and learning. Indeed, whether your perspective on dynamic competition is based on Schumpeter or Hayek or Kirzner (or all of the above), the fundamental essence of competition in market processes is that it’s a process of experimentation, trial and error, and learning, on the part of both producers and consumers. That’s how we get new products and services, that’s how we signal to producers whether their innovations are valuable to us as consumers, that’s how innovation creates economic growth and vibrancy, through the application of our creativity and our taste for creating and experiencing novelty.

This kind of dynamism is common in our world, and is increasingly an aspect of our lives that creates value for us; mobile telephony is the most obvious example, but even in products as mundane as milk, the fundamental aspect of the market process is this experimentation, trial and error, and learning. How else would Organic Valley have come out with a line of milk that is entirely from pasture-raised cows? (I am happily consuming this milk; pasture-raised cows produce milk with more essential fatty acids and conjugated linoleic acid, which are very important for health.)

But this kind of dynamism, while common, is not pervasive. Institutions matter, and in particular, various forms of government regulation can influence the extent to which such technological dynamism occurs in a market. The example I have in mind as a counterpoint, the example I want to explain and understand, is consumer-facing electricity technologies, like thermostats and home energy management systems. For the past several years there has been considerable innovation in this space, due to the application and extension of digital communication technology innovations. But despite the frequent claims over the past few years that this year will be the year of the consumer energy technology, it keeps not happening.

Tomorrow in New Orleans, at the Southern Economic Association meetings, I’ll be presenting a paper that grapples with this question. My argument is that traditional economic regulation of the electricity industry slows or stifles innovation because regulation undercuts the experimentation, trial and error, and learning of both producers and consumers. As I state in the abstract:

Persistent regulation in potentially competitive markets can undermine consumer benefits when technological change both makes those markets competitive and creates new opportunities for market experimentation. This paper applies the Bell Doctrine precedent of “quarantine the monopoly” to the electricity industry, and extends the Bell Doctrine by analyzing the role of market experimentation in generating the benefits of competition. The general failure to quarantine the monopoly wires segment and its regulated monopolist from the potentially competitive downstream retail market contributes to the slow pace and lackluster performance of retail electricity markets for residential customers. The form of this failure to quarantine the monopoly is the persistence of an incumbent default service contract that was intended to be a transition mechanism to full retail competition, coupled with the regulatory definition of product characteristics and market boundaries that is necessary to define the default product and evaluate the regulated monopolist’s performance in providing it. The consequence of the incumbent’s incomplete exit from the retail market suggests that as regulated monopolists and regulators evaluate customer-facing smart grid investments, regulators and other policymakers should consider the potential anti-competitive effects of the failure to quarantine the monopoly with respect to the default service contract and in-home energy management technology.

In August 2011 I wrote about the Bell Doctrine, Baxter’s precedent from the U.S. v. AT&T divestiture case, and how we have failed to quarantine the monopoly in electricity. This paper is an extension of that argument, and I welcome comments!

If you’ll be at the SEA meetings, I hope to see you there; I am headed to NOLA tonight, and look forward to a fun weekend full of good economic brain candy.

Antitrust and Google search bias

Lynne Kiesling

For the past year and a half the Federal Trade Commission has been investigating the potential anti-competitive effects of Google’s search-based business model. The European Union has also been pursuing antitrust complaints against Google. The main accusation is Google search bias — Google’s algorithm prioritizes links both to paid advertisers (which are shaded and labeled to indicate the payment) and to affiliated content sites. Google’s competitors complained to the FTC … but they do the same thing! For example, if you do a stock ticker search for, say, AAPL (yes, I’m being cheeky) on Google, Bing, and Yahoo, each one will prioritize its own affiliated finance site before listing the sites of its competitors, the Wall Street Journal, and so on.

We had a panel discussion on this issue at the Northwestern University-Searle Center annual conference on antitrust economics and competition policy last Friday afternoon, with panelists including Stanford’s Susan Athey (who has done some work with Microsoft) and Google’s Hal Varian and Preston McAfee. The discussion was as informed, informative, and lively as you’d expect.

There’s also a panel debate going on right now in DC on the issue, hosted by Tech Freedom and including friend-of-Knowledge-Problem Geoff Manne, who has written extensive criticisms of the FTC investigation. I think Geoff has an important point when he asks for evidence of consumer harm in comparison to what a likely antitrust remedy would be. If, for example, the FTC required Google to modify its search algorithm to randomize the top results rather than prioritizing its affiliated content sites, then wouldn’t Microsoft and Yahoo have to implement that randomization as well? And if that’s the remedy, does that make consumers better off or worse off?

I think Bob Hahn and Peter Passell get it right when they say, as they did in a post yesterday at their blog regulation 2point0,

Indeed, anybody who’s been paying attention ought to have figured out by now that information technology is simply moving too fast to allow even the most nimble companies to grab the market goodies and lock the door behind them. …

In a hypercompetitive environment like this, where the product mix sometimes changes faster than Lady Gaga’s wardrobe, antitrust regulators would do well to pick and choose their interventions carefully. And to help get them there, academics really need to provide a more careful accounting of the state of competition in IT.

Hear, hear.

What do you think? Do you think you are “locked in” when you perform a search on a specific platform? Do you just click on the top link? Or when presented with search results do you look for specific sources that have a particular reputation or credibility to you?

Another good response to the Obama administration’s mistaken antitrust policy

Michael Giberson

George Priest, professor of economics and law at Yale, clearly outlines the main errors of the Obama administration’s decision to oppose the AT&T/T-Mobile merger and cites relevant evidence backing the view:

It is very difficult at an abstract level to know what the effects of a merger or acquisition will be on competition within an industry. Firms may merge to create market power and increase prices, though they may also merge to create efficiencies that lower prices.

The Justice Department presumes that the acquisition of T-Mobile (the fourth largest wireless provider) by AT&T (the second largest) will lead to “higher prices . . . and lower quality products” based on the high market share that would result. But market share is a very rough proxy for market power and essentially meaningless in a network industry.

There are strong reasons to predict that AT&T’s acquisition will lower prices and improve product quality. First, there’s lots of competition in the wireless market. Prices have been declining progressively over time. There are many local market competitors with discount and pre-paid plans….

Second, the best evidence of the prospective effect of a proposed acquisition is the response of competitors that will face the combined firms. The chief competitor, Sprint, the third largest wireless company, has been lobbying to stop the merger from its first announcement.

If the acquisition would lead to increased prices and lower quality products as the Justice Department has claimed, Sprint would be better off after the acquisition… Sprint would oppose the acquisition—as it has—only if it thought that the merger would put it in a worse position by increasing the competitive pressures that it already faces.

The market—though not the Obama administration—understands this point. On the day that the Justice Department announced its opposition to the acquisition, Sprint’s share price rose 5.9%, reflecting investors’ belief that Sprint will be in a better competitive position without the acquisition.

The Obama administration also claimed blocking the merger would protect jobs; Priest nails the response:

The Obama administration’s emphasis on job maintenance is even more confused. The administration has argued that the acquisition should be opposed because mergers reduce employment by eliminating redundant jobs. But a sound economy is not built on redundant jobs. An economy becomes stronger as redundant jobs are eliminated, costs and prices reduced, and the effective wealth of the nation enhanced. A major reason that the Obama administration’s efforts to stimulate the economy have failed is that it has consistently poured money into negative-value investments.

[See also the Streetwise Professor on Priest’s article.]

Suderman on spectrum

Lynne Kiesling

As an addendum to my earlier post on the DOJ’s challenge of the AT&T/T-Mobile merger, Peter Suderman at Reason has an informative post (with good links) making essentially the same point as mine. The more the merrier!

Quality, broadband, and spectrum: What the DOJ’s AT&T/T-Mobile lawsuit misses

Lynne Kiesling

Yesterday’s announcement that the US DOJ would challenge the merger of AT&T’s wireless business with T-Mobile’s was surprising, and its approach to the merger seems conventional, rooted in old HHI-market-share and price-effect metrics. The DOJ’s analysis suggests that, due to the substantial overlap in the existing separate AT&T and T-Mobile networks, the merger would lead to higher HHIs and larger market shares, and most consumers nationally would experience higher prices; therefore the merger would have anti-competitive effects.
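To see concretely what the "old HHI" screen does, here is a minimal sketch. The market shares below are hypothetical round numbers for illustration, not the actual 2011 figures the DOJ used, and lumping the fringe into a single "Others" firm overstates concentration slightly:

```python
# Hypothetical national wireless market shares in percent (illustrative only).
shares = {"Verizon": 33, "AT&T": 27, "Sprint": 12, "T-Mobile": 11, "Others": 17}

def hhi(shares_pct):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with shares expressed in percentage points (so the max is 10,000)."""
    return sum(s ** 2 for s in shares_pct)

pre_merger = hhi(shares.values())

# Combine AT&T and T-Mobile into a single firm and recompute.
merged = dict(shares)
merged["AT&T/T-Mobile"] = merged.pop("AT&T") + merged.pop("T-Mobile")
post_merger = hhi(merged.values())

print(pre_merger, post_merger, post_merger - pre_merger)  # 2372 2966 594
```

Under the 2010 Horizontal Merger Guidelines, a post-merger HHI above 2,500 marks a highly concentrated market, and an increase of more than 200 points there is presumed likely to enhance market power, so a four-to-three merger in a national market definition trips the screen almost mechanically, without any reference to quality, cost, or spectrum considerations.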

The complaint has more depth than that (separate analysis for business markets and consumer markets, for example), but that seems to be the core of the argument. I think that argument misses a few important points in such a dynamic market.

The first important point is the quality dimension of wireless service, and the costs associated with providing quality service(s). In this market some of the most important categories of quality are speed, signal strength, lack of latency, and lack of dropped calls. Providing such quality means adding bandwidth and building a denser network of towers. In a lot of locations, building more towers is increasingly costly due to siting, zoning, and permitting costs and lags. Adding bandwidth to existing capacity is also not cheap. Thus both AT&T and T-Mobile have been suffering in quality comparisons to their largest competitor, Verizon (I don’t know much about Sprint’s quality, or the smaller regional providers), and are looking at substantial investments and time lags to expand and improve their bandwidth capacity and other features that contribute to quality services. I’ve seen a couple of commentators point to the challenges both companies face as stand-alone competitors if they are going to upgrade their networks from 3G to compete with Verizon’s 4G LTE; in fact, Deutsche Telekom has been trying to sell T-Mobile for a while and has balked at the costs of these investments. In this cost environment, the lower-cost way to increase quality is likely to be the merger, which would allow the merged firm to combine the two companies’ bandwidth and towers to get that capacity without additional siting, permitting, etc. Thus on a cost basis there’s room to argue that prices could fall for higher-quality (LTE) services, and there’s also room to argue that if prices do go up, it’s a reflection of consumer demand for high-speed LTE service for smartphones. It’s hard to capture that quality dimension, and how rapidly it can change in such a dynamic technological environment, when your definition of anti-competitive focuses primarily on the effect on prices. This is a very Schumpeterian point.

That point leads to the second point not to overlook: market definition. Every antitrust challenge of a merger is going to hinge in some way on market definition. If you define the market as national wireless telephony, then the merger would create essentially a duopoly with Sprint as a small third firm. But, as Geoff Manne points out, there are two dimensions on which that’s not the correct market definition — there are regional competitors, and there are other competitors in the broader market, the more relevant market, which is broadband:

Meanwhile, even on a national level, the blithe dismissal of a whole range of competitors is untenable.  MetroPCS, Cell South and many other companies have broad regional coverage (MetroPCS even has next-gen LTE service in something like 17 cities) and roaming agreements with each other and with the larger carriers that give them national coverage.  Why they should be excluded from consideration is baffling.  Moreover, Dish has just announced plans to build a national 4G network (take that, DOJ claim that entry is just impossible here!).  And perhaps most important the real competition here is not for mobile telephone service.  The merger is about broadband.  Mobile is one way of getting broadband.  So is cable and DSL and WiMax, etc.  That market includes such insignificant competitors as Time Warner, Comcast and Cox.  Calling this a 4 to 3 merger strains credulity, particularly under the new merger guidelines.

Yes, particularly the point about how this merger, and the broader evolution of the industry, is about broadband. The distinctions among wireless telephony, satellite, and cable are diminishing as we see convergence across the communications platforms. This is another Schumpeterian point.

Which leads to the final point that cannot be overlooked in analyzing this merger and the DOJ’s challenge of it. It’s about broadband, and thus in large part given the wireless companies involved, that means it’s about spectrum. As I argued in one of the first Knowledge Problem posts back in 2002, federal spectrum policy of the past 75 years has led to distortion, delay, rent-seeking, and political manipulation by incumbent rights holders.

Because of the politics of spectrum rights and the lack of private spectrum ownership, resources might not get to move to higher-valued uses. The FCC is not going to be as impartial a rights arbiter as the combination of well-defined spectrum ownership and a court system using the rule of law. The absence of spectrum privatization may slow or deter potentially beneficial technological change, and leaves in place a political process more prone to financial and other manipulation than one based on markets and law.

If we had alienable spectrum property rights, as Ronald Coase laid out in his spectacular 1959 article on the FCC and spectrum policy, then it’s highly likely that AT&T and T-Mobile would have alternative investment opportunities to gain more spectrum to increase their wireless broadband capacity in their stand-alone networks. Failing that, as a consequence of our turgid spectrum policy, their most attractive feasible alternative for acquiring more spectrum rights is to merge. Do not overlook the effects of poor spectrum policy on the business models of these companies. This is a Coasian point. And, to make a related Doug North point, institutions matter. I made this point initially in March when the merger was first announced.

Being an optimist, let me invoke Jerry Ellig’s point about the DOJ challenge:

For once, the high-profile action everyone pays attention to will occur in an antitrust forum where the decision criterion is the effects of the merger on consumer welfare, period. Regardless of what one thinks about the merger, it’s nice to see that we’ll finally have a knock-down, drag-out fight based on whether a big telecommunications merger harms consumers and competition.  That’s the antitrust standard the Department of Justice has to satisfy in order to prevent the merger.

And to do that it’s going to have to engage these spectrum policy issues, which it has thus far not done.

I am far from expert on all of these issues, so for more I commend to your attention this post from Josh Wright on how the DOJ challenge affected Sprint’s share price yesterday, in addition to Geoff’s post above. Our friends at Truth on the Market and The Technology Liberation Front will be worth reading regularly as this unfolds.

Regulatory inertia, antitrust edition

Lynne Kiesling

This article in the Wall Street Journal last week got less attention than I expected (perhaps because of budget, Libya, and other news). It’s a very good analysis of bureaucrat v. bureaucrat competition between the DOJ and the FTC over which agency will take the lead in prosecuting antitrust cases:

Both agencies are charged with enforcing antitrust law, a situation that has prevailed for almost a century, and it’s up to them to sort out disputes. Neither will disclose how often they occur, but antitrust lawyers and agency officials say they have been rising in number and intensity, in part because converging industries—especially in the realm of technology—have blurred the agencies’ traditional lines of responsibility. …

Some methods used to resolve agency disputes belie the stakes involved. In addition to the most recent coin toss—which several people familiar with the matter said the Justice Department won—the agencies have employed the “possession arrow” system borrowed from college basketball, in which they take turns. That prevents either agency from claiming jurisdiction over a company or industry sector in the future. …

The two agencies have different legal procedures for challenging business deals or practices they believe to be anticompetitive. The Justice Department must work through the federal court system and face judges who are often skeptical of antitrust law. The FTC, by contrast, tries cases in its own administrative law system. This, many lawyers believe, provides a significant home-court advantage.

I’ve always classified the FTC’s jurisdiction as being more focused on mergers in consumer retail products and industries.

The article goes on to describe the real costs that this jurisdictional squabbling creates for firms, adding time and expense to an already long and expensive merger process. It concludes with observations from FTC commissioner and former FTC chair William Kovacic, who argues that it’s time to revamp the structure of the federal government’s antitrust prosecution.

I think there’s a crucially important, more general insight to draw from this story: the inevitability of regulatory inertia relative to the underlying dynamism and change in the economy. Note in the quote above what is cited as an impetus for this conflict: “… converging industries—especially in the realm of technology—have blurred the agencies’ traditional lines of responsibility.”

By its nature regulation (including antitrust enforcement) relies on establishing definitions, guidelines, limits on behavior (of an agency as well as of firms), and legalistic, administrative procedures for carrying them out. In the case of antitrust and the split jurisdiction between the DOJ and the FTC, these strictures also include stipulations of who will concentrate on which industries. One of the hallmarks of economic and technological dynamism is the Schumpeterian creative destruction of industry boundaries — whole new industries exist that could not have been imagined a century ago, in the heyday of establishing regulatory agencies.

Why are regulatory agencies slow to adapt to such organic, evolutionary changes in technology and the economy? Here I think Schumpeter meets Buchanan and Tullock — once established, those working in agencies have an interest in maintaining the status quo jurisdiction and budget of the agency, despite any apparent mismatch of its functions with changes in the underlying economy. Changes in regulation require changes in legal procedures, and may even require changes in enabling legislation, adding yet another layer of inertia to the process. In addition, I think that legal procedures intended to increase agency transparency and accountability, such as the Administrative Procedure Act, exacerbate the legalistic bureaucracy of regulation and reinforce the slow pace of regulatory adaptation to underlying dynamic change.

One unfortunate consequence of such regulatory inertia is the potential for reduced total welfare, through losses in both consumer surplus and producer surplus. Such regulatory agencies were established primarily to protect consumer surplus, but one consequence of technological change has been to enhance consumers’ abilities to investigate and protect their own interests, as personally and subjectively defined rather than as bureaucratically defined (which usually means lower prices). But regulatory institutions adapt slowly to such change, as this example illustrates, to the detriment of total welfare.

This observation extends to other forms of regulation, including state-level public utility regulation. One of the things I find most paradoxical in the current approach to smart grid investment is how the technology adoption decision has been incorporated into the regulatory process, which the above analysis suggests is counter-productive.

Another court dismisses price fixing, price gouging claims against Martha’s Vineyard gasoline retailers

Michael Giberson

The Martha’s Vineyard Times:

A panel of judges sitting in the federal First Circuit Court of Appeals has upheld a lower court ruling that gasoline prices on Martha’s Vineyard have not been illegally inflated by a conspiracy among retailers, according to a report by “The Docket,” the news blog of Massachusetts Lawyers Weekly. The decision was entered a week ago.

Plaintiffs had complained that four of the Vineyard’s nine gas stations entered into a price-fixing conspiracy and engaged in price gouging in the aftermath of hurricanes Katrina and Rita in 2005.

Chief Judge Sandra Lynch, writing for Judges Bruce M. Selya and Jeffrey R. Howard, held that the defendant gasoline retailers did nothing that violated either the Sherman Antitrust Act or a price-gouging regulation promulgated under the Massachusetts consumer protection statute.

(Link to the decision in William White, et al. v. R. M. Packer Co., et al. by the U.S. Court of Appeals for the First Circuit.)

The ruling upholds the decision made a year ago in U.S. District Court. On the price fixing claim, both courts concluded that the plaintiffs’ evidence only suggested the existence of parallel pricing and didn’t show direct evidence of price fixing. The law doesn’t insist firms in the same market compete heavily on price, just that they don’t conspire to restrain trade. On the price gouging claim, the district court found that the price changes observed were “consistent with the normal operation of the market.” The appeals court said plaintiffs “have not shown a ‘gross disparity’ in prices under the state price-gouging rule.”

A year ago I commented:

My general reaction from reading parts of Gollop’s testimony [for the plaintiff] was that it was very basic industrial organization analysis – all comparative price movements and changing margins – and neglected completely the extensive economics literature on retail gasoline pricing. The law likely makes no special distinction for gasoline pricing cases, so the analysis wouldn’t have to address what is known about gasoline prices, but neglecting the literature may have led plaintiffs to mistake common retail gasoline price patterns as evidence of price fixing.

Indeed the courts made no special use of the gasoline pricing literature. But plaintiffs pursued the appeal in a way that attempted to take advantage of the relatively normal phenomenon of asymmetric price adjustment in retail gasoline markets. In short, the plaintiffs wanted the court to find that retailers were price gouging because they failed to reduce retail prices as fast as wholesale prices were falling. The court didn’t buy it.

Typically in retail gasoline, profit margins are higher when prices are falling and lower when prices are rising. Plaintiffs charged that price gouging took place over a period beginning with Hurricane Katrina and ending three months later on December 1, 2005. Generally speaking: prices rose sharply with Katrina, began dropping, rose again around Hurricane Rita, then fell for the next several weeks. The significant times with high gross margins were, not surprisingly, periods of falling prices.

Both courts struggled a bit with the definition of price gouging, finding little direct guidance in Massachusetts law. But one thing the courts saw pretty clearly: price gouging laws are about unconscionably high prices, and prices can’t become unconscionably high when they are falling. Price gouging law does not require retailers to pass along falling wholesale prices.

NOTE: See my post of last year for links to both the plaintiffs’ and defendants’ expert testimony.