Complexity, heuristics, and the traveling salesman problem

Add this one to your long-reads queue, because it’s well worth it: Tom Vanderbilt writes in Nautilus about the traveling salesman problem and how algorithmic optimization helps us understand human behavior more deeply. It’s a thorough and nuanced analysis of the various applications of algorithms to the traveling salesman problem — what’s the most efficient way (which you of course have to define, in time or money or gasoline, etc.) to deliver some number n of packages to some number d of destinations, given your number t of trucks/drivers? This is a tough problem for several reasons, and Vanderbilt’s discussion of those reasons is clear and interesting.

We can start with a simple transportation model with small numbers of packages, destinations, and trucks. But as those numbers increase, the problem becomes dramatically harder: the number of possible routes grows combinatorially, even faster than exponentially. Then think about what happens when the locations of the destinations change every day, as is the case for UPS and FedEx deliveries. Then think about what happens when you add in heterogeneity of the deliveries; Vanderbilt opens with a girl pointing out that her mother would never buy perishables and then leave them in the car all day, so the nature of the item changes the constraints on the definition of the efficient route.
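
To get a feel for that growth, here is a quick back-of-the-envelope sketch (my own illustration, in Python) of how fast the number of possible routes explodes for even one vehicle:

```python
import math

# For one vehicle and n stops with symmetric travel costs, fixing the
# start point leaves (n - 1)!/2 distinct round trips to compare.
for n in (5, 10, 15, 20):
    print(f"{n:>2} stops: {math.factorial(n - 1) // 2:,} possible routes")

# 20 stops already yields over 60 quadrillion routes; checking them all
# is hopeless, which is why real systems rely on heuristics.
```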

Her comment reflects a basic truth about the math that runs underneath the surface of nearly every modern transportation system, from bike-share rebalancing to airline crew scheduling to grocery delivery services. Modeling a simplified version of a transportation problem presents one set of challenges (and they can be significant). But modeling the real world, with constraints like melting ice cream and idiosyncratic human behavior, is often where the real challenge lies. As mathematicians, operations research specialists, and corporate executives set out to mathematize and optimize the transportation networks that interconnect our modern world, they are re-discovering some of our most human quirks and capabilities.

One of the most intriguing aspects Vanderbilt highlights in implementing logistics that reflect good solutions to the TSP is the humanity of the driver. Does the dispatcher know the driver, know whether s/he is reliable? That may affect how to define the route, and how many stops or changes to put on that particular person’s schedule. Is the driver prone to fatigue, and how does that fatigue affect the driver’s decision-making? What are the heuristics or rules of thumb that different drivers use to make decisions in the face of uncertainty, given the cognitive limitations of humans? How different will those heuristics be from driver to driver, and how do they affect the logistics of the system?
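
To make “rules of thumb” concrete, here is a minimal sketch of the classic nearest-neighbor heuristic (my own illustration, not from Vanderbilt’s article): from wherever you are, drive to the closest stop you haven’t visited yet. It is fast and intuitive, but it can land well off the optimal route, which is exactly the gap between human heuristics and algorithmic optimization:

```python
import math

def nearest_neighbor_route(depot, stops):
    """Greedy rule of thumb: always drive to the closest unvisited stop."""
    route = [depot]
    remaining = list(stops)
    current = depot
    while remaining:
        nearest = min(remaining, key=lambda s: math.dist(current, s))
        remaining.remove(nearest)
        route.append(nearest)
        current = nearest
    route.append(depot)  # finish back at the depot
    return route

# Example: a depot at the origin and five delivery points (x, y).
stops = [(2, 3), (5, 1), (1, 6), (4, 4), (6, 5)]
print(nearest_neighbor_route((0, 0), stops))
```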

What Vanderbilt finds is that good logistics systems take the organic, emergent system that incorporates those heuristics into account when devising the TSP algorithm. They leave a human component in the logistics, but also use the human component to inform and change the algorithm. Another important element is data, because any such algorithm works in conjunction with data such as location; GIS mapping capabilities improve the data used to establish, test, and monitor TSP algorithms.

Economic theories as recipes

Michael Giberson

Today marks the 100th anniversary of the birth of Julia Child, and surprisingly, it was a blog post on economic theorizing that reminded me of the famous cookbook author this morning. I’m sure it wasn’t quite the metaphor Rajiv Sethi had in mind when he posted “On Prices, Narratives, and Market Efficiency,” but the article suggested to me that just maybe economic theories are a bit like recipes.

Sethi’s article reacts to John Kay’s response to Robert Lucas’s article addressing the Queen of England’s question: Why had economists failed to predict the financial crisis? More directly, Sethi explores how economists should change the way they work in order to better understand economic fluctuations. Sethi likes Kay’s formulation of prices as the “product of a clash between competing narratives,” and he ties it usefully to some existing theorizing by Michael Harrison and David Kreps on speculators with heterogeneous beliefs.

The idea [from Harrison and Kreps] that prices are “obtained through market aggregation of diverse investor assessments” is not too far from Kay’s more rhetorically powerful claim that they are “the product of a clash between competing narratives”.  What Harrison and Kreps do not consider is how diverse investor assessments change over time, since beliefs about transition probabilities are exogenously given in their analysis. But Kay’s formulation suggests how progress on this front might be made.

Kay suggests that more than just deductive logic will be needed to understand the variety of conflicting narratives and how they change; Sethi, while agreeing, adds that deductive reasoning could be pushed still further.

The economic-theories-as-recipes metaphor suggests that economics is just a way to get from a bundle of raw elements, different kinds of data and interpretations, to a more useful product. Unsettling in such a metaphor is that recipes are many and diverse, and unifying principles appear to be hard to find. There is no general theory of cooking that will tell us the one best way to combine a given set of raw materials into the perfect finished product. It depends on what you want to produce, and what tools are at hand.

Perhaps, too, there is no one best way to distill a given set of observations into the perfect slice of economic understanding; perhaps there is no general theory of economics. This is not Sethi’s argument, and it is not even my own settled view. Yet if the economy itself is always open to entrepreneurial insight, which is one way of saying we are never at a point in the economy where improvement is impossible, then economics itself is similarly always open to insight and improvement.

Doing what seems like it should work: Experiments, tests, and social progress

Michael Giberson

My title is a little grand, at least the “and social progress,” but maybe it will be justified in some later, more carefully worked out version of the ideas clashing about in my head. As this is a blog, I’m sharing the more immediate, less carefully worked out version. ;-)

I’ve been reading Redirect: The Surprising New Science of Psychological Change. My wife brought it home from the library and then recommended it to me. (Thanks!) The book makes some surprisingly strong claims for personal improvements from what the author calls “story editing,” a bundle of techniques that subtly (or sometimes not so subtly) get people to revise their self narratives. (More from the Scientific American blog, more from Monitor on Psychology.)

A counterpart to that focus of the book is an emphasis on testing social and psychological interventions to discover what actually works. Author Timothy Wilson details numerous self-help and social change projects, some of which capture millions or even billions of dollars in public support, which seem like they should work but, when subjected to careful evaluation, show no evidence of success. In fact, some very expensive programs actually seem to worsen the problem they were designed to fix: programs to fight teenage smoking that lead to higher smoking rates, programs to discourage teenage pregnancy that lead to higher pregnancy rates, efforts to discourage littering – or cheating – on campus that have the opposite effects. Wilson advocates a strong preference for testing social interventions with randomized control experiments when possible (and ethical). When randomized control tests are not possible, other attempts at measurement and replication are important, even though they are difficult to do well.

Whether or not “story editing” is key to successful personal and social change – Wilson makes a strong case, but he could be cherry picking his evidence and I’m sure he has his professional critics – the emphasis on experimentation and testing interventions is an important one.

Lynne’s posts last week on experimentation in social contexts are related: Economic experimentation, economic growth, and regulation and Experimentation, Jim Manzi, and regulation/deregulation. I’m most of the way through Russ Roberts’s EconTalk interview with Jim Manzi that Lynne mentioned in the second-listed link (recommended); Manzi makes related arguments in favor of well-designed experiments where possible, and for trial-and-error experimentation where controlled experimentation is not possible.

In both Wilson’s book and the Manzi interview (and apparently in Manzi’s book Uncontrolled, which I haven’t read yet), the limits of multivariate analysis of naturally generated data – i.e., almost all econometric analysis – are examined and found wanting. As Manzi explains, “omitted variable bias” is massive when examining data on human systems; the systems are simply too complex to produce reliable, non-obvious predictions via multivariate analysis because you cannot control for all of the possible effects and interactions influencing the data. He suggests that while 90 percent of studies relying on well-designed randomized control experiments are subsequently replicated, that figure drops to 20 percent or so for studies relying primarily on well-designed multivariate analysis.
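
Omitted variable bias is easy to demonstrate with a toy simulation (my own sketch, with invented numbers, not Manzi’s example): when a regressor is correlated with an influence omitted from the model, the estimate absorbs part of the omitted effect:

```python
import random

# True model: y = 2*x + 3*z + noise, where x and z are correlated.
# Regressing y on x alone attributes part of z's effect to x.
random.seed(0)
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]   # x correlated with omitted z
y = [2 * xi + 3 * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

def slope(xs, ys):
    """Ordinary least squares slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

print(f"true effect of x: 2.0, estimate ignoring z: {slope(x, y):.2f}")
# Prints roughly 3.5: the omitted variable badly inflates the estimate.
```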

In a post on the deterrence effect of the death penalty, Timothy Taylor provides an example of the difficulties of using multivariate analysis to examine social policy. Taylor draws on a recent National Research Council study on the topic, which like a similar study published in 1978 has concluded “available studies provide no useful evidence on the deterrent effect of capital punishment.” Taylor then explains several reasons why it has been hard to draw firm conclusions from the data. While he doesn’t use the term “omitted variable bias,” it is among the problems that the NRC study finds hampering results in this area.

The views of both Wilson and Manzi, and the case study on the effects of the death penalty, all point to a certain humility concerning our claims to understand how the world works. But humility isn’t the end of the story, and it isn’t an argument to stop; it is an argument to hold our beliefs about the social world less conclusively and to trust them selectively: trust knowledge derived from replicated randomized-control experiments most, trust knowledge from replicated multivariate analysis much less, and trust knowledge based on trial-and-error learning less as well.

Once these ideas are better worked out in my head, they will probably also draw on Vernon Smith’s work on constructivist and ecological rationality. Of course, V. Smith is known to be a fan of experimental approaches to understanding social phenomena as well.

The constructivist way forward: experimentation! testing! social progress!

New paper: Knowledge Problem

Lynne Kiesling

I have a new paper that may be of interest to KP readers, since the subject of the paper is the same as the name of this site: Knowledge Problem. I am honored to have been invited to contribute this paper to the forthcoming Oxford Encyclopedia of Austrian Economics (Peter Boettke and Chris Coyne, eds.). Here’s the abstract:

Hayek’s (1945) elaboration of the difficulty of aggregating diffuse private knowledge is the best-known articulation of the knowledge problem, and is an example of the difficulty of coordinating individual plans and choices in the ubiquitous and unavoidable presence of dispersed, private, subjective knowledge; prices communicate some of this private knowledge and thus serve as knowledge surrogates. The knowledge problem has a deep provenance in economics and epistemology. Subsequent scholars have also developed the knowledge problem in various directions, and have applied it to areas such as robust political economy. In fact, the knowledge problem is a deep epistemological challenge, one with which several scholars in the Austrian tradition have grappled. This essay analyzes the development of the knowledge problem in its two main categories: the complexity knowledge problem (coordination in the face of diffuse private knowledge) and the contextual knowledge problem (some knowledge relevant to such coordination does not exist outside of the market context). It also provides an overview of the development of the knowledge problem as a concept that has both complexity and epistemic dimensions, the knowledge problem’s relation to and differences from modern game theory and mechanism design, and its implications for institutional design and robust political economy.

In this paper I analyze the development of the two categories of the knowledge problem — the complexity knowledge problem and the contextual knowledge problem — and explore both the history of the development of these concepts and their application in robust political economy and new institutional economics. As is the hallmark of a good research project, I think on balance I learned more than I created in the process of writing this paper.

One other thing I made sure to include was a discussion of how the knowledge problem and its development relates to game theory and mechanism design, through the work of Oskar Morgenstern (and then through some of the work of Herb Simon and Vernon Smith, among others).

Tying together economics, institutional design, history of thought, and epistemology, I hope you find this paper informative and useful! I’ll also make sure to update when the full volume is available.


Music, harmony, and social cooperation

Lynne Kiesling

I am a big fan of English renaissance choral music, particularly sacred polyphony from Tallis and Byrd (and stretching back to Taverner, but he’s not as distinctively polyphonic). One of the best ensembles performing such music is Stile Antico, a group of 13 British singers who do an outstanding job with this music, and whose recordings I have recommended here before. Especially at this time of year, their music really resonates and adds joy and beauty to life.

A couple of weeks ago we got to hear Stile Antico perform live in Milwaukee: Thomas Tallis’ Puer Natus Est mass interspersed with pieces from Byrd, White, and Taverner. The music was gorgeous, the voices delightful, and the artists charming and gracious.

But what really struck me was their method of decentralized coordination. Typically when we think of musical performance beyond, say, a chamber quintet, coordination involves hierarchy in the form of a conductor, to “keep everyone on the same page”. The larger the number of performers doing different things, the harder it is to coordinate, and therefore the greater the need for a conductor … right?

Not so in this case. 13 singers, each with a particular part, bringing a distinctive element to the work. But in some ways the music is simultaneously so lush and yet so spare that if their timing is off, the beauty of the result is diminished. 13 singers with no conductor, and they coordinate by taking their visual and verbal cues from each other in a dynamic and evolutionary manner. This is a vivid example of decentralized coordination.

Of course the goal is harmony (in the general sense). If each individual acts and reacts to the actions of the other individuals in a way that produces a harmonious outcome, that’s beauty. And it’s an emergent outcome; each has his or her own score and acts accordingly, adapting to the actions of the others in a way that creates emergent harmony.

The music metaphor illustrates achieving emergent order through decentralized coordination, and it’s a metaphor for social cooperation too. Adam Smith employs the harmony metaphor for social cooperation in The Theory of Moral Sentiments, in which he invokes harmony as a desirable outcome of social interaction repeatedly (and refers to the music metaphor directly in the last reference). Note the emphasis on harmony as distinct from uniformity — each individual brings personal, private, heterogeneous features to social interaction (whether musical or economic), and they are not the same, not uniform. Each has an incentive, a desire to coordinate, to harmonize; in music it’s finding the complementary notes, in social systems it’s grounded in our innate desire for sympathy and mutual sympathy, according to Smith. Each individual brings something different to the party/performance/market.  The most beautiful and sublime outcomes emerge when each acts on its individual traits with a view toward creating harmony and sympathy. And it does not necessarily require the top-down imposition of control or system-wide hierarchy, but can be achieved through decentralized coordination.

Of course there are limits to applying the music metaphor to institutional design and social cooperation, such as the scale/number of actors. But it reminds us of the possibility of cooperation and harmony through decentralized coordination, without the need for imposed system-level control.


An example of what not to do in persuasion

Lynne Kiesling

Alex Tabarrok has an excellent post this morning at Marginal Revolution:

David Warsh and Paul Krugman try to write Hayek out of the history of macroeconomics. …

It is true that many of Hayek’s specific ideas about business cycles vanished from the mainstream discussion under the Keynesian juggernaut but what Krugman and Warsh miss is that Hayek’s vision of how to think about macroeconomics came back with a vengeance in the 1970s. …

… Hayek was an important inspiration in the modern program to build macroeconomics on microfoundations. The major connecting figure here is Lucas who cites Hayek in some of his key pieces and who long considered himself a kind of Austrian.

I offer this as a cautionary “what not to do” note to students in particular, but also to all of us. In the piece to which Alex is responding, Krugman chooses his definition of “modern macroeconomics” in a way that clearly maps into his preconceptions and reflects his confirmation bias. Such a rhetorical stratagem is unscientific and anti-intellectual. It’s also easy to critique (no disrespect intended for Alex’s good, pointed critique) by simply looking at the literature and seeing that modern macro encompasses a breadth of ideas and approaches, many of which are substantially informed by models and methodological approaches that Krugman chooses to reject.

Thus both on intellectual grounds and with a view toward crafting an argument that is persuasive to those who don’t already agree with you and share your worldview, don’t do this. Being more ecumenical and treating the contributions of your intellectual opponents with respect will make your arguments more thorough, effective, and persuasive.

On a substantive note, I’d like to echo the recommendation that Jacob Levy made in the comments on Alex’s post; the conclusion of Warsh’s essay is a good one, and suggests that incorporating more of a complexity approach into macro would enable us to build better models:

That said, it is pleasing to think that Hayek himself may yet turn out to have been a very great economist after all, far more significant than Myrdal or Robinson, when seen against the background of a broader canvas. The proposition that markets are fundamentally evolutionary mechanisms runs through Hayek’s work. Caldwell, of Duke University, notes that, starting with the Constitution of Liberty, “the twin ideas of evolution and spontaneous order” become prominent, especially the idea of cultural evolution, with its emphasis on rules, norms, and decentralization.

These are today lively concepts in laboratories and universities around the world. “It could have been that Hayek was running a different race, and the fact that he didn’t do well in the Walrasian race was that he wasn’t running in it—he was running in the complexity race,” says David Colander, of Middlebury College. Hayek may yet enter history as a prophet of evolutionary economics, a discipline dreamt of since the days of Thorstein Veblen and Alfred Marshall in the late nineteenth century but not yet forged, whose great days lie ahead.

UPDATE: See also Pete Boettke on this same theme, motivated by Alex’s post.

“Death of a currency”

Lynne Kiesling

One of the great topics of discussion with my in-laws over the holidays was the impending demise of the euro, and whether there was any hope of, or reason for, maintaining the euro given the sovereign fiscal challenges of the member countries. The disastrous German and Italian bond auctions, and Spain’s cancellation of its sovereign bond auction, seem to portend “eurogeddon”. One of the articles that helped me interpret these events was this column from Jeremy Warner in the Telegraph:

No, what this is about is the markets starting to bet on what was previously a minority view – a complete collapse, or break-up, of the euro. Up until the past few days, it has remained just about possible to go along with the idea that ultimately Germany would bow to pressure and do whatever might be required to save the single currency.

The prevailing view was that the German Chancellor didn’t really mean what she was saying, or was only saying it to placate German voters. When finally she came to peer over the precipice, she would retreat from her hard line position and compromise. Self interest alone would force Germany to act.

But there comes a point in every crisis where the consensus suddenly shatters. That’s what has just occurred, and with good reason. In recent days, it has become plain as a pike staff that the lady’s not for turning.

In addition to the striking parallel images of Merkel and Thatcher as women heads of government fighting (almost too late) for fiscal responsibility, Warner’s column does a good job of pointing to the kind of market and policy movements we can expect in the next couple of weeks. Clearly many parties behaving responsibly have already laid out some contingency plans to mitigate the effects.

But I have a simple-minded question to ask, perhaps one that I should have asked two years ago: why are so many people so worried about contagion from sovereign default in the eurozone? Should they be worried?

Typically, interconnected financial markets have negative feedback loops that dampen propagation; price changes, as investors move money around in response to changes in relative risk, are an example of such negative feedback. But with so many policies designed to insulate, protect, and bail out parties (policies that introduce asymmetries by insuring against losses), have these negative feedback loops been distorted, replaced, or outweighed by positive feedback loops that amplify effects? That’s how I’ve been thinking about the bailouts and subsidies and loan guarantees in both the EU and the US: policies that distort the equilibrating negative feedback effects and introduce asymmetries that create destructive positive feedback effects, where previously any disequilibrating events or shocks would have been smaller and dampened by the normal negative feedback in markets. So I would normally say that the forces of self-organization exist to buffer and counter the forces of contagion, but the political rules in operation have stifled the forces of self-organization and exacerbated contagion.
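
A toy contrast makes the intuition concrete (this is purely illustrative, not a model of any actual market): pass the same shock through a loop that returns a shrinking fraction each round, then through one that returns a growing amount:

```python
# Dampening loop (gain < 1): each round feeds back a shrinking fraction
# of the shock, so the cumulative impact stays bounded. Amplifying loop
# (gain > 1): each round feeds back more than the last, and the impact
# grows without limit. All numbers here are invented for illustration.
def propagate(shock, gain, rounds=10):
    total, current = 0.0, shock
    for _ in range(rounds):
        total += current
        current *= gain
    return total

print(f"negative feedback (gain 0.5): cumulative impact {propagate(1.0, 0.5):.2f}")
print(f"positive feedback (gain 1.5): cumulative impact {propagate(1.0, 1.5):.2f}")
```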

One of those forces of self-organization and negative feedback is bankruptcy and default, both private and sovereign. I wonder if the EU will be able to activate the salutary re-equilibrating benefits of bankruptcy and default while simultaneously being able to either stem contagion or have the political fortitude to carry on through the pain and cost that is larger than it might have been otherwise.

Students for Liberty talk: economics and complexity

Lynne Kiesling

UPDATE: I’ve had a report that the link to the slides is not working, but I can’t get it not to work … so if you are having trouble please let me know in the comments and I’ll go bug-stomping. UPDATED UPDATE: mischief managed, I think!

On Saturday I was honored to be the morning keynote speaker at the Students for Liberty Chicago Regional Conference. In only four years SFL has grown into a large and effective organization for bringing together students who share an interest in exploring and promoting individual liberty, classical liberal ideas, and public policy that reflects those principles.

My talk, Beneficial Complexity (pdf of slides), took some basic economic concepts and looked at them through the lens of complexity science. My main objective was to encourage the attendees to see ways to integrate some core classical liberal ideas into their own thinking, their own work, their persuasive discussions with others, and their advocacy activities.

Start with a story: a flower market selling calla lilies grown in Colombia. Lots of buyers and sellers, lots of parties involved in getting the flowers from places like Colombia to your neighborhood flower shop. Mostly impersonal exchange, but not entirely devoid of personal relationships. And without any one person knowing how to do so in its entirety, the flowers get from Colombia to my flower shop for me to buy. If you are familiar with Leonard Read’s I, Pencil essay about the highly decentralized yet coordinated processes that combine to bring pencils to consumers, you recognize this theme. One of the fundamental economic and epistemological concepts that I, Pencil illustrates is the knowledge problem — no one person knows how to make a pencil, or how to grow and sell calla lilies globally. We associate this idea primarily with the work of F.A. Hayek, as stated in his famous 1945 essay “The Use of Knowledge in Society” (which also inspires the title of this blog).

Knowledge is dispersed and, importantly, it is also private and often subjective. Your willingness to use some of your resources to buy a cup of coffee, or a pencil or a calla lily bouquet, is known only to you and is context dependent, which means that much of the knowledge that goes into individual decisions is local and hard to centralize. Yet calla lilies show up in Chicago shops, as do pencils, in ways that satisfy the preferences of consumers and profit producers along the supply chain. How does that happen? Markets and prices create networks that connect and aggregate that dispersed, local, private knowledge and send valuable signals to economic actors about the relative benefits and costs of the decisions they take.

When we think about economic activity taking place in networks, and how exchange and prices connect networks and extend and deepen them, we are using the tools of complexity science to understand economic behavior. Think back to the flower market and the pencil. As Eric Beinhocker writes in The Origin of Wealth, “The complexity of all this activity is mind-boggling. … all the jobs that must get done, all the things that need to get coordinated, and the timing and sequence of everything. … there is no one in charge of the global to-do list. … Yet, extraordinarily, these sorts of things happen every day in a bottom-up, self-organized way. … The economy is a marvel of complexity.” (2006, pp. 5-6)

Technically speaking, what is a complex system? It’s a system or arrangement of many component parts, and those parts interact. These interactions generate outcomes that you could not necessarily have predicted in advance. In other words, it’s a non-deterministic system. Scholars in many different fields use this general idea to study a range of systems and phenomena, from molecular and cellular interactions in physics, chemistry, or biology, to the organization of the brain in neuroscience, to species and environment interactions in ecosystems, to cascading network failures in electric power systems, to networks of co-author collaborations in particular fields of research, and many other applications.

These applications share an interest in three features of a complex system: its structure (how are the parts connected, how do they interact?), its rules (physical or human-generated), and its behavior. Now apply this idea of a complex system to our economic and social interactions. Go back to Beinhocker’s description of the market economy as “a marvel of complexity” in which all sorts of activities get coordinated in a “bottom-up, self-organized way”. Think of the economy as a complex system, in which individuals are the agents, the component parts, with dispersed private knowledge. We are connected in many ways — social relations, economic exchange, organizations, and so on — and our interactions shape individual decisions, individual behavior, and ultimately overall system behavior. The profound insight of writers like Ferguson, Smith, Hayek, and others is that individual agents have their own preferences and private knowledge, but we interact, and in so doing we generate system-level behavior that is generally self-organized — emergent order. In this unplanned order no one person can predict the specific outcome we’ll achieve, but we still experience over and over and over in human history that order generally does emerge.
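
A minimal sketch of emergent order (my own illustration, not from the talk): agents each hold a private “position” and see only the one partner they happen to meet, yet repeated local adjustment pulls the whole population together without any central direction:

```python
import random

# A toy gossip model: 50 agents start with private positions; each
# round, two randomly met agents move a quarter of the way toward each
# other. No agent sees the whole system, yet the spread of positions
# collapses -- order emerges from local interaction alone.
random.seed(1)
agents = [random.uniform(0, 100) for _ in range(50)]
print(f"initial spread: {max(agents) - min(agents):.1f}")

for _ in range(2000):
    i, j = random.sample(range(len(agents)), 2)
    gap = agents[j] - agents[i]
    agents[i] += 0.25 * gap   # each adjusts partway toward the other
    agents[j] -= 0.25 * gap

print(f"spread after local interactions only: {max(agents) - min(agents):.3f}")
```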

But order doesn’t necessarily always emerge, and the order that emerges sometimes isn’t pretty. That observation leads to the elephant in the room with respect to emergent order in social systems: the rules. All exchange takes place within a framework of rules, an institutional context. Those institutions include formal laws enforcing property ownership, contracts, and punishment for theft and fraud, as well as informal social norms and peer pressure that may, for example, affect how the bargaining and negotiation in the exchange take place. Rules shape how agents interact, and shape their incentives … and as a result they can also affect system behavior. One important implication of studying economies as complex systems is that when we design institutions, we should model and test and strive for rules that enable order to emerge, which means an emphasis on process rather than using rules to achieve some specific outcome. That’s one important intersection of classical liberal principles with complexity science.

I’d like to raise a point that I didn’t raise in my remarks on Saturday, but that builds on a discussion in a later session at the conference. One challenge that we often face as classical liberals is putting “the human face” on our ideas, countering the perception that a libertarian society would be cold, calculating, and lacking in compassion or personal relationships. Nothing is further from the truth, and the language of complexity science gives us a way to communicate that reality, because it frames economic/social interactions as relationships and connections. It emphasizes the mutuality of exchange and the multiple dimensions of our relationships with others in our voluntary associations.

Resiliency comes from more risk of bank failure, not less

Lynne Kiesling

In the always-smart-and-interesting City AM paper from London, Anthony Evans makes an important argument that has been overlooked in financial regulation debates: risk of failure is what creates system resilience, and regulation creates brittle monocultures. He writes in the context of last week’s Independent Commission on Banking (ICB) recommendations for creating regulatory divisions between retail banking and investment banking and implementing other structural changes, with the objective of a more resilient financial system. Evans critiques the over-simplified concept of risk that the report employs:

We can’t say that one thing is more risky than another – only that different activities expose people to different types of risk. Bodies like the ICB needs to shift from trying to – impossibly – reduce risk to placing responsibility on those who are choosing between different risks.

For example, ordinary depositors should not be protected from risk – they need to confront it. It can seem counterintuitive, but the genuine threat of bank runs is probably the best disciplinary device to prevent them from happening.

Evans’ argument stems from an assertion that he makes later in his column, that risk cannot be reduced but can only be transferred from one party to another. While I think that assertion is debatable, the important insight from this part of his argument is that resiliency in complex market systems arises from agents having responsibility for losses associated with taking additional risks, in addition to their receiving profits associated with taking additional risks. Breaking that connection among risk, profit, and loss is one of the core causes of the brittleness of the financial system over the past two decades, and the transmission and magnification of those losses.

Evans makes a second important observation: when regulation imposes a higher degree of uniformity in a complex system, it reduces resilience of the overall system by creating separated monocultures:

By making arbitrary decisions about what must stay within fences and what doesn’t, or about the level of equity capital that banks will be required to keep, regulators make banking more homogenous. Banks are already free to set up their own ring fences, and a competitive system would be one where they can experiment with different ones. …

All regulations create clusters of errors – by their nature they harmonise behaviour and therefore increase systemic dangers. Policy efforts need to focus on reducing barriers to exit, making it easier for banks to fail, making the costs of failure more visible and ensuring they fall on those who make bad decisions – bankers, regulators, or even the public.

We see this paradox of control in all forms of economic regulation; in this case in financial regulation, but also in the electricity regulation that is the focus of my attention. Regulators believe that by increasing control, by limiting the range of actions that agents can take in complex systems, they are reducing the risk of bad outcomes. But what they do not realize (or choose to ignore) is, as Evans points out here, that by imposing more top-down centralized control on their actions and interactions, they reduce the incentives of the agents to develop their own forms of individual control based on their local knowledge and their own experimentation. Thus regulation makes this complex system more rigid, more brittle, less resilient, and therefore regulation does not achieve its stated goals.

Note here that I am using the tools and language of complexity science and complexity economics, but you can see in this discussion where moral hazard shows up, where you could talk about the failures of corporate governance (as does Charlie Calomiris), etc. Framing the objective as a resilient system broadens the focus beyond top-down regulation to include the individual, decentralized institutions that can keep dangers from becoming systemic. Thinking about regulation in terms of the locus of control and the consequences of the imposition of control in a complex system is more likely to enable us to incorporate the costs of imposing control into the analysis, and to harness decentralized institutions to enable a more resilient system.

Worried about too much demand elasticity in electric power markets

Michael Giberson

Will electric power consumers facing smart-grid-enabled real-time prices have the potential to accidentally destabilize the power grid and cause a blackout? A paper presented at a recent IEEE conference says it is a possibility. The surprising culprit? Too much price elasticity in the market demand function.

It is a surprising culprit because consumer demand for electricity is currently notoriously inelastic (that is to say, not responsive to changing prices) in the short run, in part due to the way standard regulatory rate structures end up with consumers being presented with relatively unchanging prices reflecting a longer-term average cost of production. Prices don’t change much, so consumers don’t watch prices much. But this price inelasticity of demand doesn’t mean the quantity of electricity consumers want to consume is unchanging – consumers want more or less electricity throughout the day in response to ordinary household schedules and in response to outside temperatures and building heating and cooling demands. Consumer demand for power responds to a lot of things, but rarely to changes in the price of power itself.

Because of the way the current grid is designed, the quantity of energy supplied and demanded must be balanced continuously. Therefore, the grid is typically operated to take the quantity of power demanded as a given and make whatever adjustments in the quantity supplied to maintain system balance. (In brief, because prices can’t do much work coordinating supply and demand in the short-run, all of the coordination must be done by adjusting quantities. Grid operators can typically control suppliers but not consumers, so quantity-based supply side adjustment does most of the work of keeping the market balanced.)

The authors, three engineers at MIT, worry that if too many consumers facing real-time prices pick similar high price points at which to cycle off appliances (or low prices at which to charge electric vehicles), the market demand function will acquire highly price-elastic segments in which quantity demanded will suddenly drop off (or spike up) at rates faster than the supply side can safely accommodate. Therefore, a blackout risk. To counter this possible risk, the authors suggest diversifying the price signals sent to consumers, or employing hourly instead of 5-minute price signals, or using rolling-average prices to consumers rather than the location-specific current marginal price. They admit their safeguards would hamper the efficiency of market results, the efficiency loss being essentially the price paid to mitigate the possibility of a price-responsive demand shock to the system.
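
The mechanics of the worry are easy to sketch. In this toy version (invented numbers, not the authors’ model), identical programmed cutoff prices create a cliff in aggregate demand at the shared cutoff, while dispersed cutoffs leave demand smooth:

```python
import random

# 10,000 households each draw 1 kW unless the price exceeds their
# programmed cutoff. Compare identical cutoffs with dispersed ones.
random.seed(2)
N = 10_000

def total_demand(price, cutoffs):
    """Aggregate kW demanded at a given price ($/MWh, illustrative)."""
    return sum(1 for c in cutoffs if price <= c)

same = [50.0] * N                                    # everyone picks $50
varied = [random.uniform(30, 70) for _ in range(N)]  # dispersed cutoffs

for p in (49.0, 51.0):
    print(f"price {p}: identical cutoffs -> {total_demand(p, same):>6} kW, "
          f"diversified -> {total_demand(p, varied):>6} kW")

# Identical cutoffs: demand falls from 10,000 kW to 0 as price crosses
# $50, a cliff the supply side may not ramp fast enough to follow.
# Diversified cutoffs: demand changes only gradually with price.
```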

In my view, the idea of having so many real-time price-aware consumers responding in the market remains so far-fetched that I’m not willing to worry that so many of them will coordinate their home energy management systems on the same price points and unwittingly bring down the system.

And well before this possibility of too-much consumer responsiveness comes about, I suspect most RTOs will be paying suppliers for ramping capability and charging consumers for using it in ways that will enable sufficient short-run system responsiveness. So I’m not ready to worry now about this problem, and don’t think that I’ll need to worry about it later, either.

(See MIT media relations summary here, HT to Scientific American via Economist’s View.)