Experimentation, Jim Manzi, and Regulation/deregulation

Lynne Kiesling

Think consciously about a decision you contemplated recently. As you were weighing your options, how much did you really know that you could bring to bear, definitively, on your decision? Was the outcome pre-determined, or was it unknown to you? For most of the decision-making situations we confront regularly, we don’t have full information about all of the inputs, causal factors, and consequent outcomes. Whether it’s due to costly information, imperfect foresight, the substantial role of tacit knowledge, the inability to predict the actions of others, or other cognitive or environmental factors, our empirical knowledge has significant limits. And yet we make decisions ranging from the color of socks to wear today to whether or not to bail out Bear Stearns or Lehman Brothers, and we make them despite those significant limits.

We build, test, and apply models to try to reduce this knowledge constraint. Models hypothesize causal relationships, and in social science we test those models largely using quantitative data and statistical tests. But when we build formal models, we make simplifying assumptions to keep the model mathematically tractable, and we test those models for causality using incomplete data because we can’t capture or quantify all potentially causal factors. Sometimes these simplifying assumptions and omitted variables are innocuous, but when they are not, how useful will such models be in helping us to understand and predict outcomes in complex systems? Complex systems are characterized by interdependence and interaction among the decisions of agents in ways that are non-deterministic, and specific outcomes in complex systems are typically not predictable (although analyses of complex phenomena like networks can reveal patterns of interactions or patterns of outcomes).
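To make that point concrete, here is a minimal sketch in Python, with an entirely hypothetical data-generating process (the variables and coefficients are mine, invented purely for illustration), of how an omitted variable can bias the causal estimate a simple regression delivers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data-generating process: an unobserved confounder z
# drives both the observed input x and the outcome y.
z = rng.normal(size=n)                       # omitted variable
x = 0.8 * z + rng.normal(size=n)             # observed input, correlated with z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)   # true effect of x on y is 2.0

# Naive model that omits z: regress y on x alone.
X_naive = np.column_stack([np.ones(n), x])
beta_naive, *_ = np.linalg.lstsq(X_naive, y, rcond=None)

# Fuller model that includes z (possible only if we can observe z).
X_full = np.column_stack([np.ones(n), x, z])
beta_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

print("true effect of x:            2.00")
print(f"estimate omitting z:         {beta_naive[1]:.2f}")  # biased upward
print(f"estimate controlling for z:  {beta_full[1]:.2f}")   # close to 2.00
```

Running the sketch shows the estimate that omits z landing well above the true effect of 2.0, while the fuller model recovers it; the trouble in social settings is that we usually can’t observe, or even name, all the z’s.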

One person who’s been thinking carefully through these questions is Jim Manzi, whose new book Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society is generating a lot of discussion (and is on my summer reading list). On EconTalk this week he and Russ Roberts talked about the ideas in the book and their implications for “business, politics, and society”. Russ summarizes the book’s focus as follows:

Manzi argues that unlike science, which can produce useful results using controlled experiments, social science typically involves complex systems where system-wide experiments are rare and statistical tools are limited in their ability to isolate causal relations. Because of the complexity of social environments, even narrow experiments are unlikely to have the wide application that can be found in the laws uncovered by experiments in the physical sciences. Manzi advocates a trial-and-error approach using randomized field trials to verify the usefulness of many policy proposals. And he argues for humility and lowered expectations when it comes to understanding causal effects in social settings related to public policy.
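To illustrate why randomization is the workhorse here, consider another small sketch (again with made-up numbers, not anything from Manzi’s book): when participation is self-selected, a simple comparison of adopters and non-adopters is confounded, but when a coin flip assigns treatment, a plain difference in means recovers the true average effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical population: baseline outcomes vary, and the "policy"
# raises each unit's outcome by a true average effect of 1.0.
baseline = rng.normal(loc=10.0, scale=2.0, size=n)
true_effect = 1.0

# Observational comparison: units with high baselines are more likely
# to opt in, so adopters differ systematically from non-adopters.
opt_in = rng.random(n) < 1 / (1 + np.exp(-(baseline - 10.0)))
y_obs = baseline + true_effect * opt_in
naive_estimate = y_obs[opt_in].mean() - y_obs[~opt_in].mean()

# Randomized field trial: a coin flip decides who is treated.
treated = rng.random(n) < 0.5
y_rct = baseline + true_effect * treated
rct_estimate = y_rct[treated].mean() - y_rct[~treated].mean()

print(f"true average effect:        {true_effect:.2f}")
print(f"observational comparison:   {naive_estimate:.2f}")  # confounded
print(f"randomized comparison:      {rct_estimate:.2f}")    # close to 1.00
```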

Experimentation in complex social environments is a theme on which I am writing this summer, with application to competition and deregulation in retail electricity markets. Manzi’s ideas certainly flesh out the argument for experimentation as an approach to implementing institutional change, one that can identify unintended consequences and head poor design choices off at the pass before they become costly or disruptive. I made similar arguments in an article in Electricity Journal in 2005 for using economic experiments to test electricity policy institutional designs, and Mike and I discussed those issues here and here. In broad brushstrokes, traditional cost-based economic regulation typically stifles experimentation, because to implement it the regulator has to define the characteristics of the product, define the boundaries of the market, and erect a legal entry barrier to create a monopoly in that market. Experimentation occurs predominantly through entry, by product differentiation that consequently changes the market boundaries. To the extent that experimentation does occur in regulated industries, it’s very project-based, with preferred vendor partners and strict limits on what the regulated firm can and cannot do. So even when regulation doesn’t stifle experimentation, it does narrow and truncate it.

Recently Manzi wrote some guest posts at Megan McArdle’s blog at The Atlantic, including this one summarizing his book and providing an interesting case study to illustrate it. His summary of the book’s ideas is relevant and worth considering:

  1. Nonexperimental social science currently is not capable of making useful, reliable, and nonobvious predictions for the effects of most proposed policy interventions.
  2. Social science very likely can improve its practical utility by conducting many more experiments, and should do so.
  3. Even with such improvement, it will not be able to adjudicate most important policy debates.
  4. Recognition of this uncertainty calls for a heavy reliance on unstructured trial-and-error progress.
  5. The limits to the use of trial and error are established predominantly by the need for strategy and long-term vision.

That post is rich with ideas, and I suspect Mike and I will want to pursue them here as we delve into the book.
