Experimentation, Jim Manzi, and regulation/deregulation

Lynne Kiesling

Think consciously about a decision you contemplated recently. As you were weighing your options, how much did you really know that you could bring to bear, definitively, on your decision? Was the outcome pre-determined, or was it unknown to you? For most of the decision-making situations we confront regularly, we don’t have full information about all of the inputs, causal factors, and consequent outcomes. Whether it’s due to costly information, imperfect foresight, the substantial role of tacit knowledge, the inability to predict the actions of others, or other cognitive or environmental factors, our empirical knowledge has significant limits. And yet, despite those limits, we make decisions ranging from the color of socks to wear today to whether or not to bail out Bear Stearns or Lehman Brothers.

We build, test, and apply models to try to relax this knowledge constraint. Models hypothesize causal relationships, and in social science we test those models largely with quantitative data and statistical tests. But when we build formal models, we make simplifying assumptions to keep the model mathematically tractable, and we test those models for causality using incomplete data because we can’t capture or quantify all potentially causal factors. Sometimes these simplifying assumptions and omitted variables are innocuous, but when they are not, how useful will such models be in helping us to understand and predict outcomes in complex systems? Complex systems are characterized by interdependence and interaction among the decisions of agents in ways that are non-deterministic, and specific outcomes in complex systems are typically not predictable (although analyses of complex phenomena like networks can reveal patterns of interactions or patterns of outcomes).
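The omitted-variables worry is easy to make concrete with a small simulation. In the sketch below (an illustrative example, not anything from Manzi's book: the data-generating process and all coefficients are invented for the demonstration), an unobserved factor drives both the regressor and the outcome, so a regression that omits it badly overstates the causal effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data-generating process: z is an unobserved factor
# that drives both x and y. The true causal effect of x on y is 1.0.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 1.0 * x + 2.0 * z + rng.normal(size=n)

# Regression of y on x alone, omitting z: slope = cov(x, y) / var(x).
# Because z moves with both x and y, this slope absorbs z's effect.
slope_omitted = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Including z in the regression recovers the true coefficient on x.
X = np.column_stack([np.ones(n), x, z])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

print(slope_omitted)  # biased well above the true value of 1.0
print(beta[1])        # close to the true value of 1.0
```

The point of the example is Kiesling's: when a causal factor can't be captured or quantified, the statistical test quietly answers a different question than the one we asked.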

One person who’s been thinking carefully through these questions is Jim Manzi, whose new book Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society is generating a lot of discussion (and is on my summer reading list). On EconTalk this week he and Russ Roberts talked about the ideas in the book, and their implications for “business, politics, and society”. Russ summarizes the book’s focus as

Manzi argues that unlike science, which can produce useful results using controlled experiments, social science typically involves complex systems where system-wide experiments are rare and statistical tools are limited in their ability to isolate causal relations. Because of the complexity of social environments, even narrow experiments are unlikely to have the wide application that can be found in the laws uncovered by experiments in the physical sciences. Manzi advocates a trial-and-error approach using randomized field trials to verify the usefulness of many policy proposals. And he argues for humility and lowered expectations when it comes to understanding causal effects in social settings related to public policy.

Experimentation in complex social environments is a theme on which I am writing this summer, with application to competition and deregulation in retail electricity markets. Manzi’s ideas certainly flesh out the argument for experimentation as an approach to implementing institutional change, one that can identify unintended consequences and head design choices off at the pass before they become costly or disruptive. I made similar arguments in an article in Electricity Journal in 2005 for using economic experiments to test electricity policy institutional designs, and Mike and I discussed those issues here and here. In broad brushstrokes, traditional cost-based economic regulation typically stifles experimentation, because to implement it the regulator has to define the characteristics of the product, define the boundaries of the market, and erect a legal entry barrier to create a monopoly in that market. In unregulated markets, experimentation occurs predominantly through entry, by product differentiation that consequently changes the market boundaries. To the extent that experimentation does occur in regulated industries, it’s very project-based, with preferred vendor partners and strict limits on what the regulated firm can and cannot do. So even when regulation doesn’t stifle experimentation, it narrows and truncates it.

Recently Manzi wrote some guest posts at Megan McArdle’s blog at The Atlantic, including this one summarizing his book and providing an interesting case study to illustrate it. His summary of the book’s ideas is relevant and worth considering:

  1. Nonexperimental social science currently is not capable of making useful, reliable, and nonobvious predictions for the effects of most proposed policy interventions.
  2. Social science very likely can improve its practical utility by conducting many more experiments, and should do so.
  3. Even with such improvement, it will not be able to adjudicate most important policy debates.
  4. Recognition of this uncertainty calls for a heavy reliance on unstructured trial-and-error progress.
  5. The limits to the use of trial and error are established predominantly by the need for strategy and long-term vision.

That post is rich with ideas, and I suspect Mike and I will want to pursue them here as we delve into the book.

John List’s $10 million crazy idea field experiment in education

Michael Giberson

Bloomberg Markets Magazine has a feature on economist John List and his $10 million research project on education. Along the way we get an introduction to List’s work on field experiments in economics, a splash of lab-based economics back story, and the reaction of education specialists who think List’s project is wholly off target.

List, along with collaborators Steven Levitt and Roland Fryer, has obtained a $10 grant for a program which randomly assigned 3-5 year old students to one of three groups: (1) free all-day preschool, (2) “parenting academy” for the student’s parent or guardian, or (3) a control group with neither intervention. The program intends for follow the students into adulthood in order to assess the long-term effects of the intervention.
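The core of such a design is simple random assignment to the three arms. A minimal sketch of that step, assuming equal-sized groups and using the article's group labels (the function and its parameters are illustrative, not the project's actual protocol):

```python
import random

def assign_groups(student_ids, seed=42):
    """Randomly split students into three equal-sized arms,
    as in a simple randomized field trial. Illustrative only."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    ids = list(student_ids)
    rng.shuffle(ids)
    arms = ("preschool", "parenting_academy", "control")
    # After shuffling, cycling through the arms yields a random,
    # balanced allocation: every third student lands in the same arm.
    return {sid: arms[i % 3] for i, sid in enumerate(ids)}

assignment = assign_groups(range(9))
# Each of the three arms receives 3 of the 9 students.
```

Randomization is what licenses the causal comparison: because assignment is independent of family background, later differences between arms can be attributed to the intervention rather than to who selected into it.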

List says he doesn’t know much about education theory, so he enlisted specialists to consult on the preschool curriculum. One such consultant, Clancy Blair, a New York University professor of applied psychology, says he was astonished by the size of the project and by how it focuses on financial incentives without looking at such variables as how the parents interact with their children.

“That’s a crazy idea,” says Blair, who studies how young children learn. “It’s not based on any prior research. This isn’t the incremental process of science. It’s ‘I have a crazy idea and I convinced someone to give me $10 million.’”

List says too many decisions in fields from education to business to philanthropy are made without any scientific basis. Without experimenting, you can’t evaluate whether a program is effective, he says.

“We need hundreds of experiments going on at once all over the country,” he says. “Then we can understand what works and what doesn’t.” …

“What educators need to know are what are the best ways to educate kids, and this is trying to short-circuit that,” Blair says. “We have fundamental problems in education, and this is sort of a distraction.”

List says he understands the objections. “If I was in the field, I’d hate me, too,” List says in November while driving to his sons’ indoor baseball practice in one of Chicago’s south suburbs. “There should be skeptics.”

Easterly on the civil war in development economics

Michael Giberson

William Easterly writes, “Few people outside academia realize how badly Randomized Evaluation has polarized academic development economists for and against.”

That claim seems reasonable enough. I’d bet few people outside academia know what randomized evaluation is. Frankly, I’d bet you could survey economists on the floor of the upcoming American Economic Association meetings in Atlanta and, for non-development specialists, find that fewer than 50 percent “realize how badly Randomized Evaluation has polarized academic development economists.”

Easterly raises the point as a way to introduce a conference and a now-published edited volume — he helped organize the conference and edit the book — which brought together the fors and againsts for dialogue.