Doing What Seems Like It Should Work: Experiments, Tests, and Social Progress

Michael Giberson

My title is a little grand, at least the “and social progress,” but maybe it will be justified in some later, more carefully worked out version of the ideas clashing about in my head. As this is a blog, I’m sharing the more immediate, less carefully worked out version. 😉

I’ve been reading Redirect: The Surprising New Science of Psychological Change. My wife brought it home from the library and then recommended it to me. (Thanks!) The book makes some surprisingly strong claims for personal improvements from what the author calls “story editing,” a bundle of techniques that subtly (or sometimes not so subtly) get people to revise their self narratives. (More from the Scientific American blog, more from Monitor on Psychology.)

A counterpart to the book’s focus on story editing is its emphasis on testing social and psychological interventions to discover what actually works. Author Timothy Wilson details numerous self-help and social change projects, some of which capture millions or even billions of dollars in public support, that seem like they should work but, when subjected to careful evaluation, show no evidence of success. In fact, some very expensive programs actually seem to worsen the problem they were designed to fix: programs to fight teenage smoking that lead to higher smoking rates, programs to discourage teenage pregnancy that lead to higher pregnancy rates, efforts to discourage littering – or cheating – on campus that have the opposite effects. Wilson advocates a strong preference for testing social interventions with randomized control experiments when possible (and ethical). When randomized control tests are not possible, other attempts at measurement and replication are important, even though they are difficult to do well.
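To make the contrast concrete, here is a minimal sketch of the kind of randomized control experiment Wilson prefers, applied to a backfiring anti-smoking program. Everything in it is invented for illustration – the sample size, the confounder, and the harmful effect size are my assumptions, not figures from the book. Because assignment to the program is random, it cannot be correlated with who was already inclined to smoke, so a simple difference in means estimates the program’s true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Unobserved trait (say, prior motivation to smoke). In observational data
# this would confound the comparison; random assignment neutralizes it.
motivation = rng.normal(0, 1, n)

# Random assignment: treatment is independent of motivation by construction.
treated = rng.integers(0, 2, n).astype(bool)

# Hypothetical program that slightly *raises* smoking, like the backfiring
# programs Wilson describes. The 0.15 effect size is made up.
smoking = 0.5 * motivation + 0.15 * treated + rng.normal(0, 1, n)

diff = smoking[treated].mean() - smoking[~treated].mean()
se = np.sqrt(smoking[treated].var(ddof=1) / treated.sum()
             + smoking[~treated].var(ddof=1) / (~treated).sum())
print(f"estimated program effect: {diff:.3f} (95% CI ±{1.96 * se:.3f})")
```

With these made-up numbers the confidence interval should typically exclude zero on the harmful side – exactly the sort of “it makes things worse” result that, per Wilson, only careful evaluation reveals.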

Whether or not “story editing” is key to successful personal and social change – Wilson makes a strong case, but he could be cherry-picking his evidence, and I’m sure he has his professional critics – the emphasis on experimentation and testing interventions is an important one.

Lynne’s posts last week on experimentation in social contexts are related: Economic experimentation, economic growth, and regulation and Experimentation, Jim Manzi, and regulation/deregulation. I’m most of the way through Russ Roberts’s EconTalk interview with Jim Manzi that Lynne mentioned in the second-listed link (recommended); Manzi makes related arguments in favor of well-designed experiments where possible, and for trial-and-error experimentation where controlled experimentation is not possible.

In both Wilson’s book and the Manzi interview (and apparently in Manzi’s book Uncontrolled, which I haven’t read yet), multivariate analysis of naturally generated data – i.e., almost all econometric analysis – is examined and found wanting. As Manzi explains, “omitted variable bias” is massive when examining data on human systems; the systems are simply too complex to produce reliable, non-obvious predictions via multivariate analysis because you cannot control for all of the possible effects and interactions influencing the data. He suggests that while 90 percent of studies relying on well-designed randomized control experiments are subsequently replicated, that figure drops to 20 percent or so for studies relying primarily on well-designed multivariate analysis.
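A toy simulation makes the omitted variable problem concrete. In this invented example (the variable names, coefficients, and zero true effect are all my assumptions for illustration, not anything from Uncontrolled), an unmeasured confounder drives both the “policy” variable and the outcome. Regressing the outcome on the policy alone yields a sizable, spurious coefficient; only by measuring and including the confounder – which in real social data you usually cannot do – does the estimate fall to the true value of zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

z = rng.normal(size=n)            # confounder we failed to measure
x = 0.8 * z + rng.normal(size=n)  # "policy" variable, partly driven by z
y = 1.0 * z + rng.normal(size=n)  # outcome: x has NO true effect on y

def ols(outcome, *regressors):
    """Ordinary least squares with an intercept; returns coefficients."""
    X = np.column_stack([np.ones(n), *regressors])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta

print("x coefficient, z omitted :", round(ols(y, x)[1], 3))     # ~0.49, spurious
print("x coefficient, z included:", round(ols(y, x, z)[1], 3))  # ~0.00
```

The regression without z is not just noisy but systematically wrong, and nothing in its output warns you. Multiply that by the dozens of plausible confounders in any real policy setting and you have Manzi’s explanation for why so few multivariate findings replicate.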

In a post on the deterrence effect of the death penalty, Timothy Taylor provides an example of the difficulties of using multivariate analysis to examine social policy. Taylor draws on a recent National Research Council study on the topic, which, like a similar study published in 1978, concluded that “available studies provide no useful evidence on the deterrent effect of capital punishment.” Taylor then explains several reasons why it has been hard to draw firm conclusions from the data. While he doesn’t use the term “omitted variable bias,” it is among the problems the NRC study finds hampering results in this area.

The views of both Wilson and Manzi, and the case study on the effects of the death penalty, all point to a certain humility concerning our claims to understand how the world works. But humility isn’t the end of the story, and it isn’t an argument to stop; it is an argument to hold our beliefs about the social world less conclusively and to trust them selectively: trust knowledge derived from replicated randomized control experiments most, trust knowledge from replicated multivariate analysis much less, and trust knowledge based on trial-and-error learning less as well.

Once better worked out in my head, these ideas will probably also draw on Vernon Smith’s work on constructivist and ecological rationality. Of course, V. Smith is known to be a fan of experimental approaches to understanding social phenomena as well.

The constructivist way forward: experimentation! testing! social progress!