Lynne Kiesling
Steve Horwitz has a great Freeman column today, inspired by reading Dan Ariely’s Predictably Irrational. Steve starts by pointing out that the definition of “rational” is not uniform, which matters a great deal because the theoretical, empirical, and policy implications one draws depend on which definition is in use:
People act “irrationally,” in the sense of not picking the utility-maximizing (that is, money-maximizing) choice, all the time. (Of course this notion of rationality is much more stringent than the Misesian idea of rationality as choosing the appropriate means for a desired end.) But, as his title suggests, the experimental evidence is also clear that these irrationalities are not random, but predictable. Our reasoning processes are subject to a variety of what seem to be built-in biases that lead us to deviate from the rational-actor model. Ariely doesn’t discuss the sources of these biases that much, but other literature on cognition indicates that they may be features of the very structure of our brains that reflect the long evolutionary path that created modern humans.
If you use the “money-maximizing” definition of rationality to evaluate individual choices, many choices will fail to meet that definition while still satisfying the more general conception of rationality as “choosing the appropriate means for a desired end”. Steve attaches that idea to Mises, correctly, but I’d also attach it to Vernon Smith’s ecological rationality (and through him to David Hume and to the psychologist Gerd Gigerenzer and his work on “fast and frugal” heuristics), Herb Simon, and Thomas Schelling. Not all desired ends can be captured neatly or observed as “money-maximizing”, so that narrow definition of rationality is quite restrictive.
Note also the difference in focus between the two concepts. The “money-maximizing” definition of rationality emphasizes outcomes, and outcomes as measured in a particular unit of account. The “choosing the appropriate means for a desired end” definition still posits a desired outcome, but the locus of evaluation shifts back from the demonstrated outcome toward the process of choice. It’s a subtle shift; even the theoretical literature grounded in the “money-maximizing” concept of rationality aims to evaluate the choices individuals make. But that concept relies more on using the outcome to evaluate the rationality of the choice process ex post, while the “choosing the appropriate means for a desired end” framing applies a more process-oriented, ex ante evaluation of whether the choice process makes sense.
The second valuable point from Ariely’s work that Steve emphasizes is the “predictably” part of “predictably irrational”. Our cognitive biases are substantially consistent and systematic. Those inclined to see our human cognitive characteristics as “market failures” treat this predictability as an opportunity to use government intervention to overcome those biases, and presumably to produce better overall choices, with “better” evaluated using the “money-maximizing” rationality concept discussed above. But Steve’s really important point is that how well we learn from, respond to, and adapt to our biases is itself part of that broader conception of rationality:
Even if people make “mistakes” by not acting as the strict model would suggest, they will receive feedback from the competitive marketplace that will demonstrate their errors and give them the incentive and knowledge to correct them. Those who can recognize their biases and correct for them will do better than those who can’t, and markets enable us to do that when they are genuinely free and competitive. This is what Nobel laureate Vernon Smith calls “ecological rationality.” Even if individuals are irrational, the system as a whole produces rational outcomes.
This is one powerful argument for why institutions matter. The institutional environment shapes that feedback and determines whether or not individuals can profit from error correction. This observation also feeds into the entire process of innovation and technological change.
Finally, Steve points out what continues to be a striking omission from the policy conclusions drawn from behavioral economics: political actors are human too, and thus carry the same cognitive biases into their political roles as into every other decision-making role in their lives. But political institutions and processes do not embed the robust, high-powered error-correction incentives that markets do, so political actors face weaker incentives to recognize and correct their own biases. Thus political institutions built on attempts to correct “predictable irrationality” are likely to produce unintended consequences and fail to achieve the desired ends of policy makers. That’s the conclusion you draw when you synthesize behavioral economics and public choice.