Fear is a strong motivator, having evolved over millennia as we protected ourselves against predators. Fear supports self-preservation by making us risk-averse and cautious. But such a deep, visceral, evolved emotion does not always serve our long-term objective of thriving; it leads to maximin outcomes, and it is often mismatched to the actual threats to our self-preservation. As our environments change around us, we can fear things we shouldn’t and fail to fear things we should; we overthink everything and tend toward a “precautionary principle” approach.
I think such fear contributes to the persistence of regulation even when that regulation is maladaptive to technological change, so I was happy to read Adam Thierer’s new Mercatus working paper, Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle. Adam lays out a framework for analyzing fear-based attitudes toward technology and technological change that’s informed by economics, sociology, psychology, and rhetoric. He tackles the question of why, and how, participants in public policy debates use appeals to fear to sway opinion toward anticipatory regulation and forms of censorship:
While cyberspace has its fair share of troubles and troublemakers, there is no evidence that the Internet is leading to greater problems for society than previous technologies did. That has not stopped some from suggesting there are reasons to be particularly fearful of the Internet and new digital technologies. There are various individual and institutional factors at work that perpetuate fear-based reasoning and tactics.
He analyzes the use of “appeal to fear” and “appeal to force” logic in the construction of arguments for regulation and censorship, focusing on case studies of online child safety, violent media, online privacy, and cybersecurity. In deconstructing these arguments he identifies four ways that a fear can be a myth: (1) it may be empirically unfounded and lacking evidence; (2) variables other than the feared one may matter more in affecting behavior; (3) not all individuals react to the feared variable the same way; and (4) approaches other than regulation exist that can mitigate the feared variable’s consequences (pp. 5-6).
Adam introduces the phenomenon of the “technopanic”, which is “… a moral panic centered on societal fears about a particular contemporary technology” (p. 7). Because culture often evolves more slowly than technology, these panics can emerge while we are still adapting culturally to a new technology; they demonize the technology and lead to calls to “do something”, typically some form of control-based anticipatory regulation or censorship. A crucial part of manipulating individual attitudes to tap into fear and build advocacy for and acceptance of such regulation is what Adam calls “threat inflation”:
Thus, fear appeals are facilitated by the use of threat inflation. Specifically, threat inflation involves the use of fear-inducing rhetoric to inflate artificially the potential harm a new development or technology poses to certain classes of the population, especially children, or to society or the economy at large. These rhetorical flourishes are empirically false or at least greatly blown out of proportion relative to the risk in question. (p. 9)
Allowing threat inflation and technopanics to drive policy outcomes is socially corrosive and wasteful: it diverts resources from their higher-valued uses in dealing with actual risks rather than inflated ones, and it creates an environment of suspicion and social control, particularly censorship and information control. After analyzing six factors that create conditions favorable to threat inflation and technopanics regarding Internet technology (nostalgia, special interests, and so on; well worth reading in detail), he proposes two categories of policy response to pursue instead of prohibition and anticipatory regulation: resiliency and adaptation. We build resiliency to threats through education, transparency, labeling, and similar measures, and we adapt to living with risk through experimentation, trial and error, experience, and social norms. The two are complementary: sharing information about best practices can shape social norms and change behavior without regulation. For example, I don’t sign my credit cards; instead I write “CHECK ID” in the signature line and present a photo ID when using them. When store clerks and other shoppers witness this identity-protecting behavior, they may replicate it, and such norms do change over time (remember back in the 1990s when clerks would write your phone number on the receipt? Yikes! But that behavior has gone extinct.).
We cannot eliminate risk through resiliency and adaptation, but we can’t eliminate it through regulation either. Better to have strong, flexible, adaptable institutions and practices that enable us to continue thriving in unknown and changing conditions, while we enjoy the substantial benefits of technological creativity. While I heartily recommend Adam’s paper to you all as a good and thought-provoking read, he also summarizes it in this recent Forbes column.
I would extend Adam’s argument to two case studies. The first is smart grid technology. Fear-based arguments abound in electricity, usually grounded (pun intended!) in the physical reality that electricity is dangerous. But after a century of economic regulation serving particular social policy objectives, fear-based arguments also appear in opposition to any move away from the status quo, both technological and economic. In my experience these arguments are deployed most often to defend the status quo on behalf of low-income consumers and the elderly, which I find heart-wrenching, because when they succeed they deprive vulnerable populations of the benefits of innovation. Another current example is the claim that digital meters, which transmit data over radio-frequency wireless networks and thus emit low-level electromagnetic fields, are making people sick. Despite the absence of any scientific evidence consistent with this hypothesis, California and Maine are using these fear-based claims as a basis for allowing customers to opt out of having a digital meter installed (I have other analyses of this phenomenon, but that’s for another time …).
The second case is threat inflation and the exaggeration of fear to extend the security state. Each of Adam’s six factors contributing to threat inflation applies to the growth of the security state: nostalgia, pessimistic bias, “bad news sells”, the political power of the military-security-industrial complex, and so on. Persistent threat inflation enables these special interests to use fear-based arguments to perpetuate the false belief that we are under constant threat beyond the actual threat level. That false belief gives politicians an incentive to “do something” so that they don’t appear “soft on terror” and risk not being reelected, and that political incentive in turn enables security and defense companies to lobby politicians to buy their cutting-edge technologies at great taxpayer expense to demonstrate to voters that they are “doing something” (even though the technologies have high false-positive rates, can be fooled easily, and serve more as symbolic security theater than as responses to the most relevant risks we actually face).
In both cases, a resiliency-oriented public policy approach would be a substantial improvement over control-oriented regulation, which fails to focus on the most meaningful and relevant threats, whether to health, the economy, or security, arising from technological dynamism.