Lynne Kiesling
If you have not been following the story of the emails and documents leaked from the University of East Anglia’s Climate Research Unit after its computers were hacked, Maggie Koerth-Baker’s Boing Boing post provides an overview with lots of supporting links. A couple of good overview stories come from the Economist’s most recent issue and from Friday’s New York Times, the latter of which summarizes the issues:
The most serious criticisms leveled at the authors of the e-mail messages revolve around three issues.
One is whether the correspondence reveals efforts by scientists to shield raw data, gleaned from tree rings and other indirect indicators of climate conditions, preventing it from being examined by independent researchers. Among those who say it does is Stephen McIntyre, a retired Canadian mining consultant who has a popular skeptics’ blog, climateaudit.org. A second issue is whether disclosed documents, said to be from the stolen cache, prove that the data underlying climate scientists’ conclusions about warming are murkier than the scientists have said. The documents include files of raw computer code and a computer programmer’s years-long log documenting his frustrations over data gathered from countries in the Northern Hemisphere.
Finally, questions have been raised about whether the e-mail messages indicated that climate scientists tried to prevent the publication of papers written by climate skeptics, which were described by the scientists in the e-mail messages as “garbage” and “fraud.”
On the first issue, the availability of the raw data to enable replication of the analysis, the Times of London reminds us that the CRU’s raw data were thrown out back in the 1980s. Other historical temperature series are available, and the 1980s disposal has been known for some time, but the reason I wanted to mention this controversy has less to do with that episode than with the practice of science, the scientific method, and the intersection of science and politics, which is unavoidable but should be minimized. One reason this intersection becomes problematic is the confirmation bias of scientists, politicians, and voters: even when we believe we are being dispassionate and objective, we perform analyses and take actions that tend to confirm our ex ante beliefs. The best way to ameliorate such biases is to insist on open data availability.
The three issues highlighted above show areas where the politicization of science can occur and, clearly, has occurred in this case. I agree with Jonathan Adler’s remarks in his excellent post on the documents and their implications; with respect to the IPCC process he observes that
The effort to compile an “official” scientific “consensus” into a single document, approved by governments, has exacerbated the pressures to politicize policy-relevant science. So too has been the tendency to pretend as if resolving the scientific questions will resolve policy disputes. This is a dangerous pretense. Science can — indeed must — inform policy judgments, but it does not determine such judgments. It can tell us what is, and perhaps what will be, but it cannot tell us what should be. A more honest climate policy debate would acknowledge that there are uncertainties, acknowledge that there are risks of action and inaction alike, and focus on the relative merits of different ways to address the real, albeit necessarily uncertain, risks of climate change.
This web of climate science and policy is indeed messy and complex. Scientists first and foremost perform analyses and test hypotheses, but they are also human, and they bring their own biases and beliefs to their analyses (I am aware of how my own biases and beliefs affect my analyses). Those biases and beliefs make us prone to confirmation bias, to paying more attention to results that conform to what we already believe. Add to that the drive to create new knowledge and to frame research proposals in ways that will attract grant funding (what some have prosaically summed up as “they have to scare the bejeebus out of us to get funding”). At the margin those incentives combine to amplify the apocalyptic form of the climate change narrative among scientists.
Then layer in the political process, both in (private and government) funding of research and in the thorny questions of what policies should be implemented given the results of scientific research. That process involves elected representatives who work from the premise that the way to deliver results to their constituents is to “do something”, which usually means passing legislation. Typically, though, the political process cannot tolerate the nuance, the uncertainty, and the falsification methodology associated with scientific research: political processes demand certainty and proof, and politicians often persuade themselves that more certainty and proof exist than actually do, another form of confirmation bias.
What I find most disturbing about this whole episode is what it indicates about researchers’ attitudes toward sharing their data; these documents suggest that researchers have destroyed data (as mentioned above) and do not make data available by default when they publish papers. In some cases there are property rights issues (not all researchers have the rights to make all of their inputs publicly available), and I think resolving those rights in ways that increase transparency should be a top priority. One way to do that is for funding sources to mandate data availability as a condition of funding, as the National Science Foundation does; others should follow its lead.
Two British publications ran pointed editorials on this subject with respect to the CRU controversy. This Financial Times commentary from MIT’s Michael Schrage is very explicit about the data question:
Science may be objective; scientists emphatically are not. This episode illustrates what too many universities, professional societies, and research funders have irresponsibly allowed their scientists to become. Shame on them all.
The source of that shame is a toxic mix of institutional laziness and complacency. Too many scientists in academia, industry and government are allowed to get away with concealing or withholding vital information about their data, research methodologies and results. That is unacceptable and must change. …
The issue here is not about good or bad science, it is about insisting that scientists and their work be open and transparent enough so that research can be effectively reviewed by broader communities of interest. Open science minimises the likelihood and consequences of bad science.
Schrage’s whole argument is eloquent and, to my mind, quite accurate; I encourage you to click through and read it. The other editorial that caught my eye was from the Economist, which sounded a cautionary note about the effects of politics on the toleration of dissent in science:
There is no doubt that politics and science make uncomfortable bedfellows. Politicians sell certainty. Science lives off doubt. The creation of the Intergovernmental Panel on Climate Change to establish a consensus on the science was an excellent idea for policymakers, who needed a strong scientific foundation for their deliberations, but it sits uncomfortably with a discipline that advances by disproving accepted theories and overturning orthodoxies.
I hope that this CRU document controversy turns out to be a salutary inflection point, leading to greater transparency in both data availability and peer review. We need those in order to have better science and better policy.
Excellent post, Lynne. I think what you wrote here basically spells out exactly what I’ve been thinking/observing with respect to the arrogance of some researchers, but these guys took it to a new low.
Looking at the emails makes it clear that some of the researchers were subjected to what appears to amount to harassment, maybe even threats. That would make their responses understandable, even if they weren’t entirely professional.
And I have seen absolutely no published evidence – apart from personal interpretations of quotes in the emails, which are certainly subject to confirmation bias – that data was thrown out (apparently old media, such as paper and magnetic tapes, were disposed of for space reasons), withheld from serious researchers, or distorted to fit predetermined conclusions. Perhaps the accusations of unavailability of data are because the people making them can’t get the results they want?