Who can you believe when everyone claims that “science” is on their side? This question has been preoccupying me lately. It strikes me as a fundamental issue at the heart of many policy debates today, especially environmental ones.
Quite by accident, this week I came across an article entitled “Between complacency and panic” by Philip Handler and Alexander Zucker. (It seems to be publicly unavailable online, but I was able to dig up a copy from the archives at the university.) Published in 1973, the article is a little dated, but highly prescient about the role that science would go on to play in political and ideological standoffs.
Consider just about any hot-button issue related to environmental policy:
Out of a great melange of brutal immediacies, conflicting theses, and sometimes sheer nonsense, one must try to extract some generalized approaches to the problems of the environment. The first impression is an abundance of unrelated issues, a babble of voices, some raised in protest, others reassuring in calming tones; prophecies of doom–and grand schemes to alleviate these problems. (p. 1748)
When it comes to systemic, long-term environmental risks and threats, scientists find themselves in a permanently awkward position: raw data prescribes nothing until a human interprets it, and yet data has no value to humans unless it is interpreted.
Unfortunately, “risk/benefit analysis” is a facile phrase rather than an established science and, in the end, even with adequate data will usually turn on value judgements. (Ibid 1749)
Ergo, science finds itself inseparable from morality: the genuine scientist must walk a line of “absolute integrity” by clearly stating the limits of her knowledge of the issue at hand. What she admits to not knowing is just as important as what she declares to understand.
Scientists are now asked to forecast answers which they not only do not have, but see little hope of obtaining within the limits of present knowledge and technique. The individual scientist must then frankly admit the limits of his present knowledge and understanding, but at the same time, so as not to be counted out of the councils of decision-makers, he must say, “I don’t know the answer, but I know the reasons for my ignorance and I do know quite a bit about how the problem should be approached.” This difficult role demands absolute integrity on the part of the scientist. If not clearly understood, science, as well as the scientist, becomes vulnerable to attack by policy makers. (Ibid 1752)
When science fails to declare its ignorance on issues, it risks devolving into a blunt, ideological weapon. This, argue the writers, will lead to a situation that seems eerily similar to our world today: policy-makers and the public at large are left to choose between opposing “scientific views” on almost every issue. “The science clearly shows…” is quoted by both sides, with equal conviction and vigour. Predictive models predict opposing outcomes. The validity of peer-reviewed science becomes solely a question of who one’s peers are. The nature of scientific inquiry becomes an inquiry into who funded the research.
Science, in other words, becomes a battleground. “The science says” turns into a rhetorical tool for just about any argument. What good is “science”, then, if it fits in the arsenal of every propagandist and can be used to validate just about any proposition?
Here we reach the crux of the problem: if science can be so confidently cited and quoted by opposing arguments, why bother listening to science at all? How can you and I, average citizens that we are, ever hope to tease out the honest science from all the pseudoscience? How can those of us without expertise choose which expert to listen to?
Is there an antidote to pseudoscience?
True expertise on a subject includes not only comprehensive knowledge; it also entails an equally extensive understanding of the questions, gaps, and holes in the data… and an unyielding appreciation for the sheer extent of the unknown beyond them. These points of ignorance are as important to the scientist as her hypotheses, discoveries, and conclusions. When she speaks “from the pulpit of science” — especially to the public and to policy makers — she must attest to her ignorance as well as to her knowledge.
This overt disclosure of uncertainty is the moral code of science. Handler and Zucker argue that this “ethic” must be “enforced by the scientific fraternity” if science is to continue to have any relevance at all. (Ibid 1752) When science shies away from its uncertainties, it loses validity. The consequence of making science exclusively about answers and expertise is that science will ultimately become pointless in the public sphere, for it will devolve into a meaningless game of “My science is right. His science is wrong. Believe my science.”
we run the risk that scientific advice will no longer be sought because responsible laymen will find it too difficult to establish who represents science on which occasion–a steep price indeed! (Ibid)
Those of us who wish to integrate some semblance of scientific rigour into policy consideration need to do a difficult thing: admit everything we don’t know. We need not only to admit uncertainty; we need to advertise our ignorance. Admittedly, in a world of polarized opinion on issues with high-stakes consequences, this strategy might seem counter-intuitive. Indeed, many of us have become obsessed with making robust empiricism triumph over cherry-picked results. But this is just the point: if we want science — true, honest, open science — on the table of policy-makers, we need to get over our preoccupation with winning arguments. That preoccupation only makes our discipline irrelevant.
Science is laced with contingencies. Pseudoscience claims to possess indisputable answers.
The antidote to the declining relevance of science in public discourse is not to insist ever more loudly that “science has the answers”, but to unapologetically expose the woeful inadequacy of what we presently know.
At least, this is one hypothesis.