Critical Thinking: A Cheatsheet

  1. What would it take to convince me that I am wrong? (Falsifiability)

  2. How could I empirically prove the exact opposite of what I suspect to be true? (Null hypothesis)

  3. How could someone else argue that my position is illogical or irrational? (Self-debate)

  4. Who benefits the most when I hold this belief? (Critical discourse analysis)

  5. How would a rational person who holds an opposing viewpoint explain and justify their position? (Empathy)

  6. Can I conceptualize an alternative position that does not yield a binary ‘true or false’ dichotomy? (Non-dualism)

  7. How does my position and experience in society inform my assumptions and perspective? (Reflexive intersectionality)

  8. What unconscious mental shortcuts can I identify in my reasoning and rationale? (Cognitive bias mitigation)

  9. How can I guard myself against the illusion that I am reasoning objectively? (Skepticism)

  10. What beliefs have I already unconsciously accepted in order to arrive at my present position? (Presuppositions, tacit assumptions)

  11. What do the words that I use to express my beliefs connote implicitly that they do not denote explicitly? (Semantics, pragmatics)

  12. What are the psychological, social, institutional, or cultural costs of changing my mind? (Motivated reasoning)

  13. How would my identity be threatened if my beliefs or reasoning were shown to be flawed? (Externalize epistemology)

  14. If faced with sufficient counter-evidence, would I care about truth enough to abandon my present beliefs? (Ideological commitments)

  15. Who is framing, shaping, and informing the questions that I can even think to ask? (Social influence)

  16. What questions am I most afraid to ask? (Courage)

Groupthink: The Human Evolutionary Superpower?

I recently listened to a lecture, A New Theory of Human Understanding, by Hugo Mercier. In the talk, he discusses the research and findings behind his new book, The Enigma of Reason.

It all starts with a question: how and why did human beings acquire the ability to reason? The standard answer is that reason exists to assist with decision making. We assume that reason emerged under selective pressure favouring smartness. It’s survival of the most accurate! The ‘fittest’ are those most adapted to figuring out what is real and true about the world. As a Darwinian explanation, this seems reasonable enough.

Or is it?

With his colleague Dan Sperber, Mercier proposes a counter-hypothesis: what if the original function of reason was not to make ‘right’ or ‘correct’ predictions or observations about the world, but to convince other people to think or act differently?

If you want someone else to do something, walking up to them and barking an order is highly ineffective. Even when commanding from a hierarchical position of authority, you must still accept that influencing the actions of another human is a nuanced affair. To change what someone does, you have to change something about the way they think. You have to give them an insight into why your idea is a good idea. They need to buy it. Trust is crucial. In short, even the simplest act of influencing another person requires you to reason with them.

In this account, reason emerges primarily to serve a social function. The original, adaptive purpose of reason was not to reach correct and factual conclusions about the world but to convince one another about our ideas — beliefs about how we should organize, behave, and work together. For Mercier, this theory goes a long way to explaining why our reasoning is so prone to bias and groupthink. Reason evolved socially, not by solitary philosophers roaming around and coming up with scientifically factual observations about the world. (The practice of intentional, self-aware objectivity is only a particular ‘use case’ for reason that develops far down the evolutionary line.)

This social model for the evolutionary origin of reason goes a long way to explaining why we tend to strongly favour what we already believe and why we excel at using reason to justify our prejudice and bias. After all, if the original purpose of ‘reason 1.0’ was to enable us to function and communicate as a group, we should not be surprised that our capacity to self-organize around ideas far outperforms our devotion to sober, detached analysis. In evolutionary terms, group inclusion equals survival, making group identity primal and paramount.

Consider the modern implications. For instance, we might look at the concept of ‘conclusive evidence’ itself as a socially constructed and group-dependent cognitive framework: using evidence to make decisions is only an effective strategy to the extent that you are part of a group that shares your convictions about evidence in the first place.

The ramifications of Mercier and Sperber’s theory are significant. It invites us to critically rethink what it even means ‘to reason with someone’ in the first place. To suppose that presenting another person ‘with the facts’ is a coherent strategy for changing their mind is, perhaps, a mythology of its own.

Post-Truth. Alternative Facts. Fake News. Blimey!

To what extent did ‘truth’ and ‘fact’ ever exist in politics and broadcast media before? How do the algorithms of social media fit into an evolving definition of propaganda today? Is society more ideologically ‘polarized’ than it has been in the past — and what would be the benchmark to measure this? How can accusations of practicing ‘post-truth politics’ and broadcasting ‘fake news’ be abused as politically rhetorical devices in their own right?

It boils down to a timeless question: what is truth and why does it matter?

Tim Blackmore is a Professor in the Faculty of Information & Media Studies at Western University. He has researched and written at length about war, war technology, propaganda and popular culture. His book, War X, focuses on the way humans understand the world of industrial warfare. Tim is especially interested in understanding how we use images and media to make war look attractive to ourselves as societies.

Related Blog Posts

Thoughts about depolarizing contentious arguments

Traders and investors rely on data. They need criteria for weighing the benefit of buying or selling. They use hard numbers: predictive analysis, historical trend patterns, algorithmic modeling, etc. At the same time, inescapable ‘tacit’ variables inform their decisions: personal tolerance for risk, the goals of their portfolio or clients, and so on. The choice to buy or sell at any given moment is data-driven, yes, but it ultimately rests on a human logic of value.
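
To make that split concrete, here is a minimal sketch (Python, with every price, threshold, and parameter invented for illustration) of how a ‘hard numbers’ signal can still pass through a human value before it becomes a decision:

```python
# A hypothetical sketch: the signal comes from data, but the position size
# still passes through a human-chosen value (risk tolerance).

def moving_average(prices, window):
    """Trailing average of the last `window` prices."""
    return sum(prices[-window:]) / window

def decide(prices, risk_tolerance):
    """Position size in [0, 1]: data-driven signal, value-driven scale."""
    short = moving_average(prices, 5)
    long_ = moving_average(prices, 20)
    signal = 1.0 if short > long_ else 0.0   # the 'hard numbers' part
    return signal * risk_tolerance           # the 'human logic of value' part

prices = [100 + 0.3 * i for i in range(30)]  # hypothetical upward trend
print(decide(prices, risk_tolerance=0.25))   # cautious trader  -> 0.25
print(decide(prices, risk_tolerance=0.90))   # aggressive trader -> 0.9
```

The same data yields two different trades; the difference lies not in the numbers but in the trader’s values.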

Consider another example: a scientist working on an empirical question is principally concerned with an objective understanding of the phenomenon in front of them. They are looking for evidence that is falsifiable and testing measurable outcomes against a null hypothesis. But why are they studying this question? Why do they investigate macrophages instead of the atmosphere? What motivates them to get out of bed in the morning and go to work? Are they driven to find a cure for a disease? A pursuit of knowledge to further the wellbeing of humanity? Compelled by personal devotion to a career path? Boredom? Obsessive-compulsive curiosity?

Whatever the reason might be, it is a human reason.

The point? There could be a billion explanations why the investor and the scientist do what they do. But no matter how devoted they might be to the principles of data, evidence, objectivity, and the refutability of hypotheses, their devotion is no less a human devotion. To the extent that a commitment to evidence-based decision-making reflects a commitment to a normative idea, it could be described as every bit as ‘ideological’ as any other human commitment to a normative idea. Everyone is presumably motivated by something, lest they be automatons or robots.

Describing evidence-based decision-making as an ‘ideology’ does not deter me from my commitment to it. In fact, I am ready and happy to take up the debate in defense of evidence at any opportunity. But my commitment to ideas like ‘truth,’ ‘fact,’ and ‘evidence’ does not endow me with a special warrant to walk around and accuse everyone else of being ‘ideological’ — as if I were a rare, enlightened creature that has somehow transcended all human limits of comprehension. No, if I define ‘ideology’ as a system of beliefs and ideals, then my commitment to evidence and fact is as ‘ideological’ as any other belief out there.

Describing another person as an ‘ideologue’ seems like a rather hypocritical strategy of rhetoric. Is not every debate a match between two or more ideologues? The concept of truth itself is an ideological proposition. Pretending that logic and evidence are illimitable artifacts that live somewhere beyond the realm of the species defining them is only to call science ‘divine’ by another name. To say, “I don’t have an agenda; I only follow the evidence,” is another way of saying, “My agenda is to follow the evidence as I understand the evidence.”

Pragmatic honesty demands that I acknowledge that a) my commitment to evidence is a normative human idea, b) this commitment intrinsically comes with an agenda when confronted with decision-points or conflicting viewpoints, and c) it runs concurrent with my inherently limited understanding of the data at play (conclusive evidence is only ‘conclusive’ to the extent that I conclude the absence and nonexistence of all further evidence).

How would I engage with the public sphere of debate differently if I could hold these thoughts at front-of-mind?

Scientific consensus and social values are distinct

This lecture by Sir Peter Gluckman is thought-provoking.

For a moment, consider genetically modified foods. Let’s say, for the sake of illustration, that the overwhelming consensus of the scientific community points to the conclusion that GMOs are categorically safe for human consumption. Now, the question Gluckman presents: should science also make the decision about the prevalence of GMOs in our food supply?

He concludes, no.

We need to differentiate between scientific knowledge and social values. Just because science might reach the consensus that GMOs are safe, this does not somehow require society to rejig its policies to embrace genetically modified foods. What we do with GMOs is not only a scientific debate, but a debate about what we collectively value as a society. In other words: even if GMOs are safe, there may be other reasons why a society would choose not to use them.

We’ve seen many values debates obscured by inappropriate co-option of science to avoid the values debate… I think this issue of science being misused as a proxy for societal values-based debate is very bad. I think it short-changes democracy.

Gluckman says that if we want science to remain relevant in society, scientists must act as knowledge brokers, not social policy advocates. When science becomes advocacy, it simply becomes another voice in the values debate, thereby surrendering the deference accorded to its objectivity: “scientific knowledge is imperative for consideration at every level of government, but all science is conducted by humans, and human interactions and negotiations survive only on trust.”

It boils down to a simple social hypothesis: if you want people to respect your opinion when you claim to present material facts, don’t follow up your data with your social, political, or ideological agenda.

When science purports to be the decision-maker, it sets itself up for the charges of elitism that are prevalent today.

In the GMO example, then, the role of scientists is to learn and inform, not to make value judgments about society’s use of GMOs one way or another. In the end, what we do collectively is a decision that is related to, but ultimately conceptually distinct from, the scientific analysis of the issue.

Listen to the whole lecture for Gluckman’s full argument.

Anti-Vaxxer Evidence

Virtually no one goes through life thinking, “My beliefs about the world are driven by irrationality and heuristics.”

No, we all fancy ourselves to be rational.

Regardless of what we believe, we believe the evidence is on our side.

Take John and me, for example.

John is certain that the measles, mumps, and rubella vaccination causes autism. But I believe John is wrong. I have evidence that John is wrong: if you look at the people in a population who are diagnosed with autism, there is no statistical difference between those who received vaccinations as children and those who did not. Epidemiologically, there is nothing that connects vaccinations to autism. (e.g. Taylor et al. 1999)
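
For readers who want the method spelled out, here is a toy version of that epidemiological comparison, a two-proportion z-test in plain Python. The cohort counts below are entirely hypothetical, invented for illustration; only the logic mirrors the real studies: assume (the null hypothesis) that the autism rate is identical in both groups, then ask whether the observed difference exceeds what sampling noise alone would produce.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical cohorts: diagnoses out of cohort size (not real data)
z = two_proportion_z(x1=120, n1=10_000,   # vaccinated children
                     x2=13,  n2=1_000)    # unvaccinated children
print(f"z = {z:.2f}")  # |z| < 1.96 here: no significant difference at the 5% level
```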

What is correlated with parents not vaccinating their children? Well, outbreaks of measles, for one. (CDC)

But my evidence is meaningless to John. Why? He believes different evidence. John’s evidence might be anecdotal (à la Jenny McCarthy), intuitive (he’s got a hunch), ideological (don’t interfere with ‘nature’), conspiratorial (it’s all a government cover-up), or based on unfortunately widespread misinformation (as per Andrew Wakefield). Whatever the case, John does not think to himself, My beliefs about vaccinations are irrational. No, he’s got his reasons. And he is convinced by them.

He is committed to his evidence as much as I am to mine.

As far as I can tell, given the data and evidence as I see it, John’s refusal to vaccinate his child is both irrational and irresponsible. But according to John’s evidence, he believes he is making the best decision for his child. He believes it dearly.

John and I both have our evidence, and this is the problem. Scientific, peer-reviewed, data-driven evidence is clearly only one kind of evidence — and it is a kind of evidence that John outright rejects. John is so convinced by his reasons — whatever they are — that it doesn’t matter what I think.

Therefore, it is pointless for John and me to debate vaccinations. I think he is ignoring basic science. He suspects that I am deluded by blind trust in the scientific establishment. We are at an impasse. It does not matter how emphatically I drone on about ‘evidence-based medicine’, falsifiable propositions, and the null hypothesis. John is having none of it. All of this is important to me, but not to him.

John is no sooner going to entertain the validity of my evidence than I am going to accept the validity of his. I seek to surrender my intuitions to the probabilities of empirical observation, and I strive to change my position as new data emerges (especially at such a large scale of consensus). Let’s be honest: John is probably not going to talk me out of these methodological convictions. “That’s what you believe is true,” he acknowledges. “But that is not what I think and feel about the issue.”

If I am going to convince John to vaccinate his child, I need to either help him change his functional definition of evidence itself or present an appealing counter-narrative in his language — a story told in his current ‘category’ of ‘evidence.’

Herein lies the dilemma of health promotion: ‘evidence-based research’ alone has never changed the minds or behaviours of people (or policy-makers) not already convinced of the validity of quantitative data.

Before John and I can have a meaningful conversation about vaccinations, we need to have a meaningful conversation about the nature of falsifiable propositions. And this needs to be a conversation, not a monologue correcting the ‘errors’ of his evidence. I cannot change John’s convictions and conceptualizations about the nature of evidence any more than I can change his deepest hopes and dreams.

I now view the anti-vaccine movement as a sort of cult, where any sort of questioning gets you kicked out, your crunchy card revoked. I was even told I couldn’t call myself a natural mother anymore, because vaccines are too unnatural. That’s fine. I just want to be the best parent I know how to be, and that means always being open to new information and admitting when I’m wrong. (Leaving the Anti-Vaccine Movement by Megan Sandlin)

This is the bottom line: I cannot change John’s mind. Only John can change John’s mind. If I accept this premise at the outset of our conversation, my interaction with John will be markedly different than if I were to presume myself capable of correcting John’s thinking for him.

After all, I am just like John: if you are going to convince me that vaccinations cause autism, you will have to show me evidence that I accept as valid — the kind of evidence I already use and trust to make sense of the world I live in.

John and I are so different. John and I are exactly the same.

The Antidote to Pseudo Science?

Who can you believe when everyone claims that “science” is on their side? This question has been preoccupying me lately. It strikes me as a fundamental issue at the heart of many policy debates today, especially environmental ones.

Quite by accident, this week I came across an article entitled Between complacency and panic by Philip Handler and Alexander Zucker. (It seems to be publicly unavailable online, but I was able to dig up a copy from the archives at the university.) Published in 1973, the article is a little dated, but highly prescient of the role that science might go on to play in political and ideological standoffs.

Consider just about any hot-button issue related to environmental policy:

Out of a great melange of brutal immediacies, conflicting theses, and sometimes sheer nonsense, one must try to extract some generalized approaches to the problems of the environment. The first impression is an abundance of unrelated issues, a babble of voices, some raised in protest, others reassuring in calming tones; prophecies of doom–and grand schemes to alleviate these problems. (p. 1748)

When it comes to systemic, long-term environmental risks and threats, scientists find themselves in a permanently awkward position: raw data is never prescriptive until a human interprets it, yet data is of no use to humans unless it is interpreted, and interpretation inevitably invokes human values.

Unfortunately, “risk/benefit analysis” is a facile phrase rather than an established science and, in the end, even with adequate data will usually turn on value judgements. (Ibid 1749)
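
One way to see why such an analysis “will usually turn on value judgements” is a minimal, hypothetical sketch in which the data never changes, yet the verdict flips with the weight assigned to harm. (All figures and weights below are invented for illustration.)

```python
# Identical data, different value weights, opposite conclusions.

def net_benefit(economic_gain, environmental_harm, harm_weight):
    """Net benefit once harm is scaled by a chosen value weight."""
    return economic_gain - harm_weight * environmental_harm

gain, harm = 10.0, 4.0  # the same 'adequate data' for a hypothetical project

for harm_weight in (1.0, 3.0):  # how much one unit of harm 'counts'
    verdict = "proceed" if net_benefit(gain, harm, harm_weight) > 0 else "reject"
    print(f"harm_weight={harm_weight}: {verdict}")
# harm_weight=1.0 -> proceed; harm_weight=3.0 -> reject. Same data, flipped decision.
```

No dataset supplies the weight; a human does. That is the value judgement Handler and Zucker are pointing at.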

Ergo, science finds itself inseparable from morality: the genuine scientist must walk a line of “absolute integrity” by clearly emphasizing the limits of her knowledge on the issue at hand. What she admits to not knowing is as important as what she declares to understand.

Scientists are now asked to forecast answers which they not only do not have, but see little hope of obtaining within the limits of present knowledge and technique. The individual scientist must then frankly admit the limits of his present knowledge and understanding, but at the same time, so as not to be counted out of the councils of decision-makers, he must say, “I don’t know the answer, but I know the reasons for my ignorance and I do know quite a bit about how the problem should be approached.” This difficult role demands absolute integrity on the part of the scientist. If not clearly understood, science, as well as the scientist, becomes vulnerable to attack by policy makers. (Ibid 1752)

When science fails to declare its ignorance on issues, it risks devolving into a blunt, ideological weapon. This, argue the writers, will lead to a situation that seems eerily similar to our world today: policy-makers and the public at large are left to choose between opposing “scientific views” on almost every issue. “The science clearly shows…” is quoted by both sides, with equal conviction and vigour. Predictive models predict opposing outcomes. The validity of peer-reviewed science becomes solely a question of who one’s peers are. The nature of scientific inquiry becomes an inquiry into who funded the research.

Science, in other words, becomes a battleground. “The science says” turns into a rhetorical tool for just about any argument. What good is “science”, then, if it fits in the arsenal of every propagandist and can be used to validate just about any proposition?

Here we reach the crux of the problem: if science can be so confidently cited and quoted by opposite arguments, why bother listening to science at all? How can you and I, average citizens that we are, ever hope to tease out the honest science from all the pseudo science? How can those of us without expertise choose which expert to listen to?

Is there an antidote to pseudo science?

True expertise on a subject not only includes comprehensive knowledge, it also entails an equally extensive understanding of the questions, gaps, and holes in the data… and an unyielding appreciation for the exhaustive nature of the unknown beyond them. These points of ignorance are as important to the scientist as her hypotheses, discoveries, and conclusions. When she speaks “from the pulpit of science” — especially to the public and to policy makers — she must disclose the limits of her expertise as well as her knowledge.

This overt disclosure of uncertainty is the moral code of science. Handler and Zucker argue that this “ethic” must be “enforced by the scientific fraternity” if science is to continue to have any relevance at all. (Ibid 1752) When science shies away from its uncertainties, it loses validity. The consequence of making science exclusively about answers and expertise is that science will ultimately become pointless in the public sphere, for it devolves into a meaningless game of “My science is right. His science is wrong. Believe my science.”

we run the risk that scientific advice will no longer be sought because responsible laymen will find it too difficult to establish who represents science on which occasion–a steep price indeed! (Ibid)

Those of us who wish to integrate some semblance of scientific rigour into policy consideration need to do a difficult thing: admit everything we don’t know. We need not only to admit uncertainty, we need to advertise our ignorance. Admittedly, in a world of polarized opinion on issues with high-stakes consequences, this strategy might seem counter-intuitive. Indeed, many of us have become obsessed with the cause of bringing robust empiricism to triumph over cherry-picked results. But this is just the point: if we want science — true, honest, open science — on the table of policy-makers, we need to get over our preoccupation with winning arguments. That preoccupation only makes our discipline irrelevant.

Science is laced with contingencies. Pseudo science claims to possess indisputable answers.

The antidote to the declining relevance of science in public discourse is not to madly insist that “science has the answers”, but to unapologetically expound the woeful inadequacy of what we presently know.

At least, this is one hypothesis.