Critical Thinking: A Cheatsheet

  1. What would it take to convince me that I am wrong? (Falsifiability)

  2. How could I empirically prove the exact opposite of what I suspect to be true? (Null hypothesis)

  3. How could someone else argue that my position is illogical or irrational? (Self-debate)

  4. Who benefits the most when I hold this belief? (Critical discourse analysis)

  5. How would a rational person who holds an opposing viewpoint explain and justify their position? (Empathy)

  6. Can I conceptualize an alternative position that does not yield a binary ‘true or false’ dichotomy? (Non-dualism)

  7. How do my position and experience in society inform my assumptions and perspective? (Reflexive intersectionality)

  8. What unconscious mental shortcuts can I identify in my reasoning and rationale? (Cognitive bias mitigation)

  9. How can I guard myself against the illusion that I am reasoning objectively? (Skepticism)

  10. What beliefs have I already unconsciously accepted in order to arrive at my present position? (Presuppositions, tacit assumptions)

  11. What do the words that I use to express my beliefs connote implicitly that they do not denote explicitly? (Semantics, pragmatics)

  12. What are the psychological, social, institutional, or cultural costs of changing my mind? (Motivated reasoning)

  13. How would my identity be threatened if my beliefs or reasoning were shown to be flawed? (Externalize epistemology)

  14. If faced with sufficient counter-evidence, would I care about truth enough to abandon my present beliefs? (Ideological commitments)

  15. Who is framing, shaping, and informing the questions that I can even think to ask? (Social influence)

  16. What questions am I most afraid to ask? (Courage)

Groupthink: The Human Evolutionary Superpower?

I recently listened to a lecture, A New Theory of Human Understanding, by Hugo Mercier. In the talk, he discusses the research and findings behind his new book, The Enigma of Reason.

It all starts with a question: how and why did human beings acquire the ability to reason? The standard answer is that reason exists to assist with decision making. We assume that reason emerged under selective pressure favouring smartness. It’s survival of the most accurate! The ‘fittest’ are those most adapted to figuring out what is real and true about the world. As a Darwinian explanation, this seems reasonable enough.

Or is it?

With his colleague Dan Sperber, Mercier proposes a counter-hypothesis: what if the original function of reason was not to make ‘right’ or ‘correct’ predictions or observations about the world, but to convince other people to think or act differently?

If you want someone else to do something, walking up to them and barking an order is highly ineffective. Even when commanding from a hierarchical position of authority, you must still accept that influencing the actions of another human is a nuanced affair. To change what someone does, you have to change something about the way they think. You have to give them an insight into why your idea is a good idea. They need to buy it. Trust is crucial. In short, even the simplest act of influencing another person requires you to reason with them.

In this account, reason emerges primarily to serve a social function. The original, adaptive purpose of reason was not to reach correct and factual conclusions about the world but to convince one another about our ideas: beliefs about how we should organize, behave, and work together. For Mercier, this theory goes a long way to explaining why our reasoning is so prone to bias and groupthink. Reason evolved socially; it was not honed by solitary philosophers roaming around and making scientifically factual observations about the world. (The practice of intentional, self-aware objectivity is only a particular ‘use case’ for reason that develops far down the evolutionary line.)

This social model for the evolutionary origin of reason goes a long way to explaining why we tend to strongly favour what we already believe and why we excel at using reason to justify our prejudices and biases. After all, if the original purpose of ‘reason 1.0’ was to enable us to function and communicate as a group, we should not be surprised that our capacity to self-organize around ideas far outperforms our devotion to sober, detached analysis. In evolutionary terms, group inclusion equals survival, making group identity primal and paramount.

Consider the modern implications. For instance, we might look at the concept of ‘conclusive evidence’ itself as a socially constructed and group-dependent cognitive framework: using evidence to make decisions is only an effective strategy to the extent that you are part of a group that shares your convictions about evidence in the first place.

The ramifications of Mercier and Sperber’s theory are significant. The theory invites us to critically rethink what it even means ‘to reason with someone’ in the first place. To suppose that presenting another person ‘with the facts’ is a coherent strategy for changing their mind is, perhaps, a mythology of its own.

Post-Truth. Alternative Facts. Fake News. Blimey!

To what extent did ‘truth’ and ‘fact’ ever exist in politics and broadcast media before? How do the algorithms of social media fit into an evolving definition of propaganda today? Is society more ideologically ‘polarized’ than it has been in the past, and what benchmark would we use to measure this? How can accusations of practicing ‘post-truth politics’ and broadcasting ‘fake news’ themselves be abused as rhetorical devices?

It boils down to a timeless question: what is truth and why does it matter?

Tim Blackmore is a Professor in the Faculty of Information & Media Studies at Western University. He has researched and written at length about war, war technology, propaganda and popular culture. His book, War X, focuses on the way humans understand the world of industrial warfare. Tim is especially interested in understanding how we use images and media to make war look attractive to ourselves as societies.

Related Blog Posts

Thoughts about depolarizing contentious arguments

Traders and investors rely on data. They need criteria for weighing the benefit of buying or selling. They use hard numbers: predictive analysis, historical trend patterns, algorithmic modeling, etc. At the same time, inescapable ‘tacit’ variables inform their decisions: personal tolerance for risk, the goals of their portfolio or clients, and so on. The choice to buy or sell at any given moment is data-driven, yes, but it ultimately rests on a human logic of value.

Consider another example: a scientist working on an empirical question is principally concerned with an objective understanding of the phenomenon in front of them. They are looking for evidence that is falsifiable and testing measurable outcomes against a null hypothesis. But why are they studying this question? Why do they investigate macrophages instead of the atmosphere? What motivates them to get out of bed in the morning and go to work? Are they driven to find a cure for a disease? A pursuit of knowledge to further the wellbeing of humanity? Compelled by personal devotion to a career path? Boredom? Obsessive-compulsive curiosity?

Whatever the reason might be, it is a human reason.

The point? There could be a billion explanations for why the investor and the scientist do what they do. But no matter how devoted they might be to the principles of data, evidence, objectivity, and the refutability of hypotheses, their devotion is no less a human devotion. To the extent that a commitment to evidence-based decision-making reflects a commitment to a normative idea, it is no less ‘ideological’ than any other human commitment to a normative idea. Everyone is presumably motivated by something, lest they be automatons or robots.

Describing evidence-based decision-making as an ‘ideology’ does not deter me from my commitment to it. In fact, I am ready and happy to take up the debate in defense of evidence at any opportunity. But my commitment to ideas like ‘truth,’ ‘fact,’ and ‘evidence’ does not endow me with a special warrant to walk around and accuse everyone else of being ‘ideological,’ as if I were a rare, enlightened creature that has somehow transcended all human limits of comprehension. No, if I define ‘ideology’ as a system of beliefs and ideals, then my commitment to evidence and fact is as ‘ideological’ as any other belief out there.

Describing another person as an ‘ideologue’ seems like a rather hypocritical strategy of rhetoric. Is not every debate a match between two or more ideologues? The concept of truth itself is an ideological proposition. Pretending that logic and evidence are illimitable artifacts that live somewhere beyond the realm of the species defining them is only to call science ‘divine’ by another name. To say, “I don’t have an agenda; I only follow the evidence,” is another way of saying, “My agenda is to follow the evidence as I understand the evidence.”

Pragmatic honesty demands that I acknowledge that a) my commitment to evidence is a normative human idea, b) it intrinsically comes with an agenda when I confront decision points or conflicting viewpoints, and c) it runs concurrently with my inherently limited understanding of the data at play (conclusive evidence is only ‘conclusive’ to the extent that I conclude that no further evidence exists).

How would I engage with the public sphere of debate differently if I could hold these thoughts at front-of-mind?

Scientific consensus and social values are distinct

This lecture by Sir Peter Gluckman is thought-provoking.

For a moment, consider genetically modified foods. Let’s say, for the sake of illustration, that the overwhelming consensus of the scientific community points to the conclusion that GMOs are categorically safe for human consumption. Now, the question Gluckman presents: should science also make a decision about the prevalence of GMOs in our food supply?

He concludes: no.

We need to differentiate between scientific knowledge and social values. Just because science might reach the consensus that GMOs are safe, this does not somehow require society to rejig its policies to embrace genetically modified foods. What we do with GMOs is not only a scientific debate, but a debate about what we collectively value as a society. In other words: even if GMOs are safe, there may be other reasons why a society would choose not to use them.

“We’ve seen many values debates obscured by inappropriate co-option of science to avoid the values debate… I think this issue of science being misused as a proxy for societal values-based debate is very bad. I think it short-changes democracy.”

Gluckman says that if we want science to remain relevant in society, scientists must act as knowledge brokers, not social policy advocates. When science becomes advocacy, it becomes just another voice in the values debate, thereby surrendering the deference granted to its objectivity: “scientific knowledge is imperative for consideration at every level of government, but all science is conducted by humans, and human interactions and negotiations survive only on trust.”

It boils down to a simple social hypothesis: if you want people to respect your opinion when you claim to present material facts, don’t follow up your data with your social, political, or ideological agenda.

When science purports to be the decision-maker, it sets itself up for the charges of elitism that are prevalent today.

In the GMO example, then, the role of scientists is to learn and inform, not to make value judgments about society’s use of GMOs one way or another. In the end, what we do collectively is a decision that is related to, but ultimately conceptually distinct from, the scientific analysis of the issue.

Listen to the whole lecture for Gluckman’s full argument.