When mulling over possible reasons for the alarming nastiness associated with the recent presidential election in the United States, I am reminded of my grade-school bully. Handsome, often charming, superbly athletic, the bully (let’s call him Mike) would frequently, usually without clear provocation, kick, punch and shove other classmates. Fortunately, for reasons not apparent at that time, he never bothered me.
Fast-forward 20 years. After his long-time girlfriend left him for another man, Mike stalked and stabbed to death the new boyfriend. Shortly following his murder conviction and incarceration, I ran into Mike’s father, who spontaneously blurted out: ‘Did you know that Mike had severe dyslexia?’
As soon as his father spoke, I recalled Mike’s great difficulty reading aloud in class. As he stumbled over simple words, the other kids fidgeted, snickered and rolled their eyes. In return, they got bullied. I can still sense my classmates’ fear of Mike even as I cringe at the knowledge that, in our collective ignorance, we were at least partially responsible for his outbursts. What if we had understood that Mike’s classroom performance was a neurological handicap and not a sign of general stupidity, laziness or whatever other pejoratives of cognition we threw at him? Would our acceptance of his disability have changed the arc of Mike’s life? Of ours?
Since running into his father, I’ve often wondered if Mike’s outbursts and bullying behaviour might offer an insight into the seeming association between anger, extremism and a widespread blatant disregard for solid facts and real expertise. I’m not dismissing obvious psychological explanations such as ideological and confirmatory biases and overriding self-interests, or suggesting that a particular human behaviour can be reduced to a single or specific cause. But Mike’s story suggests an additional, more basic dynamic. What if, as a species, the vast majority of us have a profoundly challenging collective difficulty with mathematics and science analogous to Mike’s dyslexia?
Whether contemplating the pros and cons of climate change; the role of evolution; the risks versus benefits of vaccines, cancer screening, proper nutrition, genetic engineering; trickle-down versus bottom-up economic policies; or how to improve local traffic, we must be comfortable with a variety of statistical and scientific methodologies, complex risk-reward and probability calculations – not to mention an intuitive grasp of the difference between fact, theory and opinion. Even moral decisions, such as whether or not to sacrifice one life to save five (as in the classic trolley-car experiment), boil down to often opaque calculations of the relative value of the individual versus the group.
If we are not up to the cognitive task, how might we be expected to respond? Will we graciously acknowledge our individual limits and readily admit that others might have more knowledge and better ideas? Will those uneasy with numbers and calculations appreciate and admire those who are? Or is it more likely that a painful-to-acknowledge sense of inadequacy will promote an intellectual defensiveness and resistance to ideas not intuitively obvious?
Imagine going to your family doctor for a routine physical exam. After running a number of screening tests, he informs you that one of the blood tests – for an initially asymptomatic but rapidly progressive and uniformly fatal neurological disease – came back positive. The doctor further explains that everyone with the disease tests positive (there are no false negatives), but that there is a 5 per cent false-positive rate (a positive test in people who never develop the disease). He then pats you on the shoulder and says: ‘I wouldn’t worry. It’s a rare disease that affects only one in 1,000 people.’
Before continuing, what’s your initial gut feeling as to the likelihood that you have the illness? Now take a moment and calculate the actual likelihood.
When this question was posed to a group of 61 students, house staff and faculty at Harvard Medical School in 2013, the most common response was that a positive test meant you had a 95 per cent chance of having the disease. Fewer than a quarter gave the correct answer – about 2 per cent.
For those readers who immediately got the right answer, ask yourself a second question. Does your 2 per cent calculation feel intuitively correct, or does knowing you tested positive make you feel that the chances of getting the disease must be higher? For those who didn’t get the right answer, see how you feel about the following explanation.
In order to obtain a statistically accurate false-positive rate, it is necessary to test a large number of people known not to have the illness. If you test 1,000 people, a 5 per cent false-positive rate means that roughly 50 healthy people will test positive. If the disease occurs once per 1,000 people (the prevalence rate), one person per 1,000 tested will be a true positive. Therefore, 51 of 1,000 people will test positive, of whom 50 will be false positives and one a true positive – an approximately 2 per cent likelihood (1/51 ≈ 0.02). Does this explanation both make sense and feel correct?
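For readers who want to see the arithmetic spelled out, here is a minimal sketch of the same calculation in Python, using nothing beyond the figures quoted above (the variable names are mine, chosen for readability):

```python
# Probability of actually having the disease given a positive test,
# using the figures from the hypothetical screening example above.

prevalence = 1 / 1000        # one person in 1,000 has the disease
sensitivity = 1.0            # everyone with the disease tests positive (no false negatives)
false_positive_rate = 0.05   # 5 per cent of healthy people also test positive

# Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"{p_disease_given_positive:.1%}")   # prints 2.0%
```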
Given the likely childhood educational advantages and familial and peer encouragement common to most Harvard medical students, house staff and faculty, their shaky performance on this test of predictive probabilities challenges traditional explanations for the dreadful mathematics and science performance endemic in the US. If those at the top of the educational heap can’t do better (75 per cent fell for the so-called base-rate neglect fallacy), what can be reasonably expected from the rest of us? Ironically, the above study was carried out to see if, given the improved mathematics and science education of the past few decades, present-day medical faculty and students at Harvard would perform better than their predecessors in a similar study conducted in 1978. They didn’t.
Perhaps the best-known study of the relationship between poor performance on reasoning tests and biased self-assessment is ‘Unskilled and Unaware of It’ (1999) by the psychologists Justin Kruger and David Dunning, then at Cornell University in New York. The researchers had a group of undergraduates take a logical-reasoning test and then estimate how their performance compared with that of their peers. On average, participants placed themselves in the 66th percentile, indicating that most of us tend to overestimate our skills somewhat (the so-called above-average effect). Those in the bottom 25 per cent overestimated their ability to the greatest degree: on average they scored at the 12th percentile yet believed that their general reasoning abilities fell at the 68th. Dunning and Kruger concluded:
People who lack the knowledge or wisdom to perform well are often unaware of this fact. That is, the same incompetence that leads them to make wrong choices also deprives them of the savvy necessary to recognise competence, be it their own or anyone else’s.
To put the Cornell students’ performance into a national perspective, keep in mind that, on the new version of the SAT (the test used for college admissions in the US), a perfect score is 1,600 for verbal and mathematics combined; the average for students entering Cornell is 1,480, and the 25th-percentile score is 1,390. Meanwhile, the national average is 1,010, with more than 90 per cent of test-takers scoring lower than the bottom 25 per cent at Cornell. (Further bad news: the 2016 National Assessment of Educational Progress showed that only a quarter of high-school seniors are considered proficient in mathematics. Science scores are similarly discouraging, with no improvement seen among seniors in the past seven years.)
I’d like to believe that these depressing statistics stem from underpaid or uninspired teachers, weak curricula, cultural disincentives, and a long-standing national anti-intellectual atmosphere. It is tempting to explain away the above-average effect as a reflection of personality traits, ranging from arrogance and insensitivity to the skills of others to a profound narcissism that prevents seeing others in a positive light. (When confronted with his lack of knowledge of foreign affairs, Donald Trump responded: ‘I know more about ISIS than the generals do, believe me.’) However, such psychological attributes fall short of fully explaining why the Dunning-Kruger effect has been repeatedly demonstrated across a wide range of educational and cultural backgrounds and cognitive skills. A disquieting alternative possibility is that faulty thinking and self-assessment, rooted in neurobiology, make us relatively impervious to better evidence and reason.
For a moment, conceptualise a thought as having two components: a formal mental calculation, and an accompanying visceral sense of the correctness of that calculation. These two processes arise out of overlapping but separate mechanisms and neural pathways, and so are capable of generating different, even jarringly inconsistent, conclusions. A prime example is the phenomenon of cognitive dissonance, where so-called rational deliberation and good scientific evidence fail to overcome a stronger feeling that a contrary opinion is correct. With the Harvard test question, I can readily work out a back-of-the-envelope determination of a 2 per cent probability of having the neurological disease, yet cannot shake a gut feeling that the likelihood is much higher.
This discrepancy begins at the most basic level of probabilities. In grade school, we learn that the odds of a coin flip coming up heads or tails are 50 per cent. Though deeply ingrained, this knowledge conflicts with a superb pattern-recognising subconscious. If you see heads come up twenty times in a row, you rationally know that the odds of the next toss are unaffected by prior tosses, yet you have subliminally detected a sequence that seems at odds with pure randomness. Shaped by other subliminal influences such as innate optimism or pessimism, some of us sense that the streak is more likely to continue (the ‘hot-streak fallacy’), while others feel that tails is now more likely (the ‘gambler’s fallacy’). This conflict between logic and contrary intuition – the basis of much of modern behavioural economics – is self-evident when watching onlookers rush to the craps table to bet with a player on an extended roll, or gamblers raise their wagers to chase a losing streak at blackjack. In short, our visceral sense of the world can dramatically influence our perception of the simplest probability calculations.
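To make the point concrete, here is a small simulation – an illustration of the textbook claim about independent flips, not anything drawn from the studies mentioned in this essay. It waits for a run of heads and then checks how the next flip behaves (a run of five is used rather than twenty, simply because twenty-head streaks are too rare to sample quickly):

```python
import random

# After a streak of heads from a fair coin, what fraction of the *next*
# flips come up heads? Independence says: still about 50 per cent.
random.seed(0)
RUN_LENGTH = 5
next_flips = []

for _ in range(200_000):
    streak = all(random.random() < 0.5 for _ in range(RUN_LENGTH))
    if streak:
        next_flips.append(random.random() < 0.5)

share_heads = sum(next_flips) / len(next_flips)
print(f"After {RUN_LENGTH} heads in a row, the next flip is heads "
      f"{share_heads:.1%} of the time")   # roughly 50%
```

Neither the hot-streak bettor nor the gambler chasing losses gets any help from the arithmetic; the streak carries no information about the next flip.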
Imagine a brain in which the visceral sense of knowing is disconnected from centres for logical thought, yet stuck on a given idea. No matter what contrary evidence or line of reasoning is presented that the idea is wrong, that brain will continue to generate a feeling of rightness. We’re all familiar with this behaviour in its most extreme form – those intractable ‘know-it-alls’ entirely immune to contradictory ideas. We must at least consider the possibility that know-it-all behaviour is a problem of neural circuitry, much like dyslexia.
I am reluctant to invoke evolutionary psychology to explain every nuance of human behaviour. Even so, present-day demands on our mathematics and science skills bear no resemblance to those of former days, when survival depended on quickly calculating whether it was better to run up a tree to avoid a charging lion or to pretend to be dead. No one applied complex game-theory matrices to determine the best policy strategies in the Middle East, carried out complicated risk-reward calculations to decide whether to embrace genetic crop engineering, or used standard deviations from the mean to understand normal versus abnormal lab values. Most of us have trouble programming a VCR.
Even when we can use the new methodologies, we often do so without any associated intuitive grasp of what we’re doing. Many of us (me included) can solve the equation f=ma (Newton’s second law of motion) without having any feel for what the equation means. Though I might fix a computer crash, I have no sense of what I’ve actually done.
To get a sense of how far we have travelled from simpler times, consider the age-old concept of the wisdom of crowds. At a county fair in England in 1906, some 800 people were asked to estimate the weight of an ox. Though the estimates varied widely, the statistician Francis Galton found that their average was accurate to within 1 per cent of the ox’s actual weight. As those in the crowd came from diverse backgrounds, ranging from farmers and butchers to people unfamiliar with cattle, Galton took his findings as supporting evidence for the value of democracy. Without relying upon any particular expertise, the wisdom of a crowd was more likely to provide the right answer than the best estimates of individuals.
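A toy simulation makes the statistical intuition visible – the numbers below are invented for illustration and are not Galton’s data: individual guesses scatter widely, but so long as their errors are not all biased in the same direction, they largely cancel in the average.

```python
import random
import statistics

# Toy illustration of the wisdom of crowds: many noisy, individually
# unreliable guesses average out to something close to the true value.
# All figures here are invented; they are not Galton's 1906 data.
random.seed(1)
TRUE_WEIGHT = 1200                                        # pounds, the 'true' value in this toy example
guesses = [random.gauss(TRUE_WEIGHT, 150) for _ in range(800)]

crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT) / TRUE_WEIGHT
worst_individual_error = max(abs(g - TRUE_WEIGHT) for g in guesses)

print(f"Worst single guess is off by {worst_individual_error:.0f} lb, "
      f"but the crowd average ({crowd_estimate:.0f} lb) is off by only {crowd_error:.2%}")
```

The catch, of course, is the assumption that the individual errors are independent and unbiased – precisely the assumption the next paragraph calls into question.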
Can we still rely on collective wisdom – the basic premise that underlies our belief in democracy? It’s hard to watch well-educated parents refuse to vaccinate their children, preferring the reasoning of a former Playboy model over a bevy of credentialed scientists. Presently, 42 per cent of Americans (27 per cent of college graduates) believe that God created humans within the past 10,000 years. As the US demographic changes, will you remain comfortable trusting the choice of public school science curriculum or vaccination policies to the wisdom of the crowd?
I’d be thrilled if better education combined with a more science- and mathematics-friendly culture could overcome these deficiencies. And there has been some good news. Although overall science and mathematics performance among high-school seniors has stalled even as educational opportunities have increased, racial and gender achievement gaps have modestly narrowed among younger schoolchildren. But multiple lines of evidence suggest that there are functional limits to our ability to intuitively grasp modern mathematics and science. Maybe the 19th-century French writer Alexandre Dumas said it best: ‘The difference between genius and stupidity is: genius has its limits.’
Substitute cognitive limitation for stupidity, however, and you have a potential ray of hope. A personal example comes to mind. Because of significant visual-spatial difficulties, I am unable to generate a good mind’s-eye image and have great problems recognising faces and reading maps; only rarely can I consciously recall old visual experiences. Despite special attention from patient and understanding teachers, I was never able to see perspective or visualise geometry and trigonometry. I don’t know what it would be like to easily conjure up a mental image of anything, any more than someone with dyslexia can know how it would feel to read effortlessly.
I’m not embarrassed by my visual-spatial difficulties, but call me stupid, foolish, lazy, incompetent, idiotic or deplorable, and I’m going to figure out a way to make you eat your accusations.
Though those most affected by the above-average effect are going to be the most difficult to convince, in the long run it’s far better to see our deficiencies as part of the very definition of being human than to pretend that they don’t exist or will be resolved through better arguments, more information and greater effort.
A huge first step would be to admit that these limitations apply to us all. In May 2016, commenting on Trump’s popularity, Dunning wrote: ‘the key lesson of the Dunning-Kruger framework is that it applies to all of us, sooner or later. Each of us at some point reaches the limits of our expertise and knowledge. Those limits make our misjudgments that lie beyond those boundaries undetectable to us.’ (My italics.)
Whether inadvertent or Machiavellian in origin, in December 2016 Trump perfectly articulated what few are willing to admit: ‘I think that computers have complicated lives very greatly. The whole Age of Computer has made it where nobody knows exactly what is going on.’
In recent years, a major debate in cognitive science has swirled around the degree to which blame and praise can still be considered appropriate labels if ‘free will’ is an illusion. Not assigning responsibility for our actions is a recipe for social mayhem; overshooting the mark leads to an inappropriate degree of blame for those acts that seem most clearly beyond individual control. We judge adolescents differently from adults because we accept that their lessened impulse control stems from hormonal surges and insufficient maturation of regions of the frontal lobe. We allow more latitude in judging the elderly if we suspect a degree of dementia. We place less blame on a murderer who is found to have a brain tumour in an area of the brain that modulates anger and aggression. Without a firm grasp of modern science, especially the cognitive sciences, we are at the mercy of raw intuition – hardly an optimal approach where fairness and justice are concerned.
It’s time we asked whether political frustration, anger and resistance to conflicting ideas result in part from a basic inability to sense how the present world works. The best defence against runaway combative ideologies isn’t more facts, arguments and a relentless hammering away at contrary opinions, but rather a frank admission that there are limits to both our knowledge and our assessment of that knowledge. If the young were taught to downplay blame in judging the thoughts of others, they might develop a greater degree of tolerance and compassion for divergent points of view. A kinder world calls for a new form of wisdom of the crowd.
A few years ago, at my 50th high-school reunion, I ran into Mike. He was standing alone in a corner of the banquet room, observing his former classmates. Catching my eye, he motioned me over. ‘My Dad tells me that you’re a neurologist,’ he began. ‘Maybe you suspected even back then.’ He continued shaking my hand. ‘I want to thank you for not making fun of me.’ Was that why he didn’t pick on me, I wondered as Mike looked away and said, to no one and yet perhaps to everyone: ‘Maybe if I’d have known…’