Presenting risk, distorting perceptions

When human psychology and mathematical risk models collide

Life would be in many ways easier if risk were a purely mathematical concept. For instance, we could calculate it. And we’re good at calculating. Then if the maths was right, the stated risk would be accurate.

But of course, it’s only part-mathematical. Risk is also based on belief: part-subjective, part-influenced by the way that ideas are framed – by custom, politics, psychology and language – and part-based on limited information, as information tends to be. In short, the human factor matters to people’s struggles with risk every bit as much as the maths.

Take the following example. A recent writer to The London Review of Books raged about GPs’ hesitation to diagnose dementia: “the sooner someone sues a GP for failure to diagnose as early as possible the better,” she said.

She had a mental model of risk in which screening spots people who are getting ill – and what could be wrong with that?

A reply published a few weeks later considered 100 people: “If the background rate for dementia were 6%, screening would find 4 of the 6… but 23 others would be told they have dementia when they do not.”

That is, only 15% of the positive diagnoses (4 out of 27 in total: 4 true positives plus 23 false positives) would be correct – and we have no idea which ones – based on a test that looks as if it is about 75% accurate in any individual case.

Accurate-sounding tests can arguably be harmful, as we discover when the model is elaborated – in this case, when we see that even a small error rate applied to the great majority (who are fine) produces a pile of positive results, all of them wrong, far outnumbering the genuine positives the same test finds among the very small number who really are ill.
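For readers who want the arithmetic laid out, here is a minimal sketch in Python using the letter’s round numbers – a 6% background rate, a test that spots 4 of the 6 genuine cases and wrongly flags 23 of the remaining 94. The sensitivity and false-positive rate are simply back-calculated from those figures for illustration; they are not the characteristics of any particular dementia test.

```python
# A minimal sketch of the screening arithmetic, using the round numbers from
# the letter. The sensitivity and false-positive rate are back-calculated from
# "4 of the 6" and "23 of the 94" purely for illustration; they are not the
# published characteristics of any real dementia test.

population = 100
base_rate = 0.06                         # 6% actually have dementia

ill = population * base_rate             # 6 people
well = population - ill                  # 94 people

sensitivity = 4 / 6                      # the test spots 4 of the 6 genuine cases
false_positive_rate = 23 / 94            # and wrongly flags 23 of the 94 who are well

true_positives = ill * sensitivity               # 4
false_positives = well * false_positive_rate     # 23

ppv = true_positives / (true_positives + false_positives)
print(f"Positive results: {true_positives + false_positives:.0f}")    # 27
print(f"Chance a positive result is correct: {ppv:.0%}")              # 15%
```

The sketch makes the general point: when the condition is rare, even a modest error rate applied to the large healthy majority swamps the handful of genuine cases.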

So was the letter writer unusually arrogant, ignorant, innumerate or what?

I’d say none of these. Wrong, yes, but mostly she did what we all do: she formed a mental model given certain information and filtered this through her own values, all with robust confidence in her reasoning: “sue the GPs!”

What’s more, her model is founded on a seemingly common-sense notion of ‘screening’ – which sounds like a kind of in-or-out filter. But that everyday notion turns out to be unhelpful. A better way of thinking about screening is to say that, often, all it can do is narrow the odds, sometimes not much. So language carries with it presumptions all too easily taken as valid.

Similarly, we think we know what we’re talking about when we use words like austerity, stimulus, confidence, monetary easing, or diversification, and then put numbers around these concepts in our models. But clearly, others may disagree with us. Maybe another factor in the screening example is that some suspect the NHS isn’t always on the patient’s side when it says ‘can’t do!’ – even if others disagree. So there’s room for politics here too.

Beliefs lie at the core of risk and this example is typical.

The human-factor spanner in the works

Contrast that with a simple argument that runs like this:

1. Risk is based on probability – the chance that something bad will happen.

2. Probability is a quantitative concept – we can count how often these bad things happen in relevant circumstances.

3. We can use that information to predict how often they’ll happen in future.

This is often, broadly, true. The problem is that the most interesting, and sometimes catastrophically disruptive, elements of risk occur when that description is only partially true. And that problem gets worse when we forget how partial it can be.
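The frequency view in steps 2 and 3 is, in code, almost trivially simple – which is part of its appeal. Here is a minimal sketch; the event counts are invented purely for illustration.

```python
# The frequency view of probability in miniature: count how often the bad thing
# has happened, divide by the opportunities it had to happen, and project that
# rate forward. The counts below are invented purely for illustration.

bad_events_observed = 12       # e.g. defaults, floods or outages seen in the past
opportunities_observed = 1000  # comparable cases or periods in which they could occur

estimated_probability = bad_events_observed / opportunities_observed

# Step 3: predict the future from the past, which quietly assumes the future
# will behave like the past.
future_opportunities = 1000
expected_bad_events = estimated_probability * future_opportunities

print(f"Estimated probability: {estimated_probability:.1%}")   # 1.2%
print(f"Expected bad events in the next {future_opportunities} cases: {expected_bad_events:.0f}")
```

The calculation is the easy part; the interesting questions are hiding in the comments.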

‘Partiality’ is a good word to describe the human-factor spanner in the works. It suggests both bias – in the sense of being less than impartial, riddled with presumptions, values, convenience, and other varieties of the kind of subjectivity we saw in our example – and also incompleteness – in the sense of being short of full information.

Those of us who fancy ourselves to be quantitative sophisticates sometimes like to think we’re above subjectivity, which is for people who don’t understand science. Are we?

Probability vs belief

Take the toss of a coin, about as simple a probability problem as you can find. What’s the chance of heads? In a perfect world, that’s easy (it is one of two equally likely outcomes: 50%). But in a real world where I am tossing the coin and inviting you to guess, it’s less straightforward.

For example, we can complicate the answer by tossing the coin and concealing the result. Now what’s the probability of heads? The state of the coin is a fact, but the probability is not – it’s a property of the coin plus your knowledge. If I then look at the coin but conceal it from you, we can see that people can have different probabilities depending on what they think they know. But now that you’ve seen I’m playing games here, maybe you’re wondering whether I’m using an honest coin. Maybe it’s double-headed and the probability was always 100%? Playing this for real, I’ve had answers of 0%, 25%, 50%, 90% and 100%, just for the simple toss of a coin. I’m not sure that any of these answers is ridiculous.
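To see how those different answers can all be defensible, here is a rough sketch that treats probability as a property of the coin plus what each observer knows. The 10% suspicion of a double-headed coin is an invented figure, purely for illustration.

```python
# The same concealed toss, three observers, three defensible probabilities of
# heads. The 10% suspicion of a double-headed coin is an invented figure,
# purely for illustration.

# Observer A trusts the coin and hasn't seen the result.
p_heads_trusting = 0.5

# Observer B suspects there is a 10% chance I am using a double-headed coin.
p_double_headed = 0.10
p_heads_suspicious = p_double_headed * 1.0 + (1 - p_double_headed) * 0.5   # 0.55

# I have looked at the coin, so for me the probability collapses to the fact.
coin_showed_heads = True
p_heads_for_me = 1.0 if coin_showed_heads else 0.0

print(p_heads_trusting, p_heads_suspicious, p_heads_for_me)   # 0.5 0.55 1.0
```

Push the suspicion of a double-headed coin up to 80% and observer B’s answer becomes 0.9 – one of the answers I have actually been given.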

What we’re dealing with here are what the economist John Kay calls ‘off model’ problems: all those human considerations that for one reason or another we find it hard to incorporate in our quantitative reasoning, but which refuse to go away.

Extremes among observed outcomes, Kay has written, are much more often the result of ‘off model’ events than the result of vanishingly small probabilities. I’d add that the definition of ‘off model’ needs to be broad, and his conclusion is that while risk models are fine for managing everyday liquidity, they are not fit for the principal purpose for which they are designed: protecting against severe embarrassment or catastrophic failure.

Maybe I’m just not sophisticated enough, but it seems to me there’s an inevitable human softness in judgments of risk in almost any context – judgments that might depend, for example, on assumptions about how the whole economy will behave, or about how people will. What seemed stable in 2007, based on past frequencies, turned out to be useless in 2008.

Impact of presenting risk

And because we draw on subjectivity when assessing risk, we are easily influenced by how a risk is presented, or framed. Different framing causes us to invoke different values, and so to judge the same risk differently. You can call this irrational if you like; I’d call it normal.

The following graphics are adapted from a Flash app called “Spinning the Risk”, devised and built by Professor David Spiegelhalter and Mike Pearson; it can be found on the Understanding Uncertainty website. The app can present relative risk, as in Figure 1. You’ll see that the chance of colorectal cancer is 20% higher if you eat bacon every day. Scary? Now let’s look at the same data from an absolute perspective, as in Figure 2. You’ll see that a 20% increase in relative risk is equal to just one extra person with cancer in every 100 people. The number rises from 5 to 6 people if everyone is eating more bacon: that’s where the 20% figure comes from.

Graphics based on visualisations from “Spinning the Risk”, a Flash app made by Understanding Uncertainty (www.understandinguncertainty.org)

You can also turn the risk around and frame it positively, to see how your chance of being fine is affected by eating bacon. That gives you a result like Figure 3, where the chance of being fine falls from 95 in 100 to 94 in 100: a relative reduction in your chance of being fine of just over 1%, instead of a relative increase in risk of 20%. You can also randomise the appearance of the results. Try it, and test your own human responses. All of these adjustments change people’s response. ‘Ah, it’s not so bad after all,’ they might say. Or ‘oh, that looks worse.’ It is all the same data.
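The arithmetic behind all three framings is the same handful of numbers. Here is a minimal sketch using the 5-in-100 and 6-in-100 figures quoted above.

```python
# One set of numbers, three framings. The figures are the 5-in-100 and
# 6-in-100 cancer rates quoted above, for people who do not eat bacon every
# day and for daily bacon eaters respectively.

population = 100
baseline_cases = 5       # cases of colorectal cancer per 100 people
bacon_cases = 6          # cases per 100 if everyone eats bacon every day

relative_risk_increase = (bacon_cases - baseline_cases) / baseline_cases    # 0.20
absolute_risk_increase = (bacon_cases - baseline_cases) / population        # 0.01

fine_baseline = population - baseline_cases    # 95 in 100 are fine
fine_bacon = population - bacon_cases          # 94 in 100 are fine
relative_fall_in_being_fine = (fine_baseline - fine_bacon) / fine_baseline  # ~0.0105

print(f"Relative increase in risk:   {relative_risk_increase:.0%}")          # 20%
print(f"Absolute increase in risk:   {absolute_risk_increase:.0%} (one extra person in 100)")
print(f"Relative fall in being fine: {relative_fall_in_being_fine:.1%}")     # 1.1%
```

Same data, three presentations.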

Again, does this variety of response demonstrate irrationality from which, with sufficient sophistication, we can escape? Or does it suggest that differences in framing provoke us to draw on different values – all perhaps valid – and that human softness lurks in our risk models more than we’d like to think? If the latter, the critical task is to find that softness – and, if its influence is big enough to wipe out any subsequent calculation, to admit it.

Michael Blastland is a writer and broadcaster. His latest book is The Norm Chronicles: Stories and Numbers about Danger, with David Spiegelhalter. The views expressed in this article are the author’s own.