Risk – IX: Microlives to Micromorts, or why risk makes so little sense

The previous posts in this series examined how risk operates in energy markets, social systems, national security, and military infrastructure. This one asks why humans fail to act on risks we can measure, price, and see coming.

I was once trying to explain risk as a concept to my students, and to demonstrate it, confidently asked: okay, would you skydive? Because reader, I would not throw myself out of a plane. I had forgotten there was an army officer in that class, who stuck a pin in my example by casually answering yes. Yes, he would indeed jump out of an airplane, no problem, Ma’am.

We were both being entirely rational too, and that is the problem with communicating risk to humans. Everyone perceives risk through the sieve of their lives and personalities. We are surrounded by risk data. We have extraordinarily sophisticated tools to measure, price, and manage some types of risk. And yet, individually and collectively, we routinely ignore the risks that will actually kill us, panic about risks that likely won’t, and remain completely unmoved by risks that threaten the existence of life on our beautiful space rock.

Units
Risk resists measurement. Therefore, it naturally repels measurement units.

We tried anyway.

Ronald Howard, a Stanford engineer, created a unit called the micromort in 1989: a unit equal to a one-in-a-million chance of death.1 The prefix is simply the metric micro-, meaning one millionth, attached to the word mortality. One micromort is a tiny, almost abstract sliver of the possibility of dying. But the power of the unit is in comparison. So, riding a motorcycle for 9 kilometres costs 1 micromort (UK data)1, running a marathon costs about 7 micromorts, and skydiving once costs just one more, at about 8.2 Climbing Everest: somewhere in the region of 37,000 micromorts, which means your odds of dying even on a successful ascent are roughly 1 in 27.34 The numbers are drawn from epidemiological (relating to the study of how diseases spread, who gets sick, and why, within a specific population5) data, which just means that you take the population who did an activity, count the deaths, and divide. They are approximations, not laws, and they vary by country, era, age, and fitness. But they are something concrete. I certainly wouldn’t have thought my aversion to falling would be comparable to my aversion to running.
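The divide-and-count arithmetic behind these figures can be sketched in a few lines of Python; the counts below are illustrative placeholders, not real epidemiological data:

```python
# A micromort is deaths per million exposures to an activity:
# take the population who did the activity, count deaths, and divide.

def micromorts(deaths: int, participants: int) -> float:
    """Deaths per million exposures to an activity."""
    return deaths / participants * 1_000_000

# Hypothetical example: 1 death per 125,000 jumps works out to 8 micromorts.
print(micromorts(1, 125_000))  # 8.0
```

The same function gives the marathon and Everest figures once you plug in the (real) deaths and participant counts for those activities.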

The second was David Spiegelhalter, a Cambridge statistician who invented the microlife in 2012.6 One microlife is half an hour of life expectancy, derived by dividing a roughly 57-year adult lifespan into one million equal parts. Every microlife you spend is thirty minutes of your future, gone. It measures the relationship between an individual’s habits and their lifespan: the daily choices they make and how long they are likely to live.7 Longitudinal studies (the kind that follow the same person over many years) have found that watching an extra hour of television is akin to burning half a microlife, while smoking two cigarettes burns two. You can earn two back with twenty minutes of moderate exercise, though.8
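Microlife bookkeeping amounts to a simple daily tally; here is a sketch using the figures quoted in this paragraph (the habit values are taken from the text, not from the underlying studies):

```python
# Tallying a day's habits in microlives, where 1 microlife = 30 minutes
# of life expectancy. Figures as quoted in the paragraph above.

habits = {
    "extra hour of television": -0.5,
    "two cigarettes": -2.0,
    "20 minutes of moderate exercise": +2.0,
}

net_microlives = sum(habits.values())
net_minutes = net_microlives * 30
print(net_microlives, net_minutes)  # -0.5 -15.0
```

Even a day with exercise in it ends slightly in the red here, which is the point of the unit: the costs and credits are small, constant, and cumulative.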

A micromort measures acute risk. The word “acute” comes from the Latin acutus, meaning sharp, or sudden. A discrete (the statistical word meaning single, or individual9) event with a clear before and after: you jump out of the plane, or you don’t. A microlife measures chronic risk, from the Greek chronos, meaning time. Slow, accumulating, invisible, with no clean moment of crisis.6

Mathematics says that for someone in their late twenties with roughly one million half-hours of adult life ahead of them, one micromort of acute risk is almost exactly equal to one microlife, or thirty minutes of expected life.10
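That equivalence is a one-line calculation; a sketch, assuming the roughly one million remaining half-hours mentioned above:

```python
# One micromort of acute risk vs one microlife, for a young adult with
# ~1,000,000 half-hours of expected adult life remaining (as in the text).

remaining_half_hours = 1_000_000   # ~57 years of adult life
p_death = 1e-6                     # one micromort of acute risk

expected_loss = p_death * remaining_half_hours  # in half-hours
print(expected_loss)  # 1.0 half-hour = 30 minutes = one microlife
```

The units cancel neatly only at this age; for someone older, with fewer half-hours left, a micromort costs less than a microlife in expectation.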

Actuaries
There is, unsurprisingly, an entire profession built around calculating risk.11 Its practitioners are called actuaries: they calculate the probability of death or loss across different groups, and price it. They usually work for insurance companies.12

Actuarial science establishes something important: risk that is chaotic at the individual level becomes orderly at scale.13 No actuary can tell you whether you will die this year. But they can tell you, with considerable confidence, what percentage of a million people like you will. This is why, when you try to buy term insurance in India, you are asked about tobacco use, your PIN code, your gender, your profession, your income- so that a calculation can be made about how likely someone like you is to die soon.14

This is called risk pooling.15 Insurance companies accumulate risk from many people because they know the chances of all insured events occurring simultaneously are vanishingly small. It is why premiums are lower for younger people- not because the young are invincible, but because the numbers say they are less likely to die right now than someone older.16
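The law-of-large-numbers intuition behind pooling shows up in a quick simulation; the 0.1% annual mortality rate below is an assumed illustrative figure, not an actuarial one:

```python
import random

random.seed(42)

p_death = 0.001       # assumed annual mortality for the pooled group
n_insured = 1_000_000

# Each individual outcome is chaotic: this person dies this year, or doesn't.
deaths = sum(random.random() < p_death for _ in range(n_insured))

# At scale, the observed fraction is stable and very close to the assumed
# rate, which is what lets an actuary price the premium with confidence.
print(deaths / n_insured)  # ~0.001
```

Run it with a pool of a hundred people instead of a million and the observed rate swings wildly between runs: the orderliness really is a property of scale, not of individuals.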

However, while the mathematics works, it is meant for populations. You are not a population. You are a person. And people are usually terrible at thinking about risk.

Risk Perception
A few years ago, before my life lost its plot, I went to Goa. I’m afraid of heights, so I decided to go parasailing- because I was so afraid of it, but also because I knew the instructor would be up there with me. I was terrified all the way up and while I was in the air, but while descending, I started to enjoy myself. So I went up a second time, and enjoyed that entire redo much more.

I was reminded of this recently when I came across a podcast where a Para SF veteran described the moment of hesitation at the gate before his first jump- not fear exactly, but his rational mind asking: why am I doing this?

Two people, two completely different risk profiles, the same pause. The same risk perception.

The difference between this person and me wasn’t in the physics of the fall. Gravity is an equaliser. The difference lay in the “internal weighting” we gave to the danger. To the officer, the risk was a managed variable, mitigated by years of training and a parachute he knew how to use. To me, the risk was an existential threat, unmitigated and visceral. We weren’t looking at the same event; we were looking at two different versions of the future, shaped by our pasts.

This gap between risk as it exists mathematically and risk as it lives in the human mind has a name in behavioural economics: cognitive bias. There are several that are particularly relevant to risk, and between them they explain most of the grand collective failures that follow. Here’s a short list:

  • The first is the affect heuristic17: we judge risk by how something feels, not by its actual probability.
  • The second is availability bias18: we judge how likely something is by how easily we can recall an example.
  • The third, and the most important for what comes next in this article, is psychic numbing19: The psychologist Paul Slovic spent decades documenting a deeply uncomfortable finding: human compassion and concern do not scale with numbers. We feel genuine, mobilising distress for one identified person in danger. As the numbers grow from ten people, to a hundred, then a million, emotional engagement does not grow with them. It collapses. “The more who die, the less we care,” is how Slovic summarises it.
  • And the fourth is temporal discounting20: we systematically undervalue future outcomes relative to present ones. A certain reward today is worth more to us than a larger reward next year.
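Temporal discounting is often modelled with a hyperbolic formula, V = A / (1 + kD), where D is the delay and k is an impatience parameter; a sketch, with k chosen as an assumed illustrative value:

```python
# Hyperbolic discounting sketch: the present value of an amount falls
# with delay. k is an assumed impatience parameter, not an empirical fit.

def discounted_value(amount: float, delay_years: float, k: float = 0.6) -> float:
    """Subjective present value of `amount` received after `delay_years`."""
    return amount / (1 + k * delay_years)

# A certain 100 today beats 150 a year from now for this discounter:
print(discounted_value(100, 0))  # 100.0
print(discounted_value(150, 1))  # 93.75
```

A higher k steepens the curve: the pandemic-era finding discussed below is, in effect, that prolonged mortality stress pushes people's k upward.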

Between 7.1 – 33 Million Dead21
These are the people we lost to the pandemic.

Official confirmed deaths from COVID-19 stand at approximately 7.1 million, as recorded by the WHO. Excess mortality estimates (the gap between how many people actually died and how many would have been expected to die in normal times) put the real figure somewhere between 14.9 million and 33 million.2223

COVID-19 was, for a large part of the global population, the first time in living memory that ordinary people walked around consciously calculating their own mortality risk. It did not make us more rational. It made us more anxious.24

Research published during and after the pandemic found that prolonged exposure to mortality risk increased temporal discounting24, the bias that makes us prioritise the present over the future. People under pandemic stress didn’t become careful, long-term thinkers- they became more impulsive, more present-focused, more likely to reach for the immediate reward over the future benefit. One study found that greater temporal discounting directly predicted lower compliance with masks and social distancing, the behaviours that would actually reduce risk.25 

Psychic numbing compounded this.26 In the early weeks, when COVID deaths were in the hundreds, there was genuine grief, and that specific terror of the unknown. By the time the death tolls reached hundreds of thousands, then millions, our emotional machinery had largely switched off, because our brains cannot hold a million deaths the way they hold one. The numbers became, in Slovic’s phrase, mere statistics.19

The pandemic taught us something important and uncomfortable: mass risk awareness does not produce mass rational behaviour. It produces mass emotional behaviour- fear, denial, exhaustion, and the very human tendency to make the anxiety stop by pretending, at some level, that the threat is not quite as real as the numbers say. We had all the data. We had the micromort equivalent of a daily death budget displayed on every news channel in the world. And we still, collectively, could not think clearly about it.

Now consider what happens when a risk is neither immediate nor personal, and is communicated constantly through statistics and numbers.

Our Beautiful Space Rock
Climate change is risk that has been engineered, almost perfectly, to defeat every cognitive tool we have. It is chronic, not acute- there is no single moment of crisis, just accumulation. It is global in scale and feels distant even when it isn’t.27 It is statistical, not personal, because it kills in aggregates, not with faces.28 Its worst consequences arrive in timelines beyond our natural planning horizon.29 And it requires collective action at precisely the moment when individuals are most inclined to discount, deny, and defer.30

To this, add that many people believe the weather is made by the gods, which implies two things: that we are unable to interfere with it (so no anthropogenic climate change)303132, and that anything we do will be useless, because it is god’s wish for it to be so.33 To this worldview, even if the climate were changing, the right response would be to accept it, because it cannot be changed by humans.

This is the central tragedy of climate risk communication. Climate communication has, for the most part, been built for spreadsheets, not for minds. It relies on scale, statistics, and long timelines- exactly the conditions under which human intuition fails. We communicate climate risk in parts per million, degrees of warming, and deaths by the million, and then wonder why it does not move behaviour.34

Climate change is not just an environmental problem. It is a risk communication problem- more specifically, a chronic risk problem. It is literally a million and one small events all over the planet cascading into one big final boss problem.

Solutions
Here is how I would tackle this problem:

  1. Localise the risk: People respond to risks they have seen, or can imagine happening to people like them. Climate communication that begins at the global level fails; communication that begins with lived, local experience has a chance. In India at least, many communities have experienced climate-origin loss. Start there. Explain climate change to them through their own frame of reference.
  2. Shorten the time horizon: As long as climate change is framed as a 2050 or 2100 problem, it will be systematically deprioritised. Communication that highlights present-day impacts, such as heatwaves, air quality, and food prices, aligns with how we actually make decisions.
  3. Use social proof, not just data: Behaviour is contagious. If we are able to change what one person does, especially someone influential in the community, the rest of the community is more likely to follow. Community-level interventions, whether in water management, crop choices, or energy use, scale because they are visible, repeatable, and socially reinforced.
  4. Understand that only agency beats risk: People do not act on risks they feel powerless to change. Effective communication pairs risk with action which is specific, achievable, and immediate.

None of this is sufficient. The scale of the problem dwarfs every communication strategy we have. But the alternative- continuing to recite statistics into the void and wondering why nothing changes- is not working either.

I was able to overcome my fear of heights, however briefly, only because I felt empowered to do it, had the means to do it, had the right guidance in the parasailing instructor, and felt motivated about it.

These are inherently human traits.

Those conditions- agency, means, guidance, motivation- are the same conditions under which people act on any risk. And they are exactly what is missing from most climate communication.

Climate risk is the ultimate jump. It is a “god-sized” problem, yes, but it is one that will be solved in the very human terrain of local communities, social proof, and individual agency. We have to stop treating people like calculators that have failed a maths test and start treating them like both the army officers in this post: individuals who can face immense risk, provided they have a mission, a team, and a plan.

Sources

  1. Microrisks for Medical Decision Analysis — Ronald A. Howard (Semantic Scholar)
  2. How Dangerous is Skydiving? — Skydive Magazine
  3. Micromorts — micromorts.rip
  4. Microlives — Understanding Uncertainty, University of Cambridge
  5. Epidemiology — Cambridge Dictionary
  6. Using Speed of Ageing and Microlives to Communicate the Effects of Lifetime Habits — The BMJ
  7. Understanding Uncertainty: Microlives — Plus Maths, Cambridge
  8. BMJ Microlives Supplementary Data — The BMJ
  9. Discrete vs Continuous Data — G2
  10. Understanding Uncertainty: Microlives — Plus Maths, Cambridge
  11. Actuaries — US Bureau of Labor Statistics
  12. What is Actuarial Science? — Institute and Faculty of Actuaries, UK
  13. Core Principles of Risk in Actuarial Science — Asian Actuarial Conference
  14. Mortality Charges in ULIP — ICICI Prudential Life Insurance
  15. Risk Stability Using Volume: The Law of Large Numbers — IRMI
  16. Insurance Regulatory and Development Authority of India — IRDAI
  17. The Affect Heuristic in Judgments of Risks and Benefits — Slovic et al. (Semantic Scholar)
  18. Judgment Under Uncertainty: Heuristics and Biases — Tversky & Kahneman, Science
  19. Psychic Numbing — The Arithmetic of Compassion (Paul Slovic)
  20. Time Discounting — Behavioural Economics
  21. COVID-19 Deaths Dashboard — World Health Organization
  22. The True Death Toll of COVID-19: Estimating Global Excess Mortality — WHO
  23. Excess Mortality During the Coronavirus Pandemic — Our World in Data
  24. A Mini-Review on How the COVID-19 Pandemic Affected Intertemporal Choice — PMC
  25. Risk-Taking Unmasked: Temporal Discounting and COVID-19 Preventative Behaviours — PMC
  26. Psychic Numbing: Why Rising COVID and Climate Death Tolls No Longer Shock Us — Grist
  27. The Psychological Distance of Climate Change — Frontiers in Psychology
  28. Psychic Numbing — The Arithmetic of Compassion (Paul Slovic)
  29. Climate Change and the Tyranny of Psychological Distance — PreventionWeb
  30. Religious Beliefs and Climate Change Adaptation — PMC
  31. Nearly 40% of Indians Believe Climate Change is God’s Will — Transform Rural India / LiveMint
  32. Winds of Change: Religion and Climate in the Western Himalayas — Journal of the American Academy of Religion, Oxford
  33. Divine Will and Climate Change Denial — Nature
  34. Three Recommendations for Effective Climate Communication — Social Science Research Council

    Author: Finrod Bites Wolves

    A blogger.
