We’re all guilty of it: basing decisions on the most recent event. It’s surely part of our wiring. The question is, what do we do about it? In this episode, Justin Morgenstern breaks down availability bias: what it is, how it shows up in life and medical practice, the difference between learning and bias, research showing availability bias happening in real time, and ways to turn availability bias from a bug into a feature.
Guest Bio: Justin Morgenstern is a community emergency physician with a passion for education, resuscitation, and evidence-based medicine. Purveyor of the amazing First10EM blog, Justin works in the Greater Toronto Area.
On a personal note, Justin and I met years ago at a conference in the Bahamas. He was in the audience and was such an incredible contributor to a group conversation that I had him come up on stage and be a panel member. Since then, I’ve found that he is a rare mix of humility, genius, and kindness.
We discuss:
Availability Bias and how it relates to everyday decision making.
What is availability bias;
- Caveat: “there is absolutely no proof, no evidence that thinking or talking about any of these biases improves our practice” But they are still important!
- Availability bias is “when your thinking is based on the ease with which thoughts come to your mind,” rather than the reality of the situation.
- This can be triggered by a recent rare diagnosis or by something with an emotional charge: a recent mistake, getting sued, or a presentation you see so frequently that it sits at the forefront of your mind.
- For example, on your way to the beach you’re probably more likely to think about a shark attack than about hitting a deer, yet you are far more likely to die from hitting a deer than from being killed by a shark.
- Many cognitive biases overlap. This overlap allows us to function in the world, but it can be a mess.
Where does this show up in clinical practice;
- Is availability bias really a bias (an error) or a heuristic (a process)? We only know in retrospect. If it has a bad outcome, it is a bias. If it has a good outcome, it’s a heuristic.
- We need to learn from events, but this learning needs to be grounded in evidence-based practice rather than reactionary. For example, hearing about a lot of local RSV cases and appropriately moving that diagnosis up the differential is a heuristic working in our favor.
Learning is not availability bias;
- We need to make sure that our learning is being appropriately applied and use objective measures (e.g. the Wells score or other validated tools) to weigh against our possible biases. This can help us recalibrate and reset after a bad case.
- Biases often come in polar-opposite pairs, and harm comes when we swing wildly from one end of the spectrum to the other. A perfect example is base rate neglect vs. zebra retreat. Base rate neglect occurs when you go searching for a disease without first considering its pretest probability. Zebra retreat is the opposite: a rare diagnosis is heavily weighted in your differential diagnosis list, but you retreat from actually making it.
Whether more testing is the right path for subtle presentations of life threatening diseases;
- Tests don’t work very well at low probabilities of disease: when the disease is rare, most positive results are false positives (see the sketch after this list). If we are doing sensible workups, some atypical presentations of rare diseases will inevitably be missed.
- We are bad at grasping probability and seeing the actual scale of things. A lottery is a perfect example – we are poor at appreciating the true odds of winning.
- We are tasked with identifying very rare things, but there is study after study showing the human brain is atrocious with probabilities and the lower the probability gets, the worse it is.
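One way to make that concrete is to run Bayes’ theorem at a few pretest probabilities. This is our sketch, not something from the episode, and the 95% sensitivity and specificity are illustrative assumptions rather than figures for any real test:

```python
# A minimal sketch of why tests struggle at low pretest probability.
# The sensitivity/specificity values are illustrative assumptions.

def positive_predictive_value(pretest: float, sensitivity: float, specificity: float) -> float:
    """Probability of disease given a positive test (Bayes' theorem)."""
    true_pos = pretest * sensitivity
    false_pos = (1 - pretest) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

for pretest in (0.50, 0.10, 0.01):
    ppv = positive_predictive_value(pretest, sensitivity=0.95, specificity=0.95)
    print(f"pretest {pretest:.0%} -> PPV {ppv:.0%}")

# pretest 50% -> PPV 95%
# pretest 10% -> PPV 68%
# pretest 1% -> PPV 16%
```

Even with a very good test, at a 1% pretest probability roughly five out of six positive results are false positives, which is exactly the scale our intuition fails to appreciate.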
“The majority of the time that physicians actually find a zebra, it’s just a horse that has some paint splattered on it.”
Testing thresholds;
- What is your threshold for risk? Most of us know about the test threshold from pulmonary embolism and Jeff Kline’s work: there is a pretest probability at which the benefits of testing start to outweigh its harms, and for PE that number is about 2%. Above a 2% chance of PE, the benefits of testing outweigh the harms; below 2%, the patient is more likely to be harmed than helped by the test. That’s why 2% is considered an ‘acceptable’ miss rate (see the sketch after this list).
- We should have a low threshold for life threats like aortic dissection. The problem is, we don’t know the test thresholds for most diseases.
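To get a feel for the arithmetic behind a threshold: a negative test multiplies the pretest odds by the test’s negative likelihood ratio. The sketch below is ours, not from the episode, and the sensitivity/specificity figures are illustrative assumptions for a rule-out-style test, not exact D-dimer characteristics:

```python
# A minimal sketch of post-test probability after a negative result.
# The test characteristics below are illustrative assumptions.

def post_test_probability(pretest: float, likelihood_ratio: float) -> float:
    """Convert probability -> odds, apply the likelihood ratio, convert back."""
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

sensitivity, specificity = 0.97, 0.40          # assumed test characteristics
lr_negative = (1 - sensitivity) / specificity  # ~0.075

for pretest in (0.10, 0.02, 0.005):
    post = post_test_probability(pretest, lr_negative)
    print(f"pretest {pretest:.1%} -> post-test after negative result {post:.2%}")
```

Below a ~2% pretest probability there is very little room left for the test to move you, while the harms of testing (false positives, downstream imaging, anticoagulation) stay the same, which is the logic behind a test threshold.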
Tips for avoiding availability bias in our practices;
- Recognize when we are operating under faulty thinking and identify cognitive biases.
- In medicine, find ways to be objective in our testing and diagnostics. Embrace your inner math nerd by knowing pretest probabilities, test thresholds, and when we are doing harm by over-testing.
- When possible, identify where the harm outweighs the benefits. Chris Carpenter has laid this out beautifully with subarachnoid hemorrhage and lumbar punctures after CT.
“You have to have a good diagnostic system before you can even get to thinking about your thinking and metacognition.”
Real world example of availability bias;
- One study found that after making a diagnosis of pulmonary embolism, physicians were 15% more likely to test for it over the following 10 days (compared to their own baseline). The increase in testing was transitory, suggesting that it was a bias rather than learning.
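If you wanted to look for this pattern in your own practice data, the basic comparison is testing rate inside a post-diagnosis window versus baseline. This is a hypothetical sketch with an invented record format and made-up numbers, not the methodology of the study above:

```python
# Hypothetical sketch: compare testing rates in the 10 days after an index
# PE diagnosis against baseline. Data format and values are invented.
from datetime import date

# (shift_date, pe_diagnosed_this_shift, pe_tests_ordered_per_patient)
shifts = [
    (date(2024, 1, 1), False, 0.10),
    (date(2024, 1, 3), True, 0.11),    # index PE diagnosis
    (date(2024, 1, 5), False, 0.16),
    (date(2024, 1, 9), False, 0.14),
    (date(2024, 1, 20), False, 0.10),  # rate drifts back to baseline
]

index_dates = [d for d, pe, _ in shifts if pe]
WINDOW_DAYS = 10

def in_window(day: date) -> bool:
    """True if this shift falls within 10 days after any index diagnosis."""
    return any(0 < (day - idx).days <= WINDOW_DAYS for idx in index_dates)

post = [rate for d, _, rate in shifts if in_window(d)]
base = [rate for d, _, rate in shifts if not in_window(d)]

print(f"testing rate within 10 days of a PE diagnosis: {sum(post)/len(post):.2f}")
print(f"baseline testing rate: {sum(base)/len(base):.2f}")
```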
“You should not have variability in your practice based just on what diagnosis you happen to make on any given day.”
Justin’s tools for addressing bias;
- The first step is to monitor your thoughts and notice when you are making decisions. Mindfulness builds awareness of the moments when you’re acting on autopilot. For a deep dive on mindfulness and meditation, see Stimulus episode 38.
- Second, you need to slow down and incorporate cognitive stop points. There are two types. Intuitive stop points are triggered when things are not going as well as you’d expect; that is the time to stop and think. Forced cognitive stop points are natural, time-based events in any case (e.g. before the patient gets in the ambulance for transfer, before the patient goes to CT). Take 60 seconds to review everything in your head and make sure you’ve done some of that slow, System 2 thinking.
“Don’t get stuck on autopilot.”
Hack the bias;
- Use checklists
- Translate bias into heuristic
“The question we need to be asking ourselves is this: How do we use availability bias to our advantage rather than to our disadvantage?”
The super bonus, must-read, deep dive on availability bias in Justin’s textbook chapter.
Shownotes by
- Joshua Anderson
- Rob Orman
- Melissa Orman