September 5th, 2011

Base-Rate Neglect: A Common Clinical Fallacy

In my previous post, “Numbers Traps in Clinical Practice,” I ended with this quiz question:

An emergency department decides to perform serum troponin testing on all patients with any type of chest complaint. They suspect that the incidence of documented myocardial infarction in this subgroup is only 1%, but they are determined not to miss a single MI. They choose a high-sensitivity troponin assay with a sensitivity of 95% and a specificity of 80%. For one of these patients with a positive troponin, what are the odds of having an MI?

About two-thirds of online respondents selected the correct multiple-choice answer: 1 to 24. In this post, I’d like to delve a bit deeper into how we make calculations like this in clinical practice. Conditional probabilities are hard to calculate in our heads but easy with a branching algorithm or a 2×2 table. For this problem, assuming a cohort of 1000 patients, we can set up the following table:

                 Patients with MI    Patients without MI    Total   Predictive Value
Positive test    9 true positives    198 false positives    207     9/207 = 4%
Negative test    1 false negative    792 true negatives     793     792/793 = 99.9%
Total            10                  990                    1000


The posttest probability of an MI, given a positive troponin result when the test is used relatively indiscriminately, is only 4%. The odds (probability divided by the complementary probability) are 4 to 96. Thus, the odds are 24 to 1 against you if you are betting that your patient is having an MI — not a very good bet.
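
For readers who want to check the arithmetic, here is a minimal Python sketch that rebuilds the table from the three inputs. The cohort of 1000 comes from the setup above; the variable names are mine, and the expected counts are left as fractions (the table rounds them to whole patients):

    prevalence = 0.01                      # pretest probability of MI
    sensitivity, specificity = 0.95, 0.80
    n = 1000                               # notional cohort from the table

    mi = n * prevalence                    # 10 patients with MI
    no_mi = n - mi                         # 990 patients without MI

    true_pos = mi * sensitivity            # 9.5 expected (the table rounds to 9)
    false_neg = mi - true_pos              # 0.5 expected (rounded to 1)
    false_pos = no_mi * (1 - specificity)  # 198 false positives
    true_neg = no_mi - false_pos           # 792 true negatives

    ppv = true_pos / (true_pos + false_pos)  # posttest probability, ~4.6%
    npv = true_neg / (true_neg + false_neg)  # ~99.9%
    print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")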

As the table shows, it’s relatively easy to calculate posttest probability if you work through the numbers. You can also calculate the posttest odds by multiplying the pretest odds by the likelihood ratio, or by simply plugging the numbers into an app on your smartphone. But for individual patients, we rarely do formal calculations. We seem to prefer subjective probabilities, perhaps acknowledging that pretest probability is usually just an estimate and that posttest probability is a number that doesn’t necessarily yield an obvious clinical decision.
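
The likelihood-ratio shortcut can be sketched the same way; again the variable names are mine, and the small gap between the exact odds and the quoted 1 to 24 is just rounding:

    pretest_prob = 0.01
    lr_positive = 0.95 / (1 - 0.80)                   # sensitivity / (1 - specificity) = 4.75

    pretest_odds = pretest_prob / (1 - pretest_prob)  # 1 to 99
    posttest_odds = pretest_odds * lr_positive        # ~0.048, about 1 to 21 exactly

    posttest_prob = posttest_odds / (1 + posttest_odds)
    print(f"posttest probability = {posttest_prob:.1%}")  # ~4.6%
    # Rounding the posttest probability to 4% before forming the odds,
    # as in the table above, yields the quoted 1 to 24.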

According to cognitive psychologists, we tend to eschew formal calculations in favor of a heuristic called anchoring and adjustment, estimating conditional probabilities subjectively instead. We start from an anchor, the pretest probability or base rate, and then adjust it in light of new information. Studies show that decision makers are often over-influenced by the adjustment, forgetting the base rate and misestimating the probability. This fallacy is known as base-rate neglect, and it can lead to poorly designed testing strategies, such as indiscriminate troponin testing.

Let’s imagine that we revise the troponin-testing strategy: before ordering the test, we take a history, perform a physical examination, and obtain an electrocardiogram. We do not order the troponin test for patients with an obvious noncardiac etiology but remain fairly liberal about ordering it for everyone else. We estimate that the sensitivity of this initial clinical screen for identifying patients who need troponin testing is 90% but that, because we cast a wide net, its specificity is only 50%.

With our new strategy of combining an initial evaluation with troponin testing, we reduce the number of false positives from 198 to 99 with essentially no loss of true positives or increase in false negatives. Just by changing the sequence of our workup, talking to the patient first and ordering tests more selectively, we can improve the accuracy of our evaluation.
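
The same kind of sketch works for the two-stage strategy. The screen’s characteristics are the estimates above; the counts are again expected fractions before rounding to whole patients:

    screen_sens, screen_spec = 0.90, 0.50
    trop_sens, trop_spec = 0.95, 0.80

    mi, no_mi = 10, 990                          # same notional cohort of 1000

    mi_tested = mi * screen_sens                 # 9 MI patients go on to troponin testing
    no_mi_tested = no_mi * (1 - screen_spec)     # 495 patients without MI are tested

    true_pos = mi_tested * trop_sens             # ~8.6 expected, about 9 whole patients
    false_pos = no_mi_tested * (1 - trop_spec)   # 99 false positives, down from 198

    ppv = true_pos / (true_pos + false_pos)
    print(f"posttest probability after both stages = {ppv:.0%}")  # roughly 8%, double the 4% above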

As the hard calculations show, indiscriminate testing driven by the desire not to miss a single MI makes no sense. False-positive results frequently lead to diagnostic cascades of further testing, potentially exposing patients to unnecessary radiation and procedure-related risks while increasing costs. Of course, we all know that defensive medicine is the reason for this illogical behavior. Our logic is further distorted by third-party payment incentives that make both clinician and patient insensitive to the monetary costs.

It would be more productive — and less defensive — to see this situation as an opportunity for shared decision making with the patient and the family. But single-event probabilities can be baffling, so some explanation is in order. After all, an individual patient’s chances of having an MI are either 0 or 100%.

Cognitive psychologists tell us that people tend to think about single-event probabilities in two ways: as frequentists or as Bayesians. Frequentists think of a single-event probability as the relative frequency with which the event occurs in a population over time. Bayesian thinkers, on the other hand, view a single-event probability as an expression of belief, conviction, or doubt about the event. Consider weather prediction: when a forecaster reports the chance of rain for tomorrow, he is describing a single event, and we will likely act differently if given a figure of 30% rather than 70%.

Many cognitive psychologists think that the frequentist approach is easier for the mind to grasp. So it is probably best to explain a single-event probability to a patient by describing how often an event would occur in a theoretical population of similar patients. You might use plain language like this: “If I had 100 patients just like you, I would expect 5 to experience a complication.”
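
The translation can even be scripted. This toy helper is entirely my own construction, but it captures the frequency framing:

    def natural_frequency(probability, cohort=100):
        """Phrase a single-event probability as a count in a notional cohort."""
        expected = round(probability * cohort)
        return (f"If I had {cohort} patients just like you, "
                f"I would expect {expected} to experience this outcome.")

    print(natural_frequency(0.05))  # the 5-in-100 example above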

Patients want us to reduce the uncertainty about what they are feeling or fearing. They want clear-cut diagnoses and predictions. Probability is uncertainty quantified. As we convey information to our patients, we must be careful about how we estimate and communicate probability. By calibrating our decision-making strategies with objective calculations of probability, and by remaining mindful of base rates, we can do a better job of diagnosing, informing, and treating patients.

Can you think of ways to reduce base-rate neglect in your practice? Please share them here.
