December 8th, 2011
How Randomness Affects Quality of Care
Each month I meet with administrators at my hospital to review the quality of our cardiology program. At several meetings I’ve complained that our performance thresholds are too high and fail to account for the random variation that is a part of everyday medicine. My administrators don’t want excuses, though — they aim for perfection. But a discussion at our last meeting about door-to-balloon times for STEMI patients changed their minds.
Last month a STEMI patient presented to the ER and underwent a timely PCI. During the procedure, a second STEMI patient arrived. Plans to quickly finish and transfer the first patient to the ICU were thwarted because the ICU was full. The team improvised, taking the first patient back to the ER to make room for the second patient in the catheterization laboratory. Despite fast and creative action, the second patient had a door-to-balloon time of 118 minutes, exceeding the 90-minute target.
So how were we to respond to this apparent lapse in quality? As we discussed the situation in our meeting, the answer became clear: No corrective action was required. The team had acted admirably. The lapse was due not to a systems defect but to the uncommon, unpredictable presentation of two STEMI patients simultaneously. The administrators realized how randomness affects our measured outcomes.
Cognitive psychologists tell us that our minds are hardwired to try to find causes for occurrences. We often jump to erroneous conclusions by ascribing unwarranted explanations to events that happen randomly. Intuition can sometimes be helpful, as I’ve discussed in previous blog posts, but it can also lead us astray. Apparently, we evolved the tendency to jump to conclusions long before statistics and probability theory explained random variation.
The quality gurus Walter Shewhart and W. Edwards Deming recognized this when they distinguished "chance causes" from "assignable causes" of variation. Deming later renamed these "common causes" and "special causes": common-cause variation is the noise inherent to the system itself, while special-cause variation can be traced to a specific, correctable source. Both men recognized that overreacting to noise in measurement could lead to wasted quality-improvement efforts or, worse, poor staff morale and even an atmosphere of fear.
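Shewhart's distinction can be made concrete with a control chart. The sketch below is a minimal, hypothetical p-chart for monthly door-to-balloon compliance: only a month falling outside the 3-sigma control limits is flagged for investigation as a possible special cause, while everything inside the limits is treated as chance variation. The monthly counts are invented purely for illustration.

```python
# Minimal p-chart sketch (Shewhart-style): flag a month only if its
# compliance rate falls outside 3-sigma control limits.
# The (on_time, total) counts below are invented for illustration.
import math

monthly = [(4, 4), (3, 4), (5, 5), (4, 5), (2, 4), (5, 5)]  # (on-time, total STEMIs)

# Center line: overall compliance rate across all months
p_bar = sum(on for on, _ in monthly) / sum(n for _, n in monthly)

for month, (on, n) in enumerate(monthly, start=1):
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)   # binomial std. error for this month's n
    lcl = max(0.0, p_bar - 3 * sigma)            # lower control limit
    ucl = min(1.0, p_bar + 3 * sigma)            # upper control limit
    p = on / n
    verdict = "chance variation" if lcl <= p <= ucl else "investigate"
    print(f"month {month}: {p:.0%} ({verdict})")
```

With only a handful of cases per month, the limits are wide: even the month with 50% compliance sits inside them, so Shewhart's rule would (correctly) not trigger a hunt for a special cause.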
Pay-for-performance plans and co-management agreements often have quality thresholds that fail to build in an allowance for the play of chance. Quality thresholds for door-to-balloon time are sometimes aggressively set at 96%. But most hospitals have only about 50 door-to-balloon opportunities per year, so missing even three cases by chance, as in the example above, drops the observed rate below the 96% threshold. It's great to set lofty goals, but when you turn goals into reward thresholds, missing the threshold because of chance only discourages diligent practitioners from trying to provide quality care.
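The size of this chance effect is easy to quantify with a simple binomial model (my own illustration, not from any pay-for-performance contract): assume each of 50 annual cases independently meets the 90-minute target with some true underlying probability, and ask how often the observed yearly rate clears 96%.

```python
# How often does a hospital whose TRUE compliance exceeds the threshold
# still miss it by chance? Simple binomial model, stdlib only.
import math

def prob_meets_threshold(n, true_rate, threshold):
    """P(observed compliance >= threshold) when each of n cases
    independently succeeds with probability true_rate."""
    need = math.ceil(threshold * n)  # minimum on-time cases required
    return sum(math.comb(n, k) * true_rate**k * (1 - true_rate)**(n - k)
               for k in range(need, n + 1))

p = prob_meets_threshold(50, 0.97, 0.96)
print(f"Chance a truly-97% hospital clears the 96% threshold: {p:.0%}")
```

Under these assumptions, a hospital whose true performance is 97%, comfortably above the threshold, still falls short of 96% in a given year roughly one time in five, purely by chance.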
Failure to recognize randomness can also affect individual performance. Experts in any field gain experience by using intuition to recognize regularities in their environment. Use of objective measurement and feedback is ideal, but much of what constitutes experience is gained through subjective self-monitoring and observation over time. I suspect that the experts who seem to "get it" are those who can filter out the noise of random variation and focus on the signal, which enables them to gain meaningful experience.
We should remain vigilant for trends and patterns in medicine that provide opportunities to improve the quality of care. But clinical medicine also has a Brownian-type motion that is unpredictable and uncontrollable because of randomness. Knowing that our natural tendency is to jump to conclusions about causation, we should remember to account for chance and the inherent randomness of events as we monitor our individual and institutional performance.