March 15th, 2013

How Good Is Your Eye for Coronary Angiography?


John Ryan interviews Brahmajee Nallamothu, lead author of a new study published in Circulation, comparing the accuracy of visually interpreted coronary angiography with that of quantitative coronary angiography (QCA). Nallamothu discusses the study’s implications for clinical practice.

THE STUDY

Researchers randomly selected 175 patients undergoing PCI at one of seven hospitals in 2011. Percent diameter stenosis as interpreted by visual assessment was compared with percent diameter stenosis determined by QCA. Of 216 lesions, 213 (98.6%) were interpreted to have ≥70% stenosis by visual assessment. QCA identified 56 of those 213 lesions (26.3%) as having <70% stenosis. Mean percent diameter stenosis was 8.2% higher when assessed clinically than when measured by QCA.
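
For readers who want to see how figures like these are derived, here is a minimal sketch in Python of comparing paired visual and QCA readings of percent diameter stenosis. The lesion values are hypothetical, chosen only to illustrate the calculations; they are not the study's data.

```python
# Illustrative sketch (hypothetical data, not the study's): comparing
# visually estimated percent diameter stenosis with QCA measurements
# for the same lesions.

# Paired readings per lesion: (visual estimate %, QCA measurement %)
lesions = [(80, 68), (70, 72), (90, 85), (70, 61), (80, 74)]

# Lesions read as >=70% by visual assessment
visual_ge70 = [(v, q) for v, q in lesions if v >= 70]

# Of those, how many does QCA place below 70%?
discordant = [(v, q) for v, q in visual_ge70 if q < 70]
discordance_rate = len(discordant) / len(visual_ge70)

# Mean difference between visual and QCA percent diameter stenosis
mean_diff = sum(v - q for v, q in lesions) / len(lesions)

print(f"Discordant among visually >=70% lesions: {discordance_rate:.1%}")
print(f"Mean visual-minus-QCA difference: {mean_diff:.1f} percentage points")
```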

THE AUTHOR RESPONDS

Ryan: Of the lesions deemed to be ≥70% by clinical assessment, more than a quarter were found to be <70% by QCA. This seems like a large number. Did it surprise you?

Nallamothu: It did surprise me a bit, but I’m not exactly sure what we should have expected. Part of the reason this study was so exciting to do was that no one had really looked broadly at this question in several years. However, it is important to emphasize that although we saw this difference, it is unclear how it would have changed a clinical decision or outcome. That remains to be seen.

Ryan: On average, clinical interpretation was 8.2% higher than the lesions’ percent stenosis on QCA. How representative of U.S. cardiology practices do you think these findings are?

Nallamothu: I am not sure. These hospitals were pretty special. The very fact that they were engaged and interested in exploring this aspect of quality separates them from most institutions in my mind. They all had great leadership in their cath labs, so we were able to work closely with the people there.

Ryan: No lesion was calculated to be <50% on QCA. With all the newspaper reports of cardiologists performing inappropriate interventions, did this surprise you? It certainly seems reassuring.

Nallamothu: It didn’t surprise me, but it should reassure others. I understand the concerns that have been raised, but I strongly believe that these reports are the rare exception and do not reflect the practices of the vast majority of interventional cardiologists who are trying to do their best for their patients each day.

Ryan: Most of the reported values on clinical interpretation were divisible by 10 (e.g., 60%, 70%, 80%), whereas QCA gives a more precise number. Should cardiac catheterization labs switch to QCA for all their angiograms to avoid this rounding? How would such a change affect outcomes, if at all?

Nallamothu: I’m not sure if turning an interventional cardiologist into a “human QCA machine” (this is Dave Cohen’s expression) is necessarily the answer. I think the real opportunity is to understand why and when discrepancies may occur that would change clinical decision making and outcomes. For example, we have thought about how best to give feedback to hospitals about cases of substantial discrepancy between the clinical interpretation and QCA. In those instances, the hospital and operators could review the case more fully. It might turn out that the clinical decision was completely appropriate and in the patient’s best interest. It also may be that additional studies could have been of value – perhaps fractional flow reserve or stress testing – before deciding to perform PCI. Those are the next steps that we have to consider carefully. Finally, this issue isn’t just important for interventional cardiologists. The reliability of diagnostic-test interpretation also matters in other areas of cardiology, such as echocardiography and nuclear medicine, as well as internal medicine more generally.
