January 4th, 2011
Study Suggests Large Proportion of ICD Implantations Lack Firm Evidence Base
Study Summary by Larry Husten: An analysis in JAMA of the National Cardiovascular Data Registry (NCDR) shows that a substantial proportion of ICD implantations are not supported by a firm evidence base. Sana Al-Khatib and colleagues examined data from 117,707 patients who received ICDs between January 1, 2006 and June 30, 2009, and found that 22.5% of implantations were not evidence based.
Newly diagnosed heart failure accounted for more than half of the non-evidence-based implantations. People with MI within 40 days or with NYHA Class IV symptoms accounted for most of the other cases.
Compared with patients who received an evidence-based ICD, those who received an ICD without a firm evidence base had higher rates of in-hospital death (0.57% vs 0.18%, p<0.001) and post-procedural complications (3.23% vs 2.41%, p<0.001).
Electrophysiologists had a lower rate of non-evidence-based implants than other physicians. Patients who received non-evidence-based ICDs were older, sicker, more likely to belong to a racial minority (other than black), and were more likely to receive a dual-chamber ICD.
The authors comment that although the absolute difference in complications between the two groups was “modest, these complications could have significant effects” on the patients. “Importantly, these complications resulted from procedures that were not clearly indicated in the first place. While a small risk of complications is acceptable when a procedure has been shown to improve outcomes, no risk is acceptable if a procedure has no demonstrated benefit.”
In an accompanying editorial, Alan Kadish and Jeffrey Goldberger write that the findings “should be used to inform public health policies toward the appropriate use of this life-saving but expensive technology.”
Commentary by John Spertus:
The paper in this week’s JAMA by Al-Khatib and colleagues analyzed the NCDR ICD registry to describe the prevalence, inter-hospital and inter-specialty variability, and in-hospital outcomes associated with non-evidence-based implantation of ICDs. Their finding that almost a quarter of ICD implantations are not evidence based, with tremendous site variability, is very important. The accompanying editorial notes several potential challenges with the study, which I will not repeat, but I think there are several additional key issues that the editorialists did not note and that need to be emphasized.
First, this type of analysis is an enormous and important extension of previous work from procedural registries, such as those of the NCDR. Procedural registries have traditionally focused on the complications associated with treatment. A major goal for improving the quality of cardiac care is to shift our focus from “was the procedure done well?” to “did we do the procedure in the right patient?” The ACC has started to move in this direction with the creation of Appropriate Use Criteria, but those do not yet exist for ICDs. Therefore, the authors examined whether there was evidence to support implantation in certain subsets of patients.
Finding that almost 1 in 4 procedures were done for patients without evidence to support their use is impressive and concerning. However, I believe that there are two types of circumstances in which evidence doesn’t support the use of a treatment. The first is when those patients were not included in the clinical trials. As physicians, we recognize that clinical trials have extensive inclusion and exclusion criteria (to minimize the sample size and cost of a study) and use our judgment to extend the findings to patients we clinically feel are similarly likely to benefit. For example, the vast majority of initial DES studies were performed in patients with single-vessel coronary disease, yet we all feel comfortable using them in patients with multi-vessel disease. A much more concerning scenario, however, is when such patients were included in a study and no benefit was observed. For example, the DINAMIT investigators found no benefit from ICD implantation within 40 days of an MI. Yet this was the second most common “non-evidence-based” indication for ICD implantation in this study. Providing an expensive and risky treatment to patients in whom we know there is no benefit makes any complication or risk from that treatment unjustifiable. Eradicating this practice deserves a major effort so that we can systematically provide safer, more cost-effective care to our patients.
Second, the authors note that the practice of implanting “non-evidence-based” devices has changed little, even though the NCDR ICD registry reports to sites their rates of these treatment patterns. Al-Khatib and colleagues note that “Via quarterly reports, the NCDR shares data with participating hospital sites on their rates of approved indications for primary prevention ICD implantations…” To me, this is the most concerning finding of the study. Although the authors suggest that more education is needed to minimize the occurrence of “non-evidence-based” implantations, such educational interventions are known to be very weak. In fact, the NCDR has already gone beyond this by providing sites with their own performance data. My question is: what are the sites doing with these data? Knowing what your hospital is doing is a phenomenal opportunity, yet it has had no apparent impact on practice. I believe that the key challenge raised by this article is that we need to develop better methods for using these data about our selection practices. Do we need to start reporting at the individual operator level? Do we need to start holding physicians accountable? Should there be a prospective worksheet documenting why a physician chooses to defy the evidence and treat a patient? Should there be mandatory “second opinions” prior to treatment to minimize this practice in the future? These are key issues facing our profession, and we need to start addressing them.
Finally, the authors have made much of the lower rates of “non-evidence-based” implantations by electrophysiologists. Personally, I believe that this is a distraction. The absolute difference in rates between specialties is ~4%, except for cardiothoracic surgeons, who had a higher rate. Yet 1 in 5 of electrophysiologists’ cases was done in the absence of evidence! This underscores that all providers have a huge opportunity to improve. In fact, because electrophysiologists implant the most devices, the actual number of “non-evidence-based” devices they implanted (about 15,475) exceeds the combined number placed by non-electrophysiologist cardiologists (about 6,870), cardiothoracic surgeons (about 1,048), and other clinicians (about 1,697). No specialty is immune from the opportunity to improve, and we all need to work collaboratively to improve the selection of patients for invasive treatments so that we can improve the quality of care, at the lowest possible cost, for all of our patients.
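As a back-of-the-envelope check on the volume argument above, the approximate specialty counts quoted in the commentary can be tallied directly. The figures below are the commentary’s rounded estimates, not exact registry numbers:

```python
# Approximate non-evidence-based ICD implant counts by specialty,
# as quoted in the commentary above (rounded estimates).
counts = {
    "electrophysiologists": 15_475,
    "non-EP cardiologists": 6_870,
    "cardiothoracic surgeons": 1_048,
    "other clinicians": 1_697,
}

# Combined total for all non-electrophysiologist providers.
non_ep_total = sum(v for k, v in counts.items() if k != "electrophysiologists")
print(f"Non-EP combined: {non_ep_total}")  # → 9615

# Electrophysiologists alone account for more implants than everyone else combined.
ep_total = counts["electrophysiologists"]
print(f"Electrophysiologists: {ep_total}")
print(f"EP share of these implants: {ep_total / sum(counts.values()):.1%}")
```

The tally makes the point concrete: although electrophysiologists had the lowest *rate* of non-evidence-based implants, their absolute *count* exceeds that of all other specialties combined, simply because they perform the bulk of implantations.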