January 4th, 2011

Study Suggests Large Proportion of ICD Implantations Lack Firm Evidence Base

Study Summary by Larry Husten: An analysis in JAMA of the National Cardiovascular Data Registry (NCDR) shows that a substantial proportion of ICD implantations are not supported by a firm evidence base. Sana Al-Khatib and colleagues examined data from 111,707 patients who received ICDs between January 1, 2006, and June 30, 2009, and found that 22.5% of implantations were not evidence based.

Newly diagnosed heart failure accounted for more than half of the non-evidence-based implantations. Patients with an MI within the previous 40 days or with NYHA Class IV symptoms accounted for most of the other cases.

Patients who received an ICD without a firm evidence base had a higher rate of in-hospital death than those who received an evidence-based ICD (0.57% vs 0.18%, p<0.001), as well as a higher rate of post-procedural complications (3.23% vs 2.41%, p<0.001).

Electrophysiologists had a lower rate of non-evidence-based implants than other physicians. Patients who received non-evidence-based ICDs were older, sicker, more likely to belong to a racial minority (other than black), and were more likely to receive a dual-chamber ICD.

The authors comment that although the absolute difference in complications between the two groups was “modest, these complications could have significant effects” on the patients. “Importantly, these complications resulted from procedures that were not clearly indicated in the first place. While a small risk of complications is acceptable when a procedure has been shown to improve outcomes, no risk is acceptable if a procedure has no demonstrated benefit.”

In an accompanying editorial, Alan Kadish and Jeffrey Goldberger write that the findings “should be used to inform public health policies toward the appropriate use of this life-saving but expensive technology.”

Commentary by John Spertus:

The paper in this week’s JAMA by Al-Khatib and colleagues analyzed the NCDR ICD registry to describe the prevalence, inter-hospital and inter-specialty variability, and in-hospital outcomes associated with non-evidence-based implantation of ICDs. Their finding that almost a quarter of ICD implantations are not evidence based, with tremendous site variability, is very important. The accompanying editorial notes several potential challenges with the study – which I will not repeat – but there are several additional key issues, not noted by the editorialists, that deserve emphasis.

First, this type of analysis is an important extension of previous work from procedural registries, such as those of the NCDR, which have traditionally focused upon the complications associated with treatment. A central goal in improving the quality of care for cardiac disease is to shift our focus from “was the procedure done well?” to “did we do the procedure in the right patient?” The ACC has started to move in this direction with the creation of Appropriate Use Criteria, but these do not yet exist for ICDs. Therefore, the authors examined whether there was evidence to support implantation in certain subsets of patients.

Finding that almost 1 in 4 procedures were done in patients without evidence to support their use is impressive and concerning. However, I believe that there are two types of circumstances in which evidence doesn’t support the use of a treatment. The first is when those patients were not included in the clinical trials. As physicians, we recognize that clinical trials have extensive inclusion and exclusion criteria (to minimize the sample size and costs of a study), and we use our judgment to extend the findings to patients we clinically feel are similarly likely to benefit. For example, the vast majority of initial DES studies were performed in patients with single-vessel coronary disease, and we all feel comfortable using these devices in patients with multi-vessel disease. A much more concerning scenario, however, is when patients were included in a study and no benefit was observed. For example, the DINAMIT investigators found no benefit from ICD implantation within 40 days of an MI. Yet this was the second most common “non-evidence-based” indication for ICD implantation in this study. Providing an expensive and risky treatment to patients in whom we know there is no benefit makes any complication or risk from the treatment unjustifiable. This practice deserves a major effort at eradication so that we can systematically provide safer, more cost-effective care to our patients.

Second, the authors note that the practice of implanting “non-evidence-based” devices has changed little, even though the NCDR ICD registry provides reports to sites about their rates of these treatment patterns. Al-Khatib and colleagues note that “Via quarterly reports, the NCDR shares data with participating hospital sites on their rates of approved indications for primary prevention ICD implantations…” To me, this is the most concerning finding of the study. Although the authors suggest that more education is needed to minimize the occurrence of “non-evidence-based” implantations, such educational interventions are known to be very weak. In fact, the NCDR has gone beyond this by already providing sites with their own performance data. My question is: what are the sites doing with these data? Knowing what your hospital is doing is a phenomenal opportunity, yet it has had no apparent impact on practice. I believe that the key challenge raised by this article is that we need to develop better methods for using these data about our selection practices. Do we need to start reporting at the individual-operator level? Do we need to start holding physicians accountable? Should there be a prospective worksheet documenting why a physician chooses to defy the evidence and treat a patient? Should there be mandatory “secondary opinions” prior to treatment to minimize this practice in the future? These are key issues facing our profession, and we need to start addressing them.

Finally, the authors have made much of the lower rates of “non-evidence-based” implantations by electrophysiologists. Personally, I believe that this is a distraction. The absolute difference in rates between specialties is ~4%, except for cardiothoracic surgeons, who had a higher rate. Yet 1 in 5 of electrophysiologists’ cases were done in the absence of evidence! This underscores that all providers have a huge opportunity to improve. In fact, the actual number of “non-evidence-based” devices implanted by electrophysiologists (about 15,475) exceeds the number of “non-evidence-based” devices placed by non-electrophysiologist cardiologists (about 6,870), cardiothoracic surgeons (about 1,048), and other clinicians (about 1,697) combined. No specialty is immune from the opportunity to improve, and we all need to work collaboratively to improve the selection of patients for invasive treatments so that we can improve the quality of care, at the least possible cost, for all of our patients.

8 Responses to “Study Suggests Large Proportion of ICD Implantations Lack Firm Evidence Base”

  1. Outstanding comments by John. I especially like the way he thinks about the three points: the value of the registry analysis for research; the failure of feedback to change practice; and the lack of evidence (despite the framing in the paper) that electrophysiologists are doing so much better than the others.

    I wrote some thoughts for the general public… if you are interested it is at http://tinyurl.com/3xoaaqm

    and then a lot in the news…

    ABC World News (1/4, story 6, 2:20, Sawyer) reported that research published in the Journal of the American Medical Association is raising “questions for some of the thousands of people who have” an implantable cardioverter-defibrillator (ICD).
    The Los Angeles Times (1/5, Maugh) reports that “more than 1 in 5 patients who receive an implantable defibrillator to prevent sudden death fall outside guidelines for the use of such devices and have about three times the risk of dying while hospitalized for the procedure as those who receive it within the guidelines.” Investigators looked at data from a “registry of implants maintained by the American College of Cardiology that covers an estimated 95% of all US implants.” The researchers “studied 111,707 implants for what is called primary prevention, performed between January 2006 and June 2009.”
    The New York Times (1/5, A12, Grady) reports that the investigators “were surprised to find that more than 25,000 people — 22.5 percent of all those who got defibrillators — did not match the guidelines.” The data also indicated that “blacks and Hispanics were more likely than whites to get defibrillators they probably did not need. At many centers, more than 40 percent of the devices went to patients outside the guidelines.”
    The AP (1/5, Johnson) reports that the investigators found that “the patients who got implants according to guidelines were less likely to die in the hospital than the patients whose surgeries clearly fell outside the guidelines.”
    The Wall Street Journal (1/5, Kamp, subscription required) reports that according to Ralph Brindis, president of the American College of Cardiology, the research will “have major implications for physicians and hospitals in their evaluation of their practice patterns.”
    CNN (1/5, Rice, Falco) quotes Dr. Brindis as saying “The study indicates that there are substantial variations among hospital ICD implantation strategies.” He added, “This variation clearly demonstrates an opportunity for improvement in care.”
    Bloomberg News (1/5, Cortez), AFP (1/5) and Modern Healthcare (1/5, Vesely, subscription required) cover the story, as did HealthDay (1/4, Reinberg), MedPage Today (1/4, Phend), WebMD (1/4, Goodman), and HeartWire (1/4, Miller).

    This is sure to get the attention of policymakers.

  2. Excellent commentary by Drs. Spertus and Krumholz. As noted, it is somewhat disturbing to think that the data were reported back to the institutions with little effect on behavior. I believe that individual provider data, sent to the provider personally, will be the level of granularity necessary to alter behavior. In the classic Deming model, we may have failed to “close the loop” and educate providers about their practice characteristics. (And I agree that it has yet to be proven that such feedback, on the scale we are discussing here, works.) It is important to remember that the data are truly ours, skillfully generated, and we cannot deny their import. We need not fear the data but certainly should act upon them by evaluating every aspect of “inappropriate use” to determine the root causes of such practice. For instance, are physicians fearful of delaying ICD implantation in a patient with a 25% ejection fraction for tort or other reasons? The press response to the data will be painful but should not deter us from a careful, thoughtful analysis of the registry data.


  3. Rahul Bhardwaj, MD says:

    I’m still in training and don’t fully understand how reimbursement works. How are these devices that are placed in inappropriate patients being paid for? I had thought that these large trials and carefully developed guidelines cover a large portion of patients, though certainly not all, so that we can provide good care and do the right thing for our patients.

    The results of this study of the NCDR ICD registry are disappointing. I prefer to think that improper ICD placements are due to ignorance of the indications rather than greed, and hopefully we will see more evidence-based practice in the future with better education or, as Dr. Spertus suggested, a prospective study of why physicians choose to defy guidelines that can explain these results. These devices are not without risk and are very expensive. If more data like these come out, I wouldn’t be surprised to see stricter governmental regulation of how we practice medicine, due to our own failure to ensure cost efficiency and good, evidence-based care. Primum non nocere.


  4. The JAMA article by Al-Khatib and colleagues is eye-opening but not especially surprising. It illustrates the disconnect that often occurs between trial results, guideline recommendations, and performance measures. I concur with the comments by Drs. Spertus and Krumholz.

    With regard to ICDs and CRT-D devices, the major trials that influenced the guidelines on the use of ICDs for primary prevention (SCD-HeFT, MADIT-II) generally enrolled symptomatic heart failure outpatients with heart failure of at least 3 months’ duration and EFs ≤30% or ≤35% who were on optimal medical therapy (beta-blockers and ACE inhibitors).

    The key SCD-HeFT inclusion criteria are listed below:

    1. Symptomatic CHF (NYHA class II and III) due to ischemic or non-ischemic dilated cardiomyopathy.

    2. LVEF ≤35% as measured by nuclear imaging, echocardiography or catheterization within 6 mos of enrollment.

    3. At least 18 years of age; no upper age limitation.

    4. CHF present for at least three months prior to randomization, and ACEI and beta-blocker therapy at optimal doses unless the patient does not tolerate these agents.

    Of particular note is that in SCD-HeFT, there was no improvement in survival during the first year after ICD implantation. Certainly it is quite reasonable to ask “What’s the rush?” in these patients.

    In this regard, it is notable that the current AHA Get-With-The-Guidelines performance measures include CRT-D implantation or scheduled implantation AT THE TIME OF DISCHARGE, with no mention of the duration of heart failure or a period of optimal medical therapy prior to the procedure. Improvement, and not infrequently normalization, of LV function often occurs over a period of months with the initiation of beta-blockers, ACE inhibitors and aldosterone blockers, after which many such patients will not only no longer meet the implantation criteria but will be at quite low risk for sudden cardiac death.

    In other words, a “quality improvement performance measure” is inconsistent with both the trial evidence and the guideline recommendations. Is it any wonder that patients are receiving devices that are not evidence-based? It’s time to ask ourselves “What’s going on here?”


  5. While I do not doubt that a too-high proportion of ICDs are placed outside the realm of evidence-based indications, I think this study likely exaggerates the problem. It seems to have been written with a larger focus on the lay media interpretation (which was entirely predictable) and a lesser focus on the scientific community, and it paints a worst-case scenario. A few points:
    1) To my knowledge, of the “non-evidence-based” indications studied, only placement of the ICD early after MI has been associated with harm.
    2) The major driver of inappropriate placement was related to implant early after HF diagnosis or implant in class IV patients. Some of the class IV patients likely were chronically class II or III and may have been appropriate candidates. More importantly, decisions about timing of implant in heart failure (outside of recent MI) are not evidence based, and I think many reasonable physicians would consider implanting in a patient who has not been followed for 3 months, but who has supportive evidence of chronicity of disease.
    3) The most interesting data from the study are not the absolute rates but rather the wide between-center variation. However, we should not assume that the centers with low rates of non-evidence-based care were practicing the highest-quality care. This paper focuses exclusively on “sins of commission,” and it could be argued that, given the benefits of ICDs, “sins of omission” are much greater breaches of quality. It may be that the sites with low non-evidence-based use also implant less often in patients with indications.

  6. Robin Motz, M.D., Ph.D. says:

    What is the medical reason that those who did not “need” an ICD suffered more procedural complications? If the study were properly randomized, you might suspect that this result is an outlier, and if that result is an outlier, why not the conclusion of the article? Something doesn’t fit statistically: it might be that the data were analyzed after the fact, which would affect the true “n” of the substudy and therefore broaden the gap before statistical significance is reached.

  7. Robin Motz, M.D., Ph.D. says:

    Also, the possibility that the ICD benefited some patients who did not fit the inclusion criteria is not properly analyzed in the article.

  8. I think that Robin raises an interesting point. However, I would note that the guidelines-based indications are where there is clear and demonstrable benefit, in excess of risks. If the peri-procedural risks of non-evidence-based implantations are higher, then the risk/benefit ratio is likely to be even worse in these patients.
    I agree with James that the variability is fascinating. I think it is very healthy for our community to constantly analyze our practice patterns. If we feel that there is important benefit in non-evidence-based implantations, then we should study them much more carefully. If we don’t, then we have lost a great opportunity to continue to improve our practice. Importantly, this week’s JACC demonstrates high rates of inappropriate shocks, and if we aren’t tailoring our therapy to those with clear, substantial benefit, we may be adversely impacting their quality of life – something that should concern us all. Thus, I stand by my original observations and encourage us to seize these data as an opportunity to critically review our practice and improve – or at least scrutinize our decisions more closely and clearly articulate to our patients and colleagues why we are deviating from the guideline recommendations.