March 26th, 2013
Controversial NIH Chelation Trial Published in JAMA
Larry Husten, PhD
Final results of the troubled NIH-sponsored Trial to Assess Chelation Therapy (TACT), which tested chelation therapy in patients with coronary disease, have now been published in JAMA. Last November, when the preliminary results were presented at the American Heart Association meeting, the positive finding in favor of chelation therapy surprised many observers, though the investigators and senior AHA representatives expressed considerable caution about the proper interpretation of the results. Full publication of the main results should now allow for a more thorough consideration of the trial.
TACT was initially funded by the NIH more than a decade ago to test chelation therapy with EDTA, an alternative-medicine therapy received by more than 100,000 people every year despite the absence of a supporting evidence base. The highly controversial trial was temporarily suspended in 2008 in response to ethical concerns but was then allowed to resume. The trial was also hampered by slow enrollment, which eventually forced a downsizing of the trial population. To maintain the trial’s power to achieve a meaningful result, the follow-up time was increased. (Because of this change, and because the data and safety monitoring board reviewed the data multiple times over the course of the study, the threshold for statistical significance was lowered to 0.036.)
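The lowered threshold reflects standard group-sequential practice: each interim look at the data by the monitoring board "spends" some of the overall 0.05 alpha, so the final analysis must clear a stricter bar. A minimal sketch using a deliberately conservative Bonferroni-style split (illustrative only; the trial's actual 0.036 threshold came from a formal alpha-spending method, which this toy calculation does not reproduce):

```python
# Toy illustration of why interim analyses lower the final significance
# threshold. With k looks at the data, a simple (very conservative)
# Bonferroni split divides the overall alpha evenly across looks.
# Real trials use alpha-spending functions (e.g., O'Brien-Fleming),
# which leave more alpha for the final analysis than this split does.

OVERALL_ALPHA = 0.05

def bonferroni_per_look(alpha: float, looks: int) -> float:
    """Per-look threshold if alpha is split evenly across all analyses."""
    return alpha / looks

for k in (1, 2, 5):
    print(f"{k} look(s): per-look threshold = {bonferroni_per_look(OVERALL_ALPHA, k)}")
```

The point is only that repeated looks make the final threshold stricter than 0.05; the exact value depends on the spending function chosen.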
TACT was a double-blind study comparing active chelation infusions with placebo infusions in 1,708 stable patients with a history of MI. The primary endpoint of the trial, the composite of death, MI, stroke, coronary revascularization, or hospitalization for angina, occurred significantly less often in the chelation group:
- 26% in the chelation group versus 30% in the placebo group (HR 0.82, CI 0.69-0.99, p=0.035)
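As a rough sanity check on the headline numbers, here is a crude two-proportion comparison (assuming roughly 854 patients per arm; note that the trial's p=0.035 comes from a time-to-event analysis, which uses follow-up time and is more sensitive than this back-of-envelope test):

```python
import math

# Back-of-envelope two-proportion z-test on the primary-endpoint rates.
# Assumes an even split of the 1,708 patients (854 per arm); the published
# analysis was a log-rank test on time-to-event data, so the p-values differ.
n = 854
p_chel, p_plac = 0.26, 0.30

pooled = (p_chel + p_plac) / 2               # pooled rate (equal arm sizes)
se = math.sqrt(pooled * (1 - pooled) * 2 / n)  # standard error of the difference
z = (p_plac - p_chel) / se
p_two_sided = math.erfc(z / math.sqrt(2))      # two-sided normal p-value

print(f"z = {z:.2f}, p = {p_two_sided:.3f}")
```

The crude test lands near, but not below, 0.05, which underscores how marginal the primary result is and why the time-to-event analysis and the 0.036 threshold matter to its interpretation.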
There were no significant differences in the individual components of the endpoint. Two subgroups — patients with diabetes and patients with anterior MI — appeared to benefit most from chelation therapy:
- Diabetes: HR 0.61, CI 0.45-0.83
- Anterior MI: HR 0.63, CI 0.47-0.86
The authors concluded that chelation therapy “modestly reduced the risk of adverse cardiovascular outcomes, many of which were revascularization procedures.” But they cautioned against using the results as an endorsement of the clinical use of chelation therapy: “These results provide evidence to guide further research but are not sufficient to support the routine use of chelation therapy for treatment of patients who have had an MI.”
Although the TACT trial has been, and may well continue to be, the subject of intense controversy, so far the medical community has circled the wagons and been in substantial agreement about its practical implication. The AHA, the NHLBI, and two JAMA editorials (by the JAMA editors and Steve Nissen) have all expressed agreement with the authors that the results do not support the clinical use of chelation therapy. Writes Nissen: “the findings of TACT should not be used as a justification for increased use of this controversial therapy.”
The editorial by the JAMA editors is itself evidence of the extraordinary sensitivity of the TACT trial. In a highly unusual move, the JAMA editors discuss their detailed review of TACT and explain their decision to publish the trial. Although they acknowledge multiple limitations of the trial, they defend its value: “reports of rigorous investigations should not be censored because of preexisting ideological positions,” they write.
In his editorial, Steve Nissen agrees with the JAMA editors’ decision to publish the trial but issues a fierce indictment of the trial and its conduct. The TACT paper, Nissen writes, “represents a situation in which many important limitations in the design and execution of a clinical trial compromise the reliability of the study and render the results difficult to interpret. Unfortunately, the efforts of these investigators fell short of the minimum level of quality necessary to adequately answer the question they sought to investigate.”
Nissen questions the reliability of the trial because “more than 60% of patients were randomized at enrolling centers described as complementary and alternative medicine sites.” “Whether a high-quality RCT can be performed at such sites is questionable,” he writes. For Nissen, the trial’s major flaw is that 18% of patients were lost to follow-up, most because they withdrew consent; withdrawal occurred significantly more often among placebo patients than chelation patients (174 versus 115, HR 0.66, p=0.001).
Because withdrawals generally occur more often in the active-treatment group as a result of toxicity or adverse effects, Nissen speculates that “a logical explanation is unmasking of treatment assignments. If either the investigators or the patients knew who was receiving chelation, patients assigned to the placebo group would likely be influenced to withdraw or stop study treatment, particularly when some investigators were advocates for chelation therapy.” This “substantial nonretention of study participants,” he writes, “is sufficient to compromise the validity of the study results.”
TACT investigator Daniel Mark provided CardioBrief with the following detailed response to Nissen’s criticism. (Nissen declined to respond to Mark.)
In his editorial, Dr. Nissen asserts that the “logical” explanation for the greater withdrawals in the placebo group is that patients were unblinded. He further implies that the CAM sites were more likely to be responsible for such unmasking.
His editorial is written from the perspective of someone who is absolutely sure that the trial results are wrong and his mission is to identify where the flaws originate.
Unfortunately, this perspective has blinded him to a more nuanced consideration of the extensive evidence presented in the paper and the appendices.
There is absolutely no indication in the data that placebo patients knew that they were not getting the active therapy. There is no indication in the data that CAM sites were unmasking their patients as Dr. Nissen implies. The AHJ 2012 TACT design paper describes the extensive lengths to which the TACT leadership went to ensure that unblinding of patients and investigators would not occur.
And most importantly, the differential loss of placebo patients would most likely introduce a conservative bias in our results. In other words, if these placebo patients had stayed in the trial and had events, the difference in favor of EDTA might have been even stronger than it was. Sensitivity analyses in the appendix (Table 8) show that the results are robust under a wide variety of reasonable assumptions about the outcomes of the withdrawals.
The idea that the CAM sites were somehow able to tip the results of TACT to favor the EDTA arm is at odds with the data in Figure 3 of the paper, which shows that the effect size for EDTA therapy was actually larger in the non-CAM sites (HR 0.72) than in the CAM sites (HR 0.89).
So basically Dr. Nissen has made assertions that are not empirically supported by anything in the data. The “logic” he sees is the logic of his own beliefs. And the most likely effect of the biases he asserts would actually strengthen not weaken the results of the study.
Just came across Dr. Krumholz’s excellent take on this trial in Forbes (http://www.forbes.com/sites/harlankrumholz/2013/03/27/chelation-therapy-what-to-do-with-inconvenient-evidence/).
The emotional response to this trial has been fascinating. Far more modest findings from an antiplatelet trial, for example, would have been greeted with great pomp and splendor. Advertisements for the medication would be all over the pages of the next issue of every major medical journal, and hospitals would be switching to the new medication.
But for the TACT trial, we get a scathing editorial by Dr. Nissen that questions the abilities of the investigators.
Is it because much of the research in medicine is done within ‘safe zones’ and serves only to provide data for our biases? A recent article in The Atlantic makes this argument and cites a classic paper on this topic by Dr. Ioannidis.
Perhaps the reaction to these findings highlights a resistance to change within the medical community that has kept the pace of innovation far slower than in other fields.
Atlantic Article: http://www.theatlantic.com/health/archive/2013/03/how-health-research-misdirects-us/274203/?fb_action_ids=10151589364377269&fb_action_types=og.recommends&fb_source=aggregation&fb_aggregation_id=288381481237582
Ioannidis article: http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124
I must say that I will be in the group that is very skeptical. There is no reason to expect the therapy to work—no real hypothesis—only ideas used by altmed folks to justify a study of a strange and profitable practice they were already performing.
The ethics of the trial were more than questionable (see Kimball Atwood’s excellent piece; search PubMed for it).
When a drug company sponsors a study, we always take that into account in our interpretation; potential conflicts of interest are important. With many CV studies we encounter squishy data, squishy endpoints, and COIs. But this study is different: there was no plausible hypothesis to begin with; many of the centers had a dog in the hunt, since they were already profiting from the practice; and given the Bayesian maxim that “extraordinary claims require extraordinary evidence,” this study honestly doesn’t even pass a modest sniff test.
Think of the Geiers and their Lupron therapy for autism: given “proper” study design and biases, I’m sure one study could find a positive effect, but no one would likely believe it, as the hypothesis is insane, the treatment harmful, and the proponents quacks.