March 17th, 2011

Meta-Analysis Suggests Worse Outcomes For Rosiglitazone Compared To Pioglitazone

There are no long-term trials directly comparing rosiglitazone and pioglitazone. In an article published in BMJ, Yoon Kong Loke and colleagues performed a systematic review and meta-analysis of 16 observational studies with more than 800,000 thiazolidinedione users in an attempt to assess the relative cardiovascular effects of the two drugs.

When compared with pioglitazone, rosiglitazone was associated with

  • a significant increase in MI (OR 1.16, CI 1.07-1.24; P<0.001),
  • a significant increase in CHF (OR 1.22, CI 1.14-1.31; P<0.001), and
  • a significant increase in death (OR 1.14, CI 1.09-1.20; P<0.001).

They calculated that for every 100,000 patients who received rosiglitazone rather than pioglitazone there would be 170 excess MIs, 649 excess cases of HF, and 431 excess deaths.
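As a back-of-the-envelope check, figures like these can be reproduced from the reported odds ratios once a baseline event rate is assumed. A minimal sketch in Python — the baseline risks below are illustrative assumptions, not values taken from the BMJ paper:

```python
def excess_events_per_100k(baseline_risk, odds_ratio, n=100_000):
    """Excess events per n patients implied by an odds ratio,
    given an assumed baseline (comparator-arm) event risk."""
    odds = baseline_risk / (1 - baseline_risk)        # risk -> odds
    exposed_odds = odds * odds_ratio                  # apply the OR
    exposed_risk = exposed_odds / (1 + exposed_odds)  # odds -> risk
    return (exposed_risk - baseline_risk) * n

# The assumed baseline risks (1% for MI, 3% for HF) are hypothetical,
# chosen only to show the mechanics; the paper reported 170 and 649.
print(round(excess_events_per_100k(0.01, 1.16)))  # ≈ 158 excess MIs
print(round(excess_events_per_100k(0.03, 1.22)))  # ≈ 636 excess HF cases
```

Modestly different assumed baseline rates shift these numbers accordingly, which is why such per-100,000 projections are sensitive to the underlying event rates in the study populations.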

In an accompanying editorial, Victor Montori expresses concern over the continuing availability of rosiglitazone in the U.S., and writes that “research should be undertaken to understand what occurs when drugs are left on the market with strong warnings.” He also expresses concern that “regulators and prescribers do not seem to have learnt from the rosiglitazone saga,” noting that new diabetes drugs have been approved and rapidly adopted by prescribers in recent years.

Montori concludes:

“The rosiglitazone story says much about how healthcare has become less about promoting patients’ interests, alleviating illness, promoting function and independence, and curing disease, and much more about promoting other interests, including those of the drug industry. Has the corruption of healthcare advanced so far that it is unreasonable, even naive, to expect responsible drug companies, enlightened regulators, and thoughtful prescribers?”

10 Responses to “Meta-Analysis Suggests Worse Outcomes For Rosiglitazone Compared To Pioglitazone”

  1. I remain unconvinced that this meta-analysis advances the argument surrounding the rosiglitazone controversy. I am somewhat surprised (and disappointed) that the accompanying editorial did not highlight the “fatal flaws” of this analysis, thereby providing some much-needed balance and cautioning us against drawing definitive conclusions.

    In my opinion, no valid inferences can be drawn from this study for the following reasons:
    1. Observational studies are limited by confounding and selection bias that often distort the findings.
    2. Meta-analyses of observational data produce “very precise but equally spurious” results that are seldom replicated in randomized trials.
    There are numerous examples of discordance between the results of observational and randomized studies. Hormone replacement therapy, calcium channel blockers, beta carotene, vitamin E, drug eluting stents’ impact on mortality, etc. come to mind.
    3. The authors justified pooling because of a lack of statistical heterogeneity. However, there is substantial clinical heterogeneity that arguably precludes pooling. Because of variability in study population, design (case-control vs cohort), treatment exposure, type of comparator (single vs. combination therapies), endpoints, analysis, matching technique, effect size, etc., the FDA statistical experts and reviewers appropriately cautioned during the July 2010 Advisory Panel meeting that “the study differences strongly suggest that estimates of association from studies should not be pooled into a single meta-analytic estimate.”
    4. When one examines the individual studies, the primary “going in” hypothesis of increased MI risk raised by the meta-analyses was not validated in the 3 largest and “high quality” studies (Winkelmayer, Juurlink, Graham). Only 2 of the 16 studies showed a statistically significant increase in MI risk. It is axiomatic in science that the hypothesis should drive the data, not the reverse. Of the observational studies, only 1 was hypothesis driven (increased risk of MI), and it failed to validate the hypothesis. The rest can best be characterized as “fishing expeditions.”
    5. The risk estimates for the death and MI end points were all below an odds ratio of 1.5. Effect sizes of this magnitude in epidemiologic studies are of questionable relevance and credibility; at best, they can be considered hypothesis-generating. The 2010 FDA patient-level meta-analysis reported an 80% increase in the hazard for MI, an effect large enough for an epidemiologic study to detect. None of the epidemiologic studies has detected this effect.

    Bottom line, all of these limitations of the epidemiologic studies included in the current report conspire to preclude reliable estimates of treatment effects. Thus, there is inadequate evidence to properly judge the relative safety or efficacy of pioglitazone over rosiglitazone. Prospective clinical trials designed for the specific purpose of establishing the CV benefit or risk of rosiglitazone versus pioglitazone or other antidiabetic agents are required to adjudicate the signals of harm derived from meta-analyses and observational studies.

    • Fascinating and correct analysis, of course, but it seems to consider the study in isolation from the other evidence, which is not quite right.

      Taken together with what we know, this study is not hypothesis generating, since the hypothesis has already been the subject of study. This low quality evidence is essentially consistent with extant evidence of higher quality that deserved our attention, and the regulatory authority’s, as long ago as September 2010 (we are still waiting for the FDA REMS, and prescriptions for rosiglitazone continue in the US).

      Is this study worthy of much news? No more than the already-known work on the trials.

      So what should the editorialist do?

      (a) We could have turned our attention to the flaws of a paper that does not significantly contribute to the literature beyond what was already there, and identified the problems mentioned by Dr. Kaul along with other problems (e.g., we did not like that the paper claimed “real world” value for this evidence because it came from observational studies – if the results had been inconsistent with the RCT data, would the authors have concluded that rosi was safe in the real world and dangerous only in trials? We do not think so. This “real world” language is misleading).


      (b) We could take the approach of looking at the big picture and the role of regulators, prescribers, and patients in ensuring drug safety.

      We took the latter because that is how we thought we could be most helpful.

      Readers could judge the results of our work by reading the editorial. And I am sorry to have disappointed Dr. Kaul, yet somewhat tickled that we managed to surprise him!

      Competing interests pertaining specifically to this post, comment, or both:
      I was one of the editorialists cited by Dr. Kaul, quite likely the one that both surprised and disappointed him.

  2. Robin Motz, M.D., Ph.D. says:

    Once again, let me reiterate my position that the results of a meta-analysis should only be used to suggest a hypothesis, not a clinical conclusion. There are too many mathematical/statistical holes in all such studies, not the least of which is that we are never told which studies were not included, so we can’t possibly review the data ourselves.

  3. Bruce Kottke, MD,PhD says:

    Dr. Kaul is absolutely correct. The senior author of the meta-analysis implicating problems with rosiglitazone is the same person who forced Vioxx off the market, and another glitazone as well. A coincidence???? He is now senior author of a study showing a reversal of coronary lesions with pioglitazone, as measured by IVUS. Thus, after effectively removing rosiglitazone from the market, he comes out with a study praising its ONLY HIGH-PRICED competitor. This certainly has the appearance of a rather obvious and severe conflict of interest. Shouldn’t journal editors pay more attention to such conflicts of interest???

    B.A. Kottke, MD, PhD, FACC

    Emeritus Professor of Medicine
    Mayo College of Medicine

    Competing interests pertaining specifically to this post, comment, or both:
    None except for the welfare of our patients

  4. I was “disappointed” at the journal for not shining the analytical spotlight on the study. The journal has a very strong track record of highlighting strengths and weaknesses of study methods over the years. Who can forget the cleverly titled “spurious precision” seminal article published by Egger et al in BMJ in 1998, cautioning us against the results of meta-analyses of observational studies?

    I was “surprised” that Dr. Montori, whom I consider a methodologist of the highest intellect and rigor, did not use the opportunity to highlight the deficiencies in the study while advancing his own valuable personal observations regarding drug regulation in the US. That would have provided the reader with the much-needed context and balance to be properly informed.

    At the crux of the rosiglitazone controversy is one simple fact – there is no “high-quality” evidence that informs the rosiglitazone debate. If such evidence existed, reasonable people would not come to such divergent conclusions. Take, for example, the FDA itself: the Office of Surveillance and Epidemiology found the evidence of cardiovascular harm to be credible and actionable, but the Office of Drug Evaluation did not. There were conflicting recommendations from professional societies – a consensus statement of the ADA and EASD advised against the use of rosiglitazone, while a science advisory from the AHA/ACC called for more controlled trials to resolve the uncertainty. Ultimately, even the regulatory decisions came to different verdicts – the FDA restricted marketing and access, while the EMA suspended marketing. Using the standard of proof required by the US Supreme Court for “clear and convincing evidence” in civil cases, is the evidence “sufficiently strong to command the unhesitating assent of every reasonable mind”? My colleague, Dr. Diamond, and I think not!

    Until the evidence of harm is clear and convincing, physicians should be free to exercise clinical judgment, and weigh the comparative risks and benefits of agents such as rosiglitazone on a case-by-case basis.

    Personally, I too think that there is a concern with rosiglitazone, but the currently available evidence is insufficient to either implicate or exonerate it. As clinicians, we are routinely called upon to make decisions in the face of uncertainty by balancing benefit-risk tradeoffs. The same applies to rosiglitazone. If the patient has tolerated the drug for a long time, there is no compelling reason to switch to pioglitazone or any other drug. If I were to start someone on a diabetes drug, neither rosiglitazone nor pioglitazone would be my first choice!

    Competing interest
    Never prescribed rosiglitazone or pioglitazone, but subscribe to the philosophy that sound evidence of benefit-risk should drive regulatory decisions and clinical recommendations.

  5. Savas Celebi, md says:

    Will we keep going back and forth between pio and rosiglitazone? What about the others?

    Competing interests pertaining specifically to this post, comment, or both:

  6. So I hope everyone had a chance to see the commentary in JAMA by Joe Ross and Kasia Lipska about the TZD class, indicating that we should be focusing away from the class entirely, rather than thinking about a simple move from rosi to pio. I agree with Sanjay that the TZD class has its challenges, even outside of the issue with rosi.

    And with regard to the editorial, I must admit that I depart from Sanjay’s view. These guys are not dwelling on this study, but on the larger issue of how we are integrating the overall evidence and how we are best involving patients in decisions about their care. I was energized by their words.

  7. Sanjay,

    I wonder if you could clarify what you meant by the following:

    “Using the standard of proof required by the US Supreme Court for ‘clear and convincing evidence’ in civil cases, is the evidence ‘sufficiently strong to command the unhesitating assent of every reasonable mind’?”

    I am not an expert in civil procedure, but the standard of proof in *most* civil cases is “preponderance of the evidence” (often described as “more likely than not”). Certain particular kinds of civil cases, such as those involving severance of parental rights, use a “clear and convincing evidence” standard. In addition, the language you quote is from a *California* Supreme Court case, Sheehan v. Sullivan, 126 Cal. 189 (1899). This language implies a very stringent interpretation of clear and convincing, closer to “beyond a reasonable doubt,” which as you know is the standard used in criminal cases. Can you please cite the U.S. Supreme Court case you are referring to that adopted this language? I would like to read the case so I can see the holding and what kind of civil case the Court was applying it to.

    My impression is that many, perhaps most, things in medicine have not been proven by a standard of clear and convincing evidence. But, nonetheless, physicians and patients must make decisions on a daily basis in situations where clear and convincing evidence is lacking. I do not accept your premise that evidence of harm should be ignored until “clear and convincing” evidence is gathered. Where good alternative treatments exist, a lower standard is appropriate.

    Competing interests pertaining specifically to this post, comment, or both:

  8. Marilyn,

    You have the right reference. As you know preponderance of evidence (lower level of proof) and clear and convincing evidence (intermediate level of proof) are both used for civil cases in the US courts, while “proof beyond a reasonable doubt” is utilized for criminal cases.

    New drug approval, as defined by the Code of Federal Regulations (CFR) 314, requires demonstration of “substantial” evidence of efficacy with acceptable safety in adequate and well-controlled studies. Typically, 2 trials, each with p<0.05, are required. In some cases, a single trial with a stringent p value of 0.001 might be sufficient, especially for mortality or serious nonreversible morbidity. This is equivalent to 2 trials each with p<0.05: 0.05 x 0.05 = 0.0025 for a 2-sided p value, or 0.0025/2 = 0.00125 for a 1-sided p value (p=0.001 is shorthand for 0.00125). The rationale behind this is the “proof beyond a reasonable doubt” standard for new drug approval. If one trial shows treatment efficacy with p<0.05, there is only about a 50% probability that its results will be replicated. With a second positive trial at p<0.05, the replication probability rises above 90%. Replication is at the heart of the scientific endeavor and at the core of drug approval.
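    The two-trial arithmetic in this comment is easy to verify directly; a minimal sketch using only the numbers quoted above:

```python
# Two independent pivotal trials, each significant at the conventional
# two-sided level discussed in the comment above.
p_single = 0.05

# Chance that two independent trials both cross p < 0.05 under the null:
combined_two_sided = p_single ** 2           # 0.05 x 0.05 = 0.0025
combined_one_sided = combined_two_sided / 2  # 0.00125, quoted as "p = 0.001"

assert abs(combined_two_sided - 0.0025) < 1e-12
assert abs(combined_one_sided - 0.00125) < 1e-12
```

    This only checks the multiplication itself; the claimed replication probabilities (roughly 50% after one positive trial, above 90% after two) depend on additional power assumptions not spelled out in the comment.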

    While the standard for new drug approval is clear cut and formalized by the CFR, there is no explicit guidance on what constitutes “substantial” evidence for withdrawal of drug approval, except for drugs that have been approved via the “accelerated approval” process. Even then, the FDA lacks the authority to enforce withdrawal; it can only recommend voluntary withdrawal (look at the recent examples of midodrine and Avastin). Considerations regarding the magnitude and seriousness of the risk, the quality of the underlying evidence, the availability of alternative therapies, etc. all factor into this decision. In my opinion, an intermediate burden of proof (“clear and convincing evidence”), which on a probabilistic scale is a probability greater than 70-75% but less than 90%, is more justifiable than the lower burden of proof (“preponderance of evidence”), which essentially means ‘more likely than not’, or a greater than 50% probability of harm.

    So, when one examines the quality of evidence of cardiovascular (CV) harm for rosiglitazone, it is derived primarily from meta-analyses of randomized trials, none of which were designed to assess CV events. Nearly a dozen meta-analyses have reported inconsistent results regarding the CV risk associated with rosiglitazone. The most robust (for a variety of reasons) is the 2010 FDA meta-analysis reporting an 80% increase in MI risk (absolute difference of 0.22%), while the most controversial is the Nissen and Wolski meta-analysis. In the latter analysis, the p values for the 28% increased MI risk “dance around” 0.05 (depending on sensitivity analyses that correct for the many trials with zero events). Such p values are hardly considered robust for meta-analytic estimates, which require more stringent p values (0.01 or 0.001). Furthermore, meta-analytic results, to the best of my knowledge, have never been accepted by the FDA as credible evidence in support of an NDA (new drug application). In the hierarchy of evidence, meta-analyses (even of randomized trials) are considered “low quality” because of the observational nature of the analysis (one is aware of the data going in, and therefore “the data drive the hypothesis” instead of the time-honored scientific approach of “the hypothesis driving the data”). I accept, however, that in some cases, especially with sparse data (as with safety outcomes), methodologically stringent meta-analytic results might provide credible and actionable information.

    So, when one examines the quality and quantity of evidence in support of CV risk with rosiglitazone, it does not rise to the “clear and convincing” level of proof. In the absence of sound evidence of harm, one should be free to exercise clinical judgment. These decisions should be based on the “art of medicine” philosophy and not dictated by regulatory edicts. Although I have not found a compelling reason to prescribe rosiglitazone, that does not mean that I can dismiss or disrespect the judgment of others who do.

    Finally, when two reasonable people disagree, it usually speaks to the weakness of the evidence.

    Apologies for the long-winded response.


  9. David Powell , md, facc says:

    Interesting. “Clear and convincing” to me sounds like over 90%. Since there are alternative treatments, “preponderance of evidence” seems like a more prudent bar from a public health perspective. How about medicolegal risks? Even though the harm data are not “clear and convincing” to some, would a prescribing doctor survive a lawsuit, even hypothesizing no harm? I doubt it, with “trial by jury.”