August 21st, 2012
Shining a Light on Standards at Medical Journals
Druin Burch, BM BCh MA MSc, John Stephen Yudkin, MB BChir MD FRCP and Marion Marjorie Mafham, MBChB MD
The two BEGIN Basal-Bolus trials, published this past April in the Lancet and funded by Novo Nordisk, do not shed light on how best to treat patients with type 1 and type 2 diabetes, but rather on the consequences of poor standards in medical journals and some medical specialties. These open-label randomized studies share notable characteristics. The trials demonstrate that insulin degludec is noninferior to insulin glargine for glycemic control, as measured by HbA1c after 1 year. Each deals with a common disease yet recruited fewer than 10 patients per center. That may be an inefficient way to gather scientific data, but it’s very effective at getting a large number of units to start prescribing a new drug.
Strikingly, neither BEGIN article focuses its main conclusion on the primary outcome but instead on secondary measurements: nocturnal hypoglycemia in the first trial and overall hypoglycemia in the second. In both studies, the differences in outcomes were of marginal statistical significance, and there is no mention of adjustment for multiple testing. The focus on secondary endpoints encourages readers to believe that selecting insulin degludec over alternative treatments is warranted. Such lower rates of hypoglycemia in unblinded studies, however, should be considered hypothesis-generating at best. At worst, conclusions about them are completely spurious.
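The multiplicity problem is easy to quantify. As a minimal sketch (the endpoint counts and the 0.05 threshold here are illustrative assumptions, not figures from the trials): if each of several independent secondary endpoints is tested at p < 0.05 with no adjustment, the chance of at least one spuriously "significant" finding grows quickly.

```python
# Illustrative only: family-wise error rate when k independent secondary
# endpoints are each tested at alpha = 0.05 with no multiplicity adjustment.
def familywise_error_rate(k, alpha=0.05):
    """Probability of at least one false-positive 'significant' result
    across k independent tests when every null hypothesis is true."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10):
    print(f"{k:2d} endpoints -> {familywise_error_rate(k):.1%} chance of a spurious hit")
# With 10 unadjusted endpoints the figure is roughly 40%.
```

This is why an unadjusted secondary result of "marginal statistical significance" deserves to be called hypothesis-generating: the testing procedure alone makes such a result quite likely even when the drugs do not differ at all.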
Another recent diabetes study in the Lancet, funded by Boehringer Ingelheim, compares linagliptin with glimepiride for glycemic control. It follows the same pattern of giving undue emphasis to a hypothesis-generating outcome: The abstract highlights that linagliptin is associated with fewer cardiovascular events, even though this secondary outcome was not prespecified. Incredibly, Figure 3 in the paper further subdivides the cardiovascular events into individual outcomes, showing 3 strokes in the linagliptin group compared to 11 with glimepiride. Buried in the discussion is an assertion that “the likelihood that this finding was due to chance (type 1 error) cannot be discounted, because the study was neither planned nor powered for cardiovascular outcomes.” The mention of type 1 error brilliantly implies that the qualification is merely a statistical complexity, leaving the focus on the supposed advantages of linagliptin. But the only reasonable conclusion from the data is, at most, that different oral hypoglycemic agents might differ in their effects on hard outcomes.
Why does the Lancet repeatedly allow authors to get away with this sort of stuff? Is it because distribution of reprints is a major part of drug-company marketing and, therefore, a major source of revenue for the journal? The two insulin degludec trials and the linagliptin trial were written with the help of drug company–funded professional writers. Their skill in doing their job is clear. Equally clear is the lack of Lancet editorial effort to balance it. Yet these low editorial standards cannot be attributed entirely to commercial self-interest: In June the Lancet published a non–industry-funded trial of stroke thrombolysis (with recombinant t-PA) that had a negative primary outcome, yet again the article had an inappropriate emphasis on a positive secondary result.
The Lancet was founded in 1823 to support clarity and attack error. Its name is a pun: A lancet is both a window that lets in light and a surgical instrument used to evacuate pus. These articles, metaphorically, do neither. But the Lancet is not guilty of unusually bad behavior. All major medical journals frequently allow papers to improperly emphasize secondary endpoints; stopping them from doing so is somehow not seen as an editorial duty. Nor is the Lancet unique in making a large amount of money from drug-company purchases of article reprints that are then used by the drug makers as part of a marketing strategy. In a recent paper in which the British Medical Journal and the Lancet revealed their income from drug company–purchased reprints, the New England Journal of Medicine, JAMA, and the Annals of Internal Medicine all declined to reveal their own figures. The culture across medical journals appears consistent both in allowing drug companies (often using professional writers) to inappropriately spin data, and in the journals then profiting when those same companies purchase the relevant reprints.
Contrast that with cultural inconsistency, across medical specialties, in the extent to which their trials rely on unjustified surrogate outcomes. There is a strong tradition within diabetology of conducting these sorts of poor-quality studies. Recent reviews have demolished the supposed benefits of intensive glucose lowering on macrovascular outcomes in patients with type 2 diabetes and the cardiovascular-mortality benefits of metformin. It is plain that HbA1c is not a reliable surrogate for weighing the harms and benefits of particular hypoglycemic agents in patients for whom the risks of macrovascular complications far outweigh those of microvascular eye and kidney disease. Nonetheless, the next study of a hypoglycemic agent will undoubtedly be along the lines of these two insulin degludec trials. The agent will be judged chiefly via nothing better than HbA1c reduction—and the trial will promptly be published and the drug licensed. In cardiology, imagine a company trying to license a new HDL-raising agent solely on the basis of its effects on HDL levels, without examining mortality and morbidity endpoints. It would simply never happen. What explains the difference in epistemological quality between the two specialties? Why are drugs for diabetes evaluated in ways that don’t show what they do to hard outcomes, when drugs for the heart now almost always are?
Cultural attitudes toward evidence exert a huge influence and deserve our attention. The tradition within cardiology of demanding high-quality evidence is not a function of any pre-existing moral or mental superiority among cardiologists. It comes simply from cardiologists’ having witnessed (over decades) the impact of a large number of high-quality trials that changed practice and saved lives. Experience has largely converted cardiologists to the value of seeking reliable data on hard outcomes and of subjecting even their favorite therapies to experimental validation. It is unfortunate that diabetologists have ignored the lessons learned by cardiologists. It is inexcusable and bizarre that journal editors have done the same.
Note: In June the authors published a correspondence piece on this topic in the Lancet.
What’s your take on editorial standards in medical journals and on the BEGIN trials in particular?