January 3rd, 2012

Missing Data: The Elephant That’s Not in the Room

There is a problem so grave that it threatens the very validity of what we learn from the medical literature. Bad data? Not exactly. It is missing data: information, relevant to the risks and benefits of treatments, that is simply never published. In some cases, these data would make a critical difference in the inferences that readers draw from the literature. Their absence renders meta-analyses, systematic reviews, and book chapters suspect. Conclusions are drawn from incomplete science. In short, publication bias and selective publication are undermining the validity of what we can learn from a PubMed search or even the most careful review of published studies.

This matter demands our immediate attention and speaks to the need to rethink the configuration of clinical medical science. It may be time to adopt strategies to ensure that all relevant studies, results, and supporting documentation are made publicly available. “Out of sight, out of mind” is a dangerous reality in science and medicine. It’s time for a change — and it starts with the recognition that we have a problem.

I urge you to read BMJ this week to explore the evidence of this problem. In full disclosure, the studies include one by me (with others, led by Joe Ross) showing that more than half of trials sponsored by the NIH go unpublished even 30 months after completion. The other articles are equally troubling: they show how missing data can alter the results of meta-analyses, and how many investigators are ignoring the requirements for mandatory reporting of trial results, raising the question of what “mandatory” actually means.

It is time to pay attention to this issue — and to begin working together to solve it. Let’s advocate for open science and get all the information out in a timely way for everyone to inspect. There are many facets to this problem, and we should not look to assign blame, but we do need to change our research and publication culture. Our entire clinical research community, including those who use the information and those who contribute to its dissemination, must collectively determine how best to get beyond this period when we are working with an incomplete view of medical evidence. Let’s put everyone on notice: The era of missing data must end.

After you review the studies in BMJ, please share your thoughts here with fellow members on CardioExchange. Links to several of the articles are provided below, with key quotations from each one.

Ross et al: Despite recent improvement in timely publication, fewer than half of trials funded by NIH are published in a peer reviewed biomedical journal indexed by Medline within 30 months of trial completion. Moreover, after a median of 51 months after trial completion, a third of trials remained unpublished.

Hart et al: The effect of including unpublished FDA trial outcome data varies by drug and outcome.

Ahmed et al: Publication, availability, and selection biases are a potential concern for meta-analyses of individual participant data, but many reviewers neglect to examine or discuss them. These issues warn against uncritically viewing any meta-analysis that uses individual participant data as the most reliable.

Prayle et al: Most trials subject to mandatory reporting did not report results within a year of completion.

Wieland et al: Based on the results for 2005, at least 3000 records describing randomised controlled trials but not indexed using RCT may have been entered into Medline between 2006 and 2011.
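
To make concrete how unpublished results can shift a pooled estimate, as the Hart et al and Ahmed et al findings above describe, here is a minimal sketch in Python with entirely invented numbers (not data from any of these studies): a fixed-effect, inverse-variance meta-analysis pooled first over hypothetical “published” trials only, then again after adding two hypothetical “unpublished” near-null trials.

    import math

    def pooled_estimate(trials):
        # Inverse-variance fixed-effect pooling: each trial is an
        # (effect, standard_error) pair; returns the pooled effect
        # and its standard error.
        weights = [1.0 / se ** 2 for _, se in trials]
        total_w = sum(weights)
        pooled = sum(w * eff for w, (eff, _) in zip(weights, trials)) / total_w
        return pooled, math.sqrt(1.0 / total_w)

    # Invented numbers: log odds ratios favoring treatment in three
    # "published" trials, near-null results in two "unpublished" ones.
    published = [(-0.40, 0.15), (-0.35, 0.20), (-0.50, 0.25)]
    unpublished = [(0.05, 0.18), (-0.02, 0.22)]

    for label, trials in [("published only", published),
                          ("published + unpublished", published + unpublished)]:
        eff, se = pooled_estimate(trials)
        lo, hi = eff - 1.96 * se, eff + 1.96 * se
        print(f"{label}: pooled log OR = {eff:.2f} "
              f"(95% CI {lo:.2f} to {hi:.2f})")

With these invented numbers, the pooled effect shrinks from about -0.40 to about -0.24 once the unpublished trials are counted; with other numbers it could lose statistical significance entirely. That is the sense in which the impact of missing data “varies by drug and outcome.”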

8 Responses to “Missing Data: The Elephant That’s Not in the Room”

  1. Robin Motz, M.D., Ph.D. says:

It’s interesting that no scientific field (physics, chemistry, biology) keeps a record of experiments that did not “pan out” and therefore were never reported, or of research papers that were rejected by reviewers but may nevertheless have contained valuable information.

  2. Steven Greer, MD says:

    The antidepressant literature is plagued by this, as Dr. Turner showed in the NEJM. See our video here:

    http://currentmedicine.tv/2011/specialties/psychiatry/eric-turner-md-selective-publication-of-positive-trial-on-antidepressants/

    Competing interests pertaining specifically to this post, comment, or both:
    none

  3. Saurav Chatterjee, MD says:

    Maybe higher-impact journals can have reviewers complete the new Cochrane tool for assessing risk of bias?
    http://www.ncbi.nlm.nih.gov/pubmed/22008217

  4. Martin Broder, MD says:

    Not all readers have the same needs or interests. Don’t some print journals refer interested readers to a website that contains charts, tables, and the like? For those who still read print journals, is this not a sensible way to cut down on the sheer bulk (and expense) of research papers with extensive data documentation? As long as the data are accessible and reviewed through the peer-review process, is that not a satisfactory compromise?

    Competing interests pertaining specifically to this post, comment, or both:
    No conflicts

  5. In “The Earth Is Round (p < .05)” (American Psychologist, Dec. 1994), Jacob Cohen coined the phrase “statistical hypothesis inference testing,” the acronym of which is the best description of our current system of medical research reporting.

  6. Richard Kones, MD says:

    Harlan is, as usual, absolutely correct. Unavailable and unconsidered data may substantially contribute to the difference between how drugs, procedures, and interventions actually perform in the trenches and what published trials report and lead us to anticipate.

    At the moment, the data on the surface constitute an “as if” world, as opposed to the real world.

    Competing interests pertaining specifically to this post, comment, or both:
    None.

  7. In an era of “pay for performance”, should a portion of a study’s funding (perhaps one third to one half) be withheld until the study is published or the full results are posted in an independent archive?

  8. Dan Hackam, MD PhD says:

    The problem is much worse in the observational data world. If you think data suppression, publication bias, reporting bias, and “hidden results” are prevalent amongst registered clinical trials, you have no idea how bad things are in “bread and butter” epidemiologic observational research.