January 18th, 2013

Media Coverage of Research: Does It Sometimes Miss the Point?

Sometimes when I see a new study in the media, I wonder about the coverage. The media spin often makes it seem that the researchers are studying the obvious. Today I saw a headline in the American College of Cardiology CV News Digest that blared: “Postop Mortality May Be High When CPR Is Required.” MedPage Today, MedwireNews, and Medscape covered this study. I have not read the paper, and I fully accept that the conclusion is true. Still, I couldn’t help but wonder about it. Here is the blurb:

Study: Postop Mortality May Be High When CPR Is Required.

MedPage Today (1/18, Smith) reports, “One surgical patient in 203 had a cardiac arrest and needed cardiopulmonary resuscitation (CPR) either during or after surgery, according to analysis of a large 5-year database.” However, “fewer than one patient in five who had CPR survived to discharge within 30 days, and the risk of death increased as the number of comorbidities rose, according to” researchers. The investigators found that approximately “three-quarters of patients who had CPR had a complication, such as sepsis, on or before the day of CPR.” The research was published in JAMA Surgery.

MedwireNews (1/18, McDermid) reports that “the most common complications were intubation (in 46.5% of patients), prolonged ventilator use (37.0%), septicemia (33.5%), renal impairment (17.7%), pneumonia (17.3%), and bleeding (16.0%).”

Medscape (1/18, Fox) reports that, according to the researchers, “Complications commonly precede arrest; prevention or aggressive treatment of these complications may potentially prevent CPR and improve outcomes. These data could aid discussions regarding advance directives among surgical patients.”

Do you encounter studies or coverage that make you wonder: Was that really a research question?

3 Responses to “Media Coverage of Research: Does It Sometimes Miss the Point?”

  1. Tariq Ahmad, MD, MPH says:

    Dr. Krumholz, I completely agree! However, I hope people do not feel that way about my research (if and when it gets published). Along those same lines, do you think that the explosion in the number of cardiology (or other) journals has led to a decline in the overall quality of publications? Or has the same percentage of such articles always made it to print?

  2. My biggest pet peeve these days with media coverage is the conflation of observational studies with experimental studies (i.e., randomized clinical trials). The implication of cause and effect, which seems almost universal in headlines describing observational studies, is particularly misleading. I have often wondered whether we could adopt some type of standardized warning in the headline of a story that would clearly indicate that the study was observational in nature. Maybe a 72-point “OBS” somewhere in the headline. Eventually, over time, perhaps even Joe Q. Public would start to understand the difference between good science and an interesting hypothesis.

  3. Great points so far. When we look at all of the research being done in the US, there are currently not enough journals to publish the results, despite Tariq’s accurate observation about the increase in the number of journals. This void has started to be filled by online-only journals, such as PLOS ONE. In this model, the consumer, more so than the editorial board, decides what research is relevant or well done. This brings us back to David’s and Harlan’s points: readers, including journalists, will need a better understanding of the different types of studies being published and the intricacies of the research being presented.