August 20th, 2013

A Pathway to Preventing Research Fraud

In recent years, allegations of misconduct and fraud in the Netherlands and Japan have resulted in retractions of important clinical trials. The Dutch case may well lead to significant revisions in the guidelines for perioperative beta-blockade. The Japanese case involves a Novartis employee who participated in clinical trials without disclosing his relationship with the company.

When I hear about manipulation and fabrication of data, I am unfortunately not surprised. With the pressure to succeed, some people will act unethically. Concerns about studies are typically raised only when someone happens to notice something suspicious in the published material. Certainly that approach must have a low sensitivity for detecting fraud – or error.

The bigger problem is that we have no opportunity to verify most studies because the data are not available to the scientific community. Wouldn’t the situation be different if we had an expectation that people would share their data? We are asked to trust the studies that are published – but shouldn’t we be able to examine the data in detail and verify the findings? If everyone had access to the raw data, wouldn’t that deter fraud and increase the conscientiousness of investigators? Of course, only a small number of people can make sense of raw data. But if the data were available to students throughout the world, to investigators working in the same field, to regulatory agencies interested in the topic – the prospect of verification alone might raise the level of rigor in the work.

The quality of research may vary notably across groups, but that may be hard to tell from published articles. The raw data are likely to reveal much more about a research group’s work. The documentation and organization of the data can provide insight into the care with which the research was conducted.

Maybe it is time to push for data sharing, not only to make science more collaborative and efficient but also to provide the opportunity to verify results. Otherwise, what is really to stop some researchers from fabricating their data or manipulating the results – and who can detect an error in coding?

Many details are worth considering, but this might be the pathway to higher-quality, more trustworthy research. What’s your perspective?


2 Responses to “A Pathway to Preventing Research Fraud”

  1. Antonio H. Reis, Ph.D. says:

    Research fraud and how it affects guidelines is a timely issue! I agree that “only a small number of people can make sense of raw data”. Those who commit fraud never make their data available to the public, so manipulation is not easily detectable and can occur to varying degrees. Selective use of raw data and disregard of negative results are the most common malpractices, while data fabrication, though less common, also occurs. Bias can also be introduced through poor study design, manipulation of statistics, confusion of association with causation, lack of open-mindedness, and uncritical defense of mainstream views. The influence of industry is also visible in many studies: “Eighty four per cent of doctors say they are concerned about industry influence over clinical guidelines, yet the fear of malpractice suits puts many in an untenable position of following guidelines they believe are flawed or dangerous to patients.” (“Why we can’t trust clinical guidelines,” BMJ, 2013; http://www.bmj.com/content/346/bmj.f3830)
    In my view, one must create a widespread culture of critical analysis. Then, one must analyze every new result against all other results (especially those of studies that do not corroborate the new “findings”). One should invite the authors to comment on results that differ from theirs, and question the methods they used and the conclusions they reached. If authors feel the pressure of criticism, they will surely be more cautious.
    Finally, readers should be open-minded enough to question mainstream views. As R. A. Hayward and H. M. Krumholz (2012) wrote: “Changing long-held beliefs is never easy, even when the need for change is based on strong evidence. Change is especially difficult when prior beliefs are firmly embedded in culture, accepted as dogma, and codified in books, articles, guidelines, public service announcements, and performance measures.” (http://circoutcomes.ahajournals.org/content/5/1/2.full)

  2. It would be an incredible motivator for students to choose a real-world verification assignment from among thousands of available options. In many cases, it might be the first verification ever undertaken, rather than a well-worn study that the teacher and previous students have sifted through hundreds of times. The “look what I found” incentive alone would be empowering. Of course, competent teachers would have to verify the students’ work, but the opportunity to rigorously verify or correct the record for a real study on a topic of interest should not be underestimated.