March 27th, 2012

Statistical Pitfalls: The Onus is on Us to Look at the Evidence

Several Cardiology Fellows who are attending ACC.12 this week are blogging together on CardioExchange. The Fellows include Tariq Ahmad, Bill Cornwell, Megan Coylewright, Jeremiah Depta, and John Ryan (moderator). Read the previous post here. Read the next post here.

Mark Twain said, “Facts are stubborn things, but statistics are more pliable.” I attended a session on Sunday entitled Literature Interpretation and Statistical Pitfalls in ACS Trials. The session was moderated by Dr. Debabrata Mukherjee and featured an incredibly lively exchange between Drs. Sanjay Kaul and Salim Yusuf.

The session was highly informative. Though the tone was quite humorous, the speakers systematically picked apart the statistical methods commonly used in clinical research. Dr. Sanjay Kaul showed how P values are often misused and emphasized the rift that can exist between statistical and clinical significance. He also spoke about the need to place more emphasis on risk-benefit analysis for therapeutic interventions: his breakdown of several major clinical trials revealed a new perspective on the data once the weight of the risks (e.g., fatal bleeding) is set against the clinical benefit (e.g., a reduction in revascularization). Helen Parise, Sc.D., then spoke about the appropriate, and sometimes incorrect, use of composite outcomes, which are incredibly common in cardiology trials. Dr. Yusuf brilliantly discussed the “minefield” of subgroup analysis, followed by an interesting discussion by Dr. Parise on non-inferiority trials and comparative effectiveness research.
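To make Dr. Kaul’s point about statistical versus clinical significance concrete, here is a toy calculation of my own (the numbers are invented and are not from any trial discussed in the session). With enough patients, even a tiny absolute benefit produces an impressive P value, yet hundreds of patients must be treated to prevent a single event:

```python
# Toy example (invented numbers): statistical vs. clinical significance.
# A hypothetical trial with 50,000 patients per arm and event rates of
# 2.0% vs. 1.7% is "highly significant", yet the number needed to treat
# is over 300.
from scipy.stats import chi2_contingency

n_per_arm = 50_000
events_control, events_treatment = 1_000, 850   # 2.0% vs. 1.7% event rates

table = [
    [events_treatment, n_per_arm - events_treatment],
    [events_control,   n_per_arm - events_control],
]
_, p_value, _, _ = chi2_contingency(table)

arr = (events_control - events_treatment) / n_per_arm   # absolute risk reduction
nnt = 1 / arr                                            # number needed to treat

print(f"P value: {p_value:.4f}")                 # ~0.0005
print(f"Absolute risk reduction: {arr:.3%}")     # 0.300%
print(f"Number needed to treat: {nnt:.0f}")      # ~333
```

Whether a 0.3% absolute risk reduction justifies the cost and bleeding risk of a therapy is a clinical judgment that the P value alone cannot settle.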

After her talk, the session became quite animated as Dr. Ajay Kirtane voiced his reservations about large observational studies and the quality of the data they yield. One example cited was the selection bias that often goes into treatment decisions and may not be captured in a dataset. He then remarked that comparative effectiveness research can sometimes become “ineffective comparativeness” research, which generated a great deal of discussion, including a mention of his remarks at a session the following day on statistical “lies.” Later in the session, Dr. Peter Jüni discussed where meta-analyses fit in the evidence-quality pyramid. Traditionally, meta-analysis sits at the top of the pyramid, but he challenged that notion, arguing that its place depends on the quality of the meta-analysis itself.
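On Dr. Kirtane’s point about selection bias, here is a small simulation of my own (all of the numbers are invented). When sicker patients are preferentially treated, a naive comparison of treated versus untreated outcomes can make a genuinely beneficial therapy look harmful; stratifying by the severity that drove the treatment decision recovers the true effect, but only if that severity was actually captured in the dataset:

```python
# Toy simulation (invented numbers) of confounding by indication in an
# observational dataset: treatment cuts risk by 30%, but because sicker
# patients are treated more often, the naive comparison reverses the effect.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

severe = rng.random(n) < 0.30                           # 30% of patients are high risk
treated = rng.random(n) < np.where(severe, 0.80, 0.20)  # the sick are treated more often

base_risk = np.where(severe, 0.20, 0.05)                # untreated event risk by severity
risk = np.where(treated, 0.7 * base_risk, base_risk)    # treatment reduces risk by 30%
event = rng.random(n) < risk

def rate(mask):
    return event[mask].mean()

print(f"Naive:      treated {rate(treated):.3f} vs untreated {rate(~treated):.3f}")
print(f"Severe:     treated {rate(treated & severe):.3f} vs untreated {rate(~treated & severe):.3f}")
print(f"Non-severe: treated {rate(treated & ~severe):.3f} vs untreated {rate(~treated & ~severe):.3f}")
```

In the naive comparison the treated group fares worse (roughly 10% vs. 6% event rates), even though within every severity stratum the treated patients do better.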

The most entertaining discussion was between Drs. Kaul and Yusuf on the FDA’s approval of the 150 mg and 75 mg doses of dabigatran but not the 110 mg dose. Dr. Yusuf was quite adamant that the FDA erred in its decision and hamstrung physicians in the U.S. by not giving them the option to use the 110 mg dose. Dr. Kaul countered that the decision to withhold that dose was based, in part, on the FDA’s concern that the 110 mg dose would be used primarily instead of the 150 mg dose, and only the latter had shown superiority over coumadin in the RE-LY trial. Dr. Yusuf retorted that in Canada both doses are approved, yet the 110 mg dose is prescribed in only 25% of patients. Again, he argued that U.S. physicians should be given the opportunity to choose the appropriate dose for their patients based on bleeding risk, rather than being left with the 75 mg dose, which has not been studied in a randomized trial and was approved on the basis of pharmacodynamic/pharmacokinetic data. If interested, further discussion on this topic can be found here.

Seeing luminaries in the field not only point out the significant limitations of our current methods of statistical analysis but also display an incredible divergence of opinion on how data can be interpreted was truly enlightening. At first it may seem as though we are shaping the data to suit ourselves and that the editors of the major journals have missed the mark by allowing muddled evidence to surface. In my opinion, however, the onus is on the physician to look at the evidence presented and make an informed decision based on his or her interpretation of the analysis. Without an understanding of the nuances and pitfalls of statistics, it is impossible to critically assess the literature.
