November 16th, 2010
Is There a Statistician in the Room?
Susan Cheng, MD
Several Cardiology Fellows who are attending this week’s AHA meeting are blogging together on CardioExchange. The Fellows include Susan Cheng, Madhavi Reddy, John Ryan, and Amit Shah. Check back often to learn about the biggest buzz in Chicago this week — whether it’s a poster, a presentation, or the word in the hallways. You can read the preceding post here.
While I was sitting in on an AHA epidemiology session focused on ideal cardiovascular health status, based on what’s now called Life’s Simple 7, somebody mentioned using “principal components analysis.” This method was used to re-categorize a measure of healthy diet — basically because the vast majority of people (Americans) in the study would otherwise have been categorized as eating unhealthily, which would have rendered the diet measure useless. As somebody with a bit of formal training in biostatistics, I was familiar with most of the methods mentioned in the session. I know what principal components analysis is meant to do, but only in a very general sense, and I would definitely need a statistician to help me understand how it was applied in this particular study. I wondered how many other people in the room might also have felt stumped by this part of the methodology.
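For anyone else who felt stumped: here is a minimal sketch of what PCA does in general, written in Python with scikit-learn. The data and variable names are invented for illustration; this is not the actual diet measure or analysis from the study presented.

```python
# Minimal PCA sketch with made-up data (NOT the study's actual analysis).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical diet scores for 500 participants across 5 correlated
# food-group variables (e.g., fruits, vegetables, whole grains, ...).
shared_pattern = rng.normal(size=(500, 1))
diet_scores = shared_pattern + 0.5 * rng.normal(size=(500, 5))

# Standardize, then project onto the principal components: the
# directions along which the data vary the most.
X = StandardScaler().fit_transform(diet_scores)
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

# The first component yields one continuous "diet pattern" score per
# person, which can be split at a percentile of the cohort instead of
# an absolute healthy-eating cutoff that almost nobody would meet.
print(pca.explained_variance_ratio_)  # variance captured by each component
```

The point of the re-categorization, as I understood it, is that ranking people along a data-driven continuous score avoids an all-or-nothing cutoff that almost everyone in the cohort would fail.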
Having heard a lot of buzz about ROCKET-AF, I later dropped by the plenary session where the trial was being presented. The post-presentation discussion was extremely interesting but, again, involved statistical concepts that I didn’t feel completely familiar with, as somebody who isn’t active in clinical trials research. Following Ken Mahaffey’s very polished presentation of the results, Elaine Hylek presented her discussant opinion and focused on the potential pitfalls of non-inferiority trial design. By the time she was done, I was wishing that I could watch the trial presentation again so that I could better scrutinize the methods. Only after brushing up on the differences between non-inferiority and superiority trials (this site is helpful) did I feel that I could revisit the slides online and try to figure out the methodological nuances for myself.
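For fellow non-trialists, the distinction I had to look up boils down to which null hypothesis is being tested. The formulation below is generic; the margin delta is illustrative, not the margin prespecified in ROCKET-AF.

```latex
% Generic superiority vs. non-inferiority hypotheses for a hazard
% ratio (HR) of new treatment vs. standard; \delta is an arbitrary
% illustrative margin, not the one used in ROCKET-AF.
\begin{align*}
  \text{Superiority:}     \quad & H_0: \mathrm{HR} \ge 1
    \quad \text{vs.} \quad H_1: \mathrm{HR} < 1 \\
  \text{Non-inferiority:} \quad & H_0: \mathrm{HR} \ge \delta
    \quad \text{vs.} \quad H_1: \mathrm{HR} < \delta,
    \qquad \delta > 1
\end{align*}
```

In practice, non-inferiority is typically declared when the upper bound of the confidence interval for the hazard ratio falls below the prespecified margin, which is why so much of the discussion turns on how that margin was chosen.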
So, I’m wondering if maybe there’s something missing at conferences like AHA. Cutting-edge research often involves not just ongoing advances in cardiovascular science but also ongoing developments in the statistical methods being used — including risk prediction models (C-statistics, net reclassification index, etc.), genome-wide association analyses, non-inferiority trials, and adaptive trial designs. Could there be a way to help the average conference attendee make better sense of the methods, and thereby of the results? If conferences like AHA are to serve as a form of CME, perhaps they should place greater emphasis on keeping us all up to date on how to critically appraise the latest research. Perhaps statistics primers scheduled at the beginning of the conference, or on each day of it, would help? Or journal-club-like sessions at the end of each conference day? Or maybe just an online resource that reviews some general concepts, and some more advanced ones, in a relatively accessible format?
Then again, I could be the only person at AHA hungering for more stats knowledge while wandering through the convention halls. If so, just let me know…
Dr. Cheng, thanks for putting up a very enlightening and pertinent post. Lots of clinicians get flummoxed by presentations that delve into the deeper nuances of biostatistics, and they often feel overwhelmed. I have even heard of instances where a great idea or dataset did not get the proper recognition, or an appropriate publication, because no adroit biostatistician was available to lend direction to the study. That is why I feel a platform like CardioExchange was so badly needed, where we have an opportunity to discuss and get input from experts in so many fields.
Susan:
You are 99.65% (p<0.002) right!! Where's a good statistician when you need one?
Every doctor of medicine is in danger of accepting results that are clouded by smart or sly statistics. Presentations and papers have to include statistical methods, but they should also include some conclusion that links the statistical method to its impact on clinical practice. (Recently, I saw one study of about 300 patients change the guidelines for CRT in Europe. Is this normal or reasonable???)
It is a pity that one has to read and register so much news in medicine that will ultimately prove to be of little or no clinical relevance.
Who would curb this?
Susan
I most certainly agree… For the past several years, I have collected a number of papers ranging from reviews to esoterica on study design, statistical methods, and “tricks” (dirty and otherwise) that may be employed. But above all, it behooves us to be able to understand the nuances.
Also, it was a pleasure to chat with you briefly while waiting for the photograph recently!
We constantly see statistical analyses misinterpreted at the highest levels in our best journals.
How often have I seen a study that failed to demonstrate an association between two variables misinterpreted, with the conclusion stating that it proved a lack of association. A study may fail to detect an association that truly exists because the study pool is too small, the placebo worked too well, or the dose or delivery form of a pharmaceutical was poorly chosen. Such a negative finding proves nothing, other than possibly that the study was ill conceived.
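To make that concrete, here is a quick power calculation, a sketch in Python using statsmodels; the effect size and sample sizes are invented for illustration:

```python
# Illustration of the point above: a real but modest effect is easily
# missed when the study pool is too small. Numbers are invented.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.3  # a modest standardized mean difference (Cohen's d)

for n_per_arm in (30, 100, 350):
    power = analysis.power(effect_size=effect_size,
                           nobs1=n_per_arm,
                           alpha=0.05)
    print(f"n = {n_per_arm:4d} per arm -> power = {power:.2f}")

# With 30 patients per arm, power is only about 0.20: roughly four
# times out of five, a true effect of this size produces a "negative"
# study that proves nothing about the absence of an association.
```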
Not only could attendees at the AHA benefit from some statistical education; the authors and editors of our major journals should also be given remedial training in statistics.
I’m drawn to Dr. Kostek’s statement that study conclusions should “…link the statistical method with the impact for clinical practice.” The NNT (number needed to treat) is helpful, but it certainly doesn’t frame the whole picture (i.e., changing practice based on one small study may affect the treatment of millions of people at a cost of a bijillion dollars).
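For readers who want the arithmetic behind the NNT, it is simply the reciprocal of the absolute risk reduction; the event rates below are invented for illustration:

```latex
% Worked NNT example with invented event rates (illustration only).
% ARR = absolute risk reduction; NNT = number needed to treat.
\[
  \mathrm{ARR} = p_{\text{control}} - p_{\text{treated}}
               = 0.10 - 0.08 = 0.02,
  \qquad
  \mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{0.02} = 50
\]
```

Even a respectable NNT of 50, multiplied across millions of eligible patients, implies an enormous aggregate cost, which is exactly the part of the picture that the statistic alone does not frame.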