December 16th, 2009

New Guidelines

How closely do you watch for new guidelines? It seems as if every week new guidelines are published on a variety of topics, including atrial fibrillation, valve disease, STEMI and NSTEMI, and, most recently, the use of perioperative beta blockers. Between guidelines, appropriateness criteria, and various position papers, it can be extremely challenging to keep abreast of the most recent “standards of care.”

How closely do you scrutinize these publications? Do you know what data the recommendations are based on? When I first put together a talk on perioperative beta blockade (back in 2001), it seemed that almost everyone should be on a beta-blocker. Now, with the incorporation of data from several recent studies (including POISE and DECREASE), that is simply no longer the case.

What are your thoughts on making sense of all this – how do you incorporate new recommendations?  What’s more, do you think that guidelines lag behind the data?  Should we wait for these publications to change how we treat our patients?

5 Responses to “New Guidelines”

  1. Trust But Verify…
    As a fellow, it’s always a challenge to keep up with the literature. So when new guidelines come out, it’s nice to think that here is something that reviews the latest evidence in one topic area so I don’t have to do it on my own. That being said, your point about guidelines sometimes lagging behind the evidence is well taken. Furthermore, a recent analysis in JAMA highlighted the fact that most recommendations are level of evidence “C,” based on expert consensus rather than derived from clinical investigation. Even though it’s tough, I think we should try to critically appraise guidelines the same way we do the primary literature; sometimes they fall short because they’re out of date, because the evidence base they rely on didn’t include certain patient groups, or for other reasons. So I don’t think guidelines should always dictate practice in these situations, but they are a great source for reviewing the literature and for defining standard of care where the evidence is clear.

  2. Guidelines are great, but limited…

    Great comment, Nihar. I think you hit on some pretty important points, and it reminds me of the recent JAMA article highlighting exactly the limitations of the evidence you describe: http://jama.ama-assn.org/cgi/content/abstract/301/8/831?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=tricoci+allen+kramer+califf+smith&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT. It also reminds me of a pair of articles from the JAMA Users’ Guides to the Medical Literature series they made us read in med school: VIII. How to Use Clinical Practice Guidelines, A. Are the Recommendations Valid? (http://jama.ama-assn.org/cgi/content/summary/274/7/570?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=hayward+wilson+tunis+bass+guyatt+&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT) and VIII. How to Use Clinical Practice Guidelines, B. What Are the Recommendations and Will They Help You in Caring for Your Patients? (http://jama.ama-assn.org/cgi/content/summary/274/20/1630?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=hayward+wilson+tunis+bass+guyatt+&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT).

  3. Length and potential for bias
    I agree with all of you. One of the aspects I find challenging is the length of the guidelines. I understand, of course, that they are reference tools and are designed to be comprehensive, but as a fellow trying to catch up on sixty years of cardiology literature, I often find them exhausting. I liked the JAMA article you both mentioned; I think it was an important publication to get out there. Also worth reading is a multivessel disease review published this year (O. Soran et al., Interactive CardioVascular and Thoracic Surgery 8 (2009) 666–672), in which the authors compare the number of cardiologists versus surgeons among the authors of MVD-PCI guidelines (the ACC/AHA guidelines were written by 23 cardiologists and 2 CT surgeons). This is an important point: there is often an inherent bias in guidelines, almost by definition, depending on who writes them. At the same time, however, most would agree that there is no sinister component or vested interest in the guidelines, but they should be read with an open mind and in the context of the patient sitting in front of you.

  4. Subtle and not-so-subtle bias…
    Thanks, John, for bringing that article from the surgery literature to our attention. Although I’ve often wondered what surgeons must think of cardiology guidelines that make CV surgery recommendations, I hadn’t thought to check how many of the authors were surgeons, and I hadn’t seen this article before. Fascinating. On a related note, when the very first ICD guidelines came out, it worried me that many of the leading guideline writers had relationships with device companies (which, of course, are disclosed, though tucked away at the very end of the tome). Now that a lot more evidence has emerged showing that ICDs do save lives, I worry less. I realize it’s probably hard to assemble a good number of people at that level who don’t have some potential conflicts of interest, so I want to trust that the differing interests of the experts in the room balance out in the end. As Nihar put it well, though, trust but verify…

  5. My gripe with the guidelines
    I agree that our guidelines are too long, and I think the Europeans are much more on the mark here with a more pared-down, essential list of recommendations. Although I do see a delay in data being incorporated into guidelines, I am not sure this is always a negative, as I think the guidelines tend to be too aggressive in “rewarding” new agents (and the companies that make them) with class I recommendations based on trials that reach statistical significance and FDA approval standards but represent only modest incremental advances. A great example is the GP IIb/IIIa inhibitors. While they are useful agents in selected circumstances, the data were never strong enough to merit a class I recommendation.