August 27th, 2012

When Does a Risk Marker Make the Mark?


CardioExchange asked three prevention experts to comment on two recent studies of cardiovascular risk markers, published in JAMA.

THE STUDIES

In an analysis of data from 1330 intermediate-risk participants in the Multi-Ethnic Study of Atherosclerosis, coronary artery calcium provided “the highest improvement in discrimination” for CHD risk, relative to five other risk markers studied.

In a meta-analysis of 14 studies on carotid intima-media thickness, involving nearly 46,000 patients, adding CIMT to risk prediction provided only a small improvement in net reclassification, which the authors deemed “unlikely to be of clinical importance.”

See complete news coverage on CardioExchange.

THE EXPERTS RESPOND

Question: Which, if any, additional risk markers are practical enough to use routinely? How do we determine when a risk marker crosses the threshold of usefulness?

J. Michael Gaziano (Brigham and Women’s Hospital; author of the JAMA editorial on these two studies)

Risk markers have two uses: They can be predictors of risk or targets for intervention. While many novel markers capture various aspects of the atherothrombotic process and reveal valuable insights about mechanism, they often do not greatly enhance our approach to risk assessment in the clinical setting. Risk assessment using a simple prediction model that incorporates a few easy-to-assess risk factors such as age, sex, smoking, and blood pressure has really stood the test of time. Developed in the 1960s by investigators for the Framingham Heart Study, available risk scores provide estimates that place patients into broad risk categories. For most clinical decisions, that’s good enough. Adding new markers generally does not improve the process, for several reasons.

Many new markers are correlated with the older ones; thus, the performance of the models is only modestly improved. While reclassification of patients is used as one metric for improving these models, in clinical practice we are concerned only about reclassification when a patient is near the boundary where a clinical decision is to be made. Improving the accuracy of someone’s risk assessment if that risk is very high or very low may not be useful when it does not alter our clinical decisions for that patient. If a patient is near a decision boundary, we could consider merely repeating the risk assessment score at some point in the future. Repeating a diagnostic test can improve its accuracy and, further, can provide information about its trajectory.

For those few markers that seem to yield a meaningful improvement in the characteristics of the risk modeling, which may be the case for coronary calcium scoring, work remains to be done. We have to weigh the incremental value of the knowledge gained against the costs and the risk of obtaining the results of the novel test. This is needed to better understand for whom and in what situations a novel marker has real utility. Little work in this area has been done. For these reasons, the bar remains high for improving risk-prediction modeling using current risk scores comprising simple, low-cost conventional risk factors.

 

Amit Khera (UT Southwestern Medical Center)

The move from simple hazard ratios and P values (i.e., statistical associations) to more clinically meaningful metrics, such as the net reclassification improvement (NRI), has been incredibly valuable for evaluating risk markers. By current metrics, coronary artery calcium (CAC) scanning is the clear winner and, in my experience, can be very helpful in treatment decisions for individual patients. However, for routine application, the devil is in the details. Beyond short-term (10-year) risk, CAC scanning is not feasible for people age 30 to 40, for whom preventive efforts offer greater long-term benefits. Fewer than 5% of women are intermediate-risk, so our current algorithms undercut the potential value of additional risk markers in this group. Rather than picking the one best marker, it may be best to use them in combination — and differently in various populations or for predicting specific endpoints (e.g., CAC poorly predicts stroke). Most important, we have no consensus (or good evidence) about the appropriate intervention for a particular level of most markers, including CAC score.

 

Khurram Nasir (Baptist Health South Florida; Johns Hopkins University)

In my opinion, these studies offer nothing new. The MESA analysis basically confirms the established value of CAC, relative to other biomarkers and risk factors, in terms of prognostication, discrimination, and reclassification of risk for future CVD. The real question is how to gauge whether “novel” risk markers are useful for clinical management. I believe that such markers will cross the critical threshold only when they are able to guide intensity of treatment by identifying which groups are — and which groups are not — likely to benefit from pharmacotherapy, thereby helping to improve decision making about resource allocation. What we need now is a clinical trial comparing the utility and cost-effectiveness of all candidate markers simultaneously. Until then, from the standpoint of clinical equipoise, CAC appears to be the best choice, as it identifies “appropriate” clinical risk and thereby limits under- and overtreatment when only traditional, Framingham-based risk stratification is used.

Which, if any, additional risk markers do you think are ready for routine use in clinical practice?

One Response to “When Does a Risk Marker Make the Mark?”

  1. Robin Motz, M.D., Ph.D. says:

    But now we have to demonstrate that using CAC as a risk marker for increased probability of ASCVD will lead to clinical interventions that will extend life or decrease morbidity.