Like you said, the calculator gives the same point estimate from the model each time you enter the same input variables. There is no variation in that.

However, as you also astutely infer, the model itself could be used to yield confidence intervals for these point estimates. The calculation is complex and, I believe, requires the raw data and the variance-covariance matrix of the model coefficients. I have discussed this with senior biostatisticians, who tell me it can be done.

I think there is some value in thinking, at least for theoretical purposes, about confidence intervals in this situation. Sure, the risk score is only an estimate, but it is an estimate derived from a particular study sample and may not hold for the underlying population. If you base the decision not to treat with statins on an estimate of 7.0%, wouldn’t one be interested to know the range that covers the true risk 95% of the time? (especially if the upper end of this range goes well past the treatment threshold of 7.5%)

As an aside, another thing we are interested in is looking at natural variation in the risk prediction estimates from the model. For example, we are interested in repeating lipids and BP 24 hours after the initial entry of the input variables into the risk model. This is another way to think about variability that is less statistical.

That said, I am not trying to direct the conversation to the technicalities of confidence limits, rather I was using this idea of extremely wide limits simply to underscore the fact that individual risk is, as you say, indeed a difficult thing to contemplate.

Many thanks.

You mentioned in your last comment that the confidence interval of the risk estimate is extremely wide for a given individual. I’m not sure that it even makes sense to talk about a confidence interval around the calculated risk estimate of an individual.

The calculator itself has no variability. It will give you the same result with the same input variables, time and time again. In that sense, it has a confidence interval of zero width.

The model upon which the calculator is based probably has a confidence interval, but how would you calculate it? I suppose a statistician could somehow combine the confidence intervals associated with all the beta coefficients that go into the model to give a cumulative confidence interval, but that is way over my head.

But even the beta coefficients don’t have one confidence interval. The beta coefficients give the relationship between a unit change in a covariate and its effect on the overall model. But within each cohort, the confidence interval around the model’s estimate will vary over the range of the covariate. There are data-sparse and data-dense areas within the cohort, and in the data-sparse areas the confidence interval would be much broader.
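For anyone curious about the mechanics, combining the coefficient uncertainties is typically done with the variance-covariance matrix of the fitted coefficients (the "delta method"): the interval is built on the linear-predictor scale and then transformed back to the risk scale. Here is a minimal sketch assuming a simple logistic model; the coefficients, covariance matrix, and patient values below are invented purely for illustration (the actual pooled cohort equations are Cox models with different covariates):

```python
import numpy as np

def risk_ci(x, beta, vcov, z=1.96):
    """Approximate 95% CI for a predicted risk from a logistic model.

    Builds the interval on the linear-predictor (log-odds) scale, where
    the normal approximation is reasonable, then transforms to the risk
    scale via the inverse logit.
    """
    x = np.asarray(x, dtype=float)
    lp = x @ beta                    # linear predictor (log-odds)
    se = np.sqrt(x @ vcov @ x)      # SE of the linear predictor
    expit = lambda t: 1.0 / (1.0 + np.exp(-t))
    return expit(lp), expit(lp - z * se), expit(lp + z * se)

# Invented numbers for illustration only -- NOT the real ACC/AHA model.
beta = np.array([-6.0, 0.04, 0.01])           # intercept, age, SBP
vcov = np.array([[0.20,   -0.002,  -0.001],   # variance-covariance
                 [-0.002,  0.0001,  0.0],     # matrix of the fitted
                 [-0.001,  0.0,     0.00005]])  # coefficients
x = np.array([1.0, 55.0, 130.0])              # intercept term, age 55, SBP 130

risk, lo, hi = risk_ci(x, beta, vcov)
```

With these made-up numbers the point estimate lands near 7.6%, yet the 95% interval runs from roughly 1% to over 30% — exactly the kind of individual-level width being discussed.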

So I’m not sure that it makes sense to think about a confidence interval around the calculated ASCVD risk. It is only an estimate. It is a number that provides a starting point for a discussion with the patient about risk factors and statin treatment. The number only makes sense when it is put into context. How does an individual’s estimate compare with others’, and is the number high enough to warrant a statin?

I totally agree with your point in your article. Estimating individual risk is a difficult thing to contemplate.

However, I would like to refocus the conversation a little, if possible. This piece is not about the pros and cons of statins in primary prevention. Personally, I feel there is a very strong evidence base for statins in targeted higher-risk primary prevention individuals. Recent data, including those from the Cochrane Collaboration, support this. However, I am not trying to force this belief on any doubters or make this the focus.

Rather, this conversation relates to a more fundamental concept, that of risk. The primary focus on the patient-physician conversation in the new guidelines requires that we now explain risk accurately to patients. This was the motivation for the piece. I have been humbled to recognize how difficult it is to truly know an individual person’s risk. Our AJC review explores many reasons for this, incorporating analogies from other fields of science that also depend on risk and probability.

This uncertainty about individual risk required a simple change in how I personally think about risk and, more importantly, how I explain it to patients. Risk estimates are accurate for groups, but they are not personalized, and the confidence interval is extremely wide for a given individual. I am concerned that some providers may not recognize this limitation and could over-rely on risk scores, gaining a false sense of security in some persons who may be ‘outliers’ in the risk model (where the model does not ‘fit’ and underestimates their personal risk). Admittedly, this concept is not new; indeed, it gets a brief disclaimer in the guidelines, but it appears to be under-recognized in general practice.

I would be interested to hear how other physicians describe risk to their patients.

Finally, I generally endorse both the risk assessment and cholesterol treatment guidelines, landmark documents, simply because I do not have just one patient: for groups of patients, I know that by following the guidelines I will get things right on average. However, by not over-relying on risk estimates, and by using my clinical acumen when necessary, I hope to get things right more often than on average.

Regarding applying calculations of risk based on the presence of risk factors, wouldn’t you want to allow that the 100 people put into the group were NOT identical to him? That group would comprise a wide variety of people who shared the characteristics considered in the calculation of risk. There would be no one else quite like him in the group. That would reinforce the idea of population risk estimates. The public has been hearing about such risks for a long time. Even if we used the CAC to help refine our risk estimates, wouldn’t we still be doing population risk estimates?