Table 1 Terminology

From: Performance of the Framingham risk models and pooled cohort equations for predicting 10-year risk of cardiovascular disease: a systematic review and meta-analysis

Case-mix/patient spectrum

Characteristics of the study population (e.g. age, gender distribution).

Prediction horizon

Time frame in which the model predicts the outcome (e.g. predicting 10-year risk of developing a CVD event).

External validation

Estimating the predictive performance of an existing prediction model in a dataset or study population other than the dataset from which the model was developed.

Predictive performance

Accuracy of the predictions made by a prediction model, often expressed in terms of discrimination or calibration.

Discrimination

Ability of the model to distinguish between people who did and did not develop the event of interest, often quantified by the c-statistic.

Concordance (c)-statistic

Statistic that quantifies the probability that, for any two individuals of whom one developed the outcome and the other did not, the former has a higher predicted probability according to the model than the latter. A c-statistic of 1 means perfect discriminative ability, whereas a model with a c-statistic of 0.5 is no better than flipping a coin [17].
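
To make the pairwise definition concrete, the following is a minimal Python sketch that counts concordant pairs directly, counting ties as half-concordant (a common convention). The risks and outcomes are made up for illustration and are not data from the review.

```python
# Minimal sketch of the pairwise c-statistic; toy data, not from the review.
from itertools import product

def c_statistic(predicted_risk, outcome):
    """Share of (event, non-event) pairs where the event has the higher
    predicted risk; ties count as half-concordant."""
    events = [p for p, y in zip(predicted_risk, outcome) if y == 1]
    non_events = [p for p, y in zip(predicted_risk, outcome) if y == 0]
    concordant = sum(
        1.0 if pe > pn else 0.5 if pe == pn else 0.0
        for pe, pn in product(events, non_events)
    )
    return concordant / (len(events) * len(non_events))

risks = [0.05, 0.10, 0.20, 0.40, 0.70]  # hypothetical predicted 10-year risks
events = [0, 0, 1, 0, 1]                # 1 = developed a CVD event
print(c_statistic(risks, events))       # 5/6 ~ 0.83 for this toy data
```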

Calibration

Agreement between observed event risks and event risks predicted by the model.

Observed versus expected (OE) ratio

The ratio of the total number of outcome events that occurred (e.g. within 10 years) to the total number of events predicted by the model. The OE ratio can be calculated for the entire study population (further referred to as ‘total OE ratio’) or within categories of predicted risk.
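
As a minimal illustration with made-up numbers, the total OE ratio divides the observed event count by the expected event count, i.e. the sum of the model's predicted risks:

```python
# Minimal sketch of the total OE ratio; toy numbers, not from the review.
def oe_ratio(observed_events, predicted_risks):
    """Observed events divided by expected events (sum of predicted risks)."""
    return observed_events / sum(predicted_risks)

predicted = [0.05, 0.10, 0.20, 0.40, 0.70]  # hypothetical 10-year risks
observed = 2                                # events actually seen in follow-up
print(oe_ratio(observed, predicted))  # >1: model under-predicts; <1: over-predicts
```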

Calibration slope

Measure of the strength of the predictor effects; it ideally equals 1. A calibration slope < 1 indicates that predictions are too extreme (low-risk individuals receive a predicted risk that is too low, and high-risk individuals one that is too high). Conversely, a slope > 1 indicates that predictions are too moderate [18, 19].
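
The calibration slope is usually estimated by regressing the observed outcomes on the model's linear predictor. The sketch below assumes a binary-outcome (logistic) setting as a stand-in for the survival models reviewed in the paper and uses made-up data; the linear predictor is recovered as the log-odds of the predicted risk.

```python
# Minimal sketch of estimating the calibration slope; toy data, logistic
# stand-in for the survival setting.
import math
import statsmodels.api as sm

predicted = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.15, 0.35]
outcome   = [0,   0,   1,   0,   1,   0,   1,   1,   0,    1   ]

lp = [math.log(p / (1 - p)) for p in predicted]  # linear predictor (log-odds)

fit = sm.Logit(outcome, sm.add_constant(lp)).fit(disp=0)
intercept, slope = fit.params
print(slope)  # ~1 ideal; <1 predictions too extreme; >1 too moderate
```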

Model updating/recalibration

When externally validating a prediction model, adjusting the model to the dataset in which it is validated in order to improve its predictive performance.

Updating the baseline hazard or risk

When externally validating a prediction model, adapting the original baseline hazard or intercept of the model to the dataset in which it is validated. This updating method corrects for differences in observed outcome incidence between the development and validation datasets.

Updating the common slope

When externally validating a prediction model, adapting the beta coefficients of the model using a single correction factor, to proportionally adjust for changes in predictor-outcome associations [20].
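
Both updating methods can be sketched in the same logistic stand-in (toy data as in the calibration-slope sketch above): the baseline risk is updated by re-estimating only the intercept while keeping the original coefficients fixed via an offset, and the common slope is updated by estimating a single factor that rescales all coefficients at once.

```python
# Minimal sketch of two recalibration methods; toy data, logistic stand-in.
import numpy as np
import statsmodels.api as sm

predicted = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.15, 0.35])
outcome   = np.array([0,   0,   1,   0,   1,   0,   1,   1,   0,    1   ])
lp = np.log(predicted / (1 - predicted))  # original linear predictor

# (1) Update the baseline risk: original coefficients enter as a fixed
# offset, so only the intercept is re-estimated (corrects for incidence).
fit_base = sm.Logit(outcome, np.ones_like(lp), offset=lp).fit(disp=0)
lp_base_updated = fit_base.params[0] + lp

# (2) Update the common slope: one multiplicative correction factor b,
# equivalent to multiplying every original beta coefficient by b.
fit_slope = sm.Logit(outcome, sm.add_constant(lp)).fit(disp=0)
a, b = fit_slope.params
lp_slope_updated = a + b * lp

updated_risk = 1 / (1 + np.exp(-lp_slope_updated))  # recalibrated predictions
```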

Model revision

Taking the predictors of an existing, previously developed model and refitting them in the external dataset by estimating new predictor-outcome associations (e.g. regression coefficients).
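
A minimal sketch of model revision under the same logistic stand-in: the original model's predictors are kept (hypothetical 'age' and 'smoker' variables here), but their coefficients are re-estimated entirely in the external dataset.

```python
# Minimal sketch of model revision; simulated external data, hypothetical
# predictors, logistic stand-in for the survival setting.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
age = rng.normal(60, 10, n)        # hypothetical predictor
smoker = rng.integers(0, 2, n)     # hypothetical predictor
lp_true = -7 + 0.08 * age + 0.6 * smoker
outcome = (rng.random(n) < 1 / (1 + np.exp(-lp_true))).astype(int)

X = sm.add_constant(np.column_stack([age, smoker]))
revised = sm.Logit(outcome, X).fit(disp=0)
print(revised.params)  # new intercept and predictor-outcome associations
```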