Prediction models inform decisions in many areas of medicine. Most models are fitted once and then applied to new (future) patients, even though model coefficients can vary over time as patients’ clinical characteristics and disease risk change. However, the optimal method for detecting changes in model parameters has not been rigorously assessed.
We simulated data informed by post-lung transplant mortality data and tested the following two approaches for detecting model change: (1) the “Direct Approach,” which compares the coefficients of the model refit on recent data to those at baseline; and (2) “Calibration Regression,” which fits a logistic regression model of the log-odds of the observed outcomes versus the linear predictor from the baseline model (i.e., the log-odds of the predicted probabilities obtained from the baseline model) and tests whether the intercept and slope differ from 0 and 1, respectively. Four scenarios were simulated using logistic regression for binary outcomes as follows: (1) we fixed all model parameters, (2) we varied the outcome prevalence between 0.1 and 0.2, (3) we varied the coefficient of one of the ten predictors between 0.2 and 0.4, and (4) we varied the outcome prevalence and the coefficient of one predictor simultaneously.
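As an illustration only (the abstract does not specify software), a minimal sketch of the Calibration Regression check is given below, assuming Python with statsmodels and scipy; the function name and the inputs y_new (observed binary outcomes on recent data) and p_baseline (predicted probabilities from the baseline model) are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def calibration_regression_test(y_new, p_baseline):
    """Fit logit{P(y=1)} = a + b*LP on recent data, where LP is the
    linear predictor (log-odds) from the baseline model, then test
    whether the intercept differs from 0 and the slope from 1."""
    lp = np.log(p_baseline / (1.0 - p_baseline))   # baseline linear predictor
    X = sm.add_constant(lp)                        # columns: [1, LP]
    fit = sm.Logit(y_new, X).fit(disp=0)
    a_hat, b_hat = fit.params
    p_intercept = fit.pvalues[0]                   # Wald test of intercept = 0
    z_slope = (b_hat - 1.0) / fit.bse[1]           # Wald test of slope = 1
    p_slope = 2 * stats.norm.sf(abs(z_slope))
    return {"intercept": a_hat, "slope": b_hat,
            "p_intercept_ne_0": p_intercept, "p_slope_ne_1": p_slope}
```

A rejection of either hypothesis would flag a change in calibration-in-the-large (intercept) or in the overall strength of the baseline linear predictor (slope).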
Calibration Regression tended to detect changes sooner than the Direct Approach, with better performance (e.g., a larger proportion of true claims of change). When the sample size was large, both methods performed well. When two parameters changed simultaneously, neither method performed well.
Neither change detection method examined here proved optimal under all circumstances. However, our results suggest that if one is interested in detecting a change in the overall incidence of an outcome (e.g., the intercept), the Calibration Regression method may be superior to the Direct Approach. Conversely, if one is interested in detecting a change in the coefficient of another model covariate (e.g., a slope), the Direct Approach may be superior.
