A key task of emergency departments is to promptly identify patients who require hospital admission. Early identification ensures patient safety and aids organisational planning. Supervised machine learning algorithms can use data describing historical episodes to make ahead-of-time predictions of clinical outcomes. However, clinical settings are dynamic environments: the underlying data distributions characterising episodes can change over time (data drift), as can the relationship between episode characteristics and associated clinical outcomes (concept drift). In practice, this means deployed algorithms must be monitored to ensure their safety. We demonstrate how explainable machine learning can be used to monitor data drift, using the COVID-19 pandemic as a severe example. We present a machine learning classifier trained on pre-COVID-19 data to identify patients at high risk of admission during an emergency department attendance. We then evaluate our model's performance on attendances occurring pre-pandemic (AUROC of 0.856 with 95% CI [0.852, 0.859]) and during the COVID-19 pandemic (AUROC of 0.826 with 95% CI [0.814, 0.837]). We demonstrate two benefits of explainable machine learning (SHAP) for models deployed in healthcare settings: (1) by tracking the variation in a feature's SHAP value relative to its global importance, a complementary measure of data drift is obtained which highlights the need to retrain a predictive model; (2) by observing relative changes in feature importance, emergent health risks can be identified.
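
The sketch below illustrates the general idea behind point (1): computing per-feature global SHAP importance on a reference window and comparing it against a later window to flag drift. It is a minimal, hedged example only; the model (a scikit-learn GradientBoostingClassifier), the synthetic data, and the drift threshold are stand-ins and do not reflect the paper's actual pipeline or features.

```python
# Minimal sketch of SHAP-based drift monitoring. All data, the model
# choice, and feature names are synthetic stand-ins, not the paper's setup.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Stand-in for the (pre-pandemic) training cohort.
X_train, y_train = make_classification(n_samples=2000, n_features=5,
                                        random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

explainer = shap.TreeExplainer(model)

def mean_abs_shap(X):
    """Per-feature mean |SHAP| value: a global importance for cohort X."""
    return np.abs(explainer.shap_values(X)).mean(axis=0)

# Global importances on the reference window ...
reference = mean_abs_shap(X_train)

# ... compared against a later (e.g. pandemic-era) window. Here drift is
# simulated by shifting one feature's distribution.
X_live = X_train.copy()
X_live[:, 0] += 2.0
live = mean_abs_shap(X_live)

# Relative change in each feature's importance; a large shift flags data
# drift and the potential need to retrain. Epsilon guards near-zero
# importances for uninformative features.
drift = (live - reference) / (reference + 1e-9)
for i, d in enumerate(drift):
    print(f"feature_{i}: relative importance change = {d:+.2f}")
```

In a deployment, such relative importance changes would be tracked over rolling windows of attendances and alerted on when they exceed an agreed threshold.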
