When managing patients with asthma, a major goal is to reduce hospital visits resulting from the disease. Some healthcare centers are now using machine learning predictive models to determine which patients with asthma are highly likely to experience poor outcomes in the future. “Machine learning is a state-of-the-art method for gaining high prediction accuracy,” explains Gang Luo, PhD. “While it has great potential to improve healthcare, most machine learning models are black boxes and don’t explain their predictions, creating a barrier for use in clinical practice. This has been a well-known problem associated with machine learning for many years.”

Predicting & Explaining Asthma Hospitalization Risk

Recently, Dr. Luo and colleagues built an extreme gradient boosting (XGBoost) machine learning model to predict asthma hospital visits in the subsequent year for patients with asthma. This XGBoost model was found to be more accurate than previous models, but like most machine learning models, it did not offer explanations as to why patients were at risk for poor outcomes. To overcome this barrier, Dr. Luo and colleagues conducted a study—published in JMIR Medical Informatics—in which they developed a method to automatically explain the model’s prediction results and suggest tailored interventions without lowering any of the model’s performance measures.
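The general shape of such a model can be illustrated with a minimal sketch. The code below uses scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost, and the features, data, and outcome definition are invented for illustration; none of it reflects the study's actual model or dataset.

```python
# Minimal sketch of training a gradient-boosted model on tabular data to
# predict next-year asthma hospital visits. GradientBoostingClassifier is
# used here as a stand-in for XGBoost; all features and data are synthetic
# and hypothetical, not from the study.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical tabular features: prior ED visits, controller-medication
# adherence, and baseline FEV1 percent predicted.
n = 1000
X = np.column_stack([
    rng.poisson(1.0, n),      # prior_ed_visits
    rng.uniform(0, 1, n),     # med_adherence
    rng.normal(80, 15, n),    # fev1_percent
])

# Synthetic binary outcome: asthma hospital visit in the subsequent year.
logits = 0.8 * X[:, 0] - 2.0 * X[:, 1] - 0.03 * X[:, 2]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Predicted probability of an asthma hospital visit next year. Note that
# the model outputs only a risk score, with no explanation attached.
risk = model.predict_proba(X_test)[:, 1]
print(f"mean predicted risk on test set: {risk.mean():.2f}")
```

The last comment is the point of the sketch: a boosted-tree model of this kind emits a risk score but no reason for it, which is the gap the study's explanation method fills.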

The automatic explanation function was able to explain prediction results for 89.7% of patients with asthma who were correctly predicted to incur asthma hospital visits in the subsequent year. This percentage is high enough to support routine clinical use of the method. Of note, the researchers also presented several sample rule-based explanations produced by the function to illustrate how it works (Table).
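The general pattern of such rule-based explanations can be sketched in plain code: each rule pairs feature conditions with a tailored intervention, and a patient flagged as high risk who satisfies a rule's conditions receives that rule as an explanation. The rules, thresholds, and interventions below are invented for illustration and are not the study's actual rules.

```python
# Sketch of rule-based explanation with tailored interventions. Every rule,
# threshold, and intervention here is hypothetical, for illustration only.

def matches(rule, patient):
    """Return True if the patient satisfies every condition in the rule."""
    return all(check(patient[feat]) for feat, check in rule["conditions"].items())

RULES = [
    {
        "explanation": "2 or more ED visits for asthma in the prior year "
                       "and low controller-medication adherence",
        "conditions": {
            "prior_ed_visits": lambda v: v >= 2,
            "med_adherence": lambda v: v < 0.5,
        },
        "intervention": "enroll in care management; review inhaler technique",
    },
    {
        "explanation": "at least one oral corticosteroid burst in the prior year",
        "conditions": {"steroid_bursts": lambda v: v >= 1},
        "intervention": "schedule follow-up to reassess controller therapy",
    },
]

def explain(patient):
    """Return (explanation, intervention) pairs for all rules the patient matches."""
    return [(r["explanation"], r["intervention"])
            for r in RULES if matches(r, patient)]

# Hypothetical high-risk patient: matches the first rule but not the second.
patient = {"prior_ed_visits": 3, "med_adherence": 0.3, "steroid_bursts": 0}
for expl, action in explain(patient):
    print(f"Why high risk: {expl}")
    print(f"Suggested intervention: {action}")
```

Because each matched rule carries both the reason and a linked intervention, output of this form can be shown to a clinician directly, without requiring a review of the full record.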

Suggesting Tailored Asthma Interventions

“For the first time, our study showed that we can automatically provide rule-based explanations and suggest tailored interventions for predictions from any black-box machine learning predictive model built on tabular data without degrading any of the model’s performance measures,” says Dr. Luo. “This occurs regardless of whether the outcome of interest has a skewed distribution. Clinicians were able to understand the rule-based explanations. Among all automatic explanation methods for machine learning predictions, our method is the only one that can automatically suggest interventions.”

According to Dr. Luo, clinicians previously needed to manually review long patient records and think of interventions on their own. “This consumes a lot of time, is labor intensive, and may lead to missing important information and interventions,” he says. “Our method can serve as a reminder system to help prevent clinicians from missing these opportunities. It also greatly speeds up the process, because summary information is presented directly to clinicians, who no longer need to sift through long patient records to make an informed decision.”

The study team notes that the automatic explanation function should be viewed as a reminder for decision support rather than a replacement for clinical judgment. It remains the clinician’s responsibility to decide whether to act on the model’s prediction results and apply suggested interventions to their patients. If there are any doubts, clinicians should check their patients’ records before making final decisions on any recommendations.

Impacting Clinician Use of Machine Learning for Patients With Asthma

After further improvements in model accuracy, using the asthma outcome prediction model together with the automatic explanation function could provide decision support to guide the allocation of limited asthma care management resources. This could improve asthma outcomes while reducing resource use and costs.

“Predicting hospital visits for patients with asthma is an urgent need for asthma care management, which is widely used to improve outcomes,” Dr. Luo says. “Researchers have been working on this problem for at least two decades but have repeatedly encountered problems with low prediction accuracy. Our model significantly improved prediction accuracy. In addition, we can now automatically explain the prediction results. These are important factors that impact the willingness of clinicians to use our model in clinical practice. In future research, we plan to test our automatic explanation method on more predictive modeling problems, such as other prediction targets and diseases.”