But questions and concerns about its use remain

A prediction model developed with the help of artificial intelligence (AI) identified individuals who would particularly benefit from lung cancer screening more accurately than standard eligibility criteria, and the model missed fewer lung cancers as well, a new analysis found.

Using a form of AI known as a convolutional neural network (CNN), researchers found that their “CXR-LC” model was more sensitive for 12-year incident lung cancer than standard eligibility criteria and yielded a higher positive predictive value (PPV), at 7.3% versus 6.2% (95% CI, 5.2%-7.2%; P=0.012) for the standard criteria, when the two approaches were compared in equally sized screening populations, Michael Lu, MD, MPH, of Harvard Medical School in Boston, and colleagues reported in the Annals of Internal Medicine.

The same deep-learning model also missed 30.7% fewer lung cancer cases in the same screening population, Lu and colleagues noted.

“Automatically flagging smokers who are eligible for lung cancer screening with CT (computed tomography) in the EMR (electronic medical record) would be an important way to improve screening participation, but this has proved difficult,” Lu and colleagues observed. “We found that a CNN (CXR-LC) can identify patterns on the [chest X-ray] image that identify smokers at high risk for 12-year incident lung cancer and lung cancer death.”

CXR-LC was a fusion CNN that took as input a single chest X-ray image plus age, sex, and whether the patient was a current smoker. Based on these inputs, investigators then predicted the incidence of lung cancer over the next 12 years in two large lung cancer screening populations, the Prostate, Lung, Colorectal, and Ovarian Cancer Screening (PLCO) trial and the National Lung Screening Trial (NLST).
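For readers curious what a “fusion” model of this kind looks like in practice, below is a minimal PyTorch-style sketch in which an image backbone’s features are concatenated with simple tabular inputs (age, sex, current-smoker status). The backbone choice, layer sizes, and variable names are illustrative assumptions for this sketch, not the published CXR-LC architecture.

```python
# Minimal sketch of a "fusion" CNN: image features from a backbone are
# concatenated with simple tabular inputs (age, sex, current-smoker flag).
# The backbone, layer sizes, and 3-channel input are illustrative assumptions,
# not the published CXR-LC architecture.
import torch
import torch.nn as nn
import torchvision.models as models

class FusionRiskModel(nn.Module):
    def __init__(self, n_tabular: int = 3):
        super().__init__()
        backbone = models.resnet18(weights=None)  # stand-in image encoder
        backbone.fc = nn.Identity()               # expose 512-d image features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512 + n_tabular, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                     # logit of long-term risk
        )

    def forward(self, image: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(image)                # (batch, 512)
        fused = torch.cat([feats, tabular], dim=1)  # append age/sex/smoking
        return self.head(fused)

# One 224x224 stand-in "chest X-ray" plus [age, sex, current_smoker] per patient.
model = FusionRiskModel()
risk_logit = model(torch.randn(1, 3, 224, 224), torch.tensor([[62.0, 1.0, 1.0]]))
```

The design point is simply that the image features and the tabular risk factors are joined before the final prediction layer, so a single forward pass yields one long-term risk estimate per patient.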

Screening eligibility criteria were those used by the Centers for Medicare & Medicaid Services (CMS) and include an age of 55 to 77 years, a smoking history of 30 or more pack-years, and being either a current smoker or a former smoker who quit within the past 15 years.

As the authors noted, the CMS eligibility criteria, though hailed as a major advance in the prevention of lung cancer death at the time, still miss over half of incident lung cancers.

Investigators compared the performance of their AI model with that of the CMS eligibility criteria and with another screening model, the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial Model 2012 (PLCOm2012) risk score. The PLCOm2012 is a validated lung cancer risk score with state-of-the-art performance that predicts 6-year lung cancer risk from 11 different risk factors, as investigators pointed out.

“The primary outcome for our study was incident lung cancer, which we defined as all lung cancers diagnosed after the participant enrolled in the trial,” investigators noted.

The ability of each of the three models to discriminate 12-year incident lung cancer was assessed using the area under the receiver-operating characteristic curve (AUC). As the authors explained, the AUC describes how well a model discriminates between patients who go on to develop lung cancer and those who do not, where an AUC of 1 indicates perfect performance and an AUC of 0.5 is equivalent to random chance.
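As a toy illustration of the metric (not study data), the scikit-learn snippet below computes an AUC from made-up risk scores and outcomes; a perfect model would score 1.0 and a coin flip about 0.5.

```python
# Toy illustration of the AUC described above: roc_auc_score compares predicted
# risk scores against eventual outcomes (1 = developed lung cancer).
# These numbers are made up for illustration; they are not study data.
from sklearn.metrics import roc_auc_score

outcomes    = [0, 0, 1, 0, 1, 1, 0, 1]   # 12-year incident lung cancer (toy labels)
risk_scores = [0.05, 0.10, 0.40, 0.45, 0.80, 0.30, 0.15, 0.70]  # model output (toy)

print(roc_auc_score(outcomes, risk_scores))  # prints 0.875; 1.0 = perfect, 0.5 = chance
```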

In the PLCO validation set, the CXR-LC model had an AUC of 0.755 for predicting incident lung cancer at 12 years compared with an AUC of 0.634 for the CMS eligibility criteria (P<0.001). The performance of the CXR-LC model, meanwhile, was similar to that of the PLCOm2012 risk score in the same validation set.

The deep-learning model also predicted 12-year lung cancer mortality risk.

Again in the PLCO data set, CXR-LC had a higher AUC than the CMS eligibility criteria (0.762 versus 0.638; P<0.001) and an AUC similar to that of the PLCOm2012 risk score.

The authors cautioned that they do not recommend using chest X-rays solely to assess lung cancer risk. Rather, they envision a future in which an automated EMR tool running CXR-LC could analyze existing chest X-rays from outpatient smokers.

“The process of analyzing an image using CXR-LC takes less than half a second using standard chest radiographs on a local, consumer-grade computer,” they noted. “[And h]igh CXR-LC risk would trigger an EMR alert to do a targeted interview to assess risk and discuss lung cancer screening.”

They also suggested that combining CXR-LC with a risk model such as the PLCOm2012 could improve long-term lung cancer risk prediction, provided both chest X-ray images and detailed risk factor information are available.

Commenting on the use of AI to assess lung cancer risk, Paul Pinsky, PhD, of the National Cancer Institute in Bethesda, Maryland, pointed out that the U.S. Preventive Services Task Force has expressed concern that using more complex, computer-based risk prediction models to determine eligibility for lung cancer screening might become a barrier to broader implementation of such screening, especially given the very low current rate of CT screening, even among CMS-eligible patients.

“This concern would presumably be greater for an AI-based prediction tool than a standard risk model,” Pinsky suggested.

Moreover, if a model is not explainable to either physicians or patients, “patients and physicians may lack confidence in its predictions,” he stated.

Pinsky also questioned whether identifying increased lung cancer risk through data mining of patients’ electronic health records (EHRs) is really worthwhile when the potential to mitigate that risk appears to be modest. “For lung cancer screening, even among eligible persons, the risk for lung cancer is not that high (about 2% within 6 years),” he pointed out. Moreover, even several rounds of screening will reduce that risk by only 15% to 20%.
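To make the scale of that benefit concrete, the short calculation below applies a 15% to 20% relative reduction to the roughly 2% baseline risk Pinsky cites; the figures are back-of-the-envelope arithmetic based on the quoted numbers, not study results.

```python
# Back-of-the-envelope arithmetic behind the quoted figures: a 15%-20% relative
# reduction applied to a ~2% baseline 6-year risk is a small absolute change.
baseline_risk = 0.02  # ~2% 6-year lung cancer risk (quoted above)
for relative_reduction in (0.15, 0.20):
    absolute_reduction = baseline_risk * relative_reduction
    print(f"{relative_reduction:.0%} relative reduction -> "
          f"{absolute_reduction:.2%} absolute (residual risk "
          f"{baseline_risk - absolute_reduction:.2%})")
# 15% relative reduction -> 0.30% absolute (residual risk 1.70%)
# 20% relative reduction -> 0.40% absolute (residual risk 1.60%)
```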

There are also harms associated with lung cancer screening, including false positives and unnecessary invasive procedures, as Pinsky pointed out. More broadly, Pinsky questioned whether patients would even want clinicians to use AI systems to assess their records in an effort to identify conditions for which they may be at high risk. Should patients not first give consent before anyone applies AI to their EHRs?

“Further, if the algorithm has limited explainability, how do physicians present the findings to patients?” Pinsky asked.

“The use of patient EHR data to assess disease risk will likely continue to grow in the near future,” he acknowledged. “[But a]long with the potential benefit, there are concerns among patients, physicians, and health care organizations about how to responsibly manage this use.”

“This will be an important field of research going forward,” he predicted.

  1. A model developed with the help of artificial intelligence performed better at predicting long-term incident lung cancer than standard eligibility criteria in at-risk screening populations.

  2. The deep-learning model based on a single chest X-ray image plus simple EMR data missed fewer lung cancers in screening populations than standard screening criteria.

Pam Harrison, Contributing Writer, BreakingMED™

A graphics processing unit used for this research was donated to Lu as an unrestricted gift through the Nvidia Corporation Academic Program. Lu reported research funding as a co-investigator to MGH from Kowa Company Limited and Medimmune/AstraZeneca, personal fees from PQBypass unrelated to this work, and common stock in Nvidia and AMD. Massachusetts General Hospital is exploring licensing the CXR-LC algorithm.

Pinsky had no conflicts of interest to disclose.
