THURSDAY, Nov. 29, 2018 (HealthDay News) — A deep learning algorithm, CheXNeXt, performs comparably to radiologists in detecting multiple thoracic pathologies in frontal-view chest radiographs, according to a study published online Nov. 20 in PLOS Medicine.

Pranav Rajpurkar, from Stanford University in California, and colleagues compared the performance of a deep learning algorithm with that of practicing radiologists in detecting pathologies on chest radiographs. A convolutional neural network was developed to concurrently detect the presence of 14 different pathologies in frontal-view chest radiographs. CheXNeXt was trained and internally validated on the ChestX-ray8 dataset, with a validation set of 420 images. The algorithm's discriminative performance on the validation set was compared with the micro-averaged performance of nine radiologists using the area under the receiver operating characteristic curve (AUC).
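
For readers curious how such a model is typically structured, the sketch below shows a minimal multi-label classifier of the kind the study describes: one sigmoid output per pathology so all 14 are predicted concurrently. The DenseNet-121 backbone and every setting here are illustrative assumptions, not the paper's published configuration.

```python
# Illustrative sketch only: a multi-label chest X-ray classifier in the
# spirit of CheXNeXt. The DenseNet-121 backbone and hyperparameters are
# assumptions for illustration, not the study's exact configuration.
import torch
import torch.nn as nn
from torchvision import models

NUM_PATHOLOGIES = 14  # the study detects 14 pathologies concurrently

class MultiLabelCXRClassifier(nn.Module):
    def __init__(self, num_labels: int = NUM_PATHOLOGIES):
        super().__init__()
        self.backbone = models.densenet121(weights="IMAGENET1K_V1")
        # Replace the 1,000-class ImageNet head with one output per
        # pathology, so the labels are predicted concurrently.
        in_features = self.backbone.classifier.in_features
        self.backbone.classifier = nn.Linear(in_features, num_labels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Independent per-pathology probabilities (multi-label, not softmax).
        return torch.sigmoid(self.backbone(x))

model = MultiLabelCXRClassifier()
# Multi-label training pairs sigmoid outputs with a per-label binary
# cross-entropy loss rather than a single softmax cross-entropy.
criterion = nn.BCELoss()
```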

The researchers found that CheXNeXt achieved radiologist-level performance on 11 pathologies but fell short on three. Radiologists achieved statistically significantly higher AUCs for cardiomegaly (0.888 versus 0.831), emphysema (0.911 versus 0.704), and hiatal hernia (0.985 versus 0.851). In detecting atelectasis, CheXNeXt performed statistically significantly better than radiologists (AUC, 0.862 versus 0.808). For the other 10 pathologies, the differences were not statistically significant. Radiologists took far longer than CheXNeXt to interpret the 420 images (240 versus 1.5 minutes).
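
The per-pathology AUC comparison can be reproduced in outline as follows. This is a minimal sketch assuming synthetic labels and probabilities purely for illustration; it does not use the study's data, and the pathology list is truncated to the four named above.

```python
# Illustrative sketch of the evaluation metric: per-pathology AUC computed
# from ground-truth labels and predicted probabilities. The arrays below
# are made-up stand-ins, not data from the study.
import numpy as np
from sklearn.metrics import roc_auc_score

pathologies = ["cardiomegaly", "emphysema", "hiatal hernia", "atelectasis"]

# y_true: one binary column per pathology; y_prob: model probabilities.
# A 420-image validation set, mirroring the study's set size.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(420, len(pathologies)))
y_prob = rng.random(size=(420, len(pathologies)))

for i, name in enumerate(pathologies):
    auc = roc_auc_score(y_true[:, i], y_prob[:, i])
    print(f"{name}: AUC = {auc:.3f}")
```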

“This technology may have the potential to improve health care delivery and increase access to chest radiograph expertise for the detection of a variety of acute diseases,” the authors write.

Several authors disclosed financial ties to the biopharmaceutical, medical device, and health care industries.

Abstract/Full Text

Copyright © 2018 HealthDay. All rights reserved.