FRIDAY, June 1, 2018 (HealthDay News) — The diagnostic performance of a deep learning convolutional neural network (CNN) seems better than that of dermatologists, according to a study published online May 28 in the Annals of Oncology.

Holger A. Haenssle, M.D., from the University of Heidelberg in Germany, and colleagues trained and validated Google’s Inception v4 CNN architecture using dermoscopic images and corresponding diagnoses. A 100-image test set was used in a comparative cross-sectional reader study (level-I: dermoscopy only; level-II: dermoscopy plus clinical information and images). The main outcome measures were sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve (AUC) for diagnostic classification of lesions by the CNN versus an international group of 58 dermatologists.
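The study's exact training pipeline is not described in this summary. As a rough, hedged illustration of the general approach (fine-tuning a pretrained Inception v4 backbone for binary lesion classification), the sketch below uses the timm library and a hypothetical ImageFolder-style dataset path; neither is the authors' code or data.

```python
# Illustrative sketch only -- not the authors' training code. Assumes the
# timm library supplies a pretrained Inception v4 and that dermoscopic
# images are arranged in ImageFolder-style class directories.
import timm
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Pretrained Inception v4 backbone with a new two-class head
# (melanoma vs. benign nevus).
model = timm.create_model("inception_v4", pretrained=True, num_classes=2)

# Inception v4 expects 299x299 inputs; the normalization statistics here
# are the common ImageNet values (an assumption for this sketch).
preprocess = transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "dermoscopy/train" is a hypothetical path, not from the study.
train_data = datasets.ImageFolder("dermoscopy/train", transform=preprocess)
loader = DataLoader(train_data, batch_size=16, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:      # one pass over the training images
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
```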

The researchers found that in level-I, dermatologists' sensitivity and specificity for lesion classification were 86.6 and 71.3 percent, respectively. With additional clinical information (level-II), sensitivity and specificity improved to 88.9 and 75.7 percent, respectively. At those sensitivities (86.6 percent in level-I and 88.9 percent in level-II), the CNN's ROC curve showed higher specificity than the dermatologists achieved (82.5 versus 71.3 and 75.7 percent, respectively). The CNN's ROC AUC was also greater than the dermatologists' mean ROC AUC (0.86 versus 0.79).
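The comparison above reads the CNN's specificity off its ROC curve at the dermatologists' mean sensitivities. A minimal sketch of how such numbers are computed, using made-up labels and scores rather than the study's data:

```python
# Minimal sketch with made-up labels/scores -- not the study's data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # 1 = melanoma, 0 = benign nevus
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])   # CNN malignancy probabilities

# ROC curve: false positive rate (1 - specificity) vs. true positive rate (sensitivity)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

# Specificity of the CNN at (at least) a target sensitivity, e.g. the
# dermatologists' mean level-I sensitivity of 86.6 percent.
target_sensitivity = 0.866
idx = np.argmax(tpr >= target_sensitivity)   # first operating point reaching the target
specificity_at_target = 1.0 - fpr[idx]

print(f"AUC = {auc:.2f}, "
      f"specificity at {target_sensitivity:.1%} sensitivity = {specificity_at_target:.1%}")
```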

“Most dermatologists were outperformed by the CNN,” the authors write. “Irrespective of any physicians’ experience, they may benefit from assistance by a CNN’s image classification.”

Copyright © 2018 HealthDay. All rights reserved.