Sharing patient data across institutions to train generalizable deep learning models is challenging due to regulatory and technical hurdles. Distributed learning, where model weights are shared instead of patient data, presents an attractive alternative. Cyclical weight transfer (CWT) has recently been demonstrated as an effective distributed learning method for medical imaging with homogeneous data across institutions. In this study, we optimize CWT to overcome performance losses from variability in training sample sizes and label distributions across institutions.
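To illustrate the cyclical weight transfer scheme described above, the following is a minimal sketch, assuming a PyTorch-style model and one DataLoader per institution; the function name, training budget, and loop structure are hypothetical and are not the implementation used in this study.

```python
def cyclical_weight_transfer(model, institution_loaders, make_optimizer,
                             loss_fn, cycles=10, local_iters=100):
    """Train one shared model by cycling its weights through institutions;
    only the weights travel, patient data stays at each site."""
    for _ in range(cycles):
        for loader in institution_loaders:        # visit each institution in turn
            optimizer = make_optimizer(model.parameters())
            batches = iter(loader)
            for _ in range(local_iters):          # fixed local training budget
                try:
                    x, y = next(batches)
                except StopIteration:             # restart the local pass if exhausted
                    batches = iter(loader)
                    x, y = next(batches)
                optimizer.zero_grad()
                loss_fn(model(x), y).backward()
                optimizer.step()
            # model.state_dict() is what would be transferred to the next site;
            # no patient data leaves the institution
    return model
```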
Optimizations included proportional local training iterations, a cyclical learning rate, locally weighted minibatch sampling, and cyclically weighted loss. We evaluated these optimizations on simulated distributed diabetic retinopathy detection and chest radiograph classification tasks.
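To make three of these optimizations concrete, below is an illustrative sketch under assumed PyTorch APIs and simple inverse-frequency weighting; the function names and parameters (e.g., n_k, labels_k) are hypothetical rather than the authors' code.

```python
from torch.utils.data import WeightedRandomSampler

def proportional_local_iters(n_k, n_total, iters_per_cycle):
    """Proportional local training iterations: scale institution k's share of
    each cycle's iterations by its fraction of the total training samples."""
    return max(1, round(iters_per_cycle * n_k / n_total))

def locally_weighted_sampler(labels_k):
    """Locally weighted minibatch sampling: oversample locally rare classes so
    each institution's minibatches approximate a balanced label distribution."""
    counts = {c: labels_k.count(c) for c in set(labels_k)}
    weights = [1.0 / counts[c] for c in labels_k]
    return WeightedRandomSampler(weights, num_samples=len(labels_k),
                                 replacement=True)

def cyclically_weighted_loss_weights(label_counts_k):
    """Cyclically weighted loss: per-class loss weights inversely proportional
    to institution k's label counts, recomputed at each visit in the cycle."""
    total = sum(label_counts_k)
    n_classes = len(label_counts_k)
    return [total / (n_classes * c) if c > 0 else 0.0 for c in label_counts_k]
```

In this sketch, the sampler would be passed to the institution's DataLoader via its sampler argument, and the per-class weights to a weighted cross-entropy loss, before running that institution's local iterations in the CWT cycle.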
Proportional local training iterations mitigated performance losses from sample size variability, achieving 98.6% of the accuracy attained with central hosting on the diabetic retinopathy dataset split with the highest sample size variance across institutions. Locally weighted minibatch sampling and cyclically weighted loss both mitigated performance losses from label distribution variability, achieving 98.6% and 99.1%, respectively, of the accuracy attained with central hosting on the diabetic retinopathy dataset split with the highest label distribution variability across institutions.
Our optimizations improve CWT's ability to handle data variability across institutions. Compared with CWT without optimizations, CWT with optimizations achieved performance significantly closer to that attained with central hosting.
Our work is the first to identify and address challenges of sample size and label distribution variability in simulated distributed deep learning for medical imaging. Future work is needed to address other sources of real-world data variability.
