To develop and evaluate a deep learning model that generates synthetic pulmonary perfusion images from clinical 4DCT images of patients undergoing radiotherapy for lung cancer.
A clinical data set of 58 pre- and post-radiotherapy 99mTc-labelled macroaggregated albumin (MAA) SPECT perfusion studies (32 patients), each with a contemporaneous 4DCT study, was collected. Using the inhale and exhale phases of the 4DCT as input and the MAA-SPECT as ground truth, a 3D residual network was trained to generate synthetic perfusion images. Training used five-fold cross-validation over the 50-study training set, with twenty model instances trained per fold; the highest-performing instance from each fold was selected for inference on the eight-study hold-out test set. A manual lung segmentation was used to compute correlation metrics restricted to voxels within the lungs. For the pre-treatment test cases (N=5), 50th-percentile contours of well-perfused lung were generated from both the clinical and synthetic perfusion images, and their agreement was quantified.
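For concreteness, the sketch below shows one way such a network could be set up in PyTorch: a stack of 3D residual blocks mapping a two-channel inhale/exhale 4DCT input to a single-channel synthetic perfusion volume. The layer widths, block count, and normalization choices are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch (assumptions: PyTorch; illustrative widths/depth) of a
# 3D residual network for 4DCT -> synthetic perfusion prediction.
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Two 3x3x3 convolutions with an identity skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv3d(channels, channels, 3, padding=1)
        self.norm1 = nn.InstanceNorm3d(channels)
        self.norm2 = nn.InstanceNorm3d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.act(self.norm1(self.conv1(x)))
        out = self.norm2(self.conv2(out))
        return self.act(out + x)  # residual (skip) connection

class PerfusionResNet3D(nn.Module):
    """Input: (B, 2, D, H, W) inhale/exhale CT phases.
    Output: (B, 1, D, H, W) synthetic perfusion volume."""
    def __init__(self, width: int = 32, n_blocks: int = 6):
        super().__init__()
        self.stem = nn.Conv3d(2, width, 3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock3D(width) for _ in range(n_blocks)])
        self.head = nn.Conv3d(width, 1, 1)

    def forward(self, ct_phases):
        return self.head(self.blocks(self.stem(ct_phases)))

# Example forward pass on a dummy inhale/exhale CT pair.
model = PerfusionResNet3D()
dummy = torch.randn(1, 2, 64, 64, 64)   # (batch, phases, D, H, W)
synthetic_perfusion = model(dummy)       # -> (1, 1, 64, 64, 64)
```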
Across the hold-out test set, our deep learning model predicted perfusion with a Spearman correlation coefficient of 0.70 (IQR: 0.61–0.76) and a Pearson correlation coefficient of 0.66 (IQR: 0.49–0.73). Agreement between the functional avoidance contour pairs was a Dice coefficient of 0.803 (IQR: 0.750–0.810) and an average surface distance of 5.92 mm (IQR: 5.68–7.55 mm).
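As an illustration of how these metrics can be computed, here is a NumPy/SciPy sketch. It assumes `pred` and `gt` are co-registered 3D arrays, `lung_mask` is a boolean lung segmentation, and `spacing` gives voxel dimensions in mm; these names are hypothetical, and the symmetric mean surface distance shown is one common definition, not necessarily the paper's exact implementation.

```python
# Sketch (assumed inputs: co-registered 3D arrays `pred`, `gt`; boolean
# `lung_mask`; voxel `spacing` in mm) of the reported evaluation metrics.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from scipy.ndimage import binary_erosion, distance_transform_edt

def lung_correlations(pred, gt, lung_mask):
    """Spearman and Pearson correlations restricted to in-lung voxels."""
    p, g = pred[lung_mask], gt[lung_mask]
    return spearmanr(p, g)[0], pearsonr(p, g)[0]

def percentile_contour(perfusion, lung_mask, pct=50.0):
    """Binary mask of well-perfused lung: in-lung voxels at or above
    the given in-lung intensity percentile (50th percentile here)."""
    thresh = np.percentile(perfusion[lung_mask], pct)
    return lung_mask & (perfusion >= thresh)

def dice(a, b):
    """Dice overlap between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def average_surface_distance(a, b, spacing):
    """Symmetric mean surface-to-surface distance in mm (one common
    definition of average surface distance)."""
    surf_a = a & ~binary_erosion(a)  # boundary voxels of mask a
    surf_b = b & ~binary_erosion(b)
    # EDT of each surface's complement gives, at every voxel, the
    # distance (in mm, via `sampling`) to the nearest surface voxel.
    d_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    d_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    return 0.5 * (d_to_b[surf_a].mean() + d_to_a[surf_b].mean())
```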
We demonstrate that, from 4DCT alone, a deep learning model can generate synthetic perfusion images with potential application in functional avoidance treatment planning.

© 2021 Institute of Physics and Engineering in Medicine.