To overcome the computational burden of processing three-dimensional (3D) medical scans and the lack of spatial information in two-dimensional (2D) medical scans, we propose a novel segmentation method that integrates the segmentation results of three densely connected 2D convolutional neural networks (2D-CNNs). To combine low-level and high-level features, we add densely connected blocks to the network architecture so that low-level features are not lost as the network deepens during training. Further, to address the blurred boundary of the glioma edema region, we superimpose and fuse the T2-weighted fluid-attenuated inversion recovery (FLAIR) and T2-weighted (T2) modality images to enhance the edema region. For network training, we improve the cross-entropy loss function to effectively mitigate over-fitting. On the Multimodal Brain Tumor Image Segmentation Challenge (BraTS) datasets, our method achieves Dice similarity coefficient values of 0.84, 0.82, and 0.83 on the BraTS2018 training set; 0.82, 0.85, and 0.83 on the BraTS2018 validation set; and 0.81, 0.78, and 0.83 on the BraTS2013 test set for the whole tumor, tumor core, and enhancing core, respectively. These results show that the proposed method achieves promising accuracy with fast processing, demonstrating good potential for clinical application.
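
The abstract does not give implementation details, but the dense-connectivity idea can be illustrated with a minimal sketch. Assuming PyTorch, and with the `growth_rate` and layer count below as hypothetical parameters, each layer receives the concatenation of all preceding feature maps, which is how low-level features are carried forward as depth increases:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Densely connected block: every layer consumes the concatenation
    of the block input and all earlier layer outputs, so shallow
    (low-level) features remain available at depth."""
    def __init__(self, in_channels: int, growth_rate: int = 16, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1, bias=False),
            ))
            channels += growth_rate  # the next layer sees all earlier outputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```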
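
Similarly, the FLAIR/T2 superposition can be sketched as a simple per-slice fusion. The exact fusion rule is not specified in the abstract, so the min-max normalization and the blending weight `alpha` below are illustrative assumptions:

```python
import numpy as np

def fuse_flair_t2(flair: np.ndarray, t2: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Superimpose a FLAIR slice and a T2 slice to enhance the edema
    region. The normalization and blending weight `alpha` are
    assumptions, not the paper's stated procedure."""
    def normalize(img: np.ndarray) -> np.ndarray:
        img = img.astype(np.float32)
        return (img - img.min()) / (img.max() - img.min() + 1e-8)
    return alpha * normalize(flair) + (1.0 - alpha) * normalize(t2)
```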
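
The reported scores use the standard Dice similarity coefficient, DSC = 2|P ∩ T| / (|P| + |T|), which can be computed per tumor region (whole tumor, tumor core, enhancing core) from binary masks as follows:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between a binary prediction mask
    and a binary ground-truth mask: 2*|P & T| / (|P| + |T|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))
```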
