Multi-modality magnetic resonance imaging (MRI) can reveal distinct tissue patterns in the human body and is crucial to clinical diagnosis. However, obtaining diverse and plausible multi-modality MR images remains challenging because of acquisition cost, noise, and artifacts. For the same lesion, different MRI modalities differ substantially in contextual information, coarse location, and fine structure. To improve both generation and segmentation performance, a dual-scale multi-modality perceptual generative adversarial network (DualMMP-GAN) based on cycle-consistent generative adversarial networks (CycleGAN) is proposed. Dilated residual blocks are introduced to enlarge the receptive field while preserving the structural and contextual information of images. A dual-scale discriminator is constructed that optimizes the generator by discriminating patches at two scales, so that lesions of different sizes are represented. A perceptual consistency loss is introduced to learn the mapping between the generated and target modalities at different semantic levels. Moreover, generative multi-modality segmentation (GMMS), which combines given modalities with generated modalities, is proposed for brain tumor segmentation. Experimental results show that DualMMP-GAN outperforms CycleGAN and several state-of-the-art methods in terms of PSNR, SSIM, and RMSE in most tasks. In addition, the Dice score, sensitivity, specificity, and Hausdorff95 obtained from segmentation with GMMS are all higher than those obtained from a single modality. The objective metrics obtained by the proposed methods are close to the upper bounds obtained from real multiple modalities, indicating that GMMS can achieve an effect similar to that of real multi-modality data. Overall, the proposed methods show promise as an effective tool for clinical brain tumor diagnosis.
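The abstract names three architectural components: dilated residual blocks, a dual-scale patch discriminator, and a perceptual consistency loss. The sketch below illustrates those ideas in PyTorch under assumed details; channel widths, dilation rates, layer choices, the feature extractor, and all module names are placeholders, not the authors' implementation.

```python
# Illustrative sketch only: widths, dilation rates, and the feature
# extractor are assumed placeholders, not the paper's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedResidualBlock(nn.Module):
    """Residual block with dilated 3x3 convolutions: enlarges the receptive
    field without reducing resolution, helping preserve structure and context."""

    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # identity shortcut


class DualScalePatchDiscriminator(nn.Module):
    """Two PatchGAN-style discriminators applied at full and half resolution,
    so real/fake decisions cover patches matching lesions of different sizes."""

    def __init__(self, in_channels: int = 1, base: int = 64):
        super().__init__()

        def patch_disc():
            return nn.Sequential(
                nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
                nn.InstanceNorm2d(base * 2),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(base * 2, 1, 4, padding=1),  # patch-wise real/fake map
            )

        self.fine = patch_disc()
        self.coarse = patch_disc()

    def forward(self, x: torch.Tensor):
        fine_out = self.fine(x)
        coarse_out = self.coarse(F.avg_pool2d(x, kernel_size=3, stride=2, padding=1))
        return fine_out, coarse_out


def perceptual_consistency_loss(fake, target, feature_extractor, layer_ids=(2, 5, 8)):
    """L1 distance between features of the generated and target modality taken
    at several depths of a fixed network, i.e. at different semantic levels."""
    loss = fake.new_zeros(())
    f, t = fake, target
    for idx, layer in enumerate(feature_extractor):
        f, t = layer(f), layer(t)
        if idx in layer_ids:
            loss = loss + F.l1_loss(f, t)
    return loss
```

A fixed pretrained encoder or the discriminator's own intermediate layers could serve as the feature extractor here; which network the paper actually uses is not specified in the abstract.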
