The following is a summary of “A generative adversarial neural network with multi-attention feature extraction for fundus lesion segmentation,” published in the October 2023 issue of Ophthalmology by Yuan et al.
Researchers conducted a retrospective study to develop a more accurate fundus lesion segmentation method using a generative adversarial neural network with multi-attention feature extraction.
They developed a generative adversarial network with multi-attention feature extraction to segment diabetic retinopathy lesion regions. The main contribution was an improved residual U-Net with a self-attention mechanism serving as the generative network. This design allowed full extraction of both local and global lesion features while reducing the loss of key feature information. An external attention mechanism was incorporated into the residual U-Net to model the correlation between identical lesion features in different samples, allowing the network to focus on the pertinent parts of the same lesion types across the entire dataset. A discriminative network built on the PatchGAN structure enhanced the generative network’s segmentation ability by distinguishing between true and false samples (see the sketch below).
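The paper’s code is not reproduced in this summary; the following is a minimal PyTorch sketch of the described design, assuming a two-level residual U-Net generator with self-attention and external attention at the bottleneck and a shallow PatchGAN discriminator. All module names, depths, and channel widths here are illustrative assumptions, not the authors’ implementation.

```python
# Illustrative sketch only; hyperparameters and layer choices are assumptions.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual convolutional block used in the encoder/decoder."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out))
        self.skip = nn.Conv2d(c_in, c_out, 1) if c_in != c_out else nn.Identity()
    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

class SelfAttention(nn.Module):
    """Self-attention over spatial positions: captures global lesion context."""
    def __init__(self, c):
        super().__init__()
        self.q, self.k = nn.Conv2d(c, c // 8, 1), nn.Conv2d(c, c // 8, 1)
        self.v = nn.Conv2d(c, c, 1)
        self.gamma = nn.Parameter(torch.zeros(1))
    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)            # B x HW x C'
        k = self.k(x).flatten(2)                            # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)                 # B x HW x HW
        v = self.v(x).flatten(2)                            # B x C x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class ExternalAttention(nn.Module):
    """External attention: shared memory units relate features across samples."""
    def __init__(self, c, m=64):
        super().__init__()
        self.mk = nn.Linear(c, m, bias=False)   # shared key memory
        self.mv = nn.Linear(m, c, bias=False)   # shared value memory
    def forward(self, x):
        b, c, h, w = x.shape
        feat = x.flatten(2).transpose(1, 2)                 # B x HW x C
        attn = torch.softmax(self.mk(feat), dim=1)          # normalize over pixels
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-9)  # then over memory units
        return self.mv(attn).transpose(1, 2).view(b, c, h, w) + x

class ResAttUNet(nn.Module):
    """Residual U-Net generator with attention at the bottleneck (2 levels for brevity)."""
    def __init__(self, in_ch=3, out_ch=1):
        super().__init__()
        self.enc1, self.enc2 = ResBlock(in_ch, 64), ResBlock(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = nn.Sequential(ResBlock(128, 256), SelfAttention(256), ExternalAttention(256))
        self.up2, self.dec2 = nn.ConvTranspose2d(256, 128, 2, stride=2), ResBlock(256, 128)
        self.up1, self.dec1 = nn.ConvTranspose2d(128, 64, 2, stride=2), ResBlock(128, 64)
        self.head = nn.Conv2d(64, out_ch, 1)
    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))                 # lesion probability map

class PatchDiscriminator(nn.Module):
    """PatchGAN discriminator: scores (image, mask) pairs as real/fake per patch."""
    def __init__(self, in_ch=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, 1, 1))                     # patch-level real/fake scores
    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))
```

During adversarial training, the discriminator would be shown fundus images paired with ground-truth masks (real) and with generator outputs (fake), pushing the generator toward segmentations that are indistinguishable from expert annotations.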
The proposed network was assessed on the public IDRiD dataset, attaining Dice coefficients of 75.7%, 76.53%, 50.06%, and 45.89% for EX (hard exudates), SE (soft exudates), MA (microaneurysms), and HE (hemorrhages), respectively.
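For reference, the Dice coefficient measures overlap between the predicted and ground-truth lesion masks. A minimal sketch follows; the `dice_coefficient` helper and its 0.5 threshold are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def dice_coefficient(pred, target, threshold=0.5, eps=1e-7):
    """Dice = 2|P ∩ G| / (|P| + |G|) over binarized lesion masks."""
    p = (np.asarray(pred) >= threshold).astype(np.float64)
    g = (np.asarray(target) >= 0.5).astype(np.float64)
    intersection = (p * g).sum()
    return (2.0 * intersection + eps) / (p.sum() + g.sum() + eps)

# Example: perfect overlap on a toy mask yields a Dice score of ~1.0
mask = np.array([[0, 1], [1, 1]])
print(dice_coefficient(mask, mask))
```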
Investigators concluded that the generative adversarial neural network accurately segmented diabetic retinopathy lesions in fundus images.
Source: link.springer.com/article/10.1007/s10792-023-02911-y