The most direct means of glaucoma screening is to measure the cup-to-disc ratio from colour fundus photographs, the first step of which is the precise segmentation of the optic cup (OC) and optic disc (OD). In recent years, convolutional neural networks (CNNs) have shown outstanding performance in medical image segmentation tasks. However, most CNN-based methods ignore the effect of boundary ambiguity on performance, which leads to poor generalization. This paper is dedicated to solving this issue.
In this paper, we propose a novel segmentation architecture, called BGA-Net, which introduces an auxiliary boundary branch and adversarial learning to jointly segment the OD and OC in a multi-label manner. To generate more accurate results, a generative adversarial network is exploited to encourage the boundary and mask predictions to resemble their ground-truth counterparts.
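A minimal PyTorch-style sketch of this idea is given below, assuming a shared encoder with a two-channel mask head (OD, OC) and an auxiliary boundary head, plus a small discriminator that judges (mask, boundary) pairs. The layer sizes, the toy encoder/discriminator, and the loss weighting are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SegmentorWithBoundary(nn.Module):
    """Shared encoder with two heads: multi-label masks (OD, OC) and a boundary map."""
    def __init__(self, in_ch=3, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.mask_head = nn.Conv2d(feat, 2, 1)      # two channels: OD and OC (multi-label)
        self.boundary_head = nn.Conv2d(feat, 1, 1)  # auxiliary boundary prediction

    def forward(self, x):
        f = self.encoder(x)
        return torch.sigmoid(self.mask_head(f)), torch.sigmoid(self.boundary_head(f))

class Discriminator(nn.Module):
    """Judges whether a (mask, boundary) pair looks like a ground-truth pair."""
    def __init__(self, in_ch=3, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat, 1, 4, stride=2, padding=1),
        )

    def forward(self, masks, boundary):
        return self.net(torch.cat([masks, boundary], dim=1))

# One illustrative training step for the segmentor
seg, disc = SegmentorWithBoundary(), Discriminator()
bce_logits = nn.BCEWithLogitsLoss()
x = torch.randn(2, 3, 128, 128)                              # fundus image batch
gt_masks = torch.randint(0, 2, (2, 2, 128, 128)).float()     # OD/OC ground truth
gt_boundary = torch.randint(0, 2, (2, 1, 128, 128)).float()  # boundary ground truth

pred_masks, pred_boundary = seg(x)
seg_loss = nn.functional.binary_cross_entropy(pred_masks, gt_masks) \
         + nn.functional.binary_cross_entropy(pred_boundary, gt_boundary)
d_out = disc(pred_masks, pred_boundary)
adv_loss = bce_logits(d_out, torch.ones_like(d_out))  # push predictions to look "real"
total_loss = seg_loss + 0.1 * adv_loss                # 0.1 weighting is an assumption
```

The discriminator is trained in alternation (real pairs labelled one, predicted pairs labelled zero), which is the standard adversarial schedule and is omitted here for brevity.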
Experimental results show that BGA-Net achieves state-of-the-art OD and OC segmentation performance on three publicly available datasets: the Dice scores for the optic disc/cup are 0.975/0.898 on Drishti-GS, 0.967/0.872 on RIM-ONE-r3, and 0.951/0.866 on REFUGE.
In this work, we not only achieve superior OD and OC segmentation results, but also confirm that the cup-to-disc ratio derived from the geometric relationship between the two segmented regions is highly correlated with glaucoma.
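For concreteness, the sketch below shows one common way to derive the vertical cup-to-disc ratio (vCDR) from predicted OD and OC masks; the thresholding and NumPy-based measurement are illustrative assumptions rather than the paper's exact post-processing.

```python
import numpy as np

def vertical_diameter(mask: np.ndarray) -> int:
    """Vertical extent (number of rows spanned) of a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    return int(rows.max() - rows.min() + 1) if rows.size else 0

def vertical_cdr(od_prob: np.ndarray, oc_prob: np.ndarray, thr: float = 0.5) -> float:
    """Ratio of cup to disc vertical diameters, computed from probability maps."""
    od, oc = od_prob > thr, oc_prob > thr
    d = vertical_diameter(od)
    return vertical_diameter(oc) / d if d else 0.0

# Example: a synthetic disc (radius 30 px) and cup (radius 15 px) give vCDR ~= 0.51
yy, xx = np.mgrid[:128, :128]
od = ((yy - 64) ** 2 + (xx - 64) ** 2) <= 30 ** 2
oc = ((yy - 64) ** 2 + (xx - 64) ** 2) <= 15 ** 2
print(round(vertical_cdr(od.astype(float), oc.astype(float)), 2))  # ~= 0.51
```

A larger vCDR indicates more cupping of the optic nerve head, which is why this ratio is widely used as a glaucoma risk indicator.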
