Melanoma is a highly malignant skin tumor. Accurate segmentation of skin lesions from dermoscopy images is pivotal for computer-aided diagnosis of melanoma. However, blurred lesion boundaries, variable lesion shapes, and other interfering factors make this task challenging.
This work proposes a novel framework called CFF-Net (Cross Feature Fusion Network) for supervised skin lesion segmentation. The encoder of the network comprises dual branches: the CNN branch extracts rich local features, while the MLP branch establishes both global-spatial and global-channel dependencies for precise delineation of skin lesions. In addition, a feature-interaction module between the two branches is designed to strengthen feature representation by allowing dynamic exchange of spatial and channel information, thereby retaining more spatial details and suppressing irrelevant noise. Moreover, an auxiliary prediction task is introduced to learn global geometric information, highlighting the boundary of the skin lesion.
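The abstract names the dual-branch encoder and the feature-interaction module but does not specify their internals. The following is a minimal NumPy sketch of the general idea only, under our own assumptions: a local (CNN-like) branch, a global (MLP-like) branch, and a cross-fusion step in which each branch is gated by attention derived from the other. All function names, the mean-filter stand-in for convolution, and the sigmoid gating are hypothetical, not the paper's actual design.

```python
import numpy as np


def local_branch(x, k=3):
    """CNN-like branch (hypothetical): a k x k mean filter stands in
    for convolutional local feature extraction."""
    C, H, W = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[:, i, j] = xp[:, i:i + k, j:j + k].mean(axis=(1, 2))
    return out


def global_branch(x, w_spatial, w_channel):
    """MLP-like branch (hypothetical): one weight matrix mixes all
    spatial positions (global-spatial dependencies), another mixes
    channels (global-channel dependencies)."""
    C, H, W = x.shape
    flat = x.reshape(C, H * W)
    spatial = np.tanh(flat @ w_spatial)   # (C, H*W): spatial mixing
    mixed = np.tanh(w_channel @ spatial)  # (C, H*W): channel mixing
    return mixed.reshape(C, H, W)


def cross_fusion(local_feat, global_feat):
    """Illustrative feature-interaction step: each branch is re-weighted
    by an attention map computed from the other branch, exchanging
    spatial and channel information between branches."""
    # spatial attention from the global branch gates the local branch
    spatial_att = 1.0 / (1.0 + np.exp(-global_feat.mean(axis=0, keepdims=True)))
    # channel attention from the local branch gates the global branch
    channel_att = 1.0 / (1.0 + np.exp(-local_feat.mean(axis=(1, 2), keepdims=True)))
    return local_feat * spatial_att + global_feat * channel_att


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 8, 8))        # (channels, height, width)
    w_s = rng.standard_normal((64, 64)) / 8.0
    w_c = rng.standard_normal((4, 4)) / 2.0
    fused = cross_fusion(local_branch(x), global_branch(x, w_s, w_c))
    print(fused.shape)  # fused map has the same shape as the input
```

The fused output keeps the input's shape, so it can feed a standard decoder; the actual CFF-Net modules may differ substantially from this gating scheme.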
Comprehensive experiments on four publicly available skin lesion datasets (i.e., ISIC 2018, ISIC 2017, ISIC 2016, and PH2) indicated that CFF-Net outperformed state-of-the-art models. In particular, compared with U-Net, CFF-Net greatly increased the average Jaccard Index score from 79.71% to 81.86% on ISIC 2018, from 78.03% to 80.21% on ISIC 2017, from 82.58% to 85.38% on ISIC 2016, and from 84.18% to 89.71% on PH2. Ablation studies demonstrated the effectiveness of each proposed component. Cross-validation experiments on the ISIC 2018 and PH2 datasets verified the generalizability of CFF-Net under different skin lesion data distributions. Finally, comparison experiments on three public datasets demonstrated the superior performance of our model.
The proposed CFF-Net performed well on four public skin lesion datasets, especially for challenging cases with blurred lesion edges and low contrast between lesions and background. CFF-Net can also be employed for other segmentation tasks, yielding better predictions and more accurate boundary delineation.

Copyright © 2023. Published by Elsevier B.V.