Bone identification and segmentation in X-ray images are crucial in orthopedics for automating clinical procedures, yet these tasks often still involve manual operations. In this work, we use a modified SegNet neural network to automatically identify and segment lower limb bone structures on radiographs with varying fields of view and patient orientations.
We propose a wide contextual neural network architecture that performs high-quality pixel-wise semantic segmentation on X-ray images containing structures with similar appearance and strong superposition. The architecture is designed so that every output pixel on the label map has a wide receptive field, allowing the network to capture both global and local contextual information. Overlap between structures is handled with additional labels.
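The additional-label strategy can be illustrated with a minimal sketch: rather than forcing a superposed pixel into one bone class, a dedicated overlap class is assigned wherever two bone masks intersect. The class names and the two-bone example below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical class indices for a two-bone example (femur, tibia).
BACKGROUND, FEMUR, TIBIA, FEMUR_TIBIA_OVERLAP = 0, 1, 2, 3

def build_label_map(femur_mask: np.ndarray, tibia_mask: np.ndarray) -> np.ndarray:
    """Merge two binary bone masks into one label map; superposed
    pixels receive a dedicated overlap label instead of either bone."""
    label_map = np.full(femur_mask.shape, BACKGROUND, dtype=np.uint8)
    label_map[femur_mask & ~tibia_mask] = FEMUR
    label_map[tibia_mask & ~femur_mask] = TIBIA
    label_map[femur_mask & tibia_mask] = FEMUR_TIBIA_OVERLAP
    return label_map

# Toy 2x3 masks: the middle column is superposed.
femur = np.array([[1, 1, 0], [1, 1, 0]], dtype=bool)
tibia = np.array([[0, 1, 1], [0, 1, 1]], dtype=bool)
print(build_label_map(femur, tibia))  # middle column gets label 3
```

With overlap modeled as its own class, the network can predict both underlying bones wherever they superpose, and the individual bone masks are recoverable afterwards by merging each bone class with its overlap classes.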
The proposed approach was evaluated on a test dataset of 70 radiographs containing both entire and partial bones. We obtained an average detection rate of 98.00% and an average Dice coefficient of 95.25 ± 9.02% across all classes. On the challenging subset of images with high superposition, the average detection rate was 96.36% and the average Dice coefficient 93.81 ± 10.03% across all classes.
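For reference, the Dice coefficient reported above measures the overlap between a predicted and a ground-truth mask as 2|A ∩ B| / (|A| + |B|); the function and toy masks below are an illustrative sketch, not the paper's evaluation code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks;
    returns 1.0 when both masks are empty (perfect agreement)."""
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy masks: 2 of 3 predicted pixels match the 3 target pixels.
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
target = np.array([[1, 1, 0], [0, 0, 1]], dtype=bool)
print(f"{dice_coefficient(pred, target):.4f}")  # 2*2/(3+3) = 0.6667
```

A per-class average, as reported in the results, applies this computation to each class's binary mask and averages across classes and images.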
These results demonstrate the effectiveness of the proposed approach in segmenting and identifying lower limb bone structures, including overlapping structures, in radiographs with strong bone superposition and highly variable configurations, as well as in radiographs containing only small portions of bone structures.

© 2022. CARS.