Intraoperative navigation during liver resection remains difficult and requires considerable radiological skill because liver anatomy is complex and patient-specific. Augmented reality (AR) during open liver surgery could help guide hepatectomies and optimize resection margins, but it faces many challenges when large parenchymal deformations take place. We aimed to test a new vision-based AR approach to assess its clinical feasibility and anatomical accuracy.

Based on 3-D segmentations of the preoperative CT scan, we applied a non-rigid registration method that integrates a physics-based elastic model of the liver, computed in real time with an efficient finite element method. To fit the actual deformations, the model was driven by data provided by a single RGB-D camera. Five livers were considered in this experiment. In vivo AR was performed during hepatectomy (n = 4), with manual handling of the livers producing large, realistic deformations. The ex vivo experiment (n = 1) consisted of repeated CT scans of an explanted whole organ carrying internal metallic landmarks, held in fixed deformations, which allowed us to analyze the estimated deformations and quantify spatial errors.

In vivo AR tests were successfully achieved in all patients, with a fast and simple setup installation (< 10 min) and a real-time overlay of the virtual anatomy onto the surgical field displayed on an external screen. In addition, the ex vivo quantification demonstrated a 7.9 mm root mean square error for the registration of internal landmarks.
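As a minimal sketch of how the reported spatial error could be quantified, the snippet below computes a root mean square error between model-predicted and CT-measured positions of internal landmarks. It assumes the landmark coordinates are available as matched 3-D point sets in millimetres; the function name and the example values are illustrative only and are not taken from the paper.

```python
import numpy as np

def landmark_rmse(predicted: np.ndarray, ground_truth: np.ndarray) -> float:
    """RMS error between predicted and reference landmark positions.

    Both inputs are (N, 3) arrays of 3-D coordinates in millimetres,
    with row i of each array referring to the same physical landmark.
    """
    assert predicted.shape == ground_truth.shape
    # Euclidean distance for each landmark pair, then RMS over all landmarks.
    per_landmark_error = np.linalg.norm(predicted - ground_truth, axis=1)
    return float(np.sqrt(np.mean(per_landmark_error ** 2)))

# Hypothetical example with two landmarks (values are made up).
predicted = np.array([[12.0, 40.5, 88.2],
                      [55.1, 10.3, 60.0]])
ground_truth = np.array([[14.0, 42.0, 90.0],
                         [52.0, 12.0, 58.5]])
print(f"RMSE: {landmark_rmse(predicted, ground_truth):.1f} mm")
```

In the ex vivo setting described above, the reference positions would come from the repeated CT scans of the explanted organ in each fixed deformation, while the predicted positions would be read from the deformed elastic model after registration.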

Reference link: https://link.springer.com/article/10.1007/s11605-020-04519-4
