
Proceedings Paper

Towards augmented reality-based suturing in monocular laparoscopic training

Paper Abstract

Minimally Invasive Surgery (MIS) techniques have gained rapid popularity among surgeons since they offer significant clinical benefits, including reduced recovery time and diminished post-operative adverse effects. However, conventional endoscopic systems output monocular video, which compromises depth perception, spatial orientation, and field of view. Suturing is one of the most complex tasks performed under these circumstances; key components of this task are the interplay between the needle holder and the surgical needle. Reliable real-time 3D localization of the needle and instruments could be used to augment the scene with additional parameters that describe their quantitative geometric relation, e.g., the relation between the estimated needle plane, its rotation center, and the instrument. This could contribute towards the standardization and training of basic skills and operative techniques, enhance overall surgical performance, and reduce the risk of complications. This paper proposes an Augmented Reality environment with quantitative and qualitative visual representations to enhance the outcomes of laparoscopic training performed on a silicone pad. This is enabled by a multi-task supervised deep neural network that performs multi-class segmentation and depth map prediction. The scarcity of labels was overcome by creating a virtual environment that resembles the surgical training scenario and generates dense depth maps and segmentation maps. The proposed convolutional neural network was tested on real surgical training scenarios and shown to be robust to occlusion of the needle. The network achieves a Dice score of 0.67 for surgical needle segmentation, 0.81 for needle holder instrument segmentation, and a mean absolute error of 6.5 mm for depth estimation.
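
The paper itself includes no implementation details beyond the abstract; as a rough illustration of the multi-task setup described above, the following PyTorch sketch pairs a shared encoder with a multi-class segmentation head and a dense depth-regression head, together with the Dice and mean-absolute-error metrics reported in the abstract. All layer sizes, names, and the loss weighting are illustrative assumptions, not the authors' architecture.

# Minimal sketch (not the authors' implementation): a shared encoder with
# two heads, one for multi-class segmentation (e.g., background, needle,
# needle holder) and one for dense depth regression. Layer sizes, class
# count, and loss weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        # Shared convolutional encoder.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Task-specific heads operating on the shared features.
        self.seg_head = nn.Conv2d(64, num_classes, 1)  # per-pixel class logits
        self.depth_head = nn.Conv2d(64, 1, 1)          # per-pixel depth (mm)

    def forward(self, x):
        feats = self.encoder(x)
        # Upsample both predictions back to the input resolution.
        seg = F.interpolate(self.seg_head(feats), size=x.shape[2:],
                            mode="bilinear", align_corners=False)
        depth = F.interpolate(self.depth_head(feats), size=x.shape[2:],
                              mode="bilinear", align_corners=False)
        return seg, depth

def dice_score(pred_mask, gt_mask, eps=1e-6):
    """Dice coefficient for one binary class mask (bool tensors)."""
    inter = (pred_mask & gt_mask).sum().item()
    return (2 * inter + eps) / (pred_mask.sum().item() + gt_mask.sum().item() + eps)

def multitask_loss(seg_logits, depth_pred, seg_gt, depth_gt, w_depth=1.0):
    """Joint objective: cross-entropy for segmentation plus L1 (mean
    absolute error) for depth, with an assumed weighting factor."""
    return (F.cross_entropy(seg_logits, seg_gt)
            + w_depth * F.l1_loss(depth_pred.squeeze(1), depth_gt))

Under a setup of this kind, the reported per-class Dice scores would come from the segmentation head and the 6.5 mm mean absolute error from the depth head's L1 metric evaluated on real training-scenario frames.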

Paper Details

Date Published: 16 March 2020
PDF: 7 pages
Proc. SPIE 11315, Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling, 113150X (16 March 2020); doi: 10.1117/12.2550830
Author Affiliations:
Chandrakanth Jayachandran Preetha, Univ. Hospital Heidelberg (Germany); Otto-von-Guericke Univ. Magdeburg (Germany)
Jonathan Kloss, Univ. Hospital Heidelberg (Germany); Otto-von-Guericke Univ. Magdeburg (Germany)
Fabian Siegfried Wehrtmann, Univ. Hospital Heidelberg (Germany)
Lalith Sharan, Univ. Hospital Heidelberg (Germany)
Carolyn Fan, Univ. Hospital Heidelberg (Germany)
Beat Peter Müller-Stich, Univ. Hospital Heidelberg (Germany)
Felix Nickel, Univ. Hospital Heidelberg (Germany)
Sandy Engelhardt, Univ. Hospital Heidelberg (Germany)


Published in SPIE Proceedings Vol. 11315:
Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling
Baowei Fei; Cristian A. Linte, Editors

© SPIE.