
Proceedings Paper

StreoScenNet: surgical stereo robotic scene segmentation

Paper Abstract

Surgical robot technology has made laparoscopic surgery safer and is ideally suited to procedures requiring minimal invasiveness. Semantic segmentation of robot-assisted surgery videos is an essential task in many computer-assisted robotic surgical systems, with applications including instrument detection, tracking, and pose estimation. Usually, the left and right frames from the stereoscopic surgical camera are segmented independently of each other. However, this approach is prone to poor segmentation, since the stereo frames are not integrated for accurate estimation of the surgical scene. To address this problem, we propose a multi-encoder, single-decoder convolutional neural network named StreoScenNet, which exploits both the left and right frames of the stereoscopic surgical system. The proposed architecture consists of multiple ResNet encoder blocks and a stacked convolutional decoder network connected by a novel sum-skip connection. The input to the network is a pair of left and right frames, and the output is a mask of the segmented regions for the left frame. The network is trained end-to-end, and segmentation is achieved without any pre- or post-processing. We compare the proposed architecture against state-of-the-art fully convolutional networks and validate our method on existing benchmark datasets that include robotic instruments as well as anatomical objects and non-robotic surgical instruments. Compared with previous instrument segmentation methods, our approach achieves a significantly improved Dice similarity coefficient.
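To make the described architecture concrete, the following is a minimal sketch, not the authors' released code, of a dual-encoder, single-decoder segmentation network in the spirit of the abstract: two ResNet encoders (one per stereo view), stage-wise fusion by element-wise summation as a stand-in for the paper's sum-skip connection, and a stacked convolutional decoder that outputs a mask for the left frame. The class names (StereoSegNet, DecoderBlock), the choice of ResNet-34, the unshared encoder weights, the channel widths, and the exact placement of the fused skips are assumptions for illustration; the paper's precise design is in the full PDF.

```python
# Minimal sketch (assumptions noted above), using PyTorch and torchvision.
import torch
import torch.nn as nn
import torchvision.models as models


class DecoderBlock(nn.Module):
    """Upsample by 2, then refine with a small stack of convolutions."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class StereoSegNet(nn.Module):
    """Illustrative dual-encoder / single-decoder network (not StreoScenNet itself)."""
    def __init__(self, num_classes=2):
        super().__init__()
        # Two ResNet-34 encoders, one per stereo view (weights not shared here).
        self.enc_left = self._make_encoder()
        self.enc_right = self._make_encoder()
        # Stacked convolutional decoder; channel counts follow ResNet-34 stages.
        chs = [512, 256, 128, 64]
        self.decoders = nn.ModuleList(
            [DecoderBlock(c, c // 2 if c > 64 else 64) for c in chs]
        )
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    @staticmethod
    def _make_encoder():
        r = models.resnet34(weights=None)
        stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
        return nn.ModuleList([stem, r.layer1, r.layer2, r.layer3, r.layer4])

    @staticmethod
    def _features(encoder, x):
        feats = []
        for stage in encoder:
            x = stage(x)
            feats.append(x)
        return feats  # outputs of stem, layer1 .. layer4

    def forward(self, left, right):
        fl = self._features(self.enc_left, left)
        fr = self._features(self.enc_right, right)
        # Fuse the two views per encoder stage by element-wise addition
        # (an assumed reading of the "sum-skip" idea).
        fused = [a + b for a, b in zip(fl, fr)]
        x = fused[-1]
        skips = [fused[3], fused[2], fused[1]]  # layer3, layer2, layer1
        for dec, skip in zip(self.decoders, skips + [None]):
            x = dec(x)
            if skip is not None:
                x = x + skip  # add the fused skip at matching resolution
        # Upsample back to the input resolution and predict per-pixel logits
        # for the left frame.
        x = nn.functional.interpolate(
            x, scale_factor=2, mode="bilinear", align_corners=False
        )
        return self.head(x)


if __name__ == "__main__":
    net = StereoSegNet(num_classes=2)
    l = torch.randn(1, 3, 256, 320)
    r = torch.randn(1, 3, 256, 320)
    print(net(l, r).shape)  # torch.Size([1, 2, 256, 320])
```

Such a network can be trained end-to-end with an ordinary per-pixel loss (e.g., cross-entropy or a soft Dice loss), which matches the abstract's claim that no pre- or post-processing is required.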

Paper Details

Date Published: 8 March 2019
PDF: 9 pages
Proc. SPIE 10951, Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling, 109510P (8 March 2019); doi: 10.1117/12.2512518
Author Affiliations:
Ahmed Mohammed, Norwegian Univ. of Science and Technology (Norway)
Sule Yildirim, Norwegian Univ. of Science and Technology (Norway)
Ivar Farup, Norwegian Univ. of Science and Technology (Norway)
Marius Pedersen, Norwegian Univ. of Science and Technology (Norway)
Øistein Hovde, Gjøvik and Institute of Clinical Medicine, Univ. of Oslo (Norway)


Published in SPIE Proceedings Vol. 10951:
Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling
Baowei Fei; Cristian A. Linte, Editor(s)

© SPIE.