Paper 12928-19

WS-SfMLearner: self-supervised monocular depth and ego-motion estimation on surgical videos with unknown camera parameters

20 February 2024 • 12:10 PM - 12:30 PM PST

Abstract

Depth estimation in surgical video plays a crucial role in many image-guided surgery procedures. However, creating ground-truth depth datasets for surgical videos is difficult and time-consuming, due in part to inconsistent brightness and noise in the surgical scene. Accurate and robust self-supervised depth and camera ego-motion estimation is therefore gaining attention in the computer vision community. Although several self-supervised methods remove the need for ground-truth depth maps and poses, they still require known camera intrinsic parameters, which are often missing or unrecorded. Moreover, the camera intrinsic prediction methods in existing work depend heavily on dataset quality. In this work, we aim to build a self-supervised depth and ego-motion estimation system that predicts not only accurate depth maps and camera poses, but also the camera intrinsic parameters. We propose a cost-volume-based supervision approach that gives the system auxiliary supervision for camera parameter prediction.
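The self-supervision signal in SfMLearner-style systems comes from view synthesis: pixels in one frame are warped into a neighboring frame using the predicted depth, relative pose, and, in this setting, the predicted intrinsics, and a photometric loss penalizes the mismatch. Below is a minimal NumPy sketch of that reprojection step; the function names and the shared-intrinsics assumption are illustrative, not the paper's implementation.

```python
import numpy as np

def make_intrinsics(fx, fy, cx, cy):
    """Build a 3x3 pinhole intrinsic matrix from (predicted) parameters."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def reproject(pix, depth, K, R, t):
    """Warp a pixel from the target view into the source view.

    pix   : (u, v) pixel coordinates in the target image
    depth : predicted depth at that pixel
    K     : predicted camera intrinsics (assumed shared by both views)
    R, t  : predicted relative camera rotation and translation
    Returns the corresponding (u, v) location in the source image.
    """
    u, v = pix
    # Back-project the pixel to a 3-D point in the target camera frame.
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # Transform into the source camera frame and project back to pixels.
    p_src = K @ (R @ p_cam + t)
    return p_src[:2] / p_src[2]
```

With an identity pose (no camera motion), a pixel reprojects onto itself, which is a useful sanity check when the intrinsics themselves are being learned rather than calibrated.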

Presenter

Vanderbilt Univ. (United States)
He is a PhD student at Vanderbilt University. His research interests mainly focus on using deep learning and computer vision techniques to solve real-world medical and surgical problems.
Presenter/Author
Vanderbilt Univ. (United States)
Author
Vanderbilt Univ. (United States)