
Proceedings Paper

RGB-D dense mapping with feature-based method
Author(s): Xingyin Fu; Feng Zhu; Qingxiao Wu; Rongrong Lu

Paper Abstract

Simultaneous Localization and Mapping (SLAM) plays an important role in navigation and augmented reality (AR) systems. While feature-based visual SLAM has reached a mature stage, RGB-D-based dense SLAM has become popular since the advent of consumer RGB-D cameras. Unlike feature-based visual SLAM systems, RGB-D-based dense SLAM systems such as KinectFusion calculate camera poses by registering the current frame against images raycast from the global model, and produce a dense surface by fusing the RGB-D stream. In this paper, we propose a novel reconstruction system built on ORB-SLAM2. To generate the dense surface in real time, we first propose to fuse the RGB-D frames using a truncated signed distance function (TSDF). Because camera tracking drift is inevitable, it is unwise to represent the entire reconstruction space with a single TSDF model or to represent the entire measured surface with the voxel hashing approach. Instead, we use the moving volume proposed in Kintinuous to represent the reconstruction region around the current frame's frustum. Unlike Kintinuous, which corrects points with an embedded deformation graph after pose graph optimization, we re-fuse the images with the optimized camera poses and regenerate the dense surface after the user ends the scanning. Second, we use the reconstructed dense map to filter outliers from the sparse feature map. The depth maps of the keyframes are raycast from the TSDF volume according to the camera poses, and the feature points in the local map are projected into the nearest keyframe. If the discrepancy between the depth of a feature and that of the corresponding point in the depth map exceeds a threshold, the feature is considered an outlier and removed from the feature map. The discrepancy value is also combined with the feature's pyramid level to compute the information matrix when minimizing the reprojection error. Features in the sparse map that lie near the reconstructed dense surface thus have a large influence on camera tracking. We compare the accuracy of the estimated camera trajectories, as well as the 3D models, against state-of-the-art systems on the TUM and ICL-NUIM RGB-D benchmark datasets. Experimental results show that our system achieves state-of-the-art accuracy.
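The outlier test described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: it assumes a simple pinhole camera model, feature positions already transformed into the keyframe's camera frame, and a raycast depth map with zeros where no surface was hit. All names (`project`, `filter_outliers`, the intrinsics `fx`, `fy`, `cx`, `cy`) are hypothetical.

```python
# Hypothetical sketch of the depth-discrepancy outlier filter: each sparse
# map point is projected into its nearest keyframe and its depth is compared
# against the depth map raycast from the TSDF volume.

def project(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3-D point (camera frame) to pixel coords + depth."""
    x, y, z = point_cam
    return fx * x / z + cx, fy * y / z + cy, z

def filter_outliers(points_cam, raycast_depth, fx, fy, cx, cy, threshold=0.05):
    """Keep only features whose depth agrees with the raycast depth map.

    points_cam    : list of (x, y, z) feature positions in the keyframe's
                    camera frame (z > 0, metres)
    raycast_depth : 2-D list of depths raycast from the TSDF volume
                    (0.0 where the ray hit no surface)
    threshold     : max allowed |feature depth - raycast depth| in metres
    """
    h, w = len(raycast_depth), len(raycast_depth[0])
    inliers = []
    for p in points_cam:
        u, v, z = project(p, fx, fy, cx, cy)
        ui, vi = int(round(u)), int(round(v))
        if not (0 <= ui < w and 0 <= vi < h):
            continue                      # projects outside the keyframe
        d = raycast_depth[vi][ui]
        if d > 0.0 and abs(z - d) <= threshold:
            inliers.append(p)             # consistent with the dense surface
    return inliers
```

A feature whose depth deviates from the raycast surface by more than the threshold (or that projects outside the keyframe) is dropped; in the paper the same discrepancy also feeds into the information matrix used when minimizing the reprojection error.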

Paper Details

Date Published: 12 December 2018
PDF: 10 pages
Proc. SPIE 10845, Three-Dimensional Image Acquisition and Display Technology and Applications, 108450K (12 December 2018); doi: 10.1117/12.2505305
Author Affiliations:
Xingyin Fu, Shenyang Institute of Automation (China)
Univ. of Chinese Academy of Sciences (China)
Feng Zhu, Shenyang Institute of Automation (China)
Qingxiao Wu, Shenyang Institute of Automation (China)
Rongrong Lu, Shenyang Institute of Automation (China)
Univ. of Chinese Academy of Sciences (China)

Published in SPIE Proceedings Vol. 10845:
Three-Dimensional Image Acquisition and Display Technology and Applications
Byoungho Lee; Yongtian Wang; Liangcai Cao; Guohai Situ, Editor(s)

© SPIE.