
Proceedings Paper

Sparse 3D point clouds segmentation considering 2D image feature extraction with deep learning
Author(s): Yusheng Li; Yong Tian; Jiandong Tian

Paper Abstract

Three-dimensional (3D) point cloud segmentation plays an important role in autonomous navigation systems, such as mobile robots and autonomous cars. However, segmentation is challenging because of data sparsity, uneven sampling density, irregular format, and the lack of color and texture. In this paper, we propose a sparse 3D point cloud segmentation method based on 2D image feature extraction with deep learning. First, we jointly calibrate the camera and LiDAR to obtain the extrinsic parameters (rotation matrix and translation vector). Then, we introduce Convolutional Neural Network (CNN)-based object detectors to generate 2D object region proposals in the RGB image and classify the objects. Finally, using the extrinsic parameters from the joint calibration, we extract the points from a 16-line RS-LiDAR-16 scanner that project into the 2D object regions, and further refine the segmentation of the extracted point cloud according to prior knowledge of the classification features. Experiments demonstrate the effectiveness of the proposed sparse point cloud segmentation method.
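The core operation described in the abstract, mapping LiDAR points into detected 2D regions, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a pinhole camera model with an intrinsic matrix K, the extrinsics (R, t) from joint calibration, and hypothetical helper names such as project_points_to_image and points_in_box.

```python
import numpy as np

def project_points_to_image(points_xyz, R, t, K):
    """Project LiDAR points (N, 3) into the image plane using the
    extrinsic rotation R (3x3), translation t (3,), and camera
    intrinsics K (3x3). Returns pixel coordinates (N, 2) and a mask
    of points lying in front of the camera."""
    cam = points_xyz @ R.T + t          # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 0            # keep points with positive depth
    uv = cam @ K.T                      # apply intrinsics
    uv = uv[:, :2] / uv[:, 2:3]         # perspective division
    return uv, in_front

def points_in_box(uv, in_front, box):
    """Select points whose projections fall inside a 2D detection box
    given as (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    inside = (uv[:, 0] >= x_min) & (uv[:, 0] <= x_max) & \
             (uv[:, 1] >= y_min) & (uv[:, 1] <= y_max)
    return inside & in_front
```

The mask returned by points_in_box gives a candidate point subset for one detected object, to which a class-specific refinement step would then be applied.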

Paper Details

Date Published: 14 August 2019
PDF: 9 pages
Proc. SPIE 11179, Eleventh International Conference on Digital Image Processing (ICDIP 2019), 111790B (14 August 2019); doi: 10.1117/12.2539780
Author Affiliations:
Yusheng Li, Shenzhen Univ. (China)
Yong Tian, Shenzhen Univ. (China)
Jiandong Tian, Shenzhen Univ. (China)


Published in SPIE Proceedings Vol. 11179:
Eleventh International Conference on Digital Image Processing (ICDIP 2019)
Jenq-Neng Hwang; Xudong Jiang, Editor(s)
