
Proceedings Paper

GPGPU-based parallel processing of massive LiDAR point cloud
Author(s): Xun Zeng; Wei He

Paper Abstract

Processing massive LiDAR point clouds is time-consuming due to the sheer volume of data involved and the computationally intensive, iterative nature of the algorithms. In particular, many current and emerging applications of LiDAR require real- or near-real-time processing capabilities; relevant examples include environmental studies, military applications, and the tracking and monitoring of hazards. Recent advances in Graphics Processing Units (GPUs) have opened a new era of General-Purpose Processing on Graphics Processing Units (GPGPU). In this paper, we seek to harness the computing power available on contemporary GPUs to accelerate the processing of massive LiDAR point clouds. We propose a CUDA-based method that accelerates this processing on CUDA-enabled GPUs. Our experimental results show that our GPGPU-based parallel implementation significantly reduces the time required to construct a triangulated irregular network (TIN) from a LiDAR point cloud, compared with current state-of-the-art CPU-based algorithms.
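
The abstract does not give implementation details, so the following is only a minimal sketch of the data-parallel pattern the paper builds on: one GPU thread per LiDAR point. The Point layout, the kernel name shiftToLocalDatum, and the datum offset are illustrative assumptions, not the authors' code.

```cuda
// Hypothetical sketch: the paper's actual kernels are not published in the
// abstract. This only illustrates the one-thread-per-point GPGPU pattern.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

struct Point { float x, y, z; };   // assumed LiDAR point layout

// Each thread shifts one point into a local coordinate datum.
__global__ void shiftToLocalDatum(Point* pts, int n, Point offset) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        pts[i].x -= offset.x;
        pts[i].y -= offset.y;
        pts[i].z -= offset.z;
    }
}

int main() {
    const int n = 1 << 20;  // one million points as a stand-in for "massive"
    Point* h = (Point*)malloc(n * sizeof(Point));
    for (int i = 0; i < n; ++i)
        h[i] = Point{500000.0f + 0.01f * i, 4000000.0f, 100.0f};

    Point* d = nullptr;
    cudaMalloc(&d, n * sizeof(Point));
    cudaMemcpy(d, h, n * sizeof(Point), cudaMemcpyHostToDevice);

    Point offset = {500000.0f, 4000000.0f, 0.0f};  // illustrative datum shift
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    shiftToLocalDatum<<<blocks, threads>>>(d, n, offset);
    cudaDeviceSynchronize();

    cudaMemcpy(h, d, n * sizeof(Point), cudaMemcpyDeviceToHost);
    printf("first point after shift: %.2f %.2f %.2f\n", h[0].x, h[0].y, h[0].z);

    cudaFree(d);
    free(h);
    return 0;
}
```

Because each point is processed independently, this per-point pattern keeps GPU cores saturated on large clouds; the TIN construction the paper reports would require additional, less trivially parallel steps such as triangulation.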

Paper Details

Date Published: 30 October 2009
PDF: 6 pages
Proc. SPIE 7497, MIPPR 2009: Medical Imaging, Parallel Processing of Images, and Optimization Techniques, 749716 (30 October 2009); doi: 10.1117/12.833740
Author Affiliations:
Xun Zeng, Wuhan Univ. (China)
Wei He, Wuhan Geotechnical Engineering and Surveying Institute (China)


Published in SPIE Proceedings Vol. 7497:
MIPPR 2009: Medical Imaging, Parallel Processing of Images, and Optimization Techniques
Faxiong Zhang, Editor
