
Proceedings Paper
Road traffic sign detection and classification from mobile LiDAR point clouds
Paper Abstract
Traffic signs are important roadway assets that provide drivers with valuable information about the road, helping them drive more safely and easily. With the development of mobile mapping systems that can efficiently acquire dense point clouds along the road, automated detection and recognition of road assets has become an important research topic. This paper deals with the detection and classification of traffic signs in outdoor environments using mobile light detection and ranging (LiDAR) and inertial navigation technologies. The proposed method contains two main steps. It starts with an initial detection of traffic signs based on the intensity attributes of the point clouds, since traffic signs are painted with highly reflective materials. The classification of traffic signs is then performed based on their geometric shape and the pairwise 3D shape context. Results and performance analyses are provided to show the effectiveness and limits of the proposed method. The experimental results demonstrate the feasibility and effectiveness of the proposed method in detecting and classifying traffic signs from mobile LiDAR point clouds.
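The abstract describes the intensity-based detection step only at a high level. The sketch below illustrates one plausible reading of that stage, assuming the point cloud is an (N, 4) NumPy array of x, y, z and normalized intensity values; the function name `detect_sign_candidates`, the intensity threshold, and the clustering parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def detect_sign_candidates(points, intensity_threshold=0.8,
                           cluster_radius=0.5, min_points=30):
    """Return clusters of high-intensity points as candidate traffic signs.

    points: (N, 4) array of x, y, z, intensity (intensity normalized to [0, 1]).
    All parameter values are illustrative, not taken from the paper.
    """
    # Step 1: keep only highly reflective returns, since sign faces are
    # painted with retroreflective materials and produce strong intensity.
    bright = points[points[:, 3] >= intensity_threshold]

    # Step 2: group the bright points into spatial clusters with a simple
    # region-growing pass (O(N^2); adequate for a small candidate set).
    coords = bright[:, :3]
    remaining = set(range(len(bright)))
    clusters = []
    while remaining:
        seed = remaining.pop()
        cluster = [seed]
        frontier = [seed]
        while frontier:
            current = frontier.pop()
            near = [i for i in remaining
                    if np.linalg.norm(coords[i] - coords[current]) <= cluster_radius]
            for i in near:
                remaining.remove(i)
                cluster.append(i)
                frontier.append(i)
        # Discard tiny clusters that are unlikely to be sign faces.
        if len(cluster) >= min_points:
            clusters.append(bright[cluster])
    return clusters
```

In the method outlined by the abstract, clusters produced by a step like this would then be passed to the second stage, where geometric shape and the pairwise 3D shape context are used to classify each candidate sign.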
Paper Details
Date Published: 2 March 2016
PDF: 7 pages
Proc. SPIE 9901, 2nd ISPRS International Conference on Computer Vision in Remote Sensing (CVRS 2015), 99010A (2 March 2016); doi: 10.1117/12.2234911
Published in SPIE Proceedings Vol. 9901:
2nd ISPRS International Conference on Computer Vision in Remote Sensing (CVRS 2015)
Cheng Wang; Rongrong Ji; Chenglu Wen, Editor(s)
