
Proceedings Paper

PanDAR: a wide-area, frame-rate, and full color lidar with foveated region using backfilling interpolation upsampling

Paper Abstract

LIDAR devices for on-vehicle use need a wide field of view and good fidelity. For instance, a LIDAR for avoiding landing collisions by a helicopter must see a wide field of view and show reasonable detail of the area; the same is true for an online LIDAR scanning device mounted on an automobile. In this paper, we describe a LIDAR system with full color and enhanced resolution that has an effective vertical scanning range of 60 degrees with a central 20-degree fovea. The extended range with fovea is achieved by placing two standard Velodyne 32-HDL LIDARs head to head and counter-rotating them. Each HDL LIDAR scans 40 degrees vertically and a full 360 degrees horizontally with an effective outdoor range of 100 meters. Positioned head to head, they overlap by 20 degrees, creating a double-density fovea. The returns from the two Velodyne sensors do not natively contain color, so a Point Grey LadyBug panoramic camera is used to gather color data of the scene. In the first stage of our system, the two LIDAR point clouds and the LadyBug video are fused in real time at a frame rate of 10 Hz. A second stage intelligently interpolates the point cloud, increasing its resolution by approximately four times while maintaining accuracy with respect to the 3D scene. By using GPGPU programming, we can compute this at 10 Hz. Our backfilling interpolation method works by first computing local linear approximations from the perspective of the LIDAR depth map. Color features from the image are used to select the point-cloud support points that are best, within a local group, for building the local linear approximations. This makes the colored point cloud more detailed while maintaining fidelity to the 3D scene. Our system also makes objects appearing in the PanDAR display easier for a human operator to recognize.
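The backfilling step described above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the authors' implementation: for each high-resolution pixel, nearby low-resolution depth samples are taken as candidate supports, a color-similarity test against the registered camera image selects the best supports in the local group, and a local linear (planar) approximation fitted to those supports is evaluated at the target position. The function name, neighborhood size, and color threshold are all assumptions for illustration.

```python
import numpy as np

def backfill_upsample(depth, color_hi, scale=2, color_thresh=30.0):
    """Hypothetical sketch of color-guided backfilling interpolation.

    depth    -- (H, W) low-resolution LIDAR depth map
    color_hi -- (H*scale, W*scale, 3) registered camera image
    Returns an (H*scale, W*scale) upsampled depth map.
    """
    H, W = depth.shape
    out = np.empty((H * scale, W * scale), dtype=float)
    for y in range(H * scale):
        for x in range(W * scale):
            sy, sx = y / scale, x / scale  # target position on the low-res grid
            # Candidate support points: a small neighborhood on the low-res grid.
            js = np.unique(np.clip(np.arange(int(sy) - 1, int(sy) + 3), 0, H - 1))
            is_ = np.unique(np.clip(np.arange(int(sx) - 1, int(sx) + 3), 0, W - 1))
            cand = [(j, i) for j in js for i in is_]
            # Color-guided selection: keep supports whose image color is close
            # to the color at the target pixel (likely the same surface).
            tc = color_hi[y, x].astype(float)
            sel = [(j, i) for (j, i) in cand
                   if np.linalg.norm(
                       color_hi[j * scale, i * scale].astype(float) - tc)
                   <= color_thresh]
            if len(sel) < 3:          # too few similar supports: fall back
                sel = cand
            # Local linear approximation: fit depth ~ a*row + b*col + c
            # to the selected supports, then evaluate at the target position.
            A = np.array([[j, i, 1.0] for (j, i) in sel])
            z = np.array([depth[j, i] for (j, i) in sel])
            coef, *_ = np.linalg.lstsq(A, z, rcond=None)
            out[y, x] = coef @ np.array([sy, sx, 1.0])
    return out
```

Because each output pixel comes from a plane fitted to real LIDAR returns rather than from naive bilinear blending, depth discontinuities at color edges are preserved, which is the property that keeps the densified point cloud faithful to the 3D scene.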

Paper Details

Date Published: 8 February 2015
PDF: 13 pages
Proc. SPIE 9406, Intelligent Robots and Computer Vision XXXII: Algorithms and Techniques, 94060K (8 February 2015); doi: 10.1117/12.2078348
T. Nathan Mundhenk, DAQRI (United States) and HRL Labs., LLC (United States)
Kyungnam Kim, HRL Labs., LLC (United States)
Yuri Owechko, HRL Labs., LLC (United States)


Published in SPIE Proceedings Vol. 9406:
Intelligent Robots and Computer Vision XXXII: Algorithms and Techniques
Juha Röning; David Casasent, Editors

© SPIE.