
Proceedings Paper

Real-time visual processing in support of autonomous driving
Author(s): Marilyn Nashman; Henry Schneiderman

Paper Abstract

Autonomous driving provides an effective way to address traffic concerns such as safety and congestion. There has been increasing interest in the development of autonomous driving in recent years. Interest has included high-speed driving on highways, urban driving, and navigation through less structured off-road environments. The primary challenge in autonomous driving is developing perception techniques that are reliable under the extreme variability of outdoor conditions in any of these environments. Roads vary in appearance. Some are smooth and well marked, while others have cracks and potholes or are unmarked. Shadows, glare, varying illumination, dirt or foreign matter, other vehicles, rain, and snow also affect road appearance. This paper describes a visual processing algorithm that supports autonomous driving. The algorithm requires that lane markings be present and attempts to track the lane markings on each of two lane boundaries in the lane of travel. There are three stages of visual processing computation: extracting edges, determining which edges correspond to lane markers, and updating geometric models of the lane markers. A fourth stage computes a steering command for the vehicle based on the updated road model. All processing is confined to the 2-D image plane. No information about the motion of the vehicle is used. This algorithm has been used as part of a complete system to drive an autonomous vehicle, a high mobility multipurpose wheeled vehicle (HMMWV). Autonomous driving has been demonstrated on both local roads and highways at speeds up to 100 kilometers per hour (km/h). The algorithm has performed well in the presence of non-ideal road conditions including gaps in the lane markers, sharp curves, shadows, cracks in the pavement, wet roads, rain, dusk, and nighttime driving. The algorithm runs at a sampling rate of 15 Hz and has a worst case processing delay time of 150 milliseconds. Processing is implemented under the NASA/NBS Standard Reference Model for Telerobotic Control System Architecture (NASREM) and runs on a dedicated image processing engine and a VME-based microprocessor system.
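The four stages described in the abstract (edge extraction, edge-to-marker correspondence, geometric model update, steering computation) can be sketched for a single image row as follows. This is an illustrative outline only, not the paper's implementation: the function names, the validation-gate matching, the low-pass model update, and all thresholds and gains are assumptions introduced for the example.

```python
# Hypothetical sketch of a lane-tracking loop with the four stages named in
# the abstract. All parameters (threshold, gate, alpha, gain) are illustrative.

def extract_edges(row, threshold=50):
    """Stage 1: mark columns where the horizontal intensity change
    between adjacent pixels exceeds a threshold."""
    return [x for x in range(1, len(row)) if abs(row[x] - row[x - 1]) > threshold]

def match_lane_markers(edges, model_x, gate=5):
    """Stage 2: keep only edges near the marker position predicted by the
    current model (a simple validation gate)."""
    return [x for x in edges if abs(x - model_x) <= gate]

def update_model(model_x, matched, alpha=0.5):
    """Stage 3: low-pass update of the 2-D image-plane marker position.
    With no matched edges (e.g. a gap in the markings), coast on the old model."""
    if not matched:
        return model_x
    measured = sum(matched) / len(matched)
    return (1 - alpha) * model_x + alpha * measured

def steering_command(left_x, right_x, image_width, gain=0.01):
    """Stage 4: steer toward the lane center, computed purely in image
    coordinates (no vehicle-motion information is used)."""
    lane_center = (left_x + right_x) / 2.0
    return gain * (lane_center - image_width / 2.0)
```

For example, a 40-pixel row with bright lane markers at columns 10 and 30 yields edge pairs around each marker; gating those edges against per-boundary models, updating each model, and feeding the two positions to `steering_command` produces a small correction toward the lane center.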

Paper Details

Date Published: 26 February 1997
PDF: 11 pages
Proc. SPIE 2962, 25th AIPR Workshop: Emerging Applications of Computer Vision, (26 February 1997); doi: 10.1117/12.267816
Author Affiliations:
Marilyn Nashman, National Institute of Standards and Technology (United States)
Henry Schneiderman, Carnegie Mellon Univ. (United States)

Published in SPIE Proceedings Vol. 2962:
25th AIPR Workshop: Emerging Applications of Computer Vision
David H. Schaefer; Elmer F. Williams, Editor(s)

© SPIE.