
Proceedings Paper

Continuous and embedded learning in autonomous vehicles: adapting to sensor failures
Author(s): Alan C. Schultz; John J. Grefenstette

Paper Abstract

This paper describes an approach to creating autonomous systems that can continue to learn throughout their lives, that is, to adapt to changes in the environment and in their own capabilities. Evolutionary learning methods have been found useful in several areas of autonomous vehicle development. In our research, evolutionary algorithms are used to explore alternative robot behaviors within a simulation model as a way of reducing the overall knowledge engineering effort. The learned behaviors are then tested in the actual robot and the results compared. Initial research demonstrated the ability to learn reasonably complex robot behaviors, such as herding, navigation, and collision avoidance, using this offline learning approach. In this work, the vehicle is always exploring different strategies via an internal simulation model; the simulation, in turn, changes over time to better match the world. This model, which we call Continuous and Embedded Learning (also referred to as Anytime Learning), is a general approach to continuous learning in a changing environment. The agent's learning module continuously tests new strategies against a simulation model of the task environment and dynamically updates the knowledge base used by the agent on the basis of the results. The execution module controls the agent's interaction with the environment and includes a monitor that can dynamically modify the simulation model based on its observations of the environment. When the simulation model is modified, the learning process continues on the modified model. The learning system is assumed to operate indefinitely, and the execution system uses the results of learning as they become available. Early experimental studies demonstrate a robot that can learn to adapt to failures in its sonar sensors.
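To make the learning/execution/monitor split concrete, here is a minimal sketch of the Continuous and Embedded Learning loop as the abstract describes it. All class and function names are hypothetical illustrations (the search step is a simple mutation-based stand-in for the full evolutionary algorithm), not the authors' implementation.

```python
import random
import threading
import time


class SimulationModel:
    """Internal model of the task environment (e.g., sonar behavior)."""

    def __init__(self, sensor_noise=0.05):
        self.sensor_noise = sensor_noise  # parameter the monitor may revise

    def evaluate(self, strategy):
        # Toy fitness: reward strategies whose gain compensates for the
        # modeled sensor noise. A real system would run a simulated
        # navigation or collision-avoidance trial here.
        return -abs(strategy["gain"] - 1.0 / (1.0 + self.sensor_noise))


class LearningModule:
    """Continuously searches for better strategies against the simulation."""

    def __init__(self, model):
        self.model = model
        self.best = {"gain": random.random()}

    def step(self):
        candidate = {"gain": self.best["gain"] + random.gauss(0.0, 0.1)}
        if self.model.evaluate(candidate) > self.model.evaluate(self.best):
            self.best = candidate  # new result becomes available to execution


class ExecutionModule:
    """Controls the agent and monitors the real environment."""

    def __init__(self, learner, model):
        self.learner = learner
        self.model = model

    def run_once(self):
        strategy = self.learner.best                # use current best strategy
        observed_noise = random.uniform(0.0, 0.4)   # stand-in sensor-health reading
        # Monitor: if observations diverge from the model, modify the model;
        # learning then continues on the modified model.
        if abs(observed_noise - self.model.sensor_noise) > 0.1:
            self.model.sensor_noise = observed_noise
        return strategy


if __name__ == "__main__":
    model = SimulationModel()
    learner = LearningModule(model)
    executor = ExecutionModule(learner, model)

    stop = threading.Event()

    def learn_forever():
        while not stop.is_set():   # learning is assumed to run indefinitely
            learner.step()

    threading.Thread(target=learn_forever, daemon=True).start()
    for _ in range(5):
        print("executing with strategy:", executor.run_once())
        time.sleep(0.1)
    stop.set()
```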

Paper Details

Date Published: 10 July 2000
PDF: 8 pages
Proc. SPIE 4024, Unmanned Ground Vehicle Technology II, (10 July 2000); doi: 10.1117/12.391649
Author Affiliations:
Alan C. Schultz, Naval Research Lab. (United States)
John J. Grefenstette, George Mason Univ. (United States)


Published in SPIE Proceedings Vol. 4024:
Unmanned Ground Vehicle Technology II
Grant R. Gerhart; Robert W. Gunderson; Chuck M. Shoemaker, Editor(s)

© SPIE.