Proceedings Paper

Navigation and collision avoidance with human augmented supervisory training and fine tuning via reinforcement learning
Author(s): Christopher J. Maxey; E. Jared Shamwell

Paper Abstract

Robust navigation and orientation under complex conditions are essential for autonomous drones operating in new and varied environments. Producing drones with adequate behaviors is challenging from both a training standpoint and a generalization standpoint. Using human expertise data can help bootstrap the learning process; however, it can also introduce side effects that are not immediately intuitive. This study applies varying levels of human input to an agent to determine how that input affects the agent's performance. The Unreal Engine and the AirSim plugin are used to train a quadcopter agent in an abstract "blocks world"-style environment. Six agents are trained in total: the first five receive increasing amounts of human input, and the sixth receives none. A variety of metrics are examined, including total goals achieved and time to achieve a given number of goals.
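
For context, below is a minimal sketch of how a quadcopter agent might be driven through the standard AirSim Python client in an Unreal "Blocks"-type environment. The select_action policy, control interval, and step count are illustrative placeholders; the paper's actual reward, goal layout, training procedure, and human-input mechanism are not reproduced here.

    # Minimal sketch: driving an AirSim quadcopter with velocity commands.
    # Assumes the standard `airsim` Python client and a running Unreal/AirSim
    # environment; the policy below is a hypothetical placeholder, not the
    # paper's learned agent.
    import airsim

    client = airsim.MultirotorClient()
    client.confirmConnection()
    client.enableApiControl(True)
    client.armDisarm(True)

    # Take off before issuing movement commands.
    client.takeoffAsync().join()

    def select_action(state):
        # In the paper's setup this action would come from the learned
        # (and optionally human-supervised) agent rather than a constant.
        return 1.0, 0.0, 0.0  # vx, vy, vz in m/s (hypothetical fixed action)

    for step in range(100):
        state = client.getMultirotorState()
        vx, vy, vz = select_action(state)
        # Fly with the chosen velocity for a short control interval.
        client.moveByVelocityAsync(vx, vy, vz, duration=0.5).join()

    client.armDisarm(False)
    client.enableApiControl(False)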

Paper Details

Date Published: 13 May 2019
PDF: 10 pages
Proc. SPIE 10982, Micro- and Nanotechnology Sensors, Systems, and Applications XI, 1098228 (13 May 2019); doi: 10.1117/12.2518551
Author Affiliations:
Christopher J. Maxey, U.S. Army Research Lab. (United States)
Univ. of Maryland, College Park (United States)
E. Jared Shamwell, U.S. Army Research Lab. (United States)


Published in SPIE Proceedings Vol. 10982:
Micro- and Nanotechnology Sensors, Systems, and Applications XI
Thomas George; M. Saif Islam, Editor(s)

© SPIE.