
Proceedings Paper

Reinforcement learning framework for collaborative agents interacting with soldiers in dynamic military contexts
Author(s): Sean L. Barton; Derrik Asher

Paper Abstract

Modern Soldiers increasingly rely on computational and autonomous systems to complete their missions [1,2]. Challenges arise around the use of such systems, which depend largely on the relationship between agents and human actors [3]. Improving agents in the field requires the development of adaptive, human-aware systems that learn behaviors based on the needs of their human counterparts, acting effectively as teammates rather than tools [4]. The development of such agent teammates is non-trivial, but recent advances in machine learning and artificial intelligence are promising. We identify deep reinforcement learning (RL) [5], multi-agent RL [6], and human-guided RL [7] as powerful tools for the creation of adaptive agent teammates. We propose a three-armed approach to the development of agent teammates that leverages these advances in RL. First, multi-agent deep RL can be used to solve increasingly complex problems. Second, human-guided reinforcement can be used to constrain agent behavior and speed up the discovery of optimal strategies. Third, human behavioral profiles derived from surveys of work-interest variables for specific military occupational specialty (MOS) codes can be used to tailor agent behavior to the needs of Soldiers. This approach addresses the necessary computational framework, the learning paradigm needed to discover behavior, and the human dimension that contextualizes behavior.
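The human-guided RL arm can be illustrated with a minimal sketch: a tabular Q-learner on a toy corridor task whose update combines the environment reward with a human feedback signal. The environment, the `human_guidance` function, and all hyperparameters here are assumptions for illustration only, not the method described in the paper; the sketch just shows how a shaping term from a human teammate can steer and accelerate learning.

```python
import random

# Toy 1-D corridor: states 0..4, goal at state 4, actions move left/right.
N_STATES = 5
ACTIONS = [-1, +1]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def human_guidance(state, action):
    """Hypothetical human feedback: approve steps toward the goal."""
    return 0.5 if action == +1 else -0.5

def step(state, action):
    """Environment transition: clip to the corridor, reward 1 at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=200, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r, done = step(s, a)
            # Human-guided shaping: environment reward plus human signal.
            shaped = r + human_guidance(s, a)
            future = 0.0 if done else GAMMA * max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += ALPHA * (shaped + future - Q[(s, a)])
            s = s2
    return Q

Q = train()
```

After training, the learned values prefer moving right (toward the goal) in every non-terminal state; the human signal makes that preference emerge faster than environment reward alone would.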

Paper Details

Date Published: 27 April 2018
PDF: 14 pages
Proc. SPIE 10653, Next-Generation Analyst VI, 1065303 (27 April 2018); doi: 10.1117/12.2303827
Author Affiliations:
Sean L. Barton, U.S. Army Research Lab. (United States)
Derrik Asher, U.S. Army Research Lab. (United States)

Published in SPIE Proceedings Vol. 10653:
Next-Generation Analyst VI
Timothy P. Hanratty; James Llinas, Editor(s)

© SPIE.