
Proceedings Paper

Reasoning with an uncertainty of information measure: decision making for military and non-military applications
Author(s): Adrienne Raglin; Andre Harrison

Paper Abstract

Intelligent agents are devices, software, and simulations that perceive the environment and take actions to achieve a goal through the use of artificial intelligence. These AI agents are increasingly incorporated into every aspect of our lives. This is particularly true for soldiers and analysts, who must increasingly perform tasks in varied, dynamic, and fast-paced operational environments. There is a common idea that, in the future, the pace of operations will increasingly far exceed soldiers’ or analysts’ ability to react to extreme, complex activities. Accelerated decision making in Army operations will rely on AI agents and enabling technologies such as autonomous systems and simulations. However, what happens when the decisions from these AI agents are wrong, produce results contrary to expectations, or simply disagree with a person? Explanations can help resolve these issues. Any errors or uncertainty from the AI agent in an accelerated environment will present unique and unforeseen challenges that may inhibit analysts’ or soldiers’ ability to make decisions effectively and efficiently. Providing explanations for AI outputs, predictions, or behaviors is challenging: algorithms and techniques frequently obfuscate which features were used and how actions were decided. In addition, results from these systems do not always include uncertainty information related to the factors that influenced the actions or decisions. Therefore, explicitly including uncertainty information in the explanation is necessary. We explore the use of abductive reasoning to provide explanations for situations where an agent’s answers are not in line with human assessment or do not provide the uncertainty information needed for human interpretation of those answers. The primary goal of this work is to strengthen the communication of information and increase the effectiveness of interactions between humans and non-human agents.
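The abstract's core idea, abductive reasoning that returns both a best explanation and an explicit uncertainty measure, can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's method: it scores candidate hypotheses by likelihood times prior and reports the winner's normalized posterior as the uncertainty information accompanying the explanation. All hypotheses, priors, and likelihood values below are invented for illustration.

```python
# Illustrative sketch of abductive explanation with an explicit
# uncertainty measure. Hypotheses and probabilities are assumptions
# made up for this example, not values from the paper.

def abduce(hypotheses, priors, likelihoods, observation):
    """Return the hypothesis that best explains `observation`,
    plus a normalized confidence score (posterior probability)."""
    # Unnormalized posterior for each hypothesis: P(obs | h) * P(h)
    scores = {h: likelihoods[h].get(observation, 0.0) * priors[h]
              for h in hypotheses}
    total = sum(scores.values())
    if total == 0.0:
        return None, 0.0  # no hypothesis explains the observation
    best = max(scores, key=scores.get)
    return best, scores[best] / total

# Toy example: explain why a sensor reports "contact".
hypotheses = ["vehicle", "animal", "sensor_fault"]
priors = {"vehicle": 0.2, "animal": 0.5, "sensor_fault": 0.3}
likelihoods = {
    "vehicle":      {"contact": 0.9},
    "animal":       {"contact": 0.4},
    "sensor_fault": {"contact": 0.1},
}

explanation, confidence = abduce(hypotheses, priors, likelihoods, "contact")
# `confidence` lets a human judge whether to trust the agent's answer,
# which is the kind of uncertainty information the abstract argues for.
```

Note that in this toy case the most likely explanation ("animal") wins only narrowly over "vehicle", so the low confidence score itself signals that a human should scrutinize the answer.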

Paper Details

Date Published: 27 April 2018
PDF: 10 pages
Proc. SPIE 10653, Next-Generation Analyst VI, 1065308 (27 April 2018); doi: 10.1117/12.2304537
Author Affiliations:
Adrienne Raglin, U.S. Army Research Lab. (United States)
Andre Harrison, U.S. Army Research Lab. (United States)


Published in SPIE Proceedings Vol. 10653:
Next-Generation Analyst VI
Timothy P. Hanratty; James Llinas, Editor(s)

© SPIE.