
Proceedings Paper

Multisource deep learning for situation awareness

Paper Abstract

The resurgence of interest in artificial intelligence (AI) stems from impressive deep learning (DL) performance, such as hierarchical supervised training using a Convolutional Neural Network (CNN). Current DL methods should provide contextual reasoning, explainable results, and repeatable understanding, all of which require evaluation methods. This paper discusses DL techniques that use multimodal (or multisource) information and extend measures of performance (MOP). Examples of joint multimodal learning include imagery and text, video and radar, and other common sensor types. Issues with joint multimodal learning challenge many current methods, and care is needed when applying machine learning techniques. Results from Deep Multimodal Image Fusion (DMIF) using electro-optical and infrared data demonstrate performance modeling based on distance, to better understand DL robustness and quality for providing situation awareness.
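The abstract describes joint learning over co-registered electro-optical (EO) and infrared (IR) imagery. The sketch below illustrates one common way to set up such feature-level fusion with a two-branch CNN; the class names, layer sizes, and ten-class output are illustrative assumptions, not the paper's DMIF architecture.

```python
# Minimal sketch of feature-level EO/IR fusion with a two-branch CNN.
# This is not the paper's DMIF implementation; the architecture and
# hyperparameters here are assumptions chosen for illustration only.
import torch
import torch.nn as nn


class TwoBranchFusionNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # One convolutional branch per modality (EO: 3 channels, IR: 1 channel).
        self.eo_branch = self._make_branch(in_channels=3)
        self.ir_branch = self._make_branch(in_channels=1)
        # Fused features (64 + 64 channels) are pooled and classified.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64 * 2, num_classes),
        )

    @staticmethod
    def _make_branch(in_channels: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, eo: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        # Extract per-modality features, then fuse by channel concatenation.
        fused = torch.cat([self.eo_branch(eo), self.ir_branch(ir)], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    net = TwoBranchFusionNet(num_classes=10)
    eo = torch.randn(2, 3, 64, 64)   # batch of EO image chips
    ir = torch.randn(2, 1, 64, 64)   # co-registered IR chips
    print(net(eo, ir).shape)         # torch.Size([2, 10])
```

Concatenating branch features before classification is only one fusion point; fusing at the pixel level or at the decision level are other standard choices when modalities differ in resolution or registration quality.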

Paper Details

Date Published: 14 May 2019
PDF: 12 pages
Proc. SPIE 10988, Automatic Target Recognition XXIX, 109880M (14 May 2019); doi: 10.1117/12.2519236
Author Affiliations:
Erik Blasch, Air Force Office of Scientific Research (United States)
Zheng Liu, The Univ. of British Columbia Okanagan (Canada)
Yufeng Zheng, Alcorn State Univ. (United States)
Uttam Majumder, Air Force Research Lab. (United States)
Alex Aved, Air Force Research Lab. (United States)
Peter Zulch, Air Force Research Lab. (United States)


Published in SPIE Proceedings Vol. 10988:
Automatic Target Recognition XXIX
Riad I. Hammoud; Timothy L. Overman, Editor(s)

© SPIE.