
Proceedings Paper

Explainable automatic target recognition (XATR)
Author(s): Sundip R. Desai; Nhat X. Nguyen; Moses W. Chan

Paper Abstract

An explainable automatic target recognition (XATR) algorithm with a part-based representation of 2D and 3D objects is presented. The algorithm employs a two-phase approach. In the first phase, a collection of Convolutional Neural Networks (CNNs) recognizes the major parts of these objects, also known as the vocabulary. In the second phase, a Markov Logic Network (MLN) and a structure learning mechanism learn the geometric and spatial relationships between the parts in the vocabulary that best describe the objects. The resultant network offers three unique features: 1) the inference results are explainable, with qualitative information about the vocabulary parts that make up the object; 2) the part-based approach achieves robust recognition performance on partially occluded objects and on images of objects hidden under canopy; and 3) different object representations can be created by varying the vocabulary and permuting the learned relationships.
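
As a rough illustration of the two-phase idea described in the abstract (not the authors' implementation), the sketch below pairs a small CNN part recognizer with a hand-written relational rule standing in for the learned MLN. The part vocabulary, network sizes, and the "turret and hull implies tank" rule are hypothetical placeholders chosen only to show how part detections can support an explainable object-level decision.

# Minimal sketch of the two-phase XATR idea: CNN part recognition (phase 1)
# followed by relational reasoning over the detected parts (phase 2).
# The hard-coded rule below is an illustrative stand-in for the learned MLN.
import torch
import torch.nn as nn

PART_VOCABULARY = ["turret", "hull", "barrel", "track"]  # hypothetical parts

class PartCNN(nn.Module):
    """Phase 1: a small CNN that scores 32x32 image chips for each part class."""
    def __init__(self, num_parts: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 8, num_parts)

    def forward(self, chips: torch.Tensor) -> torch.Tensor:
        x = self.features(chips)
        return self.head(x.flatten(1))

def recognize_parts(model, chips, threshold=0.5):
    """Return (part_name, confidence) for chips whose best score passes a threshold."""
    with torch.no_grad():
        probs = torch.sigmoid(model(chips))
    detections = []
    for chip_probs in probs:
        idx = int(chip_probs.argmax())
        if float(chip_probs[idx]) >= threshold:
            detections.append((PART_VOCABULARY[idx], float(chip_probs[idx])))
    return detections

def explain_object(detections):
    """Phase 2 stand-in: declare 'tank' if turret and hull are both present,
    and report which vocabulary parts support the decision (the explanation)."""
    found = {name for name, _ in detections}
    if {"turret", "hull"} <= found:
        return "tank", sorted(found)
    return "unknown", sorted(found)

if __name__ == "__main__":
    model = PartCNN(len(PART_VOCABULARY))
    chips = torch.randn(4, 1, 32, 32)          # four candidate part chips
    parts = recognize_parts(model, chips, 0.0)  # untrained model, so no threshold
    label, evidence = explain_object(parts)
    print(f"label={label}, supporting parts={evidence}")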

Paper Details

Date Published: 14 May 2019
PDF: 10 pages
Proc. SPIE 10988, Automatic Target Recognition XXIX, 1098807 (14 May 2019); doi: 10.1117/12.2517161
Author Affiliations:
Sundip R. Desai, Lockheed Martin Space (United States)
Nhat X. Nguyen, Lockheed Martin Space (United States)
Moses W. Chan, Lockheed Martin Space (United States)


Published in SPIE Proceedings Vol. 10988:
Automatic Target Recognition XXIX
Riad I. Hammoud; Timothy L. Overman, Editor(s)

© SPIE