
Proceedings Paper
Explainable automatic target recognition (XATR)
Paper Abstract
An explainable automatic target recognition (XATR) algorithm with a part-based representation of 2D and 3D objects is presented. The algorithm employs a two-phase approach. In the first phase, a collection of Convolutional Neural Networks (CNNs) recognizes the major parts of the objects, also known as the vocabulary. In the second phase, a Markov Logic Network (MLN) with a structure learning mechanism learns the geometric and spatial relationships between the parts in the vocabulary that best describe the objects. The resultant network offers three unique features: 1) the inference results are explainable with qualitative information about the vocabulary parts that make up the object; 2) the part-based approach achieves robust recognition performance for partially occluded objects or images of objects hidden under a canopy; and 3) different object representations can be created by varying the vocabulary and permuting the learned relationships.
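As a rough illustration of the two-phase idea described in the abstract (not taken from the paper itself), the sketch below shows how per-part CNN detections could feed an MLN-style score built from weighted logical rules over spatial relations. All part labels, rules, and weights are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch of the two-phase XATR idea (assumptions, not the paper's code):
# phase 1 yields vocabulary-part detections from per-part CNNs; phase 2 scores an
# object hypothesis with weighted logical (MLN-style) rules over part relations.

from dataclasses import dataclass
from math import exp
from typing import Callable, List, Tuple


@dataclass
class PartDetection:
    label: str                    # vocabulary part, e.g. "turret", "hull"
    confidence: float             # CNN score for this part
    center: Tuple[float, float]   # image-plane location of the part


# A rule is a ground formula over the detection set plus a learned weight.
Rule = Tuple[float, Callable[[List[PartDetection]], bool]]


def above(a: PartDetection, b: PartDetection) -> bool:
    """Simple spatial relation: part a sits above part b in the image."""
    return a.center[1] < b.center[1]


def find(parts: List[PartDetection], label: str) -> List[PartDetection]:
    """Parts of a given vocabulary label with sufficient detector confidence."""
    return [p for p in parts if p.label == label and p.confidence > 0.5]


# Illustrative rules for a hypothetical "tank" object class.
tank_rules: List[Rule] = [
    (2.0, lambda ps: bool(find(ps, "turret"))),           # has a turret
    (1.5, lambda ps: bool(find(ps, "hull"))),              # has a hull
    (1.0, lambda ps: any(above(t, h)                       # turret above hull
                         for t in find(ps, "turret")
                         for h in find(ps, "hull"))),
]


def object_score(parts: List[PartDetection], rules: List[Rule]) -> float:
    """MLN-style score: logistic of the summed weights of satisfied formulas.
    The satisfied formulas double as a qualitative explanation of the result."""
    total = sum(w for w, formula in rules if formula(parts))
    return 1.0 / (1.0 + exp(-total))


if __name__ == "__main__":
    # Mock phase-1 output; in practice these would come from the part CNNs.
    detections = [
        PartDetection("hull", 0.92, (0.5, 0.6)),
        PartDetection("turret", 0.81, (0.5, 0.4)),
    ]
    print(f"tank score: {object_score(detections, tank_rules):.3f}")
    for w, formula in tank_rules:
        status = "satisfied" if formula(detections) else "not satisfied"
        print(f"rule {status} (weight {w})")
```

Because the score is a sum over interpretable formulas, listing which rules fired mirrors the abstract's claim that inference results are explainable in terms of the vocabulary, and robustness to occlusion follows from partial rule satisfaction still contributing weight.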
Paper Details
Date Published: 14 May 2019
PDF: 10 pages
Proc. SPIE 10988, Automatic Target Recognition XXIX, 1098807 (14 May 2019); doi: 10.1117/12.2517161
Published in SPIE Proceedings Vol. 10988:
Automatic Target Recognition XXIX
Riad I. Hammoud; Timothy L. Overman, Editor(s)
Author Affiliations:
Sundip R. Desai, Lockheed Martin Space (United States)
Nhat X. Nguyen, Lockheed Martin Space (United States)
Moses W. Chan, Lockheed Martin Space (United States)
© SPIE.
