21 - 25 April 2024
National Harbor, Maryland, US
This conference on Automatic Target Recognition (ATR) emphasizes all aspects of modern automatic and machine-assisted target recognition technologies. Novel methods in these key areas are of particular interest: deep-learning and model-based object/target recognition, adaptive and learning approaches, and advanced signal and image processing concepts for detection, multi-target tracking, and High Value Target (HVT) tracking. ATR solutions for various sensors, such as sonar/acoustic, neuromorphic (event), electro-optical, infrared, radar, laser radar, multispectral, and hyperspectral sensors, will be considered. Papers dealing with the entire spectrum of algorithms, systems, and architectures in ATR will also be considered.

An extremely important challenge for ATR is the evaluation and prediction of ATR performance given the practical limitation that data sets cannot represent the extreme variability of the real world. Methods are sought that allow a rapid insertion of new targets and adaptive algorithms capable of supporting flexible and sustained employment of ATR. A key technical challenge is the development of affordable ATR solutions that employ an open architecture to provide timely hardware and software insertion.


Papers are solicited in the following and related topics:

  • Machine learning for ATR
  • Geospatial remote sensing systems
  • IR-based systems
  • Hyperspectral-based systems
  • Radar/laser radar-based systems
  • New methodologies


Panel discussion on machine learning for automatic target recognition (ML4ATR)
Following the great success of past ML4ATR sessions, we intend to organize another session in 2024. The Machine Learning for Automatic Target Recognition (ML4ATR) session at SPIE Defense + Security (ATR conference) highlights the accomplishments to date and challenges ahead in designing and deploying deep learning and big data analytics algorithms, systems, and hardware for ATR. It provides a forum for researchers, practitioners, solution architects and program managers across all the widely varying disciplines of ATR involved in connecting, engaging, designing solutions, setting up requirements, testing and evaluating to shape the future of this exciting field. ML4ATR topics of interest include training deep-learning-based ATR with limited measured/real data, multi-modal satellite/hyperspectral/sonar/FMV imagery analytics, graph analytic multi-sensory fusion, change detection, pattern-of-life analysis, adversarial learning, trust, and ethics. We invite experts in the field to join this panel discussion in 2024. Each panelist gives a short keynote talk about their projects on machine learning for ATR.


Best Paper Award and Best Student Paper Award
To be eligible for this award, you must submit a manuscript, be accepted for an oral presentation, and you or your co-author must present your paper on-site. All students are eligible if the abstract was accepted during the academic year the student graduated. Students are required to be enrolled in a university degree granting program. Manuscripts will be judged on technical merit, presentation/speaking skills, and audience interaction. Winners will be announced after the meeting and will be included in the proceedings. All winners will receive an Award Certificate and recognition on SPIE.org.



Joint Session
A joint session on artificial intelligence/deep learning (AI/DL) is being planned with the Infrared Technology and Applications conference. We expect to cover AI/DL in the design of IR systems, subsystems, and components (military as well as commercial), and DL in IR-based detection, tracking, and recognition systems.
Conference 13039

Automatic Target Recognition XXXIV

22 - 24 April 2024 | National Harbor 5
Sessions
  • Opening Remarks
  • 1: Machine Learning for Automatic Target Recognition I: Joint Session with Conferences 13036 and 13039
  • 2: Electro-Optical and Infrared Detection and Tracking I
  • 3: Radar Frequency and Synthetic Aperture Radar Automatic Target Recognition I
  • 4: Electro-Optical and Infrared Detection and Tracking II
  • Symposium Plenary
  • Symposium Panel on Microelectronics Commercial Crossover
  • Opening Remarks
  • 5: Machine Learning for Automatic Target Recognition II
  • 6: Panel Discussion: Machine Learning for Automatic Target Recognition
  • 7: Radar Frequency and Synthetic Aperture Radar Automatic Target Recognition II
  • Poster Session
  • Symposium Plenary on AI/ML + Sustainability
  • Artificial Intelligence and Deep Learning: Joint Session with Conferences 13039 and 13046
Opening Remarks
22 April 2024 • 8:00 AM - 8:10 AM EDT | National Harbor 5
Session Chair: Timothy L. Overman, Prime Solutions Group, Inc. (United States)
Opening remarks for Automatic Target Recognition XXXIV.
Session 1: Machine Learning for Automatic Target Recognition I: Joint Session with Conferences 13036 and 13039
22 April 2024 • 8:10 AM - 10:10 AM EDT | National Harbor 5
Session Chair: Timothy L. Overman, Prime Solutions Group, Inc. (United States)
13039-1
Author(s): Raghuveer M. Rao, DEVCOM Army Research Lab. (United States)
22 April 2024 • 8:10 AM - 8:50 AM EDT | National Harbor 5
13039-2
Author(s): Sophia Abraham, Steve Cruz, Univ. of Notre Dame (United States); Suya You, DEVCOM Army Research Lab. (United States); Jonathan D. Hauenstein, Walter J. Scheirer, Univ. of Notre Dame (United States)
22 April 2024 • 8:50 AM - 9:10 AM EDT | National Harbor 5
The intricacies of visual scenes in Automatic Target Recognition (ATR) necessitate sophisticated models for nuanced interpretation. Vision-language models, notably CLIP (Contrastive Language-Image Pre-training), bridge visual perception and linguistic description. However, their effectiveness in ATR relies on targeted fine-tuning, challenged by the unimodal nature of datasets like the Defense Systems Information Analysis Center (DSIAC) ATR data. We propose a novel fine-tuning approach for CLIP, enriching DSIAC data with algorithmically generated captions for a multimodal training environment. Central to our innovation is a homotopy-based multi-objective optimization strategy, adept at balancing model accuracy, generalization, and interpretability—key factors for ATR success. Implemented in PyTorch Lightning, our approach propels the frontier of ATR model optimization while also effectively addressing the intricacies of real-world ATR requirements.
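As a hedged illustration of the homotopy idea above (not the authors' implementation), the following PyTorch sketch blends two objectives with a homotopy parameter t while fine-tuning a stand-in for a CLIP-style image/text encoder pair; the encoders, caption tokens, and the weight-drift regularizer used as the second objective are all illustrative assumptions.

```python
# Hypothetical sketch: homotopy-weighted fine-tuning of a CLIP-style model.
# The encoders below are stand-ins for pre-trained CLIP towers; the DSIAC
# captions, loaders, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyClip(nn.Module):
    """Stand-in for a CLIP image/text encoder pair (shared embedding dim)."""
    def __init__(self, dim=128):
        super().__init__()
        self.image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, dim))
        self.text_encoder = nn.Embedding(1000, dim)   # token-id lookup as a proxy

    def forward(self, images, token_ids):
        img = F.normalize(self.image_encoder(images), dim=-1)
        txt = F.normalize(self.text_encoder(token_ids).mean(dim=1), dim=-1)
        return img, txt

def contrastive_loss(img, txt, temperature=0.07):
    logits = img @ txt.t() / temperature
    labels = torch.arange(len(img))
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

model = ToyClip()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Homotopy schedule: gradually shift weight from the accuracy objective
# (image-text contrastive loss) toward a regularity objective that keeps the
# fine-tuned weights close to the pre-trained ones (a proxy for the paper's
# generalization/interpretability terms).
init_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
for step, t in enumerate(torch.linspace(0.0, 1.0, steps=100)):
    images = torch.randn(8, 3, 64, 64)              # placeholder DSIAC chips
    token_ids = torch.randint(0, 1000, (8, 16))     # placeholder caption tokens
    img, txt = model(images, token_ids)
    task = contrastive_loss(img, txt)
    drift = sum(F.mse_loss(p, init_state[n]) for n, p in model.named_parameters())
    loss = (1 - t) * task + t * drift               # homotopy-blended objective
    opt.zero_grad(); loss.backward(); opt.step()
```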
13039-3
Author(s): Rohan Putatunda, Kelvin U. Echenim, Univ. of Maryland, Baltimore County (United States)
22 April 2024 • 9:10 AM - 9:30 AM EDT | National Harbor 5
This study introduces a depth-aware approach for detecting small-scale camouflaged objects, leveraging the Swin Transformer and Ghost Convolution Layer. We employ multimodal depth maps to enhance spatial understanding, which is crucial for identifying camouflaged items. The Swin Transformer captures extensive contextual data, while the Ghost Convolution Layer boosts computational efficiency. We validate our method on unique quasi-synthetic and comparative synthetic datasets created for this study. An ablation study and GRAD-CAM visualization further substantiate the model's effectiveness. This research offers a novel framework for improving object detection in challenging camouflaged environments.
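For readers unfamiliar with the Ghost Convolution Layer mentioned above, the sketch below shows a GhostNet-style block in PyTorch; the channel counts and the RGB-plus-depth input are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of a Ghost convolution block (GhostNet-style); sizes are toy values.
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Generate part of the output channels with a normal conv, the rest with a
    cheap depthwise conv applied to those 'intrinsic' feature maps."""
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        init_ch = out_ch // ratio
        cheap_ch = out_ch - init_ch
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, cheap_ch, kernel_size=dw_kernel, padding=dw_kernel // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)
        return torch.cat([intrinsic, ghost], dim=1)

# Example: stack RGB with a depth channel, then apply GhostConv.
rgbd = torch.randn(1, 4, 128, 128)           # RGB + depth along the channel axis
features = GhostConv(in_ch=4, out_ch=32)(rgbd)
print(features.shape)                         # torch.Size([1, 32, 128, 128])
```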
13039-4
Author(s): Scott G. Hodes, The Pennsylvania State Univ. (United States), Applied Research Lab. (United States); Kory J. Blose, The Applied Research Lab at The Pennsylvania State University (United States), The Pennsylvania State University Department of Agricultural and Biological Engineering (United States); Timothy J. Kane, The Pennsylvania State University School of Electrical Engineering and Computer Science (United States), The Applied Research Lab at The Pennsylvania State University (United States)
22 April 2024 • 9:30 AM - 9:50 AM EDT | National Harbor 5
This work involves performing black box adversarial attacks using light as a medium against image classifier neural networks. The method of generating these adversarial examples involves querying the target network to inform decisions on designing a pattern upon the Fourier plane. The shapes are designed to target regions of the Fourier domain effectively without being able to back-propagate loss toward said plane.
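A minimal sketch of the general idea, under the assumption of a simple query-based random search that notches out small regions of the Fourier plane; the placeholder classifier, query budget, and disk-shaped masks are illustrative, not the authors' attack.

```python
# Hedged sketch of a query-based (black-box) attack that perturbs an image in
# the Fourier plane; everything below is an illustrative assumption.
import numpy as np

def classify(image):
    """Placeholder black-box classifier: returns per-class probabilities."""
    rng = np.random.default_rng(int(abs(image.sum()) * 1e3) % (2**32))
    p = rng.random(10)
    return p / p.sum()

def fourier_attack(image, true_class, n_queries=200, radius=6, seed=0):
    rng = np.random.default_rng(seed)
    h, w = image.shape
    best_F = np.fft.fftshift(np.fft.fft2(image))
    best_conf = classify(image)[true_class]
    for _ in range(n_queries):
        # Propose zeroing (notching) a small disk at a random Fourier location.
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        yy, xx = np.ogrid[:h, :w]
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        cand_F = best_F.copy()
        cand_F[mask] = 0
        cand = np.real(np.fft.ifft2(np.fft.ifftshift(cand_F)))
        conf = classify(cand)[true_class]
        if conf < best_conf:                  # keep changes that hurt the true class
            best_F, best_conf = cand_F, conf
    return np.real(np.fft.ifft2(np.fft.ifftshift(best_F))), best_conf

adv, conf = fourier_attack(np.random.rand(64, 64), true_class=3)
```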
13039-5
Author(s): Khaled Obaideen, Univ. of Sharjah (United Arab Emirates); Yousuf Faroukh, Sharjah Academy for Astronomy, Space Sciences & Technology (United Arab Emirates); Mohammad AlShabi, Univ. of Sharjah (United Arab Emirates)
22 April 2024 • 9:50 AM - 10:10 AM EDT | National Harbor 5
In recent decades, there have been notable advancements in Automatic Target Recognition (ATR) systems. One technique that has played a crucial role in improving the accuracy and efficiency of these systems is dictionary-learning. This paper provides a thorough examination, documenting the evolutionary progression of dictionary-learning methodologies in the field of Automatic Target Recognition (ATR). Commencing with initial approaches such as K-SVD and MOD, we examine their fundamental influence and subsequent evolution towards more adaptable methodologies, such as online and convolutional dictionary learning. The focus is on comprehending the enhancements in target recognition achieved by dictionary-learning methods, particularly in demanding scenarios characterized by factors such as noise, occlusions, and diverse target orientations. In addition, we investigate the recent incorporation of deep learning principles into conventional dictionary-based frameworks, revealing a hybrid paradigm that holds the potential to significantly transform automatic target recognition (ATR) capabilities.
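As a hedged illustration of dictionary-learning-based ATR, the following scikit-learn sketch learns one dictionary per class and classifies by smallest reconstruction error; the toy signatures and hyperparameters are assumptions rather than results from the paper.

```python
# Illustrative per-class dictionary learning with classification by
# reconstruction error; data and sizes are placeholders.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def fit_class_dictionaries(X_by_class, n_atoms=32, alpha=1.0):
    """Learn one sparse dictionary per target class."""
    return {c: MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                           transform_algorithm="omp",
                                           random_state=0).fit(X)
            for c, X in X_by_class.items()}

def classify(x, dictionaries):
    """Assign x to the class whose dictionary reconstructs it best."""
    errors = {}
    for c, dl in dictionaries.items():
        code = dl.transform(x.reshape(1, -1))       # sparse coefficients
        recon = code @ dl.components_
        errors[c] = np.linalg.norm(x - recon.ravel())
    return min(errors, key=errors.get)

rng = np.random.default_rng(0)
X_by_class = {c: rng.normal(c, 1.0, size=(200, 64)) for c in range(3)}  # toy signatures
dicts = fit_class_dictionaries(X_by_class)
print(classify(rng.normal(2, 1.0, size=64), dicts))  # likely prints 2
```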
Break
Coffee Break 10:10 AM - 11:00 AM
Session 2: Electro-Optical and Infrared Detection and Tracking I
22 April 2024 • 11:00 AM - 12:00 PM EDT | National Harbor 5
Session Chair: Kenny Chen, Lockheed Martin Missiles and Fire Control (United States)
13039-7
Author(s): Peter A. Torrione, Joe Camilo, Covar, LLC (United States); Marie Talbott, U.S. Army (United States), DEVCOM C5ISR (United States)
22 April 2024 • 11:00 AM - 11:20 AM EDT | National Harbor 5
Advances in camera design have resulted in the development of next-generation “event-based” imaging sensors. These imaging sensors provide super-high temporal resolution in individual pixels, but only pick up changes in the scene. This enables interesting new capabilities like bullet tracking and hostile fire detection, and their low power consumption is important for edge AiTR systems. However, AiTR algorithms require massive amounts of data for system training and development, and these data collections are expensive and time-consuming. Therefore, it is of interest to explore whether and how current data can be modified to simulate event-based images for training and evaluation. In this work, we present results from training and testing CNN and non-CNN architectures on both simulated and real event-based imaging sensor systems.
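One common way to approximate event data from conventional video, shown for illustration only, is to threshold per-pixel log-intensity changes; the thresholds and reset rule below are assumptions and do not reproduce the authors' simulation.

```python
# Rough sketch of turning conventional frames into event-like frames.
import numpy as np

def simulate_events(frames, threshold=0.15, eps=1e-3):
    """frames: (T, H, W) grayscale intensities in [0, 1].
    Returns (T-1, H, W) event frames with values in {-1, 0, +1}."""
    log_frames = np.log(frames + eps)
    events = []
    ref = log_frames[0]                            # per-pixel reference level
    for t in range(1, len(log_frames)):
        diff = log_frames[t] - ref
        ev = np.zeros_like(diff, dtype=np.int8)
        ev[diff > threshold] = 1                   # ON events
        ev[diff < -threshold] = -1                 # OFF events
        ref = np.where(ev != 0, log_frames[t], ref)  # reset pixels that fired
        events.append(ev)
    return np.stack(events)

video = np.random.rand(10, 128, 128)               # stand-in for measured EO/IR video
event_frames = simulate_events(video)
print(event_frames.shape, np.unique(event_frames))
```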
13039-8
Author(s): Daniel Carvalho, DCS Corp. (United States); Art Lompado, Polaris Sensor Technologies, Inc. (United States); Riccardo Consolo, Abhijit Bhattacharjee, The MathWorks, Inc. (United States); Jarrod P. Brown, Air Force Research Lab. (United States)
22 April 2024 • 11:20 AM - 11:40 AM EDT | National Harbor 5
In this study, we present a real-time vehicle detection program that combines the YOLO-X object detection algorithm with a multi-object Kalman filter tracker, specifically designed for analyzing 3-Dimensional (3-D) LiDAR data. Our approach involves capturing videos of 8 vehicles using an ASC 3-D Flash LiDAR camera, which provides intensity and range data sequences. These sequences are then converted into representative RGB images, used to train the YOLO-X object detector neural network. To further enhance the detection accuracy for obscured vehicles and minimize the missed detection rate, we integrate Kalman filter trackers into the detection algorithm. The resulting algorithm is lightweight and capable of producing highly accurate inference results in near real-time on a live stream of LiDAR data. To demonstrate the applicability of our approach on small, unmanned vehicles/drones, we deploy the application on NVIDIA's Jetson Orin Nano embedded processor for AI.
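To illustrate the tracking half of the pipeline, the sketch below implements a constant-velocity Kalman filter over detection centroids; the noise settings, time step, and missed-detection handling are simplified assumptions rather than the authors' configuration.

```python
# Minimal constant-velocity Kalman filter for smoothing detector centroids.
import numpy as np

class CentroidKalman:
    def __init__(self, xy, dt=0.1, q=1.0, r=4.0):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])      # state: [x, y, vx, vy]
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q
        self.R = np.eye(2) * r

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

track = CentroidKalman(xy=(50.0, 80.0))
for z in [(52, 81), None, (56, 83)]:            # None = missed/occluded detection
    pred = track.predict()
    est = track.update(z) if z is not None else pred
    print(np.round(est, 1))
```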
13039-9
Author(s): Matthias Pijarowski, HENSOLDT Optronics GmbH (Germany), Hochschule Aalen - Technik und Wirtschaft (Germany); Alexander Wolpert, HENSOLDT Optronics GmbH (Germany); Martin Heckmann, Hochschule Aalen - Technik und Wirtschaft (Germany); Michael Teutsch, HENSOLDT Optronics GmbH (Germany)
22 April 2024 • 11:40 AM - 12:00 PM EDT | National Harbor 5
Visually detecting camouflaged objects is a hard problem for humans as well as computer vision algorithms. Strong similarities between object and background appearance make the task significantly more challenging than traditional object detection or segmentation tasks. Current state-of-the-art models use either convolutional neural networks or vision transformers as feature extractors. They are trained in a fully supervised manner and thus need a large amount of labeled training data. In this paper, self-supervised frugal learning methods are introduced to camouflaged object detection. The overall goal is to fine-tune two methods, namely SINet-V2 and HitNet, pre-trained for camouflaged animal detection to the task of camouflaged human detection. Therefore, we use the public dataset CPD1K that contains camouflaged humans in a forest environment. We create a strong baseline using supervised frugal transfer learning for the fine-tuning task. Then, we analyze three pseudo-labeling approaches to perform the fine-tuning task in a self-supervised manner. Our experiments show that we achieve similar performance by pure self-supervision compared to supervised frugal learning.
Break
Lunch Break 12:00 PM - 1:30 PM
Session 3: Radar Frequency and Synthetic Aperture Radar Automatic Target Recognition I
22 April 2024 • 1:30 PM - 2:30 PM EDT | National Harbor 5
Session Chair: Kristen Jaskie, Prime Solutions Group, Inc. (United States)
13039-11
Author(s): Ismail I. Jouny, Lafayette College (United States)
22 April 2024 • 1:30 PM - 1:50 PM EDT | National Harbor 5
Radar target recognition with Random Forests (RF) using stepped-frequency radar features is the focus of this paper. Recent comparative studies between RF and convolutional neural networks (CNN) showed that RF yields reliable, robust target recognition results with relatively fast training and testing time. The appeal of RF is that they can be implemented in parallel and have far fewer tunable parameters than CNN. In addition to providing measures of variable significance and permitting differential class weighting, RF can help with imputation of missing data [1]. These RF properties make them a good alternative target recognition tool, especially in scenarios where the data is occluded or corrupted with extraneous scatterers, or when the target signature at certain azimuth positions (or aspect angles) changes drastically compared to other likely positions. This paper uses real radar data of commercial aircraft models recorded in a compact range. The results show that RF offers a fast and reliable alternative for target recognition systems, especially under realistic radar operating conditions.
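A minimal scikit-learn sketch of the Random Forest setup described above, using synthetic stepped-frequency magnitude features in place of the compact-range measurements; the class balance, tree count, and feature construction are assumptions.

```python
# Toy Random Forest pipeline for stepped-frequency radar features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_freq_steps = 300, 64
X, y = [], []
for cls in range(4):                                   # four aircraft classes
    base = np.abs(np.sin(np.linspace(0, (cls + 1) * np.pi, n_freq_steps)))
    X.append(base + 0.3 * rng.standard_normal((n_per_class, n_freq_steps)))
    y.append(np.full(n_per_class, cls))
X, y = np.vstack(X), np.concatenate(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                            n_jobs=-1, random_state=0).fit(X_tr, y_tr)
print("accuracy:", rf.score(X_te, y_te))
print("most informative frequency bins:", np.argsort(rf.feature_importances_)[-5:])
```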
13039-12
Author(s): John G. Warner, U.S. Naval Research Lab. (United States); Vishal Patel, Johns Hopkins University (United States)
22 April 2024 • 1:50 PM - 2:10 PM EDT | National Harbor 5
Reliable computer vision object classification in imagery is important in security applications where high-stakes decisions may be made from automated algorithms. In real-world scenarios, it is often impractical to meet the implicit assumption that all relevant, labelled data can be obtained prior to training. To avoid performance degradation, the recently developed Open-set Automatic Target Recognition (ATR) framework is applied to the classification of ships from clutter in satellite Electro-Optical (EO) imagery and is shown to reliably identify data that is out of distribution from the training data. This enables an operator to know whether to believe classification results from the deep-learning-based algorithm.
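The sketch below illustrates the simplest form of open-set gating, rejecting inputs whose maximum softmax confidence is low; the threshold and scoring rule are assumptions, and the open-set ATR framework in the paper is more sophisticated.

```python
# Simplified open-set gate based on maximum softmax probability.
import numpy as np

def openset_decision(logits, threshold=0.85):
    """logits: (n_classes,) raw classifier outputs for one ship chip."""
    z = logits - logits.max()
    probs = np.exp(z) / np.exp(z).sum()
    conf = probs.max()
    if conf < threshold:
        return "unknown / out-of-distribution", conf
    return int(probs.argmax()), conf

print(openset_decision(np.array([8.1, 0.2, -1.3])))    # confident -> class 0
print(openset_decision(np.array([1.1, 0.9, 1.0])))     # ambiguous -> rejected
```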
13039-31
Author(s): Todd W. Du Bosq, U.S. Army CCDC C5ISR Ctr. Night Vision & Electronic Sensors Directorate (United States); Olivia Pavlic, Timothy Lang, Naval Surface Warfare Ctr. Crane Div. (United States); Kevin Nielson, Terence Haran, Georgia Tech Research Institute (United States); Pieter Piscaer, Nicolas Boehrer, Judith Dijk, TNO (Netherlands); Martin Laurenzis, Institut Franco-Allemand de Recherches de Saint-Louis (France); Jürgen Limbach, Peter Lutzmann, Gabriela Paunescu, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB (Germany); Jonathan Piper, O. Ben Elphick, Jacob Franks, Defence Science and Technology Lab. (United Kingdom); Sebastian Strecker, Wehrtechnische Dienststelle für Waffen und Munition (Germany)
22 April 2024 • 2:10 PM - 2:30 PM EDT | National Harbor 5
This paper describes a comprehensive computational imaging field trial conducted in Meppen, Germany, aimed at assessing the performance of cutting-edge computational imaging systems (compressive hyperspectral, visible/shortwave infrared single-pixel, wide-area infrared, neuromorphic, high-speed, photon counting cameras, and many more) by the members of NATO SET-RTG-310. The trial encompassed a diverse set of targets, including dismounts equipped with various two-handheld objects and adorned with a range of camouflage patterns, as well as fixed and rotary-wing Unmanned Aerial System (UAS) targets. These targets covered the entire spectrum of spatial, temporal, and spectral signatures, forming a comprehensive trade space for performance evaluation of each system. The trial, which serves as the foundation for subsequent data analysis, encompassed a multitude of scenarios designed to challenge the limits of computational imaging technologies. The diverse set of targets, each with its unique set of challenges, allows for the examination of system performance across various environmental and operational conditions.
Break
Coffee Break 2:30 PM - 3:05 PM
Session 4: Electro-Optical and Infrared Detection and Tracking II
22 April 2024 • 3:05 PM - 3:45 PM EDT | National Harbor 5
Session Chair: Timothy L. Overman, Prime Solutions Group, Inc. (United States)
13039-17
Author(s): Patrick V. Haggerty, Sydney E. Matthys, Benjamin A. Strasser, General Dynamics Mission Systems (United States)
22 April 2024 • 3:05 PM - 3:25 PM EDT | National Harbor 5
Hybrid dynamical systems are a natural model for missions in which several behaviors are required to achieve the goal of the mission. Missions are tasks featuring interacting subtasks, such as the decision of where and how to search and when to transition between behaviors. While the discrete nature of mission actions (which subtask to accomplish) and the continuous nature of real-world physical state spaces make hybrid systems a good model, control in such systems is poorly understood. Despite this, we find the formalism to have significant value and develop hierarchical state estimation tools to control agents in a hybrid framework and execute missions. In past work, we developed hierarchical dynamic target modeling to estimate the progress of search and track scenarios. In this work, we consider the related problem of searching for stationary targets that appear in formation. Executing such a search efficiently and gaining situational awareness presents unique challenges. We develop a generative hierarchical model for target locations that relies on stochastic clustering techniques and ideas from object Simultaneous Localization and Mapping (SLAM) to address these challenges.
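As one hedged illustration of reasoning about targets in formation, the sketch below clusters confirmed detections with DBSCAN, estimates each cluster's principal axis, and extrapolates a next search point along it; the clustering parameters and proposal rule are assumptions, not the authors' generative hierarchical model.

```python
# Illustrative formation reasoning: cluster detections, fit a line, extrapolate.
import numpy as np
from sklearn.cluster import DBSCAN

def propose_next_search(detections, eps=15.0, min_samples=2):
    detections = np.asarray(detections, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(detections)
    proposals = []
    for lbl in set(labels) - {-1}:                     # skip noise points
        pts = detections[labels == lbl]
        centroid = pts.mean(axis=0)
        # Principal axis of the cluster approximates the formation direction
        # (sign of the axis is arbitrary in this simple sketch).
        _, _, vT = np.linalg.svd(pts - centroid)
        axis = vT[0]
        span = np.ptp(pts @ axis)
        spacing = span / max(len(pts) - 1, 1)
        proposals.append(centroid + axis * (span / 2 + spacing))
    return proposals

dets = [(10, 10), (22, 12), (34, 14), (200, 50)]       # three in line + one stray
print(propose_next_search(dets))
```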
13039-18
Author(s): Minas Benyamin, Geoffrey H. Goldman, DEVCOM Army Research Lab. (United States)
22 April 2024 • 3:25 PM - 3:45 PM EDT | National Harbor 5
Low-resolution image object recognition and tracking is often required for battlefield reconnaissance. We propose a fast detector-agnostic multi-hypothesis tracker for improving situational awareness using electro-optical video data. Our approach uses standard techniques such as YOLO, match filters, or shape transforms to segment objects of interest in an image. From two or more successive detections, we initialize a linear quadratic estimator that extrapolates probable trajectories of the target of interest in the image space.
Symposium Plenary
22 April 2024 • 5:00 PM - 6:30 PM EDT | Potomac A
Session Chairs: Tien Pham, The MITRE Corp. (United States), Douglas R. Droege, L3Harris Technologies, Inc. (United States)

View Full Details: spie.org/dcs/symposium-plenary

Chair welcome and introduction
22 April 2024 • 5:00 PM - 5:05 PM EDT

DoD's microelectronics for the defense and commercial sensing ecosystem (Plenary Presentation)
Presenter(s): Dev Shenoy, Principal Director for Microelectronics, Office of the Under Secretary of Defense for Research and Engineering (United States)
22 April 2024 • 5:05 PM - 5:45 PM EDT

NATO DIANA: a case study for reimagining defence innovation (Plenary Presentation)
Presenter(s): Deeph Chana, Managing Director, NATO Defence Innovation Accelerator for the North Atlantic (DIANA) (United Kingdom)
22 April 2024 • 5:50 PM - 6:30 PM EDT

Symposium Panel on Microelectronics Commercial Crossover
23 April 2024 • 8:30 AM - 10:00 AM EDT | Potomac A

View Full Details: spie.org/dcs/symposium-panel

The CHIPS Act Microelectronics Commons network is accelerating the pace of microelectronics technology development in the U.S. This panel discussion will explore opportunities for crossover from commercial technology into DoD systems and applications, discussing what emerging commercial microelectronics technologies could be most impactful on photonics and sensors and how the DoD might best leverage commercial innovations in microelectronics.

Moderator:
John Pellegrino, Electro-Optical Systems Lab., Georgia Tech Research Institute (retired) (United States)

Panelists:
Shamik Das, The MITRE Corporation (United States)
Erin Gawron-Hyla, OUSD (R&E) (United States)
Carl McCants, Defense Advanced Research Projects Agency (United States)
Kyle Squires, Ira A. Fulton Schools of Engineering, Arizona State Univ. (United States)
Anil Rao, Intel Corporation (United States)

Break
Coffee Break 10:00 AM - 10:20 AM
Opening Remarks
23 April 2024 • 10:20 AM - 10:30 AM EDT | National Harbor 5
Session Chair: Riad I. Hammoud, PlusAI, Inc. (United States)
Opening remarks for Automatic Target Recognition XXXIV.
Session 5: Machine Learning for Automatic Target Recognition II
23 April 2024 • 10:30 AM - 12:10 PM EDT | National Harbor 5
Session Chair: Kristen Jaskie, Prime Solutions Group, Inc. (United States)
13039-19
Author(s): Scott McCloskey, Kitware, Inc. (United States)
23 April 2024 • 10:30 AM - 11:10 AM EDT | National Harbor 5
Many sensors produce data that rarely, if ever, is viewed by a human, and yet sensors are often designed to maximize subjective image quality. For sensors whose data is intended for embedded exploitation, maximizing the subjective image quality to a human will generally decrease the performance of downstream exploitation. In recent years, computational imaging researchers have developed end-to-end learning methods that co-optimize the sensing hardware with downstream exploitation via end-to-end machine learning. This talk will describe two such approaches at Kitware. In the first, we use an end-to-end ML approach to design a multispectral sensor that’s optimized for scene segmentation and, in the second, we optimize post-capture super-resolution in order to improve the performance of airplane detection in overhead imagery.
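A conceptual sketch of end-to-end co-design: a differentiable band-weighting layer stands in for the sensor model and is trained jointly with a small segmentation head; the band counts, toy task, and data are assumptions and do not represent Kitware's actual designs.

```python
# Toy end-to-end sensor/exploitation co-optimization in PyTorch.
import torch
import torch.nn as nn

class LearnableBandSelector(nn.Module):
    """Differentiable stand-in for multispectral filter design: a softmax-
    weighted mixture of N candidate bands down to K synthesized channels."""
    def __init__(self, n_bands=16, k_channels=4):
        super().__init__()
        self.mix = nn.Parameter(torch.randn(k_channels, n_bands))

    def forward(self, cube):                  # cube: (B, n_bands, H, W)
        w = torch.softmax(self.mix, dim=1)    # each output channel sums to 1
        return torch.einsum("kn,bnhw->bkhw", w, cube)

sensor = LearnableBandSelector()
head = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(16, 2, 1))    # 2-class segmentation logits
opt = torch.optim.Adam(list(sensor.parameters()) + list(head.parameters()), lr=1e-3)

for _ in range(5):                            # toy training loop
    cube = torch.randn(2, 16, 32, 32)         # placeholder hyperspectral patches
    labels = torch.randint(0, 2, (2, 32, 32))
    logits = head(sensor(cube))               # sensor and exploitation trained jointly
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
```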
13039-32
Author(s): Zhu Li, Univ. of Missouri-Kansas City (United States)
23 April 2024 • 11:10 AM - 11:50 AM EDT | National Harbor 5
Remote sensing and vision problems like object detection and recognition from various active and passive sensors are of great value to many DoD use cases. Usually, due to sensor or communication-link limitations, the received images are of low resolution and quality and have compression artifacts. To combat this, we developed a new direct vision-task feature pyramid recovery method with a joint frequency- and pixel-domain neural learning approach. It has had many successes in problems such as ATR from low-resolution SAR and EO images, joint deblurring and target detection, and very low bit rate complex SAR image compression for phase recovery.
13039-21
Author(s): Salil Naik, Arizona State Univ. (United States); Nolan Vaughn, Prime Solutions Group, Inc. (United States); Glen Uehara, Andreas S. Spanias, Arizona State Univ. (United States); Kristen Jaskie, Prime Solutions Group, Inc. (United States), Arizona State Univ. (United States)
23 April 2024 • 11:50 AM - 12:10 PM EDT | National Harbor 5
The field of quantum computing, especially quantum machine learning (QML), has been the subject of much research in recent years. Leveraging the quantum properties of superposition and entanglement promises an exponential decrease in computation cost. With the promises of increased speed and accuracy in the quantum paradigm, many classical machine learning algorithms have been adapted to run on quantum computers, typically using a quantum-classical hybrid model. While some work has been done to compare classical and quantum classification algorithms in the Electro-Optical (EO) image domain, this paper will compare the performance of classical and quantum-hybrid classification algorithms in their applications on Synthetic Aperture Radar (SAR) data using the MSTAR dataset. We find that there is no significant difference in classification performance when training with quantum algorithms in ideal simulators as compared to their classical counterparts. However, the true performance benefits will become more apparent as the hardware matures.
Break
Lunch/Exhibition Break 12:10 PM - 1:40 PM
Session 6: Panel Discussion: Machine Learning for Automatic Target Recognition
23 April 2024 • 1:40 PM - 3:10 PM EDT | National Harbor 5
Session Chair: Peter A. Torrione, Covar, LLC (United States)
Automatic Target Recognition (ATR), traditionally rooted in predefined algorithms and rule-based systems, has long been a cornerstone in defense operations. In its conventional form, ATR relied on handcrafted features and rigid frameworks, meeting the challenges of its time. However, the landscape of modern defense scenarios demands a paradigm shift.

In response to evolving complexities, ATR is seamlessly transitioning into the realm of artificial intelligence (AI), embracing a future marked by innovation and adaptability. The traditional rule-based approaches are giving way to dynamic, data-driven methodologies empowered by AI. This shift is not merely a technological upgrade; it represents a strategic move to tackle the intricate challenges of contemporary defense.

The integration of Transformer-based architectures in ATR reflects a fundamental departure from the limitations of predefined algorithms. This evolution is particularly notable in tasks requiring nuanced understanding, such as anomaly detection and trajectory prediction. Moreover, explainable AI (XAI) techniques are becoming integral, ensuring accuracy, transparency, and user trust in ATR systems.

Looking forward, the trajectory of ATR is shaped by the promise of Generative Adversarial Networks (GANs) addressing data scarcity and Quantum Machine Learning optimizing high-dimensional analyses. Academic pursuits like federated learning and meta-learning provide the intellectual backbone for a future-ready ATR landscape.

This narrative unfolds at the intersection of tradition and innovation, where the definition of ATR expands beyond its conventional boundaries, ushering in an era where AI becomes the linchpin in meeting the challenges of modern defense.

Moderator:
Asif Mehmood, Chief Digital and Artificial Intelligence Office (United States)

Panelists:
Zhu Li, University of Missouri (United States)
Shuvra Bhattacharyya, University of Maryland (United States)
Edmund Zelnio, Air Force Research Lab. (United States)
Peter A. Torrione, Covar, LLC (United States)
Break
Coffee Break 3:10 PM - 3:40 PM
Session 7: Radar Frequency and Synthetic Aperture Radar Automatic Target Recognition II
23 April 2024 • 3:40 PM - 5:00 PM EDT | National Harbor 5
Session Chair: Kristen Jaskie, Prime Solutions Group, Inc. (United States)
13039-22
Author(s): Tian Ye, The Univ. of Southern California (United States); Rajgopal Kannan, DEVCOM Army Research Lab. (United States); Viktor Prasanna, Xu Wang, The Univ. of Southern California (United States); Carl Busart, DEVCOM Army Research Lab. (United States)
23 April 2024 • 3:40 PM - 4:00 PM EDT | National Harbor 5
This paper summarizes our work in alleviating the vulnerability of neural networks for Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) to adversarial perturbations. We propose an approach of robust SAR image classification that integrates Bayesian Neural Networks (BNNs) to harness epistemic uncertainty for distinguishing between clean and adversarially manipulated SAR images. Additionally, we introduce a visual explanation method that employs a probabilistic variant of Guided Backpropagation (GBP) specifically adapted for BNNs. This method generates saliency maps highlighting critical pixels, thereby aiding human decision-makers in identifying adversarial scatterers within SAR imagery. Our experiments demonstrate the effectiveness of our approach in maintaining high True Positive Rates (TPR) while limiting False Positive Rates (FPR), and in accurately identifying adversarial scatterers, showcasing our method's potential to enhance the reliability and interpretability of SAR ATR systems in the face of adversarial threats.
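As a lightweight illustration of using epistemic uncertainty to flag suspicious inputs, the sketch below uses Monte Carlo dropout as a stand-in for the Bayesian neural network posterior described in the paper; the model, entropy threshold, and data are assumptions.

```python
# Flagging possibly-adversarial SAR chips via predictive-entropy thresholding.
import torch
import torch.nn as nn

class SmallSARNet(nn.Module):
    def __init__(self, n_classes=10, p_drop=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(p_drop),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, n_classes))

    def forward(self, x):
        return self.net(x)

def predictive_entropy(model, x, n_samples=20):
    """Average softmax over stochastic forward passes, then take entropy."""
    model.train()                              # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1)
                             for _ in range(n_samples)]).mean(dim=0)
    return -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)

model = SmallSARNet()
chips = torch.randn(4, 1, 64, 64)              # placeholder SAR chips
H = predictive_entropy(model, chips)
flagged = H > 1.5                              # high entropy -> possibly manipulated
print(H, flagged)
```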
13039-23
Author(s): Nolan Vaughn, Bo Sullivan, Kristen Jaskie, Prime Solutions Group, Inc. (United States)
23 April 2024 • 4:00 PM - 4:20 PM EDT | National Harbor 5
We compare the effectiveness of using a trained-from-scratch, unsupervised deep generative Variational Autoencoder (VAE) model as a solution to generic representation learning problems for Synthetic Aperture Radar (SAR) data against the more common approach of using an Electro-Optical (EO) transfer learning method. We find that a simple, unsupervised VAE training framework outperforms an EO transfer learning model at classification.
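A minimal VAE sketch of the kind of unsupervised representation learner compared in the paper; the architecture, chip size, and training loop are illustrative choices only.

```python
# Minimal VAE for unsupervised SAR representation learning (toy sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SarVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 64 * 64))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        recon = self.dec(z).view_as(x)
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar, beta=1.0):
    rec = F.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld

model = SarVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(3):                                  # toy training loop
    chips = torch.rand(16, 1, 64, 64)               # placeholder SAR chips
    recon, mu, logvar = model(chips)
    loss = vae_loss(recon, chips, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()
# After training, the encoder mean serves as the learned representation for a
# lightweight classifier, in place of EO transfer-learning features.
```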
13039-24
Author(s): Johannes Bauer, Efrain Gonzalez, William M. Severa, Craig M. Vineyard, Sandia National Labs. (United States)
23 April 2024 • 4:20 PM - 4:40 PM EDT | National Harbor 5
The lack of large, relevant labeled datasets for SAR ATR poses a challenge for deep neural network approaches. Transfer learning offers promise to train on a data-rich source domain and fine-tune a model on a data-poor target domain. Here, we apply a set of model and dataset transferability analysis techniques to investigate the efficacy of transfer learning for SAR ATR. We use multiple neural network models trained on different source SAR datasets to test the insights of these transferability analysis techniques.
13039-25
Author(s): Johannes Bauer, Efrain Gonzalez, William M. Severa, Craig M. Vineyard, Sandia National Labs. (United States)
23 April 2024 • 4:40 PM - 5:00 PM EDT | National Harbor 5
In this paper, we first provide an overview of explainability and interpretability techniques introducing their concepts and the insights they produce. Next we summarize several methods for computing specific approaches to explainability and interpretability as well as analyzing their outputs. Finally, we demonstrate the application of several attribution map methods and apply both attribution analysis metrics as well as localization interpretability analysis to six neural network models trained on the SAMPLE dataset to illustrate the insights these methods offer for analyzing SAR ATR performance.
Poster Session
23 April 2024 • 6:00 PM - 7:30 PM EDT | Potomac C
Conference attendees are invited to attend the symposium-wide poster session on Tuesday evening. Come view the SPIE DCS posters, enjoy light refreshments, ask questions, and network with colleagues in your field. Authors of poster papers will be present to answer questions concerning their papers. Attendees are required to wear their conference registration badges to the poster session.

Poster Setup: Tuesday 12:00 PM - 5:30 PM
Poster authors, view poster presentation guidelines and set-up instructions at http://spie.org/DCSPosterGuidelines.
13039-26
Author(s): Quinton Davidson, U.S. Naval Research Lab. (United States)
On demand | Presented live 23 April 2024
Common automatic target recognition (ATR) algorithms based on convolutional neural network (CNN) and detection transformer (DETR) architectures have shown diminished performance when inferencing on data containing anomalies outside of the original training dataset. This paper seeks to characterize the comparative performance impact of common optical sensor calibration artifacts as they affect industry-standard CNN and DETR-based object detectors identifying targets in satellite imagery.
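To make the calibration-artifact idea concrete, the sketch below injects dead pixels, column gain errors, and fixed-pattern noise into clean imagery before it is fed to a detector; the artifact models and magnitudes are simplified assumptions.

```python
# Sketch of injecting common calibration artifacts for detector stress testing.
import numpy as np

def add_calibration_artifacts(img, dead_px_frac=0.001, column_gain_sigma=0.03,
                              fpn_sigma=0.01, seed=0):
    """img: (H, W) float image in [0, 1]. Returns a degraded copy."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = out.shape
    # Dead / stuck pixels.
    n_dead = int(dead_px_frac * h * w)
    ys, xs = rng.integers(0, h, n_dead), rng.integers(0, w, n_dead)
    out[ys, xs] = rng.choice([0.0, 1.0], size=n_dead)
    # Per-column gain error (streaking from imperfect non-uniformity correction).
    out *= 1.0 + rng.normal(0.0, column_gain_sigma, size=(1, w))
    # Fixed-pattern noise offset.
    out += rng.normal(0.0, fpn_sigma, size=(h, w))
    return np.clip(out, 0.0, 1.0)

clean = np.random.rand(256, 256)
degraded = add_calibration_artifacts(clean)
```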
Symposium Plenary on AI/ML + Sustainability
24 April 2024 • 8:30 AM - 10:00 AM EDT | Potomac A
Session Chairs: Latasha Solomon, DEVCOM Army Research Lab. (United States), Ann Marie Raynal, Sandia National Labs. (United States)

Welcome and opening remarks
24 April 2024 • 8:30 AM - 8:40 AM EDT

Army intelligence data and AI in modern warfare (Plenary Presentation)
Presenter(s): David Pierce, U.S. Army Intelligence (United States)
24 April 2024 • 8:40 AM - 9:20 AM EDT

FUTUR-IC: A three-dimensional optimization path towards building a sustainable microchip industry (Plenary Presentation)
Presenter(s): Anu Agarwal, Massachusetts Institute of Technology, Microphotonics Ctr. and Materials Research Lab. (United States)
24 April 2024 • 9:20 AM - 10:00 AM EDT

Artificial Intelligence and Deep Learning: Joint Session with Conferences 13039 and 13046
24 April 2024 • 1:20 PM - 3:20 PM EDT | National Harbor 2
Session Chairs: Michael T. Eismann, Air Force Research Lab. (United States), Kenny Chen, Lockheed Martin Missiles and Fire Control (United States)
13039-27
Author(s): Abhijit Bhattacharjee, Birju Patel, Alexander Taylor, The MathWorks, Inc. (United States); Joseph A. Rivera, Lockheed Martin Corp. (United States)
24 April 2024 • 1:20 PM - 1:40 PM EDT | National Harbor 2
We present a streamlined pipeline that generates a YOLO object detection application using MATLAB and NVIDIA hardware. The application utilizes MATLAB’s GPU Coder toolbox and NVIDIA TensorRT to accelerate inferencing on NVIDIA processors, specifically the latest Jetson Orin embedded processor. We evaluated the object detector on the open U.S. Army Automated Target Recognition (ATR) Development Image Dataset (ADID) for multi-class vehicle detection and classification. Overall, this workflow decreases development time over traditional approaches and provides a quick route to low-code deployment on the latest NVIDIA Jetson Orin. This work offers value to researchers and practitioners aiming to harness the power of NVIDIA processors for rapid, efficient object detection solutions.
13046-42
Author(s): Shotaro Miwa, Shun Otsubo, Jia Qu, Yasuaki Susumu, Mitsubishi Electric Corp. (Japan)
24 April 2024 • 1:40 PM - 2:00 PM EDT | National Harbor 2
Traditionally, computer vision systems, like object detection, primarily relied on supervised learning and predetermined object categories. However, this approach's limitations in terms of generality and the need for additional labeled data are more pronounced for infrared images due to the difficulty of obtaining training datasets. In contrast, the rise of contrastive vision-language models, such as CLIP, has transformed the field. These models, pre-trained on vast image-text pairs, offer more versatile visual representations aligned with rich language semantics. CLIP's feature transferability has become a foundation for various visible image tasks. This paper introduces zero-shot object detection for infrared images using pre-trained vision and language models, extending CLIP's benefits to this domain. Experimental results show the promise of this approach, and the paper initiates a preliminary exploration of domain shift issues between infrared and visible images.
13046-43
Author(s): Art Stout, Kedar Madineni, Teledyne FLIR LLC (United States)
24 April 2024 • 2:00 PM - 2:20 PM EDT | National Harbor 2
The availability of high-power mobile processors designed for embedded products, featuring multiple compute cores including CPUs, GPUs, DSPs, and ISPs, enables system developers to integrate software capabilities never before possible on embedded processors. Mobile processors from Qualcomm and NVIDIA now feature 50 TOPS of compute power while operating on as little as 5 watts. Teledyne FLIR will describe the creation of libraries compiled to run on Qualcomm OpenCL- and NVIDIA CUDA-based hardware.
13039-28
Author(s): Sophia P. Bragdon, Vuong H. Truong, Andrew C. Trautz, Matthew D. Bray, Jay L. Clausen, U.S. Army Engineer Research and Development Ctr. (United States)
24 April 2024 • 2:20 PM - 2:40 PM EDT | National Harbor 2
Automatic target recognition (ATR) algorithms that rely on machine learning approaches are limited by the quality of the training dataset and their out-of-domain performance. The performance of a two-step ATR algorithm that relies on fusing thermal imagery with environmental data is investigated using thermal imagery containing buried and surface objects collected in New Hampshire, Mississippi, Arizona, and Panama. An autoencoder neural network is used to encode the salient environmental conditions for a given climatic condition into an environmental feature vector. The environmental feature vector allows for the inclusion of environmental data with varying dimensions and robustly treats missing data. Using this architecture, we evaluate the performance of the two-step ATR on a test dataset collected in an unseen climatic condition (e.g., a tropical wet climate) when the training dataset contains imagery collected in a similar condition (e.g., subtropical) and in dissimilar climates. Lastly, it is shown that performance for out-of-domain climates can be further improved by incorporating physics-based synthetic data into the training dataset.
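A hedged sketch of the environmental-encoding step: an autoencoder over weather/soil variables that carries a missingness mask so incomplete records still yield a fixed-length environmental feature vector; the variable names, sizes, and training loop are illustrative, not the paper's configuration.

```python
# Masked autoencoder that produces an environmental feature vector.
import torch
import torch.nn as nn

class EnvEncoder(nn.Module):
    def __init__(self, n_vars=12, feat_dim=8):
        super().__init__()
        # Input is [values_with_zeros_for_missing, mask], hence 2 * n_vars.
        self.enc = nn.Sequential(nn.Linear(2 * n_vars, 32), nn.ReLU(),
                                 nn.Linear(32, feat_dim))
        self.dec = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(),
                                 nn.Linear(32, n_vars))

    def forward(self, values, mask):
        x = torch.cat([values * mask, mask], dim=-1)
        feat = self.enc(x)
        return feat, self.dec(feat)

model = EnvEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                                        # toy training loop
    vals = torch.randn(32, 12)                            # temperature, humidity, ...
    mask = (torch.rand(32, 12) > 0.2).float()             # 1 = observed, 0 = missing
    feat, recon = model(vals, mask)
    loss = ((recon - vals) ** 2 * mask).sum() / mask.sum()  # loss on observed vars only
    opt.zero_grad(); loss.backward(); opt.step()
# `feat` plays the role of the environmental feature vector fused with thermal
# imagery in the second stage of the two-step ATR described above.
```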
13039-29
Author(s): Jacob Ross, Rajith Weerasinghe, Justin Lastrapes, Ryan J. Shaver, Etegent Technologies, Ltd. (United States); Paul Sotirelis, Air Force Research Lab. (United States)
24 April 2024 • 2:40 PM - 3:00 PM EDT | National Harbor 2
Traditional data collections of high-priority targets require immense planning and resources. When novel operating conditions (OCs) or imaging parameters need to be explored, synthetic simulations are typically leveraged. While synthetic data can be used to assess automatic target recognition (ATR) algorithms, some simulation environments may inaccurately represent sensor phenomenology. To alleviate this issue, a scale-model approach is utilized to provide accurate data in a laboratory setting. This work demonstrates the effectiveness of a resource-cognizant approach for collecting IR imagery suitable for assessing ATR algorithms. A target of interest is 3D printed at 1/60th scale with a commercial printer and readily available materials. The printed models are imaged with a commercially available IR camera in a simple laboratory setup. The collected imagery is used to test ATR algorithms trained on a standard IR ATR dataset, the publicly available ARL Comanche FLIR dataset. The performance of the selected ATR algorithms when given samples of scale-model data is compared to the performance of the same algorithms when using the provided measured data.
13046-45
Author(s): Jeremy W. Mares, Mark Martino, Alex R. Irwin, Christopher K. Renshaw, CREOL, The College of Optics and Photonics, Univ. of Central Florida (United States)
24 April 2024 • 3:00 PM - 3:20 PM EDT | National Harbor 2
The ability to accurately ascertain an observer’s position directly from imaged scenery is an important technological capacity, especially in light of the susceptibility of global positioning system (GPS) signals to interference. Horizon matching is a technique that shows promise as a method of self-localization in regions with sufficiently rich topography. By utilizing preexisting digital elevation data, simulated imagery of horizon contours and other landscape features is first generated. Real imagery is then processed to extract these features, and they are compared and analytically fitted to simulation in order to back-calculate the imager position. Here, we demonstrate this functionality with a vehicle-integrated, gimbal mounted multi-band imaging platform that facilitates platform geopositioning with visible, NIR, SWIR, MWIR and LWIR imagers. We compare the benefits and limitations of these bands, evaluating their localization accuracies given competing range performance, image resolution, and target-to-background contrast.
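A simplified illustration of horizon matching: extract a per-column horizon profile from an image and score it against profiles that would be rendered from a digital elevation model at candidate positions; the synthetic image, candidate set, and RMS scoring below are assumptions, not the authors' pipeline.

```python
# Toy horizon-profile matching for image-based self-localization.
import numpy as np

def extract_horizon(image, sky_threshold=0.6):
    """image: (H, W), brighter = sky. Returns the horizon row for each column."""
    sky = image > sky_threshold
    return np.argmax(~sky, axis=0).astype(float)   # first non-sky row per column

def match_position(observed_profile, candidate_profiles):
    """candidate_profiles: {position: profile rendered from elevation data}."""
    scores = {pos: np.sqrt(np.mean((observed_profile - prof) ** 2))
              for pos, prof in candidate_profiles.items()}
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(0)
H, W = 120, 200
true_profile = 40 + 10 * np.sin(np.linspace(0, 3 * np.pi, W))

# Synthetic "measured" image whose sky/ground boundary follows true_profile.
rows = np.arange(H)[:, None]
image = (rows < true_profile[None, :]).astype(float)
observed = extract_horizon(image)

# Profiles that would be rendered from a DEM at candidate positions (toy values).
candidates = {
    (38.90, -77.02): true_profile + rng.normal(0, 0.5, W),
    (38.95, -77.10): true_profile[::-1],
    (39.00, -76.90): np.full(W, 45.0),
}
best, scores = match_position(observed, candidates)
print("best candidate position:", best)
```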
Conference Chair
Lockheed Martin Missiles and Fire Control (United States)
Conference Chair
PlusAI, Inc. (United States)
Conference Chair
Prime Solutions Group, Inc. (United States)
Program Committee
Hunter College (United States)
Program Committee
Wright State Univ. (United States)
Program Committee
The City College of New York (United States)
Program Committee
Megan King
U.S. Army Combat Capabilities Development Command (United States)
Program Committee
Lockheed Martin Corp. (United States)
Program Committee
Jason P. Luck
Lockheed Martin Missiles and Fire Control (United States)
Program Committee
The Univ. of Arizona (United States)
Program Committee
Joint Artificial Intelligence Ctr. (United States)
Program Committee
Air Force Research Lab. (United States)
Program Committee
Univ. of Central Florida (United States)
Program Committee
West Virginia Univ. (United States)
Program Committee
Mayachitra, Inc. (United States)
Program Committee
Anurag Paul
PlusAI, Inc. (United States)
Program Committee
Univ. of Houston (United States)
Program Committee
California State Univ., Northridge (United States)
Program Committee
Emerging Concepts Laboratory LLC (United States)
Program Committee
Inderjot Singh Saggu
PlusAI, Inc. (United States)
Program Committee
Systems & Technology Research (United States)
Program Committee
ESPOL Polytechnic Univ. (Ecuador), Vintra Inc. (United States), Univ. Autònoma de Barcelona (Spain)
Program Committee
Office of Naval Research (United States)
Program Committee
HENSOLDT Optronics GmbH (Germany)
Program Committee
Naval Postgraduate School (United States)
Program Committee
Air Force Research Lab. (United States)
Program Committee
Air Force Research Lab. (United States)
Program Committee
Air Force Research Lab. (United States)
Additional Information

View call for papers

 

What you will need to submit:

  • Presentation title
  • Author(s) information
  • Speaker biography (1000-character max including spaces)
  • Abstract for technical review (200-300 words; text only)
  • Summary of abstract for display in the program (50-150 words; text only)
  • Keywords used in search for your paper (optional)
  • Check the individual conference call for papers for additional requirements (i.e. extended abstract PDF upload for review or instructions for award competitions)
Note: Only original material should be submitted. Commercial papers, papers with no new research/development content, and papers with proprietary restrictions will not be accepted for presentation.