13 - 17 April 2025
Orlando, Florida, US

Post-deadline submissions will be considered for poster presentation, or for oral presentation if space is available.


This conference on Automatic Target Recognition (ATR) covers all aspects of modern automatic, aided, and machine-assisted target recognition technologies. Novel methods in these key areas are of particular interest: deep-learning and model-based object/target recognition, adaptive and machine learning approaches, and advanced signal and image processing concepts for detection, recognition, and identification. ATR solutions for sensors such as sonar/acoustic, neuromorphic (event), electro-optical, infrared, radar, LiDAR, multispectral, and hyperspectral sensors will be considered. Papers addressing the entire spectrum of ATR algorithms, systems, and architectures will also be considered.

A special theme of the 2025 ATR conference is sustainability. Topics and submissions on sustainable ATR are encouraged but not required. Examples of sustainable ATR include energy-efficient algorithms, training strategies with a low carbon footprint, and resource-efficient deployment on processing hardware.


Papers are solicited in the following and related topics:

  • Machine learning for ATR
  • Geospatial remote sensing systems
  • IR-based systems
  • Hyperspectral-based systems
  • Radar/laser radar-based systems
  • New methodologies

Panel discussion on machine learning for automatic target recognition (ML4ATR)
Following the success of past ML4ATR sessions, we intend to organize another session in 2025. The Machine Learning for Automatic Target Recognition (ML4ATR) session at SPIE Defense + Security (ATR conference) highlights the accomplishments to date and the challenges ahead in designing and deploying deep learning and big data analytics algorithms, systems, and hardware for ATR. It provides a forum for researchers, practitioners, solution architects, and program managers across the widely varying disciplines of ATR to connect, engage, design solutions, set requirements, and test and evaluate systems that will shape the future of this field. ML4ATR topics of interest include training deep-learning-based ATR with limited measured/real data, multi-modal satellite/hyperspectral/sonar/FMV imagery analytics, graph-analytic multi-sensor fusion, change detection, pattern-of-life analysis, adversarial learning, trust, and ethics. We invite experts in the field to join this panel discussion in 2025. Each panelist will give a short keynote talk about their projects on machine learning for ATR.

Joint Session
A joint session on artificial intelligence/machine learning (AI/ML) is being planned with the Infrared Technology and Applications conference. We expect to cover AI/ML in the design of IR systems, subsystems, and components (both military and commercial), as well as IR-based detection, recognition, and identification systems.


Best Paper Award and Best Student Paper Award
To be eligible for these awards, you must submit a manuscript, be accepted for an oral presentation, and you or a co-author must present your paper on-site. All students are eligible if the abstract was accepted during the academic year the student graduated. Students are required to be enrolled in a university degree-granting program. Manuscripts will be judged on technical merit, presentation/speaking skills, and audience interaction. Winners will be announced after the meeting and will be included in the proceedings. All winners will receive an award certificate and recognition on SPIE.org.


Conference 13463

Automatic Target Recognition XXXV

14 - 16 April 2025 | Osceola 3, Ballroom Level
Sessions
  • Opening Remarks
  • 1: Automatic Target Recognition
  • 2: Panel Discussion: Machine Learning for Automatic Target Recognition
  • 3: Deep Learning and Performance I
  • 4: Image and Data Processing for Automatic Target Recognition I
  • Symposium Plenary
  • Symposium Panel on Space Sensing: Emerging Topics, Needs, and Crossover Technology
  • Opening Remarks
  • 5: Deep Learning and Performance II
  • 6: Image and Data Processing for Automatic Target Recognition II
  • 7: Infrared Synthetic Data for Automatic Target Recognition: Joint session with 13459 and 13463
  • 8: Machine Learning for Infrared Sensing: Joint Session with Conferences 13463 and 13469
Opening Remarks
14 April 2025 • 8:00 AM - 8:10 AM EDT | Osceola 3, Ballroom Level
Session Chair: Timothy L. Overman, Prime Solutions Group, Inc. (United States)
Opening remarks for Automatic Target Recognition XXXV.
Session 1: Automatic Target Recognition
14 April 2025 • 8:10 AM - 9:50 AM EDT | Osceola 3, Ballroom Level
Session Chair: Kristen Jaskie, Prime Solutions Group, Inc. (United States)
13463-1
Author(s): Edison Mucllari, Univ. of Kentucky (United States); Aswin N. Raghavan, Zachary A. Daniels, SRI International (United States)
14 April 2025 • 8:10 AM - 8:30 AM EDT | Osceola 3, Ballroom Level
Automatic target recognition (ATR) models generally consist of machine learning (ML) models trained on a collection of clean, well-labeled data samples with a fixed, known set of target classes. During deployment, it is assumed that these models operate over new data samples that align with the training data distribution. Practical systems require updating ATR models over time in response to changing sensor technology, sensor degradation, changing environmental conditions, and adversarial interference. Our work focuses on class-incremental synthetic aperture radar (SAR) ATR, where new classes are sequentially added to an existing ML model and the model has limited access to data samples from past targets during training. The model must balance the ability to adapt to unseen targets with the ability to maintain high recognition accuracy on past targets. Continual learning introduces additional avenues for corrupting models during data collection. We study class-incremental SAR ATR under three types of corruption that may arise during data collection and interfere with learning: noisy labels, perturbed data, and adversarial poisoning attacks.
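As a rough illustration of the class-incremental setting described in this abstract, the sketch below shows one generic exemplar-replay update step in PyTorch. The model, optimizer, data loader, and buffer size are placeholders, not the authors' implementation.

```python
# Minimal sketch of class-incremental training with a small exemplar buffer
# (illustrative only; not the authors' method).
import random
import torch
import torch.nn.functional as F

def incremental_step(model, optimizer, new_class_loader, exemplar_buffer, buffer_per_class=20):
    """Train on newly added target classes while replaying a few stored exemplars."""
    model.train()
    for images, labels in new_class_loader:
        if exemplar_buffer:
            # Mix in replayed exemplars from previously learned targets.
            sampled = random.sample(exemplar_buffer, k=min(len(exemplar_buffer), images.size(0)))
            old_images, old_labels = zip(*sampled)
            images = torch.cat([images, torch.stack(old_images)])
            labels = torch.cat([labels, torch.tensor(old_labels)])
        loss = F.cross_entropy(model(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Keep a handful of exemplars per new class for future replay.
    for images, labels in new_class_loader:
        for img, lbl in zip(images, labels):
            if sum(1 for _, l in exemplar_buffer if l == int(lbl)) < buffer_per_class:
                exemplar_buffer.append((img.detach(), int(lbl)))
```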
13463-7
Author(s): Steven Senczyszyn, Michigan Technological Univ. (United States); Ian Helman, Michigan Tech Research Institute (United States); Timothy C. Havens, Michigan Technological Univ. (United States); Adam J. Webb, Michigan Tech Research Institute (United States); Steven R. Price, U.S. Army Engineer Research and Development Ctr. (United States)
14 April 2025 • 8:30 AM - 8:50 AM EDT | Osceola 3, Ballroom Level
Automatic target recognition (ATR) on synthetic aperture radar (SAR) data can be a challenging task due to the limited availability of publicly available measured datasets. Prior work has focused on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset and the Synthetic and Measured Paired and Labeled Experiment (SAMPLE/SAMPLE+) datasets. The primary objective of this research is to expand on real-to-synthetic dataset generation by exploring ATR performance not only on measured data paired with well-matched synthetically generated data, but also on synthetic surrogate scattering objects designed to produce SAR images similar to the real and modeled objects. The detection performance of the ATR on these synthetic surrogate targets is evaluated as the sparsity of the surrogate target changes. Saliency is also evaluated to better explain the reasoning behind the classification decision.
13463-5
Author(s): Louis Y. Kim, Draper Lab. (United States); Michelle Karker, The Charles Stark Draper Lab., Inc. (United States); Victoria Valledor, Draper Lab. (United States); Seiyoung C. Lee, Karl F. Brzoska, The Charles Stark Draper Lab., Inc. (United States); Margaret Duff, The Charles Stark Draper Laboratory, Inc. (United States); Anthony Palladino, The Charles Stark Draper Lab., Inc. (United States)
14 April 2025 • 8:50 AM - 9:10 AM EDT | Osceola 3, Ballroom Level
Recent advances in open-vocabulary object detection models will enable ATR systems to be sustainable and repurposed by non-technical end-users for a variety of applications or missions. New, and potentially nuanced, classes can be defined with natural language text descriptions in the field, immediately before runtime, without needing to retrain the model. We present an approach for improving non-technical users’ natural language text descriptions of their desired targets of interest, using a combination of analysis techniques on the text embeddings, and proper combinations of embeddings for contrastive examples. We quantify the improvement that our feedback mechanism provides by demonstrating performance with multiple publicly-available open-vocabulary object detection models, on multiple open-vocabulary object detection benchmarks.
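As an illustration of the text-embedding analysis this abstract describes, the sketch below embeds a user-written target description and a few contrastive descriptions, then flags descriptions that sit too close to a confuser. Hugging Face CLIP stands in for an open-vocabulary detector's text encoder, and the margin-based feedback heuristic is an assumption, not the paper's method.

```python
# Sketch of text-embedding feedback for user-written target descriptions (illustrative only).
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(texts):
    inputs = processor(text=texts, return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

target = "a small camouflaged truck parked under trees"          # hypothetical description
contrastive = ["a civilian pickup truck on a highway", "a tank in open desert"]

t = embed([target])                     # (1, d)
c = embed(contrastive)                  # (n, d)
margin = 1.0 - (t @ c.T).max().item()   # distance to the closest confusing description
if margin < 0.1:                        # assumed threshold
    print("Description is too close to a contrastive class; add distinguishing attributes.")
```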
13463-11
Author(s): Claire Thorp, Air Force Research Lab. (United States); Sean Sisti, Casey Schwartz, Air Force Research Lab. (United States)
14 April 2025 • 9:10 AM - 9:30 AM EDT | Osceola 3, Ballroom Level
In this paper, the authors address the challenges of building accurate and robust models for defense applications, such as Automatic Target Recognition (ATR), that rely on limited labeled data. The authors propose a multi-task approach that uses Out-of-Distribution (OOD) scoring in an adversarial relationship with an image classifier to provide a set of potentially informative samples that drive an Active Learning (AL) loop. This approach aims to address the catastrophic failures and high confidence in incorrect predictions that can occur when models are trained on limited datasets. The authors' contributions include practical recommendations for enabling an active learning task using OOD samples, specifically in the context of ATR and overhead imagery. Approved for Public Release: AFRL-2024-5320; Distribution Unlimited.
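A minimal sketch of an OOD-driven active learning round, in the spirit of the loop described above: the most out-of-distribution unlabeled samples are sent for labeling and the classifier is retrained. The `classifier`, `ood_score`, and `oracle_label` callables and the acquisition rule are hypothetical stand-ins, not the authors' pipeline.

```python
# Illustrative active-learning selection driven by an OOD score (assumptions noted above).
import numpy as np

def active_learning_round(classifier, ood_score, unlabeled_x, labeled_x, labeled_y,
                          oracle_label, budget=32):
    """Send the most out-of-distribution unlabeled samples to the labeler, then retrain."""
    scores = np.asarray([ood_score(x) for x in unlabeled_x])
    picked = np.argsort(scores)[-budget:]             # highest OOD score = most informative
    new_x = [unlabeled_x[i] for i in picked]
    new_y = [oracle_label(x) for x in new_x]          # human (or simulator) provides labels
    labeled_x = list(labeled_x) + new_x
    labeled_y = list(labeled_y) + new_y
    classifier.fit(labeled_x, labeled_y)              # retrain on the enlarged labeled pool
    remaining = [x for i, x in enumerate(unlabeled_x) if i not in set(picked)]
    return classifier, labeled_x, labeled_y, remaining
```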
13463-23
Author(s): Kristen Jaskie, Timothy L. Overman, Marv Kleine, Prime Solutions Group, Inc. (United States)
14 April 2025 • 9:30 AM - 9:50 AM EDT | Osceola 3, Ballroom Level
We present an approach for automatic target recognition (ATR) using a neuromorphic camera and real-time object detection. To address the limited availability of neuromorphic datasets, we generated synthetic training data using electro-optical (EO) data and a neuromorphic camera digital twin. The synthetic data was used to train a YOLOv5 model, which was then tested on real neuromorphic data. Results demonstrate the effectiveness of this approach, showing promising performance on real neuromorphic inputs and providing a novel solution for advancing neuromorphic-based ATR systems.
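The abstract does not detail the neuromorphic camera digital twin, but a common first-order approximation generates event frames from consecutive EO frames by thresholding log-intensity changes; the sketch below uses assumed threshold values. Polarity maps produced this way can be rendered as images and used to train a detector such as YOLOv5 before evaluation on real event-camera data.

```python
# Simplified event-frame simulation from consecutive EO frames (a common approximation
# of an event-camera digital twin; thresholds below are assumptions, not the authors' values).
import numpy as np

def eo_to_event_frame(prev_frame, curr_frame, threshold=0.15, eps=1e-3):
    """Return a +1/-1/0 polarity map where log intensity changed beyond a threshold."""
    prev_log = np.log(prev_frame.astype(np.float32) + eps)
    curr_log = np.log(curr_frame.astype(np.float32) + eps)
    diff = curr_log - prev_log
    events = np.zeros_like(diff, dtype=np.int8)
    events[diff > threshold] = 1     # brightness increase -> positive event
    events[diff < -threshold] = -1   # brightness decrease -> negative event
    return events
```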
Break
Coffee Break 9:50 AM - 10:20 AM
Session 2: Panel Discussion: Machine Learning for Automatic Target Recognition
14 April 2025 • 10:20 AM - 12:20 PM EDT | Osceola 3, Ballroom Level

View Full Details: spie.org/dcs/ml-for-atr-panel

In response to evolving complexities, automatic target recognition (ATR) is seamlessly transitioning into the realm of artificial intelligence (AI), embracing a future marked by innovation and adaptability. The traditional rule-based approaches are giving way to dynamic, data-driven methodologies empowered by AI.

Break
Lunch Break 12:20 PM - 1:50 PM
Session 3: Deep Learning and Performance I
14 April 2025 • 1:50 PM - 3:10 PM EDT | Osceola 3, Ballroom Level
Session Chair: Matthew D. Reisman, Bedrock Research LLC (United States)
13463-2
Author(s): Donald Waagen, Air Force Research Lab. (United States); Don Hulsey, Dynetics, Inc. (United States); Katie Rainey, Erin Hausmann, Naval Information Warfare Ctr. Pacific (United States); David Gray, Air Force Research Lab. (United States)
14 April 2025 • 1:50 PM - 2:10 PM EDT | Osceola 3, Ballroom Level
Understanding the relationships between data points in the latent decision space derived by a deep learning system is critical to evaluating and interpreting the performance of the system on real-world data. Detecting "Out-of-Distribution" (OOD) data for deep learning systems continues to be an active research topic. We investigate nonparametric online and batch approaches for estimating distributional separation or "outlierness". Using open-source simulated and measured Synthetic Aperture Radar (SAR) datasets, we empirically demonstrate that the concepts of OOD and "Out-of-Task" are not synonymous.
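One common nonparametric choice for such an "outlierness" estimate is a k-nearest-neighbor distance in the model's latent space; the sketch below illustrates that choice. The paper's exact estimators are not specified here, so treat this as an assumption-laden illustration.

```python
# Sketch of a kNN-distance "outlierness" score in latent feature space (illustrative only).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_knn_scorer(train_features, k=10):
    """Fit on in-distribution training features (n_samples, feature_dim)."""
    return NearestNeighbors(n_neighbors=k).fit(train_features)

def outlierness(nn, test_features):
    """Mean distance to the k nearest in-distribution training features."""
    dists, _ = nn.kneighbors(test_features)
    return dists.mean(axis=1)   # larger = more out-of-distribution

# Usage: threshold test scores at, e.g., a high percentile of scores computed on
# held-out in-distribution data.
```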
13463-12
Author(s): Keefa Nelson, Lily Pederson, Randy Peirce, Air Force Research Lab. (United States)
14 April 2025 • 2:10 PM - 2:30 PM EDT | Osceola 3, Ballroom Level
Abstract pending public affairs (PA) approval.
13463-13
Author(s): Taeseung Lee, Changhan Park, Junyoung Ko, Moonsung Huh, Byoungjun Kim, Hanwha Systems Co., Ltd. (Korea, Republic of); Heewoo Lee, Byungtae Oh, Wookyung Lee, Korea Aerospace Univ. (Korea, Republic of)
14 April 2025 • 2:30 PM - 2:50 PM EDT | Osceola 3, Ballroom Level
Unlike images from electro-optical payloads, images acquired by satellites equipped with Synthetic Aperture Radar (SAR) are formed from electromagnetic backscatter and ground-surface characteristics. Because of this, SAR imaging can be used in a wide variety of environments and operates in all weather conditions. However, the ground surface changes over time, and scattering results differ even for images of the same area because of geometric distortion and complex scattering or radiation from structures. Similarity evaluation methods developed for conventional optical images are therefore difficult to apply, since the similarity between SAR images may appear low even when they were acquired at similar times. In this paper, we propose a new similarity evaluation method based on statistical quality measurement that reflects real SAR image characteristics, and we describe experimental results on real SAR images using fake SAR images generated through synthesis or data augmentation.
13463-26
Author(s): Adam Cuellar, Univ. of Central Florida (United States); Daniel Brignac, Abhijit Mahalanobis, The Univ. of Arizona (United States); Wasfy Mikhael, Univ. of Central Florida (United States)
14 April 2025 • 2:50 PM - 3:10 PM EDT | Osceola 3, Ballroom Level
Recognizing targets in infra-red images while rejecting unknown objects is crucial for security applications. This paper introduces a novel method, Simultaneous Classification of Objects and Unknown Rejection (SCOUR), to enhance existing classifiers without retraining them. SCOUR employs a secondary regression-based network that evaluates the primary classifier's decision to identify potentially unknown inputs. Utilizing a Bayesian framework, SCOUR combines the primary classifier's confidence with the secondary network's output to effectively separate unknown objects from known targets. Importantly, our method does not require any out-of-distribution samples for training the secondary network. Demonstrated on both CIFAR-10 and a medium-wave infra-red (MWIR) dataset, SCOUR outperforms state-of-the-art methods in rejecting unknown targets while maintaining accuracy on known classes.
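The precise SCOUR formulation is not reproduced here, but the sketch below shows one plausible reading of a Bayesian combination of a primary classifier's confidence with a secondary network's "known-ness" score; the likelihood construction and prior are assumptions.

```python
# Illustrative combination of a primary classifier's confidence with a secondary score
# (an assumption-laden sketch, not the SCOUR method itself).
import numpy as np

def known_posterior(primary_probs, secondary_known_score, prior_known=0.5):
    """Approximate P(known | x) from the primary max-softmax and a secondary score in [0, 1]."""
    p_primary = float(np.max(primary_probs))                 # primary classifier confidence
    lik_known = p_primary * secondary_known_score            # evidence for a known target
    lik_unknown = (1.0 - p_primary) * (1.0 - secondary_known_score)
    num = lik_known * prior_known
    den = num + lik_unknown * (1.0 - prior_known)
    return num / max(den, 1e-12)

probs = np.array([0.7, 0.2, 0.1])
print(known_posterior(probs, secondary_known_score=0.3))     # low posterior -> reject as unknown
```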
Break
Coffee Break 3:10 PM - 3:40 PM
Session 4: Image and Data Processing for Automatic Target Recognition I
14 April 2025 • 3:40 PM - 4:40 PM EDT | Osceola 3, Ballroom Level
Session Chair: Kristen Jaskie, Prime Solutions Group, Inc. (United States)
13463-8
Author(s): Phillip Lei, Derek T. Anderson, Brendan Alvey, Logan Brenningmeyer, Thomas Asmar, Univ. of Missouri (United States)
14 April 2025 • 3:40 PM - 4:00 PM EDT | Osceola 3, Ballroom Level
Multi-spectral imagery, such as data from the visual spectrum, has traditionally been the primary source for AI-based computer vision algorithms, with three-dimensional data also playing a significant role. However, in many scenarios, active sensors or stereoscopic setups are unavailable. While methods like structure from motion and multi-view stereo have emerged as alternatives, single image depth estimation (SIDE) presents a promising approach. In this work, we propose a method for fusing color information with 3D features derived from a SIDE network.
13463-10
Author(s): Ismail I. Jouny, Lafayette College (United States)
14 April 2025 • 4:00 PM - 4:20 PM EDT | Osceola 3, Ballroom Level
This paper develops a graph model for the impulse response (or high range resolution profile) of a radar target. The graph model is based on the number of scatterers, the distance between scatterers, and the sequence and degree of dispersion of each scatterer. This graph model is then fed into a graph neural network for target recognition. The paper examines the performance of this graph-based target recognition system using real commercial aircraft backscatter (as recorded in a compact range). Issues of azimuth ambiguity, extraneous scatterers, azimuth mismatch, missing features, and noise contamination are addressed in terms of their impact on target recognition performance.
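A minimal sketch of turning a high range resolution profile into a scatterer graph, in the spirit of the model described above: peaks become nodes and consecutive scatterers are connected by edges carrying their range separation. The peak-finding parameters and edge rule are assumptions, not the paper's exact construction; such graphs could then be batched into a graph neural network library for classification.

```python
# Build a scatterer graph from a 1-D HRRP (illustrative construction only).
import networkx as nx
from scipy.signal import find_peaks

def hrrp_to_graph(profile, range_bin_m=0.15, min_height=0.2):
    """profile: 1-D array of HRRP magnitudes normalized to [0, 1]."""
    peaks, props = find_peaks(profile, height=min_height, prominence=0.05)
    g = nx.Graph()
    for i, p in enumerate(peaks):
        g.add_node(i,
                   range_m=p * range_bin_m,
                   amplitude=float(profile[p]),
                   prominence=float(props["prominences"][i]))
    # Connect consecutive scatterers; edge weight = distance between them.
    for i in range(len(peaks) - 1):
        g.add_edge(i, i + 1, distance_m=(peaks[i + 1] - peaks[i]) * range_bin_m)
    return g
```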
13463-14
Author(s): Taeseung Lee, Moonsung Huh, Byoungjun Kim, Junyoung Ko, Youngdon Shin, Hanwha Systems Co., Ltd. (Korea, Republic of)
14 April 2025 • 4:20 PM - 4:40 PM EDT | Osceola 3, Ballroom Level
In military surveillance and reconnaissance systems, demand for SAR imagery is growing for AI-based recognition, tracking, and anomaly detection of time-sensitive targets. Training high-performing AI models requires large amounts of data, but far less SAR imagery is available than EO imagery. It is especially difficult to acquire SAR images containing special-purpose objects such as Transporter Erector Launchers (TELs), which are the highest-priority time-sensitive targets for detection and identification. In this paper, we propose a method that generates a precise 3D CAD model by scanning a target whose shape is as similar as possible to a TEL, acquires target data (SAR chips) through EM simulation, and generates fake SAR images with high similarity to real imagery. We demonstrate the merits of the proposed method using quantitative similarity indicators such as PSNR and SSIM between real and fake SAR images.
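For reference, the two similarity indicators named in this abstract can be computed with scikit-image as sketched below; this assumes the real and simulated chips are co-registered 2-D arrays of the same shape.

```python
# PSNR and SSIM between a real SAR chip and a simulated (fake) one.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare_chips(real_chip, fake_chip):
    real = real_chip.astype(np.float64)
    fake = fake_chip.astype(np.float64)
    rng = real.max() - real.min()                       # dynamic range of the reference chip
    psnr = peak_signal_noise_ratio(real, fake, data_range=rng)
    ssim = structural_similarity(real, fake, data_range=rng)
    return psnr, ssim
```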
Symposium Plenary
14 April 2025 • 5:30 PM - 7:00 PM EDT | Osceola Ballroom C, Ballroom Level

View Full Details: spie.org/dcs/symposium-plenary

Chair welcome and introduction
14 April 2025 • 5:30 PM - 5:40 PM EDT

Bring the future faster (Plenary Presentation)
Presenter(s): Jason E. Bartolomei, Brigadier General, United States Air Force, Air Force Research Laboratory (United States)
14 April 2025 • 5:40 PM – 6:20 PM EDT

R&D in an era of crisis operations (Plenary Presentation)
Presenter(s): Thomas Braun, Chief Scientist, National Geospatial-Intelligence Agency (United States)
14 April 2025 • 6:20 PM – 7:00 PM EDT

Symposium Panel on Space Sensing: Emerging Topics, Needs, and Crossover Technology
15 April 2025 • 8:30 AM - 10:00 AM EDT | Osceola Ballroom A, Ballroom Level

View Full Details: spie.org/dcs/symposium-panel

Crossover sensing and autonomy technologies are pushing satellite systems toward lower-cost, smaller, distributed architectures with shortened cycles in all areas. Join our illustrious panelists and moderator as we discuss emerging topics, needs, and crossover technology at this symposium-wide panel on space sensing.

Break
Coffee and Exhibition Break 10:00 AM - 11:00 AM
Opening Remarks
15 April 2025 • 11:00 AM - 11:10 AM EDT | Osceola 3, Ballroom Level
Session Chair: Riad I. Hammoud, PlusAI, Inc. (United States)
Opening remarks for Automatic Target Recognition XXXV.
Session 5: Deep Learning and Performance II
15 April 2025 • 11:10 AM - 12:30 PM EDT | Osceola 3, Ballroom Level
Session Chair: Matthew D. Reisman, Bedrock Research LLC (United States)
13463-18
Author(s): David F. Ramirez, Arizona State Univ. (United States); Timothy L. Overman, Prime Solutions Group, Inc. (United States); Kristen Jaskie, Prime Solutions Group, Inc. (United States), Arizona State Univ. (United States); Marv Kleine, Prime Solutions Group, Inc. (United States); Andreas Spanias, Arizona State Univ. (United States)
15 April 2025 • 11:10 AM - 11:30 AM EDT | Osceola 3, Ballroom Level
This research explores the application of large language-vision models (LLVM), similar to OpenAI's GPT-4, for remote sensing and synthetic aperture radar (SAR) imagery. The study examines transformer-based LLVM, including miniGPT-4 and LLaVA, assessing their performance against existing remote sensing visual question-answering (VQA) benchmarks. We present our newly assembled MSTAR-VQA dataset, consisting of 14,108 SAR images and 423,240 question-answer triplets of contextual military vehicle qualities. This challenge dataset includes nuanced ATR details, including vehicle type, named variant acronyms, unique serial number identification, radar collection angle determination, and background location recognition. We train state-of-the-art LLVM methods to identify these target qualities. This work aims to enhance ATR for SAR applications, addressing the critical challenge of identifying vehicle types, which typically require extensive training for human analysts. The findings represent a significant advancement in applying LLVM for SAR remote sensing tasks, highlighting the potential for improved machine-assisted ATR.
13463-19
Author(s): Khaled Obaideen, Univ. of Sharjah (United Arab Emirates); Alexandre McCafferty-Leroux, Waleed Hilal, McMaster Univ. (Canada); Mohammad AlShabi, Univ. of Sharjah (United Arab Emirates); S. Andrew Gadsden, McMaster Univ. (Canada)
15 April 2025 • 11:30 AM - 11:50 AM EDT | Osceola 3, Ballroom Level
Automatic Target Recognition (ATR) systems have witnessed significant advancements due to the integration of deep learning (DL) techniques. This bibliometric paper analyzes research trends in DL-based ATR from 2010 to 2024, focusing on key algorithms, applications, and evolving challenges. By employing bibliometric mapping techniques, we identify influential publications, prominent research institutions, and major funding bodies that have contributed to ATR advancements. The study also explores how DL algorithms such as convolutional neural networks (CNNs) and generative adversarial networks (GANs) have been utilized in tasks like object detection, classification, and tracking within ATR systems. We analyze keyword co-occurrence networks to uncover emerging areas such as real-time target recognition and the integration of multimodal sensor data. The paper also discusses regional contributions, with emphasis on leading research efforts in the United States, China, and Europe. Our analysis highlights the challenges of processing large datasets in real-time and future directions for improving ATR systems using AI and DL methodologies.
13463-20
Author(s): Paul Hill, Nantheera Anantrasirichai, Alin Achim, David Bull, Univ. of Bristol (United Kingdom)
15 April 2025 • 11:50 AM - 12:10 PM EDT | Osceola 3, Ballroom Level
Atmospheric turbulence significantly hinders the interpretation and analysis of surveillance imagery, complicating tasks like object classification and scene tracking. This turbulence also diminishes the effectiveness of conventional methods used for detecting and tracking targets. While deep learning-based object detection methods perform well in normal conditions, they cannot be directly applied to sequences affected by atmospheric distortion. To address this, we propose a novel framework that learns and compensates for distorted features to improve object detection and classification. Specifically, 3D deformable convolutions and the recently introduced 3D Mamba are used to handle spatial displacements caused by turbulence. Features are extracted in a pyramid manner and passed to the detector. We evaluate the performance of two real-time detectors, YOLO (You Only Look Once) and the Real-Time Detection Transformer (RT-DETR), integrated with our feature extractor. The framework is tested on both synthetic and real-world datasets.
13463-25
Author(s): Sophia Abraham, Steve Cruz, Jonathan Hauenstein, Walter Scheirer, Univ. of Notre Dame (United States)
15 April 2025 • 12:10 PM - 12:30 PM EDT | Osceola 3, Ballroom Level
We introduce a novel framework that combines self-supervised learning with Vision Transformers (ViTs) for Automatic Target Recognition (ATR). By leveraging large-scale, unlabeled data, our approach dynamically adjusts key model parameters, such as patch size and attention heads, based on domain-specific feedback during training. This allows the system to adapt to different sensor modalities, including electro-optical, radar, and hyperspectral imagery. The framework is designed to enhance ATR performance by learning efficient representations and optimizing resource use, offering a promising solution for defense and security applications.
Break
Lunch Break 12:30 PM - 2:00 PM
Session 6: Image and Data Processing for Automatic Target Recognition II
15 April 2025 • 2:00 PM - 3:00 PM EDT | Osceola 3, Ballroom Level
Session Chair: Riad I. Hammoud, PlusAI, Inc. (United States)
13463-15
Author(s): Don Yates, Univ. of West Florida (United States); Arash Mahyari, Florida Institute for Human & Machine Cognition (United States); Hakki Sevil, Univ. of West Florida (United States)
15 April 2025 • 2:00 PM - 2:20 PM EDT | Osceola 3, Ballroom Level
Multi-view clustering has garnered attention, with many methods showing success on simple datasets like MNIST. However, many state-of-the-art methods lack sufficient representational capability. This paper presents results of deep embedded clustering on multi-view real-world aerial imaging data. Adapting previous methods to the challenges of aerial imagery, the approach introduces a ResNet-18 autoencoder backbone and data augmentation techniques to handle complex images and diverse environmental conditions. Advanced feature extraction using convolutional autoencoders captures intricate patterns and spatial relationships. By integrating multi-view data, the unsupervised method enhances clustering accuracy and robustness, advancing aerial image analysis for environmental monitoring, urban planning, and first responder efforts.
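A rough sketch of the multi-view embedding-and-clustering idea described above: embed each view of a scene with a ResNet-18 backbone, concatenate the per-view embeddings, and cluster the scene descriptors. An ImageNet-pretrained encoder and k-means are used here as stand-ins; the paper's convolutional autoencoder and clustering objective differ.

```python
# Multi-view feature extraction and clustering sketch (stand-in components, not the paper's model).
import torch
import torchvision.models as models
from sklearn.cluster import KMeans

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()     # keep the 512-d global feature
backbone.eval()

@torch.no_grad()
def embed_views(views):
    """views: tensor (num_views, 3, 224, 224), already preprocessed."""
    feats = backbone(views)           # (num_views, 512)
    return feats.flatten()            # concatenate views into one scene descriptor

def cluster_scenes(list_of_view_tensors, n_clusters=10):
    x = torch.stack([embed_views(v) for v in list_of_view_tensors]).numpy()
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(x)
```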
13463-21
Author(s): Bingcheng C. Li, Lockheed Martin Corp. (United States)
15 April 2025 • 2:20 PM - 2:40 PM EDT | Osceola 3, Ballroom Level
Corner detection and blob detection have wide applications in motion analysis, target tracking, target detection, and 3D reconstruction. Extensive research has explored different approaches to detecting corners and blobs, but to date corner detection and blob detection have been performed separately. Diffusion-equation evolution of data is another technique for image processing and feature extraction. In this paper, a graph approach is developed to represent image data. With this graph model, a combined spatial and spectral diffusion-equation evolution approach is proposed for corner and blob detection. The proposed approach has a clear geometric meaning and unifies corner and blob detection, using the same evolution, the same formula, and the same criteria to detect both. In addition, unlike traditional second-order Gaussian directional derivative (SOGDD) methods, no correlation matrix and no eigenvalue computation are needed. Test results show that the proposed method is simple to implement and outperforms traditional methods.
13463-24
Author(s): Ahsan Habib Akash, Rizwan Ahamed, Nasser M. Nasrabadi, Shoaib M. Sami, West Virginia Univ. (United States); Stacey F. Jones, O Analytics Inc. (United States); Md Mahedi Hasan, West Virginia Univ. (United States)
15 April 2025 • 2:40 PM - 3:00 PM EDT | Osceola 3, Ballroom Level
In this study, we present an advanced methodology to enhance the resolution of low Earth orbit (LEO) satellite imagery using state-of-the-art super-resolution algorithms. By applying deep learning-based upscaling and refinement, we significantly improve image clarity and detail, enabling more accurate classification of satellite types such as communication and navigation systems. This enhancement is crucial for effective space situational awareness and optimized space traffic management. Our approach addresses the limitations of traditional imaging methods, resulting in better identification of key satellite features and robust monitoring of space assets. Experimental evaluations demonstrate the efficacy of our method, highlighting its potential for improving space operations and sustainability.
Session 7: Infrared Synthetic Data for Automatic Target Recognition: Joint session with 13459 and 13463
16 April 2025 • 1:30 PM - 2:50 PM EDT | Osceola Ballroom B, Ballroom Level
Session Chairs: Kimberly E. Manser, DEVCOM C5ISR (United States), Michael T. Eismann, Air Force Research Lab. (United States)
13459-30
Author(s): Keith F. Prussing, Georgia Tech Research Institute (United States)
16 April 2025 • 1:30 PM - 1:50 PM EDT | Osceola Ballroom B, Ballroom Level
The application of artificial intelligence and machine learning to developing automated tracking and recognition algorithms has in recent years been led by progress in the visible spectrum. This is due primarily to the ready availability of the large datasets of visible imagery needed to train these algorithms. Some progress has been made applying these trained algorithms to the infrared portion of the spectrum, with mixed results. Ideally, infrared algorithms would be trained on infrared imagery; however, comparably large sets of imagery are not available in the infrared. Instead, developers must turn to synthetic image generation tools to supplement the measured data, and they must consider different factors when selecting a specific tool. This paper outlines factors to consider when selecting a synthetic image generation tool and provides a brief comparison among image generators.
13459-31
Author(s): Luis Bolanos, Garrett Urwin, Reece Walsh, Ryan Clark, Jozsef Hamari, Mohsen Zardadi, TerraSense Analytics (Canada)
16 April 2025 • 1:50 PM - 2:10 PM EDT | Osceola Ballroom B, Ballroom Level
Automatic target recognition (ATR) utilizes both electro-optical (EO) and infrared (IR) data for accurate predictions. Collecting airborne IR data for ATR poses significant challenges, including data imbalance, high costs, and logistical constraints. Public datasets primarily focus on EO data, leaving a gap in available airborne IR datasets. To address these limitations, TerraSense Analytics has explored physics-based rendering and data-driven synthetic IR data generation methods. We propose a novel generative pipeline for synthetic IR data generation using ControlNet, built on top of a pretrained Stable Diffusion model. Our ControlNet takes either a real or synthetic EO image as input, along with a corresponding prompt, and outputs a paired synthetic IR image. We demonstrate that our EO2IR ControlNet is more robust than previous generative methods, improving image generation quality, thermal accuracy, and performance in downstream object detection tasks.
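For orientation, the sketch below shows standard ControlNet-conditioned generation with the diffusers library. The publicly available canny ControlNet checkpoint is used only as a placeholder; an EO-to-IR pipeline like the one described above would instead load a fine-tuned ControlNet checkpoint (hypothetical, not released here) and condition on the EO frame.

```python
# Generic ControlNet + Stable Diffusion generation with diffusers (placeholder checkpoints).
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)      # placeholder conditioning model
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

eo_image = Image.open("eo_frame.png").convert("RGB")                  # conditioning EO image (example path)
ir_like = pipe("aerial infrared image of a vehicle convoy",
               image=eo_image, num_inference_steps=25).images[0]
ir_like.save("synthetic_ir.png")
```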
13459-32
Author(s): Gregory P. Spell, Peter Torrione, Covar, LLC (United States); Kimberly Manser, DEVCOM C5ISR (United States)
16 April 2025 • 2:10 PM - 2:30 PM EDT | Osceola Ballroom B, Ballroom Level
Synthetic infrared (IR) imagery has been leveraged to improve model performance for aided target recognition (AiTR) systems when using a combination of real and synthetic imagery. However, a performance gap remains between models trained only on synthetic data and those trained only on real data. One hypothesis is that this performance gap is due to differences in appearance between real and synthetic imagery: the so-called "realism gap". Closing this realism gap is expected to further increase the advantages of using synthetic data for AiTR algorithm training and to expand the capabilities of synthetic data. This work examines features of synthetic data, specifically comparing them to features of real IR data. Using a variety of feature representations, we seek to characterize and quantify the realism gap. This characterization allows the design of experiments that observe how the realism gap changes when certain modifications are applied to the synthetic data. This work describes our methodology for characterizing the realism gap between synthetic IR and real data and our findings from experiments designed to close the gap.
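One common way to quantify a distribution-level gap between real and synthetic feature sets is the Fréchet distance between Gaussian fits of the two feature clouds (as in FID); the paper's specific feature representations and metrics are not reproduced here, so this is only an illustrative baseline.

```python
# Fréchet distance between Gaussian fits of real vs. synthetic feature sets (FID-style).
import numpy as np
from scipy import linalg

def frechet_distance(real_feats, synth_feats):
    """real_feats, synth_feats: arrays of shape (n_samples, feature_dim)."""
    mu_r, mu_s = real_feats.mean(axis=0), synth_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_s = np.cov(synth_feats, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_s)
    if np.iscomplexobj(covmean):            # discard tiny imaginary parts from numerical error
        covmean = covmean.real
    diff = mu_r - mu_s
    return float(diff @ diff + np.trace(cov_r + cov_s - 2.0 * covmean))
```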
13463-17
Author(s): Matthew D. Reisman, Kevin LaTourette, Dominic LeDuc, Bedrock Research LLC (United States); Peter Shagnea, Avi Lindenbaum, AgileView, Inc. (United States)
16 April 2025 • 2:30 PM - 2:50 PM EDT | Osceola Ballroom B, Ballroom Level
Data labeling is often the most time-consuming and expensive component of new automatic target recognition (ATR) model development in remote sensing. Synthetic data have long been desired for ATR model development in scenarios with limited real data available but have not been fully adopted in defense due to compute constraints and reliability concerns for critical applications. This work combines AgileView's synthetic data generation platform with Bedrock Research's remote sensing foundation models to perform automated active learning. We seed initial model training on an example use case of maritime detection and classification solely with synthetic data; an iterative feedback loop of foundation-model fine-tuning with exclusively real data then fully automates the active learning process and minimizes the human hours required for comprehensive data labeling. This provides a fast and trustworthy approach to ATR data and model curation for widespread defense applications.
Break
Coffee and Exhibition Break 2:50 PM - 4:00 PM
Session 8: Machine Learning for Infrared Sensing: Joint Session with Conferences 13463 and 13469
16 April 2025 • 4:00 PM - 5:40 PM EDT | Osceola Ballroom B, Ballroom Level
Session Chairs: Michael T. Eismann, Air Force Research Lab. (United States), Timothy L. Overman, Prime Solutions Group, Inc. (United States)
13469-48
Author(s): Martin Gerken, HENSOLDT Optronics GmbH (Germany)
16 April 2025 • 4:00 PM - 4:20 PM EDT | Osceola Ballroom B, Ballroom Level
The paper presents a novel zoom sensor system designed for military applications, featuring two coaxial optical cameras with varying focal lengths and high-resolution CMOS detectors. This setup minimizes the need for mechanical components, thereby reducing complexity. The zoom capability is achieved electronically by adjusting the regions of interest on the detectors, requiring only a single focus drive. This electronic approach enhances reliability and simplifies system operation. The sensor system is compatible with both near- and shortwave infrared spectral bands, ensuring seamless integration with existing visual cameras and enabling cost-effective upgrades to current military systems. By utilizing electronic adjustments instead of mechanical zoom mechanisms, the system offers a more efficient and potentially more durable solution for military imaging needs. In summary, the proposed zoom sensor system marks a significant advancement in military imaging technology, providing a simplified, reliable, and cost-effective alternative to traditional mechanical zoom systems while maintaining compatibility with existing equipment.
13469-49
Author(s): John C. Liobe, Brendan Murphy, John Wieners, Andrew Eckhardt, Jay Yu, John Tagle, Sean Houlihan, Michael J. Evans, Sensors Unlimited, a Collins Aerospace Co. (United States)
16 April 2025 • 4:20 PM - 4:40 PM EDT | Osceola Ballroom B, Ballroom Level
Sensors Unlimited, Inc. (SUI), a Raytheon company, has integrated cutting-edge artificial intelligence (AI) technology into its latest suite of infrared imaging solutions to support applications beyond standard imaging. By merging advanced hardware with intelligent software, SUI is setting a new standard for performance in infrared imaging systems.
13463-3
Author(s): Anthony Buschiazzo, Shuowen Hu, DEVCOM Army Research Lab. (United States); Brennan Peace, Benjamin Riggan, Univ. of Nebraska-Lincoln (United States)
16 April 2025 • 4:40 PM - 5:00 PM EDT | Osceola Ballroom B, Ballroom Level
Object detection for military applications, typically referred to as automatic target recognition (ATR), is highly challenging due to the diversity of operational environments and the often long range of the targets, which yields relatively small numbers of pixels on target appearing in highly cluttered backgrounds. The traditional object detection approach, consisting of a multi-class object detection model trained on a list of non-hierarchical target classes, is not ideal because there are often not enough pixels on target to predict a specific class. Using a hierarchical classifier, root-class predictions can be fed to subsequent classifiers that are specialized for the subclasses of their parent node, achieving more fine-grained predictions within the ontology.
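A minimal sketch of coarse-to-fine routing through such a hierarchy: a root classifier picks a coarse class, and a specialized child classifier refines it only when the coarse prediction is confident enough. The model objects, confidence rule, and two-level ontology below are hypothetical placeholders, not the paper's system.

```python
# Coarse-to-fine hierarchical prediction routing (illustrative only).
def hierarchical_predict(image, root_classifier, child_classifiers, confidence_floor=0.6):
    """root_classifier and each child classifier return (label, confidence)."""
    coarse_label, coarse_conf = root_classifier(image)
    child = child_classifiers.get(coarse_label)
    if child is None or coarse_conf < confidence_floor:
        return coarse_label, coarse_conf             # not enough evidence to go finer
    fine_label, fine_conf = child(image)
    return fine_label, min(coarse_conf, fine_conf)   # report the weaker link in the chain

# Example ontology (hypothetical): child_classifiers = {"vehicle": vehicle_model, "aircraft": aircraft_model}
```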
13463-6
Author(s): Corentin Lanusse-Malhéné, Benjamin Pannetier, CS Group (France); Nicolas Rivière, ONERA (France); Olivier Bartheye, Ctr. de Recherche de l'École de l'Air (France); Anita Schilling, ONERA (France); Lionel Gardenal, CS Group (France)
16 April 2025 • 5:00 PM - 5:20 PM EDT | Osceola Ballroom B, Ballroom Level
We assess the applicability of classical approaches developed for autonomous vehicles and for ground object detection and tracking with on-vehicle 3D LiDAR sensors in an anti-drone defense context in urban areas. This assessment focuses on detecting and tracking protean UAVs from degraded 3D LiDAR measurements. We identify avenues of innovation to improve on or surpass these approaches for this purpose.
13463-16
Author(s): Daniel A. C. Pearce, Nikhil Jawade, Steve Chappell, James Whicker, Chris Wood, Tim Stephens, Living Optics (United Kingdom)
16 April 2025 • 5:20 PM - 5:40 PM EDT | Osceola Ballroom B, Ballroom Level
Camouflage materials and dismounted infantry uniforms can take a variety of forms on the modern battlefield. Living Optics demonstrates the ability of coded aperture snapshot hyperspectral imaging to detect camouflage materials in rural environments through a range of algorithms. This paper illustrates the algorithmic adjustments needed for camouflage material detection in ground-to-ground imaging geometries. The detection performance within the Nvidia Orin GPU compute envelope is explored for live video feedback on the battlefield. Results for dismounted infantry uniform identification at a fixed range are presented for a spatial-spectral classifier. These results are given in the context of automated segmentation and identification of dismounts by a mast mounted Living Optics snapshot vis-NIR hyperspectral development kit.
Conference Chair
Lockheed Martin Missiles and Fire Control (United States)
Conference Chair
PlusAI, Inc. (United States)
Conference Chair
Prime Solutions Group, Inc. (United States)
Program Committee
Hunter College (United States)
Program Committee
Wright State Univ. (United States)
Program Committee
Lockheed Martin Corp. (United States)
Program Committee
The Univ. of Arizona (United States)
Program Committee
Joint Artificial Intelligence Ctr. (United States)
Program Committee
Univ. of Central Florida (United States)
Program Committee
West Virginia Univ. (United States)
Program Committee
Univ. of Houston (United States)
Program Committee
California State Univ., Northridge (United States)
Program Committee
Systems & Technology Research (United States)
Program Committee
Office of Naval Research (United States)
Program Committee
HENSOLDT Optronics GmbH (Germany)
Program Committee
Air Force Research Lab. (United States)
Program Committee
Air Force Research Lab. (United States)
Program Committee
Air Force Research Lab. (United States)
Additional Information

POST-DEADLINE ABSTRACT SUBMISSIONS CLOSED

View the Call for Papers PDF