13 - 17 April 2025
Orlando, Florida, US
This conference addresses advances in all aspects of the systems and algorithms used at every level of information fusion. It encourages submissions on a range of issues pertinent to target presence and recognition, including signal/image data processing, exploitation, and dissemination; feature extraction and tracking; multisensor/data/information fusion; resource management; processing and computational complexity; decision-making and the human role; and deployment issues such as image compression, compressive sensing, and processor architectures. Defense and security applications, as well as dual-use and commercial applications, of these acquisition, signal processing, and information fusion problems will be considered.

Papers are solicited on, but not limited to, the following and related topics. In addition, this conference plans to host an invited panel composed of internationally recognized experts.

Best student paper award
We are happy to announce a best student paper award, to be judged at the conference (papers must identify the first author as a student at submission). The best student paper award for the SPIE Signal Processing, Sensor/Information Fusion, and Target Recognition conference is selected by a committee delegated by the program chairs of the conference. It recognizes the best work appearing at the conference whose first author is a student.
Conference 13479

Signal Processing, Sensor/Information Fusion, and Target Recognition XXXIV

14 - 16 April 2025
  • 1: Multisensor Fusion, Multitarget Tracking, and Resource Management I
  • 2: Multisensor Fusion, Multitarget Tracking, and Resource Management II
  • Panel Discussion: LLMs for Information Fusion
  • Symposium Plenary
  • Symposium Panel on Space Sensing: Emerging Topics, Needs, and Crossover Technology
  • 3: Information Fusion Methodologies and Applications I
  • 4: Information Fusion Methodologies and Applications II
  • 5: Information Fusion Methodologies and Applications III
  • Poster Session
  • 6: Signal and Image Processing, and Information Fusion Applications I
  • 7: Signal and Image Processing, and Information Fusion Applications II
  • 8: Signal and Image Processing, and Information Fusion Applications III
  • 9: Signal and Image Processing, and Information Fusion Applications IV

Want to participate in this program?
Post-deadline abstract submissions accepted through 17 February. See "Additional Information" tab for instructions.

Session 1: Multisensor Fusion, Multitarget Tracking, and Resource Management I
14 April 2025 • 8:00 AM - 9:50 AM EDT
Session Chair: Ivan Kadar, Interlink Systems Sciences, Inc. (United States)
8:00 - 8:10 AM: Opening Remarks
13479-1
Author(s): Naseem Alsadi, Stephen Andrew Gadsden, McMaster Univ. (Canada); John Yawney, Adastra Corp. (Canada)
14 April 2025 • 8:10 AM - 8:30 AM EDT
Kalman filtering is a widely used method for state estimation across various applications. Its distributed variant, the Distributed Kalman Filter (DKF), is crucial in decentralized systems, especially where sensor nodes have varying reliability. This paper introduces an Adaptive Deep Learning-based DKF that dynamically adjusts to changes in sensor reliability and network conditions. By integrating deep learning, the filter adapts in real-time, improving estimation accuracy in complex, heterogeneous environments. Simulations demonstrate the proposed approach’s enhanced performance over traditional DKF methods, making it a robust solution for decentralized applications in smart industries and IoT networks.
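The paper's adaptive deep-learning component is not described in detail here, but the decentralized mechanics it builds on can be sketched. The following minimal example (not from the paper; the fusion weights are fixed constants here, whereas the authors adapt them in real time from sensor reliability) shows a local Kalman correction at one node followed by reliability-weighted information fusion of node estimates:

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman correction step run locally at one sensor node."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def fuse_nodes(estimates, covariances, weights):
    """Reliability-weighted fusion of node estimates in information form.
    In the paper the weights would come from the learned reliability model;
    here they are supplied by the caller."""
    info = sum(w * np.linalg.inv(P) for w, P in zip(weights, covariances))
    P_fused = np.linalg.inv(info)
    x_fused = P_fused @ sum(w * np.linalg.inv(P) @ x
                            for w, x, P in zip(weights, estimates, covariances))
    return x_fused, P_fused
```

A low weight on an unreliable node shrinks its contribution to the fused information matrix, which is the lever the adaptive scheme adjusts online.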
13479-2
Author(s): Naseem Alsadi, Stephen Andrew Gadsden, McMaster Univ. (Canada); John Yawney, Adastra Corp. (Canada)
14 April 2025 • 8:30 AM - 8:50 AM EDT
The Sliding Sigmoid Filter (SSF) is a type of predictor-corrector estimator that integrates sliding mode control concepts into the state estimation process. Unlike traditional Kalman filters, which rely on linear corrections, the SSF, similar to its predecessor the Sliding Innovation Filter (SIF), adjusts the system's gain based on the magnitude of the innovation. However, the SSF employs the sigmoid function to implement an update of the state. By modifying the correction step to account for non-linearities smoothly, the SSF enhances estimation accuracy, making it particularly useful in dynamic environments where the system model or sensor data may be prone to errors or fluctuations. This paper derives the recursive equations utilized in the SSF and makes advances in these equations with the implementation of an optimization approach to provide zero-knowledge covariance optimization.
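The exact recursive SSF equations are derived in the paper; the sketch below is only an illustration of the core idea, with a sigmoid-weighted gain replacing the SIF's discontinuous saturation term. The boundary-layer width `delta` and the mapping `2*sigmoid(|innovation|/delta) - 1` are assumptions for illustration, not the authors' formulation:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def ssf_correct(x_pred, z, H, delta):
    """Illustrative sigmoid-weighted correction in the style of the SIF family.
    delta is a sliding boundary-layer width (tuning parameter, assumed here)."""
    innov = z - H @ x_pred
    # 2*sigmoid(|innov|/delta) - 1 maps 0 -> 0 and large innovations -> 1,
    # a smooth analogue of the SIF's saturation of the normalized innovation.
    gain = np.linalg.pinv(H) @ np.diag(2 * sigmoid(np.abs(innov) / delta) - 1)
    return x_pred + gain @ innov
```

Small innovations thus receive a gentle correction while large ones are tracked almost fully, which is the smoothness property the abstract highlights.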
13479-3
Author(s): Quade Butler, Waleed Hilal, McMaster Univ. (Canada); Youssef Ziada, Ford Motor Co. (United States); Stephen Andrew Gadsden, McMaster Univ. (Canada)
14 April 2025 • 8:50 AM - 9:10 AM EDT
Cubature rules approximate multidimensional integrals by a weighted sum of functions evaluated at carefully selected points with corresponding weights. Most cubature rules are characterized by their degree of exactness, that is, the degree of (multivariate) polynomial that they integrate exactly. Therefore, efficient and stable cubature rules that use a minimum number of points while achieving the highest possible degree are desired. Many efficient cubature methods have been used to approximate Gaussian-weighted integrals in the Gaussian filtering framework. This has led to a number of cubature Kalman filters, and consequently many nonlinear filters for designers to choose from. However, recently, cubature rules that use more evaluation points have been preferred for their high accuracy. In this paper, we compare and analyze contemporary cubature Kalman filters based on their attainable accuracy, computational complexity, and robustness to noise and model uncertainties. Notably, we compare against a novel cubature Kalman filter that is principled in high rates of convergence as opposed to a high degree of exactness.
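For readers unfamiliar with degree of exactness, the standard third-degree spherical-radial rule underlying the classic cubature Kalman filter is easy to state: 2n equally weighted points along the principal axes of the covariance. This is the textbook rule, not the novel high-convergence-rate filter the paper compares against:

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature rule: 2n points with equal
    weights 1/(2n), exact for Gaussian-weighted polynomials up to degree 3."""
    n = len(mean)
    L = np.linalg.cholesky(cov)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # unit-axis directions
    pts = mean[:, None] + L @ xi
    w = np.full(2 * n, 1.0 / (2 * n))
    return pts, w

def approx_expectation(f, mean, cov):
    """Approximate E[f(x)] for x ~ N(mean, cov) with the cubature rule."""
    pts, w = cubature_points(mean, cov)
    return sum(wi * f(pts[:, i]) for i, wi in enumerate(w))
```

For a standard normal in 2D the rule reproduces second moments exactly (E[x0^2] = 1), illustrating "degree of exactness" concretely; higher-degree rules add points to push that degree up.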
13479-4
Author(s): Brad Killen, Audrey L. Aldridge, Paul Barrett, Mississippi State Univ. (United States); Jeremy Davis, The Univ. of Alabama in Huntsville (United States); Daniel Carruth, Cindy L. Bethel, Mississippi State Univ. (United States)
14 April 2025 • 9:10 AM - 9:30 AM EDT
Searching for specific objects across multiple videos is currently a resource-heavy task, costly in both computational processing and personnel. This paper presents a framework designed to allow object detection and tracking across multiple videos with minimal effort. By combining several computer vision tools, such as SAM (Segment Anything Model), YOLOv8 (You Only Look Once, version 8), and DINOv2 (self-distillation with no labels, version 2), this framework requires minimal training across machine learning (ML) models and can ease the burden placed on users when parsing and monitoring hours of video footage. By optimizing this system, the time, effort, and resources spent processing videos are reduced to a fraction, allowing for more flexibility in the system's application. Evaluation addresses accuracy, precision, and speed, and how optimizing each of these performance metrics affects resource and memory consumption.
13479-5
Author(s): Patrick Kosierb, Yuandi Wu, Quade Butler, Brett Sicard, Stephen Andrew Gadsden, McMaster Univ. (Canada)
14 April 2025 • 9:30 AM - 9:50 AM EDT
The magnetorheological (MR) damper is a promising device enabling control in suspension systems. The MR damper has a variable and non-linear damping force which depends on the input current, where the non-linearity stems from the MR damper's hysteretic behaviour. The force can be modelled using models such as the Bingham model and the Bouc-Wen model, each varying in complexity and accuracy. There are a variety of methods in the literature used to identify parameters, such as non-deterministic approaches like machine learning (ML) algorithms, or deterministic approaches like Kalman estimation strategies. This paper uses a combination of ML and the unscented Kalman filter (UKF) for parameter and state estimation on the popular Bouc-Wen model in a forced response dynamic setup. In addition, fine-tuning of individual parameters using an augmented state vector approach is explored.
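The augmented-state idea the abstract mentions can be sketched independently of the full UKF: the Bouc-Wen hysteresis variable is propagated through its ODE while the unknown parameters are appended to the state as random constants, so a filter estimates them jointly. The parameter names (beta, gamma, A) follow common Bouc-Wen notation; the discretization and random-constant parameter model are assumptions for illustration:

```python
import numpy as np

def bouc_wen_rate(z, xdot, beta, gamma, A, n=2):
    """Evolution rate of the Bouc-Wen hysteresis variable z."""
    return A * xdot - beta * abs(xdot) * abs(z) ** (n - 1) * z - gamma * xdot * abs(z) ** n

def augmented_transition(s, xdot, dt):
    """Augmented state [z, beta, gamma, A]: the hysteresis state propagates
    through the Bouc-Wen ODE (forward Euler here), while the parameters
    follow a trivial random walk so a UKF can estimate them jointly."""
    z, beta, gamma, A = s
    z_next = z + dt * bouc_wen_rate(z, xdot, beta, gamma, A)
    return np.array([z_next, beta, gamma, A])
```

Passing sigma points of the augmented state through this transition is exactly how a UKF turns a state estimator into a joint state-and-parameter estimator.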
Break
Coffee Break 9:50 AM - 10:20 AM
Session 2: Multisensor Fusion, Multitarget Tracking, and Resource Management II
14 April 2025 • 10:20 AM - 11:40 AM EDT
Session Chair: Ivan Kadar, Interlink Systems Sciences, Inc. (United States)
13479-6
Author(s): Iffat Ara Ebu, Mississippi State Univ. (United States); Fahmida Islam, Western Kentucky Univ. (United States); Mohammad Abdus Shahid Rafi, Mahfuzur Rahman, Umar Iqbal, John Ball, Mississippi State Univ. (United States)
14 April 2025 • 10:20 AM - 10:40 AM EDT
The application of multi-sensor fusion for enhanced distance estimation accuracy in dynamic environments is crucial for advanced driver assistance systems (ADAS) and autonomous vehicles. Limitations of single sensors such as camera or radar in adverse conditions motivate the use of combined camera and radar data to improve reliability, adaptability, and object recognition. A multi-sensor fusion approach using an Extended Kalman Filter (EKF) is proposed to combine sensor measurements with a dynamic system model, achieving robust and accurate distance estimation. The research utilizes the Mississippi State University Autonomous Vehicular Simulator (MAVS) to create a controlled environment for data collection. Data analysis is performed using MATLAB. Qualitative assessment (visualization of fused data vs. ground truth) and quantitative metrics (RMSE, MAE) are employed for performance evaluation. Initial results with simulated data demonstrate accurate distance estimation compared to individual sensors. The optimal sensor measurement noise variance and plant noise variance parameters within the EKF are identified, and the algorithm is validated with real-world data.
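The fusion pattern described above can be sketched with a simplified scalar-range model (for a linear range measurement the EKF correction reduces to the linear Kalman form; the authors' MAVS models and tuned noise variances are not represented here). Each sensor applies its own update with its own noise variance, which is how camera and radar are combined sequentially:

```python
import numpy as np

def predict(x, P, dt, q):
    """Constant-velocity prediction: state = [distance, closing speed]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, r_var):
    """Scalar range update, applied once per sensor (camera, then radar),
    each with its own measurement noise variance r_var."""
    H = np.array([[1.0, 0.0]])
    S = (H @ P @ H.T).item() + r_var
    K = (P @ H.T).ravel() / S
    x = x + K * (z - x[0])
    P = P - np.outer(K, H @ P)
    return x, P
```

A noisier camera (larger `r_var`) automatically contributes less to the fused distance than a precise radar return, which is the reliability balancing the abstract describes.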
13479-7
Author(s): Rosario Di Carlo, Leonardo M. Millefiori, Paolo Braca, STO-CMRE (Italy)
14 April 2025 • 10:40 AM - 11:00 AM EDT
Multi-sensor multi-target tracking (MTT) remains a challenging problem due to false measurements, missed detections, and increasing computational complexity when scaling the number of sensors. Current approaches struggle to effectively reduce the estimation error as the number of sensors increases. This paper introduces TALES, a Transformer-based association and maximum likelihood estimation model that demonstrates improved performance by processing measurements from multiple sensors in a single forward step. Experimental results show more consistent error reduction compared to classic MTT algorithms, with potential for future integration into other algorithmic frameworks.
13479-8
Author(s): Frederick E. Daum, Raytheon (United States)
14 April 2025 • 11:00 AM - 11:20 AM EDT
We derive a new theory of Bayesian deep learning with particle flow that bulletproofs the algorithm against stiffness. Bayesian tools such as STAN, which uses Hamiltonian Monte Carlo, are plagued by extremely stiff flows. Moreover, if you look under the hood of the famous Adam algorithm for deep learning, you will see that Adam was carefully designed to mitigate stiffness. Nevertheless, it is common to use extremely small learning rates for deep learning (e.g., 0.00001) despite the use of Adam. Our new theory avoids such a waste of precious GPU resources and, as a result, speeds up Bayesian deep learning by many orders of magnitude. Furthermore, the new theory allows us to avoid the stiff ODE solvers in STAN, which require a large amount of computer run time and are not parallelizable; our Bayesian particle flow is embarrassingly parallelizable. Our paper generalizes the recent work in Dai & Daum, IEEE AESS Transactions, June 2023.
13479-9
Author(s): Frederick E. Daum, Raytheon (United States)
14 April 2025 • 11:20 AM - 11:40 AM EDT
We compare the performance of our new Bayesian deep learning method with several state-of-the-art algorithms. We use a new theory of Bayesian deep learning with particle flow that bulletproofs the algorithm against stiffness. Bayesian tools such as STAN, which uses Hamiltonian Monte Carlo, are plagued by extremely stiff flows. Moreover, if you look under the hood of the famous Adam algorithm for deep learning, you will see that Adam was carefully designed to mitigate stiffness. Nevertheless, it is common to use extremely small learning rates for deep learning (e.g., 0.00001) despite the use of Adam. Our new theory avoids such a waste of precious GPU resources and, as a result, speeds up Bayesian deep learning by many orders of magnitude. Furthermore, the new theory allows us to avoid the stiff ODE solvers in STAN, which require a large amount of computer run time and are not parallelizable; our Bayesian particle flow is embarrassingly parallelizable.
Break
Lunch Break 11:40 AM - 1:50 PM
Panel Discussion: LLMs for Information Fusion
14 April 2025 • 1:50 PM - 4:50 PM EDT
Session Chair: Erik P. Blasch, Air Force Research Lab. (United States)

Join this informative panel discussion on large language models for information fusion.

Symposium Plenary
14 April 2025 • 5:30 PM - 7:00 PM EDT

View Full Details: spie.org/dcs/symposium-plenary

Chair welcome and introduction
14 April 2025 • 5:30 PM - 5:40 PM EDT

Bring the future faster (Plenary Presentation)
Presenter(s): Jason E. Bartolomei, Brigadier General, United States Air Force, Air Force Research Laboratory (United States)
14 April 2025 • 5:40 PM – 6:20 PM EDT

To be determined (Plenary Presentation)
Presenter(s): To be determined
14 April 2025 • 6:20 PM – 7:00 PM EDT

Symposium Panel on Space Sensing: Emerging Topics, Needs, and Crossover Technology
15 April 2025 • 8:30 AM - 10:00 AM EDT

View Full Details: spie.org/dcs/symposium-panel

Join our illustrious panelists and moderator as we discuss emerging topics, needs, and crossover technology at this symposium-wide panel on space sensing.

Break
Coffee and Exhibition Break 10:00 AM - 11:00 AM
Session 3: Information Fusion Methodologies and Applications I
15 April 2025 • 11:00 AM - 12:20 PM EDT
Session Chair: Erik P. Blasch, Air Force Research Lab. (United States)
13479-10
Author(s): Djedjiga Belfadel, Fairfield Univ. (United States); David Haessig, Cherif Chibane, AuresTech Inc. (United States)
15 April 2025 • 11:00 AM - 11:20 AM EDT
Accurate estimation of accelerometer biases in IMUs is essential for reliable UAV navigation, especially in GPS-denied environments. Uncorrected biases cause accumulating errors in position and velocity. This paper examines the impact of accelerometer bias and introduces a vision-aided navigation system that fuses data from an IMU, altimeter, and optical flow sensor using an Extended Kalman Filter. This approach estimates both accelerometer biases and the UAV's position and velocity, reducing error accumulation. Simulation experiments with UAVs performing circular and square motions validate the method. Results demonstrate significant improvements over dead reckoning during GPS outages, providing more accurate state and bias estimates while reducing error growth.
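The bias-augmentation idea above is a standard EKF construction and can be sketched for one axis: the accelerometer bias joins the state vector, the mechanization subtracts the current bias estimate, and the vision/altimeter updates (not shown) make the bias observable. This is an illustrative simplification, not the paper's full 3D model:

```python
import numpy as np

def propagate(x, a_meas, dt):
    """One-axis strapdown propagation with an augmented accelerometer bias.
    State = [position, velocity, bias]; the bias is modeled as a random
    constant, to be corrected by the EKF's optical-flow/altimeter updates."""
    p, v, b = x
    a = a_meas - b  # compensate the measured acceleration with the bias estimate
    return np.array([p + v * dt + 0.5 * a * dt**2, v + a * dt, b])
```

If the bias estimate is exact, a constant bias in `a_meas` is cancelled and dead-reckoning error growth stops, which is the effect the abstract reports.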
13479-11
Author(s): Qi M. Zheng, Joud N. Satme, Korebami O. Adebajo, Ryan Yount, Austin R. J. Downey, Univ. of South Carolina (United States)
15 April 2025 • 11:20 AM - 11:40 AM EDT
Structural health monitoring (SHM) often faces challenges with manual sensor placement in complex environments. UAVs offer a solution for autonomous sensor deployment, but limited spatial awareness from onboard cameras complicates precision docking. This research advances UAV docking systems by utilizing a dual-camera setup—onboard cameras monitor the landing zone, while an external camera tracks the UAV’s trajectory, providing a broader perspective. We developed an object-tracking algorithm and an end-to-end deep learning controller that uses visual data from the external camera to autonomously guide the UAV, replicating pilot actions for full automation. Positioned at eye level to simulate a pilot’s viewpoint, the external camera enhances situational awareness and facilitates the training of the deep learning model for future fully automated docking. Initial tests demonstrated the system's effectiveness in tracking UAV components and reducing human intervention, with implications for both SHM and broader defense and commercial UAV applications.
13479-12
Author(s): Christopher Naverschnigg, Technische Univ. Wien (Austria)
15 April 2025 • 11:40 AM - 12:00 PM EDT
This paper presents a deep learning-based approach for the application of flight path prediction to enhance the robustness of optical UAV detection systems, which combine camera systems and a steerable mount. A Temporal Fusion Transformer algorithm for time series forecasting is trained on a synthetic data set of UAV flight trajectories to enable flight path prediction. The percentage of total tracks (success rate), with the UAV being within the FOV after a certain prediction horizon, is evaluated on a test dataset consisting of footage captured through a telescope-based optical UAV detection system during multiple field tests and is compared against a conventional approach using a Kalman filter. For prediction horizons up to 1s both algorithms achieve a success rate above 75%. For an increasing prediction horizon up to 10s the trained Temporal Fusion Transformer outperforms the Kalman Filter by a factor of up to 1.2.
13479-13
Author(s): Pierre Pathe, CS Group (France), CREA (France); Benjamin Pannetier, CS Group (France); Olivier Bartheye, Ctr. de Recherche de l'École de l'Air (France)
15 April 2025 • 12:00 PM - 12:20 PM EDT
This research focuses on improving the detection of abnormal behaviors in unmanned aerial vehicles (UAVs) using advanced data fusion techniques. Leveraging real-world sensor data, the study aims to enhance the reliability and accuracy of UAV anomaly detection systems by addressing uncertainties caused by environmental factors, sensor inaccuracies, and incomplete information. Conducted at the CS Research Lab and the French Air Force Research Center (CREA), the work integrates innovative data fusion methods to model and manage complex UAV operations. The outcomes of this research contribute to the development of safer and more efficient UAV monitoring systems for critical applications.
Break
Lunch Break 12:20 PM - 1:50 PM
Session 4: Information Fusion Methodologies and Applications II
15 April 2025 • 1:50 PM - 3:30 PM EDT
Session Chair: Erik P. Blasch, Air Force Research Lab. (United States)
13479-14
Author(s): Sora C. Haley, Daniel J. Breton, Jordan J. Bates, M. A. Niccolai, U.S. Army Engineer Research and Development Ctr. (United States)
15 April 2025 • 1:50 PM - 2:10 PM EDT
Complex terrains and weather environments degrade sensing capabilities across all modalities. Currently, prediction of that degradation is a manual, time-consuming process. The time and expertise required for this manual evaluation of sensor performance is formidable. These challenges hinder the widespread use of sensor performance estimates in assessing both current and future operations. The GRIPS system aims to provide expert automation to analyze and gather the key parameters for sensor performance problems based on sensor meta-data including active sensor network and physics-based, geo-referenced sensor performance in relation to terrain, landcover, and weather. GRIPS can drastically reduce the burden on expert analysts. One critical task in GRIPS is recommending sensors for specific missions based on how feasible it is for the sensors to accomplish the mission. In this work, we introduce GRIPS and define the feasibility mathematically across several categories. We also investigate feasibility in various scenarios within the GRIPS recommendation system.
13479-15
Author(s): Erik P. Blasch, Air Force Research Lab. (United States); Yu Chen, Binghamton Univ. (United States); Jia Li, Oakland Univ. (United States); Arlsan Munir, Florida Atlantic Univ. (United States); Erika Ardiles-Cruz, Robert Ewing, Air Force Research Lab. (United States)
15 April 2025 • 2:10 PM - 2:30 PM EDT
A phenomenal amount of data is collected and processed at the edge, which presents engineering challenges in data collection, processing, and control. Current methods have a strong motivation for processing data at the edge, which needs to be balanced against the performance of cloud or fog designs. Much like the superior colliculus represents processing at the edge of the human visual system, there is a need to empower edge devices to maximize responsiveness, with the retina as an analogy for the far edge. This paper focuses on the power of upstream data fusion at the decentralized edge for enhanced situation assessment (SAS) and situational awareness (SAW). Examples are developed across different domains that require far-edge processing, from the metaverse to healthcare. For the purposes of this paper, the far edge comprises edge devices that are outside the centralized coordination control of the systems that manage and orchestrate collections. An analogy is provided by leveraging a previous example of disaster response using seismic, acoustic, electro-optical, and radio-frequency unattended ground systems deployed beyond a control system.
13479-16
Author(s): Erik P. Blasch, Air Force Research Lab. (United States); Yu Chen, Binghamton Univ. (United States); Fred Daum, Raytheon (United States); Genshe Chen, Intelligent Fusion Technology, Inc. (United States); Andreas Savakis, Rochester Institute of Technology (United States)
15 April 2025 • 2:30 PM - 2:50 PM EDT
Traditionally, sensor fusion methods included close-in multimodal measurements associated with ground robotics but evolved to incorporate many types of data from different sources. As computational processing power exploded, the terms sensor, data, and information fusion enabled "big data" methods such as large-area imaging from electro-optical sensors and hundreds of sensors monitoring a wide geographical area. To process such large quantities of data, powerful computers and systems resulted in cloud computing approaches as centralized sensor fusion. However, there was still a need to develop methods for decentralized and distributed sensor fusion from sensors at the edge. To balance these approaches of centralized-cloud and decentralized-edge techniques, notions of fog-enabled distributed sensor fusion methods were developed to orchestrate the data flow, processing, and analysis. The panel focuses on the trends that seek to utilize edge-computing platforms amongst a large corpus of sensor, contextual, and social sources. For example, recent concepts under consideration include digital twin technology, large language models, and deep learning.
13479-17
Author(s): Erik P. Blasch, Alex Aved, Air Force Research Lab. (United States)
15 April 2025 • 2:50 PM - 3:10 PM EDT
The VAULT principle (visible, accessible, understandable, linked, and trusted) was designed to support the advent and deployment of artificial intelligence and machine learning (AI/ML) systems. Among the issues addressed were the AI principles of interest; however, many of these have been discussed without much agreement on their implementation. While this paper does not seek to provide answers, it offers an overview of the VAULT constructs, the basis for their development, current progress, and the case for continuing to develop them. Examples for fusion include AI/ML developments that seek to deliver trusted AI at scale (TASA). The paper discusses the needs for VAULT at the different levels of abstraction of the cloud, fog, edge, and far edge as they concern multi-level information fusion, security, and safety.
13479-18
Author(s): Ivan Kadar, Interlink Systems Sciences, Inc. (United States)
15 April 2025 • 3:10 PM - 3:30 PM EDT
The concept of human perceptual reasoning (PR) is well known. In several papers over the years, the author has introduced PR as an adaptive model of the information fusion (IF) processes, viz., a PR framework of the Joint Directors of Laboratories (JDL) model of that time. In order to perceive, one needs to: (1) sense and deliver a stimulus to the "system"; and (2) when "properly stimulated," deliver feedback ("reinforcement learning") to the system's input in order to modify the system's output and objectives. The generalization of a perceptual system and its adaptive feedback control process is termed the PRM. Viewed as a "meta-level information management system", the PRM consists of a closed-loop feedback planning and resource management (RM) system whose interacting elements are "gather/assess (IF)", "anticipate", and "preplan/act/predict". Details of the interactions and feedback among these elements and databases will be depicted. The relationship of the PRM to the JDL model, and to its updated form, the Data Fusion Information Group (DFIG) model, will be reviewed. Under the PRM framework, the system can interactively control fusion performance by mapping information fusion levels.
Break
Coffee Break 3:30 PM - 4:00 PM
Session 5: Information Fusion Methodologies and Applications III
15 April 2025 • 4:00 PM - 5:20 PM EDT
Session Chair: Alex L. Chan, DEVCOM Army Research Lab. (United States)
13479-19
Author(s): Youssef Bazzi, Univ. of Detroit Mercy (United States)
15 April 2025 • 4:00 PM - 4:20 PM EDT
A sensor fusion method is used to provide an output that makes the decision process faster. It is common to assume independence between sensor outputs. This assumption has been shown to affect the certainty and reliability of the decision, because existing dependency is ignored. The proposed formula introduces a dependency parameter D, a measure of the overlapping area between the two probability distribution functions of the sensors' readings over a period of time T.
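The overlapping-area measure described above has a direct numerical form once both sensor distributions are sampled on a common grid: the integral of the pointwise minimum of the two densities. How the paper then folds D into the fusion rule is not given here, so this sketch covers only the overlap computation itself:

```python
import numpy as np

def overlap_D(p, q, dx):
    """Dependency parameter D as the overlapping area of two sensor-reading
    probability density functions sampled on a common grid with spacing dx:
    D = integral of min(p, q), so D = 1 for identical densities and
    D = 0 for non-overlapping ones."""
    return np.sum(np.minimum(p, q)) * dx
```

D near 1 flags strongly dependent sensors whose agreement carries less independent evidence than the naive independence assumption credits.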
13479-20
Author(s): Xinjia Chen, Northwestern State Univ. (United States)
15 April 2025 • 4:20 PM - 4:40 PM EDT
Information systems must intelligently process vague, incomplete, and uncertain information. Although fuzzy logic is widely used for decision-making under uncertainty, it has significant limitations. Its truth assignment is arbitrary and subjective, and it fails to account for the independence and correlation of statements. Moreover, fuzzy logic violates essential logical principles, such as the law of noncontradiction and the law of the excluded middle, making it inconsistent with human intuition. In this paper, we introduce Statemental Credibility Logic (SCL), a new reasoning system that mimics human reasoning for information processing. SCL extends classical logic to handle statements exhibiting various degrees of vagueness and randomness. We develop statemental algebras, truth measures, and a deduction principle that rigorously manages uncertainty, ensuring soundness and completeness. SCL allows systems to make more nuanced decisions, enhancing their robustness, flexibility, and capability in real-world applications, including natural language processing, control systems, and AI integration.
13479-21
Author(s): Connar Hite, Sean Sauds, Univ. of Dayton (United States); Ashley Diehl, Air Force Research Lab. (United States)
15 April 2025 • 4:40 PM - 5:00 PM EDT
Performing object classification is challenging under diverse sets of operating conditions. In electro-optical (EO) data, the position of the sun and sensor angle impact the appearance of objects. The pose of the object can impact performance in synthetic aperture radar (SAR) data. By combining multiple sensors, we can reduce the performance drop when operating conditions in the training and testing sets diverge significantly. Traditional multi-sensor fusion methods have considered fusion as a flat problem. Flat fusion does not consider relationships between classes. These relationships can be used to extract additional information and allow us to provide partial decisions (e.g., pick-up truck instead of Ford F-150). In this paper, we extend several decision-level and feature-level multi-sensor fusion methods to work with hierarchical methods. We evaluate the fusion methods on two multi-sensor datasets: 1) a visible EO (EO-vis) plus SAR dataset and 2) an EO-vis plus near-infrared dataset.
13479-22
Author(s): Codie Lewis, U.S. Naval Research Lab. (United States)
15 April 2025 • 5:00 PM - 5:20 PM EDT
In practice, Chernoff fusion requires numerical integration and some assumptions to make it computationally feasible. To assess the accuracy of a numerical integration scheme for Chernoff fusion, it is possible to choose weighting optimizations for a related algorithm, covariance intersection, so that the fusion results are guaranteed to be identical when using Gaussian inputs. This provides a point of comparison for the effectiveness of an integration scheme for Chernoff fusion. The purpose of this paper will therefore be to provide computational results regarding how three types of Fibonacci point sets are different and how well they serve as integration points for Chernoff fusion on the unit square specifically. An improvement in three metrics was observed for the fused distribution when using the Fibonacci points over other commonly used point sets.
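The covariance-intersection baseline the paper leans on is compact enough to sketch. For Gaussian inputs, covariance intersection with exponent w coincides with Chernoff fusion, which is the equivalence the paper exploits as a ground truth for its numerical-integration comparison; the choice of w (here a caller-supplied constant) would come from the weighting optimizations the paper discusses:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w):
    """Covariance intersection of two Gaussian estimates (x1, P1), (x2, P2)
    with weight w in [0, 1]. For Gaussian inputs this matches Chernoff
    fusion with exponent w."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * I1 + (1 - w) * I2)
    x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
    return x, P
```

Because this closed form is exact for Gaussians, any Fibonacci-point quadrature for general Chernoff fusion can be scored against it, which is the comparison methodology of the paper.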
Poster Session
15 April 2025 • 5:30 PM - 7:00 PM EDT
Conference attendees are invited to attend the symposium-wide poster session on Tuesday evening. Come view the posters, enjoy light refreshments, ask questions, and network with colleagues in your field. Poster authors will be present to answer questions concerning their papers. Attendees are required to wear their conference registration badges to the poster session.

Poster Setup: Tuesday 12:00 PM - 5:30 PM
Poster authors, view poster presentation guidelines and set-up instructions at http://spie.org/DCSPosterGuidelines.
13479-44
Author(s): Sonjae Wallace, Lehman College, The City Univ. of New York (United States); Lou Massa, Hunter College, The City Univ. of New York (United States); Scott Ramsey, Samuel G. Lambrakos, U.S. Naval Research Lab. (United States)
15 April 2025 • 5:30 PM - 7:00 PM EDT
Vibrational absorption spectra are presented for isolated molecules of a set of explosives, calculated using density functional theory (DFT). This study further demonstrates the use of DFT for characterizing IR-spectral features of substances whose detection is of significant interest. DFT-calculated absorption spectra of isolated molecules represent quantitative estimates that can be correlated with additional information obtained from laboratory measurements. The DFT software GAUSSIAN was used for calculating the infrared (IR) spectra presented here. DFT-calculated spectra can be used to construct templates for spectral-feature comparison, and thus for detection of spectral-signature features associated with target materials.
13479-45
Author(s): Paul Schrader, Air Force Research Lab. - Rome (United States); Honey Love, SUNY Polytechnic Institute (United States); Thomas Breimer, Union College (United States)
15 April 2025 • 5:30 PM - 7:00 PM EDT
A contested environment with multiple targets monitored for custody, involving multimodal data collection devices, produces a deluge of heterogeneous data to interpret for well-informed (near) real-time decision-making and situational awareness. These challenges motivate rigorous scientific creativity, particularly in leveraging the mathematical properties of the network's 'digital oil', its data. Topology, both computational and algebraic, provides high-fidelity access to these properties through various topological data analysis (TDA) based methodologies, but requires significant technical expertise to leverage. At the 2023 and 2024 SPIE DCS, Schrader introduced TDAML, a successful algorithm interfacing TDA with existing AI/ML architectures for modality-agnostic automatic target recognition tasking of multiple aerial and ground targets. Concurrently, he and his co-authors designed an intuitive user interface prototype, TDA2TRU, for broadening TDAML's accessibility. This presentation briefly reviews the workflow of TDAML, introduces TDA2TRU, and demos its most current prototype. Distribution A. Approved for public release: distribution unlimited. AFRL-2024-4881.
13479-46
Author(s): William G. Warren, Jared Allanigue, Hannah C. Blackmore, Samuel G. Lambrakos, U.S. Naval Research Lab. (United States)
15 April 2025 • 5:30 PM - 7:00 PM EDT
This study presents an initial examination of potential data preprocessing techniques applicable to RADAR pulse data prior to its deinterleaving. Deinterleaving in this study is effected using a previously developed algorithm. The preprocessing techniques examined are sub-setting and amplitude filtering, which aim to improve deinterleaver performance as quantified by runtime and the percentage of RADAR pulses kept in scans. The results are examined quantitatively, in terms of performance metrics, and qualitatively, in terms of possible explanations for the structure of features occurring within deinterleaved pulse sequences.
13479-47
Author(s): Spencer Pollard, Sam B. Siewert, California State Univ., Chico (United States)
15 April 2025 • 5:30 PM - 7:00 PM EDT
Kalman filters are a well-vetted method for performing state estimation for sensor fusion. However, with the emergence of deep learning techniques that can model higher-dimensional and more complex systems, there is an opportunity to explore how these approaches compare. In this study, we evaluate both methods for state estimation on a rigid-body double pendulum system. Using Lagrangian mechanics, we derive the equations of motion and apply Kalman filtering to estimate the angle of each member relative to its pivot. We then build and train a convolutional neural network (CNN) on image data of the real pendulum system to perform the same task. Overall, the goal is to compare results to show the advantage of deep learning models compared to classic methods for state estimation and prediction for complex, nonlinear systems.
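The classical baseline in the abstract, a Kalman filter tracking an angle from noisy measurements, can be sketched as follows (a generic constant-velocity model with assumed noise levels, not the paper's Lagrangian-derived pendulum dynamics):

```python
import numpy as np

# Minimal linear Kalman filter for one angle and its rate from noisy angle
# measurements. All matrices below are illustrative assumptions.
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [angle, rate]
H = np.array([[1.0, 0.0]])              # we observe the angle only
Q = 1e-4 * np.eye(2)                    # process noise (assumed)
R = np.array([[1e-2]])                  # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle for measurement z (shape (1,))."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```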
13479-48
Author(s): Yewon Jang, Sungho Kim, Yeungnam Univ. (Korea, Republic of)
15 April 2025 • 5:30 PM - 7:00 PM EDT
Traditional emissivity measurements are conducted using reflection-based spectrometers, which require flat, small samples. Infrared cameras provide an alternative for measuring emissivity across various objects when spectrometers are not accessible. However, in the long-wave infrared (LWIR) spectrum, the sensors detect not only the emitted energy from a target but also background environmental reflections from the target surface and atmospheric interference entering directly into the sensor. In indoor environments, a significant limitation arises when the target's temperature is equal to ambient temperature, leading to reduced accuracy in emissivity estimation. To address this, specific setups and compensation methods are necessary, such as heating the target above the background temperature, using high-infrared-reflectivity gold mirrors to capture reflections, or using atmospheric compensation models based on measured and simulated data to mitigate interference. This paper experiments with various setups and compensation methods to enhance emissivity estimation accuracy with LWIR cameras based on temperature and emissivity separation.
13479-49
Author(s): Do SaeByeol, Sungho Kim, Yeungnam Univ. (Korea, Republic of)
15 April 2025 • 5:30 PM - 7:00 PM EDT
This study aims to develop an image identification model robust to various forms of noise in test images from optical databases using domain generalization techniques. Existing models often experience performance degradation due to noise that was not encountered during training. To address this issue, we explore methods to minimize the impact of noise in test images while maximizing the model's generalization ability. By applying domain generalization techniques, the goal of this research is to develop a model that can consistently perform identification tasks even in noisy environments. Experimental results demonstrate that the proposed approach significantly improves identification accuracy on noisy test datasets, highlighting the potential for more robust image identification performance in real-world scenarios.
13479-50
Author(s): Macarena Varela, Wulf-Dieter Wirth, Ravali R. N. Nalla, Fraunhofer-Institut für Kommunikation, Informationsverarbeitung und Ergonomie FKIE (Germany)
15 April 2025 • 5:30 PM - 7:00 PM EDT
Drone detection and localization in outdoor environments require precise methods to mitigate disruptive signals. This study experimentally explores the concept of pattern notches in the acoustic field, where directional zeros are strategically placed to suppress unwanted sound signals, thereby enhancing the performance of beamforming techniques and improving drone detection capabilities. The approach is applied to experimental field data collected in free-field conditions, focusing on detecting and localizing drone rotor noise among various types of interference. Experimental results showcase the effectiveness of pattern notches in mitigating disruptive signals and improving the accuracy of drone detection, direction estimation, and localization within outdoor environments.
13479-51
Author(s): Luke McEvoy, Daniel Tafone, Yong Meng Sua, Yuping Huang, Stevens Institute of Technology (United States)
15 April 2025 • 5:30 PM - 7:00 PM EDT
Single-photon imaging systems enable picosecond temporal resolution, but their reliance on raster scanning limits performance in capturing fast-moving, multi-dimensional scenes. To address this, compressive sampling has been used to downsample scans by up to 98%, reconstructing images with only 2% of the pixels, in just 2% of the time. However, these methods often rely on random sampling patterns lacking decision-making based on the scene. This paper introduces a Physics-Informed AI approach to enhance downsampled scanning by intelligently selecting scanning patterns based on acquired photon counts. Using graph traversal algorithms - Depth First Search, Breadth First Search, Dijkstra's algorithm, and A* - augmented with physics insights like shot noise and dark counts, the AI effectively identifies edges, corners, and surfaces, optimizing information acquisition. The approach improves signal collection, image reconstruction, and overall performance across single-photon systems in various applications.
13479-52
Author(s): Igor Shraifel, Igor Maltcev, Irina Mikhailova, Don State Technical Univ. (Russian Federation); Viacheslav V. Voronin, Evgenii Semenishchev, Moscow State Univ. of Technology "STANKIN" (Russian Federation)
15 April 2025 • 5:30 PM - 7:00 PM EDT
The article proposes an algorithm and mathematical rationale for choosing the degree of polynomial approximation of a noisy signal in order to isolate its useful component. The proposed algorithm implements a two-level analysis of a digital signal. At the first stage, the signal is divided into components. To determine the points of sharp changes in the signal shape, a multicriteria filtering method is applied. Its use allows us to isolate a low-frequency component, determine function outliers, and detect areas of sharp inflection of the function. Next, a polynomial approximation of the input sequence is applied, limited to the sections specified at the first stage of the algorithm. The paper presents a rationale for choosing the approximation type depending on the input component data and the detected points of sharp change. A single line of an image obtained in the IR range is used to demonstrate the operation of the algorithm. The algorithm is used to detect the boundaries of objects in an image.
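The degree-selection step can be illustrated with a simple residual test against the noise floor (a hedged sketch: segment boundaries are assumed already found, a single segment with known noise level is fitted, and the 1.5x threshold is an illustrative choice, not the paper's criterion):

```python
import numpy as np

def choose_poly_degree(t, y, noise_sigma, max_degree=8):
    """Return the lowest polynomial degree whose RMS fit residual drops to
    the noise floor, plus the fitted coefficients."""
    for d in range(1, max_degree + 1):
        coeffs = np.polyfit(t, y, d)
        rms = np.sqrt(np.mean((np.polyval(coeffs, t) - y) ** 2))
        if rms < 1.5 * noise_sigma:      # residual is noise-limited: stop
            return d, coeffs
    return max_degree, coeffs
```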
13479-53
Author(s): Mohiuddin Ahmed, HRL Labs., LLC (United States)
15 April 2025 • 5:30 PM - 7:00 PM EDT
Despite huge advances in modern high frequency RF wireless communications technologies, there are several domains in which physics limitations render such technologies unusable (primarily due to the absorption of high frequency EM waveforms), for example underwater, underground, and terrestrial (non-satellite) long-range communications. For such scenarios, low to medium frequency (LF-MF) waveforms are still the only recourse. Leveraging complementary progress in active antenna technologies for low frequency waveforms, this paper discusses a class of optimized waveforms, specifically the prolate spheroidal wave functions (PSWF), that are well suited for LF-MF communication systems.
13479-54
Author(s): Ian Tomeo, Andreas Savakis, Rochester Institute of Technology (United States); Panagiotis Markopoulos, The Univ. of Texas at San Antonio (United States)
15 April 2025 • 5:30 PM - 7:00 PM EDT
Factor analysis is often used when classifying or compressing datasets. The goal is to project onto a subspace wherein relationships between the underlying variables in a dataset can more easily be characterized. Inter-battery factor analysis is a technique in which we jointly find the factors of two datasets which may share factors, by maximizing the correlation of the projected datasets. This serves as a simple method of data fusion in which one may study relationships between the datasets. We propose a method of robust inter-battery factor analysis for use on datasets containing outliers. We show how a binary optimization problem for robust inter-battery factor analysis is derived from L1-norm based factor analysis. The robust method is compared to nominal inter-battery factor analysis by assessing computational complexity. Performance of the algorithm is evaluated on the task of fusing datasets with corrupt or missing data.
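Nominal (L2) inter-battery factor analysis, the baseline the robust method is compared against, reduces to an SVD of the cross-covariance between the two centered datasets (Tucker's inter-battery method). A minimal sketch:

```python
import numpy as np

def inter_battery_factors(X, Y, k):
    """Rank-k inter-battery factors for two datasets X (n x dx), Y (n x dy).

    The pairs of loading vectors maximizing correlation between projections
    of the centered datasets are the leading singular vector pairs of the
    cross-covariance matrix.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    C = Xc.T @ Yc / (len(X) - 1)                 # cross-covariance, dx x dy
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U[:, :k], Vt[:k].T, s[:k]             # X-loadings, Y-loadings
```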
13479-55
Author(s): Shida Ye, Yaakov Bar-Shalom, Peter K. Willett, Ahmed Zaki, Univ. of Connecticut (United States)
15 April 2025 • 5:30 PM - 7:00 PM EDT
This paper applies Maximum Likelihood Estimation to the identification of a stochastic model for inertial sensor errors, incorporating a comprehensive drift model consisting of three terms: a Wiener process, a first-order Gauss-Markov process, and additive white noise. This model aligns with the one used in the standard Allan variance approach, which commonly considers these three types of drift in inertial sensor error modeling. Within the steady-state Kalman filter framework, the drift terms are represented in a state-space model where the likelihood function is derived, enabling an explicit expression of the log-likelihood function as a quadratic function of the measurements. This formulation facilitates direct evaluation of the Cramér-Rao Lower Bound, thereby allowing for rigorous testing and confirmation of the statistical efficiency of the ML estimators. Simulations demonstrate the performance of the estimators, and applications to real gyroscope and accelerometer data demonstrate drift modeling accuracy in comparison with the Allan variance method.
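For comparison, the Allan variance baseline mentioned in the abstract can be computed in a few lines (non-overlapping form shown here; the paper's ML method is a separate construction):

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of rate samples y at cluster size m:
    half the mean squared difference of successive cluster averages."""
    n = len(y) // m
    means = y[: n * m].reshape(n, m).mean(axis=1)   # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)
```

For pure white noise of variance sigma^2, this yields approximately sigma^2 / m, the familiar 1/m slope on an Allan-variance plot.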
13479-56
Author(s): John Wertz, Air Force Research Lab. (United States); Laura Homa, Univ. of Dayton Research Institute (United States)
15 April 2025 • 5:30 PM - 7:00 PM EDT
Electrical impedance tomography (EIT) is an electromagnetic nondestructive evaluation method used to characterize the conductivity of a domain from voltage measurements on the boundary. This method can map damage to structural materials that demonstrate stimulant-responsive conductivity, such as unidirectional ply or woven polymer-matrix composites (PMCs). In this work, we demonstrate the effect of damage size, depth, and location on EIT distinguishability of volumetric damage in a woven PMC panel. A simulated design-of-experiment was developed to determine sensitivity to these three variables in the presence of estimated noise. Other design variables, like the thickness of the composite, the geometry, electrode count, and electrode pattern were fixed. Then, a metric was developed to quantify the distinguishability of each damage case, and conclusions were drawn from this data. Experimental testing on woven PMC panels with a similar thickness, geometry, and electrode distribution was conducted to examine standout results. Ultimately, this information will be used to develop a process for determining the distinguishable characteristics of damage in realistic structures.
Session 6: Signal and Image Processing, and Information Fusion Applications I
16 April 2025 • 8:10 AM - 9:50 AM EDT
Session Chair: Lynne L. Grewe, California State Univ., East Bay (United States)
13479-23
Author(s): Lynne L. Grewe, Julia U. Reddy, Varshashree Dasuratha, Jesus Rodriguez, Nicholas Ferreira, California State Univ., East Bay (United States)
16 April 2025 • 8:10 AM - 8:30 AM EDT
The development of FashionBody and SmartFashion, components of a fashion recommendation system, is outlined. Both leverage computer vision and machine learning to analyze shapes and identify body parts like arms, shoulders, chest, waist, hips, thighs, and legs. FashionBody's detection model achieved an Average Precision (AP) of 0.917 at a 0.5 Intersection over Union (IoU) threshold, but only 0.52 averaged over IoU thresholds of 0.5-0.95, with an average recall of 0.637. The best regression model had a mean absolute error (MAE) of 1.291. SmartFashion's detection model achieved, at a 0.5 IoU threshold, a mean AP of 0.675 and a mean recall of 0.525, highlighting the challenges posed by diverse garment variations. The regression analysis of 11 garment body parts yielded a MAE of 0.9948, indicating satisfactory prediction accuracy. As discussed later, a "perceived accuracy" is defined for the regression and resulted in a range of values from 83% to 92% for the various clothing parts. The discussion addresses the system's performance, training trends, challenges, and future directions for improvement.
13479-24
Author(s): Lynne L. Grewe, Wendy Zhou, Nicholas Ferreira, Jesus Rodriguez, California State Univ., East Bay (United States)
16 April 2025 • 8:30 AM - 8:50 AM EDT
FitnessBody is a proof-of-concept system that uses computer vision and machine learning to analyze users’ body fitness in situ, aiming to provide feedback and recommend focus areas. Both multi-input multi-output and single-output regression models were developed with careful dataset curation. This system evaluates body metrics such as breadth, symmetry, length, and tone, using detection and regression techniques. It accurately detects individual body parts, including arms, shoulders, chest, waist, hips, thighs, and legs. Once identified, the data is processed through a regression model to analyze body proportions in relation to fashion. Object detection models achieved accuracy rates of 80.77% for arms, 83.33% for chest, 88.46% for hips, 50% for legs, 62.82% for shoulders, 37.18% for waist, and 64.1% for the overall body. Multi-output regression models yielded mean absolute errors (MAE) of 1.186 for waist and 1.328 for shoulders, while single-output models had MAEs of 1.332 for hips and 1.585 for legs.
13479-25
Author(s): Pavlo A. Molchanov, IPD Scientific, LLC (United States)
16 April 2025 • 8:50 AM - 9:10 AM EDT
Human vision-based target recognition is challenging due to the vast volume of redundant imaging data, complex image recognition algorithms, and the bottleneck of transferring imaging data. Nature-inspired artificial insect vision, which focuses on fusing information signatures critical for simple recognition, offers a solution to these challenges, particularly in addressing big data and enabling fast, simultaneous multi-target recognition. Multi-sensor monopulse systems can provide artificial insect vision, offering fast, multi-channel tracking and recognition of multiple targets by using simple target signatures and recognition algorithms. These fusion signatures may consist of target spectrum data, including Doppler components, along with additional information from sources such as optical filters or temperature sensors. The proposed artificial intelligence fabric represents a significant advancement in this insect-inspired vision technology. Digital control of artificial vision, inspired by human-like processing, simplifies multi-sensor, multi-channel systems, making them more cost-effective and straightforward than traditional expensive, high-accuracy optical systems.
13479-26
Author(s): Seongryeong Lee, Sungho Kim, Yeungnam Univ. (Korea, Republic of)
16 April 2025 • 9:10 AM - 9:30 AM EDT
13479-27
Author(s): Jaeho Kim, Sungho Kim, Yeungnam Univ. (Korea, Republic of)
16 April 2025 • 9:30 AM - 9:50 AM EDT
Respiratory rate (RR) monitoring is an essential diagnostic indicator for identifying several diseases. Since the outbreak of COVID-19 highlighted the importance of remote medical treatment, research on non-contact respiratory signal monitoring using various sensors has been actively pursued. While these studies have demonstrated the potential for expansion into the future healthcare industry, they have been limited by the requirement that the distance between the sensor and the subject be close (~1.5 meters) to detect the respiratory rate accurately. In this paper, we propose a Remote Respiratory Rate (RRR) monitoring method that achieves approximately 90% accuracy at a distance of ~5 meters using a Long-Wave Infrared (LWIR) camera. Furthermore, we introduce a real-time, automated method incorporating nostril detection through deep learning, noise reduction, and RR calculation. Through this experiment, we evaluate whether our method is appropriate for remote medical treatment.
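The RR-calculation stage of such a pipeline can be sketched as a spectral-peak estimate over a plausible breathing band (illustrative only: the band limits and sampling rate are assumptions, and the nostril-detection and noise-reduction stages are omitted):

```python
import numpy as np

def respiratory_rate_bpm(signal, fs):
    """Estimate respiratory rate (breaths/min) as the dominant spectral peak
    of a nostril temperature trace, restricted to an assumed breathing band
    of 0.1-0.7 Hz (6-42 breaths/min)."""
    sig = signal - np.mean(signal)                   # remove DC offset
    freqs = np.fft.rfftfreq(len(sig), d=1 / fs)
    spec = np.abs(np.fft.rfft(sig))
    band = (freqs >= 0.1) & (freqs <= 0.7)
    return 60.0 * freqs[band][np.argmax(spec[band])]
```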
Break
Coffee Break 9:50 AM - 10:20 AM
Session 7: Signal and Image Processing, and Information Fusion Applications II
16 April 2025 • 10:20 AM - 12:00 PM EDT
Session Chair: Lynne L. Grewe, California State Univ., East Bay (United States)
13479-28
Author(s): Molly M. Scheffe, School of Hard Knocks (United States)
16 April 2025 • 10:20 AM - 10:40 AM EDT
Although the Rice distribution is widely used in signal processing and communications, it might be surprising that it can also greatly benefit medical diagnosis, or that old mathematics can be used here to deduce interesting new properties. Classical Gaussian [normal] distributions don't fit the population histograms needed for medical diagnosis very well, but close relatives derived from Gaussians do. The Hankel transform provides geometric insight here, resulting in some simple closed-form expressions. A simple moment-matching scheme will also be introduced to derive reasonable parameter values. The closed-form expressions make it easy to calculate probabilities of missed detections and false alarms, which are just as crucial for correct medical diagnosis as they are for classical target detection, estimation, and classification.
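As a concrete example of moment matching for the Rice distribution, the even moments have closed forms, E[R^2] = nu^2 + 2 sigma^2 and E[R^4] = nu^4 + 8 nu^2 sigma^2 + 8 sigma^4, which invert exactly. This is one such scheme, not necessarily the paper's:

```python
import numpy as np

def rice_moment_match(r):
    """Method-of-moments Rice parameter estimates from even sample moments.

    From m2 = E[R^2] and m4 = E[R^4]: nu^4 = 2*m2^2 - m4 and
    sigma^2 = (m2 - nu^2) / 2.
    """
    m2 = np.mean(r ** 2)
    m4 = np.mean(r ** 4)
    nu = max(2 * m2 ** 2 - m4, 0.0) ** 0.25   # clamp guards sampling noise
    sigma2 = max(m2 - nu ** 2, 0.0) / 2
    return nu, np.sqrt(sigma2)
```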
13479-29
Author(s): Oladipupo Adeoluwa, Sevgi Gurbuz, Anirban Swakshar, Cooper Coldwell, Karsten Schnier, Margaret Kim, Patrick Kung, The Univ. of Alabama (United States)
16 April 2025 • 10:40 AM - 11:00 AM EDT
Underwater imaging faces significant challenges due to light scattering, absorption, and low contrast, which hinder object detection. Traditional single-polarization systems are often unable to reveal crucial object features in turbid waters. This study introduces a novel framework combining multi-polarization imaging with single-photon detection to enhance underwater object detection. By capturing 16 polarization-resolved images per scan, we leverage the diversity across polarization states to reveal features typically obscured in conventional systems. Using advanced image fusion and deep learning models, our approach improves detection accuracy, contrast, and signal-to-noise ratio. Preliminary results demonstrate enhanced detection performance, revealing critical features that are otherwise imperceptible. This method holds promise for applications in marine exploration, underwater robotics, and environmental monitoring.
A Command and Control (C2) system architecture is implemented in the General High-Fidelity Omni-Spectrum Toolbox (GHOST), a simulation environment capable of hardware-in-the-loop (HWIL) operation. To address the false alarm problem common to all C2 systems, a track manager interface is developed that utilizes the Multiple Hypothesis Tracker (MHT), a Bayesian association and fusion engine, and a modular prioritization scheduling algorithm. The algorithms are developed in software-in-the-loop (SWIL) from synthetic data and tested in HWIL on a real-time system.
13479-31
Author(s): Sophia P. Bragdon, U.S. Army Engineer Research and Development Ctr. (United States)
16 April 2025 • 11:20 AM - 11:40 AM EDT
This work explores using thermal imagery coupled with a feature set obtained using wavelet decomposition, edge detection, and thermal contrast to create a multi-channel image that is used for the detection task. We use longwave infrared imagery of surface and buried objects, and we study the features that are learned by machine-learning-based detection algorithms such as Faster R-CNN and YOLO. Algorithms are trained to detect the objects within the thermal imagery using only the thermal input and the multi-channel inputs with engineered features, and this study evaluates how the additional input features impact the features learned by the detection algorithms. The goal of this research is to increase the interpretability and reliability of the detection algorithms by understanding which features are being used by the algorithms that lead to true detections. By gaining an understanding of the important features, a reliability measure can be developed by pre-processing the images and determining whether the salient features are visible.
13479-32
Author(s): Alex H. Kachergis, Univ. of Connecticut (United States)
16 April 2025 • 11:40 AM - 12:00 PM EDT
This work considers a transmitter (TX) buoy and n (inexpensive) omnidirectional receiver (RX) buoys deployed on the ocean surface in (approximately) a polygon pattern at known locations. A target (TG) is located underwater somewhere in the neighborhood. Each RX measures (with additive noise) the TDOA between the direct signal from the TX and the reflected signal TX-TG-RX. From these n noisy TDOAs we propose a Maximum Likelihood (ML) algorithm that can estimate the 3D location of the target. Also, the pmf (probability mass function) of its 3D location estimate is obtained. This is done using the “unscented transform” for the Gaussian distribution of the measurement noises. This is a novel approach in parameter estimation. We evaluate the Cramér-Rao Lower Bound (CRLB) and show, via Monte Carlo simulations, the statistical efficiency of the estimator using hypothesis testing.
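Under Gaussian TDOA noise, the ML estimate minimizes the sum of squared TDOA residuals. A coarse grid-search sketch of that idea (assumed sound speed and illustrative geometry; the paper's ML algorithm and unscented-transform pmf construction are more sophisticated):

```python
import numpy as np

C = 1500.0  # assumed speed of sound in water, m/s

def tdoa_model(tg, tx, rxs):
    """Predicted TDOA at each RX between the direct TX->RX path and the
    reflected TX->TG->RX path."""
    d_direct = np.linalg.norm(rxs - tx, axis=1)
    d_reflect = np.linalg.norm(tg - tx) + np.linalg.norm(rxs - tg, axis=1)
    return (d_reflect - d_direct) / C

def ml_locate(taus, tx, rxs, grid):
    """Gaussian-noise ML = least squares over candidate target positions."""
    costs = [np.sum((tdoa_model(g, tx, rxs) - taus) ** 2) for g in grid]
    return grid[int(np.argmin(costs))]
```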
Break
Lunch Break 12:00 PM - 1:30 PM
Session 8: Signal and Image Processing, and Information Fusion Applications III
16 April 2025 • 1:30 PM - 2:50 PM EDT
Session Chair: Lynne L. Grewe, California State Univ., East Bay (United States)
13479-33
Author(s): Austin C. Bergstrom, David W. Messinger, Rochester Institute of Technology (United States)
16 April 2025 • 1:30 PM - 1:50 PM EDT
We develop a relatively simple parametric image chain that captures the first order relationships between scene radiance, focal length, aperture diameter, and sensor parameters such as well depth, read noise, and dark current. We use this image chain model to parametrically re-image / distort the Common Objects in Context (COCO) dataset, with the resultant image quality dependent on the combination of sensor parameters selected. We use this parametric image chain to examine the impacts of resolution and blur on computer vision performance under various illumination conditions when these parameters are treated as functions of focal length and aperture diameter respectively, driving coupling to image SNR. This work will help to inform imaging system design for applications in which the primary "user" of the images may be a computer vision algorithm.
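The first-order relationships described can be captured in a one-line SNR model (a common camera-equation form, with signal, dark-current electrons, and read noise as assumed inputs; the paper's image chain includes further terms such as optics and well depth):

```python
import numpy as np

def pixel_snr(signal_e, dark_e, read_e):
    """First-order per-pixel SNR: signal electrons over the root-sum-square
    of shot noise (Poisson, variance = signal_e), dark-current shot noise,
    and read noise."""
    return signal_e / np.sqrt(signal_e + dark_e + read_e ** 2)
```

In the shot-noise limit (no dark current or read noise) this reduces to sqrt(signal_e), the familiar square-root scaling with collected light.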
13479-34
Author(s): Howard Dai, Yale Univ. (United States); Jack Chuang, Jian Wang, Samuel Berweger, David Griffith, National Institute of Standards and Technology (United States)
16 April 2025 • 1:50 PM - 2:10 PM EDT
Multipath component (MPC) extraction is critical for channel modeling and joint communications and sensing (JCAS). The super-resolution algorithm known as CLEAN-SAGE is widely used for MPC extraction. Because CLEAN-SAGE is based on maximum likelihood estimation, it can experience false detection events when a mismatch occurs between the theoretical model and the received signal. Moreover, the complexity of CLEAN-SAGE makes it challenging to support real-time or near-real-time applications. This paper proposes two machine learning (ML)-based solutions to address these issues: a classification model for false signal detection and a regression model for direct MPC parameter extraction. The classification model uses a convolutional neural network (CNN) to predict true or false detection from the beamformed received signal. The regression model uses multiple CNN-based encoders, decoders, and several fully connected linear layers to directly predict the range and azimuth angle of each MPC from a heatmap.
13479-35
Author(s): Jeffrey Y. Beyon, Michael J. Kavaya, NASA Langley Research Ctr. (United States)
16 April 2025 • 2:10 PM - 2:30 PM EDT
This paper presents the instrument offset optimization technique for the Doppler Aerosol Wind Lidar (DAWN) profiling algorithm at NASA Langley Research Center (LaRC). Due to the unsteady environment where the data are collected, even a small offset will result in nonsensical results in the parameter estimation process. A brief introduction of the algorithm and the overview of the optimization techniques are presented.
13479-36
Author(s): Laura Homa, Tyler Lesthaeghe, Univ. of Dayton Research Institute (United States); Matthew Cherry, John Wertz, Air Force Research Lab. (United States)
16 April 2025 • 2:30 PM - 2:50 PM EDT
Microtexture regions (MTR) are collections of grains with similar crystallographic orientation. When present in aerospace components, they have the potential to significantly impact component life. Thus, an inspection method to detect and characterize MTR is needed. We consider the fusion of two nondestructive evaluation (NDE) methods, scanning acoustic microscopy (SAM) and eddy current testing (ECT), to address this problem. ECT provides accurate orientation information, but its low spatial resolution can prevent it from determining MTR boundaries well. In contrast, SAM has improved spatial resolution as compared to ECT, but it cannot provide unique orientation information. We present a technique that uses SAM data to inform the inversion of ECT data to find the underlying MTR segmentation. Examples with simulated and experimental data will be shown.
Break
Coffee Break 2:50 PM - 3:20 PM
Session 9: Signal and Image Processing, and Information Fusion Applications IV
16 April 2025 • 3:20 PM - 5:40 PM EDT
Session Chair: Alex L. Chan, DEVCOM Army Research Lab. (United States)
13479-37
Author(s): Josh McGuire, Joud N. Satme, Daniel Coble, Austin R. J. Downey, Jason Bakos, Univ. of South Carolina (United States); Arion Pons, Chalmers Univ. of Technology (Sweden)
16 April 2025 • 3:20 PM - 3:40 PM EDT
This paper addresses the growing demand for deploying machine learning models on lightweight microcontrollers and IoT devices, where off-device processing is impractical due to performance and memory constraints. Standard techniques like model quantization improve memory efficiency but fail to reduce computational operations. To tackle this, we propose using Singular Value Decomposition (SVD) to reduce the number of weights in Long Short-Term Memory (LSTM) models, leading to improvements in both memory footprint and performance, with minimal accuracy loss. The technique results in dense matrices, optimizing computational efficiency compared to sparse matrices. The method is validated on a Teensy 4.0 microcontroller with an ARM Cortex-M7 CPU, where a rank-reduced LSTM model for accelerometer signal compensation in structural health monitoring demonstrated superior memory and computational efficiency. Our findings show the rank-reduced model is better suited for on-device processing in edge-computing scenarios, providing a significant advantage for real-world applications involving low-resource devices.
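The core SVD rank-reduction step can be sketched on a single weight matrix (applying it to each LSTM gate matrix follows the same pattern; sizes here are illustrative):

```python
import numpy as np

def rank_reduce(W, k):
    """Replace weight matrix W (m x n) by factors A (m x k) and B (k x n)
    from a truncated SVD, cutting both storage and multiply count from
    m*n to k*(m+n). The factors stay dense, which keeps the arithmetic
    hardware-friendly, unlike pruning to sparse matrices."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :k] * s[:k]      # absorb singular values into the left factor
    B = Vt[:k]
    return A, B

# A matrix with true rank 2 compresses exactly at k = 2.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 2)) @ rng.standard_normal((2, 64))
A, B = rank_reduce(W, 2)
```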
13479-38
Author(s): Anuj Kumar Mishra, Ripul Ghosh, CSIR - Central Scientific Instruments Organisation (India), Academy of Scientific and Innovative Research (AcSIR) (India)
16 April 2025 • 3:40 PM - 4:00 PM EDT
The detection and classification of airborne acoustic sources are crucial for surveillance, wildlife monitoring, and environmental noise assessment applications. This work addresses the complexity of drone detection using acoustic signals under noisy environmental conditions. Traditional methods struggle to distinguish complex sounds in noisy environments due to limited capture of temporal-spectral and harmonic characteristics. We propose a novel dual-route deep feature extraction (DFE) architecture based on the MobileViT framework. This framework effectively captures both temporal-spectral and harmonic information using the short-time Fourier transform (STFT), harmonic percussive source separation (HPSS), and the constant-Q transform (CQT). The architecture integrates MobileNet residual blocks and multi-head self-attention transformer blocks, enabling efficient feature interaction and specialization in learning from dual input streams. Experimental results demonstrate the effectiveness of this architecture in achieving high classification accuracy and low computational cost across various SNR levels.
13479-39
Author(s): Ismail I. Jouny, Lafayette College (United States)
16 April 2025 • 4:00 PM - 4:20 PM EDT
A recently proposed pseudo-Bayesian multi-mode retrieval algorithm is used in this paper to extract target scattering features. Such an algorithm is particularly suited for nonstationary signals and sequentially estimates the time-frequency ridge of each mode. This algorithm (originally proposed for AM/FM mode retrieval) is adapted here to extract target scatterers (or modes) and estimate their degree of dispersion. The extracted features are presented to a nonparametric, distance-based target recognition system. The envisioned radar is a stepped-frequency radar that transmits a sequence of continuous single-tone signals with incremented frequencies, and the complex backscatter is recorded.
13479-40
Author(s): Michael D. Zoltowski, Purdue Univ. (United States)
16 April 2025 • 4:20 PM - 4:40 PM EDT
An innovative MIMO radar waveform diversity design is presented and assessed in terms of its effectiveness in providing high-resolution imaging and in combating jammers. Waveform diversity lowers the background delay-Doppler sidelobes to a level where actual targets stand out with high resolution in the composite delay-Doppler profile. It also provides immunity to various forms of interference, including jammers and spoofers. Our innovation is the design and scheduling of complementary, constant-envelope, phase-coded waveforms over multiple pulse repetition intervals (PRIs), which allows the waveforms to be transmitted simultaneously from different emitters and yet be perfectly separated by matched filtering over multiple PRIs on return. The design is called a Unitary Waveform Matrix. A novel means of providing robustness to Doppler is also developed. By incorporating machine learning into waveform selection and into the combining of multiple diverse delay-Doppler profiles, the overall process is shown to provide high-resolution, high-fidelity radar images.
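The core complementary-waveform property can be demonstrated with a Golay pair, the textbook example of complementary phase codes: each code's autocorrelation has sidelobes, but summed across pulses the sidelobes cancel exactly. This is only an illustration of the principle; the paper's Unitary Waveform Matrix construction is more general and is not reproduced here.

```python
import numpy as np

def golay_pair(m):
    """Recursively build a Golay complementary pair of length 2**m:
    if (a, b) is complementary, so is (a|b, a|-b)."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def acorr(x):
    """Aperiodic autocorrelation (matched-filter output for a point target)."""
    return np.correlate(x, x, mode="full")

a, b = golay_pair(3)          # constant-envelope ±1 codes of length 8
side = acorr(a) + acorr(b)    # matched-filter outputs summed across PRIs

# Sidelobes cancel exactly: 2N at zero lag, zero at every other lag.
```

Scheduling which code is transmitted on which PRI and emitter (so the pairs separate on receive) is the scheduling problem the abstract refers to.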
13479-41
Author(s): SaeByeol Do, Sungho Kim, Yeungnam Univ. (Korea, Republic of)
16 April 2025 • 4:40 PM - 5:00 PM EDT
SAR imagery uses radar signals to capture high-resolution images of the Earth's surface in a variety of environments. The system transmits radar pulses and measures the delay of the reflected signals to form images, allowing it to see through clouds, smoke, and darkness and making it usable 24/7 in all weather conditions. Because of these features, SAR imagery is widely used in fields such as defense, security, and terrain analysis. However, SAR data is challenging to interpret, and SAR systems are costly to build and operate, making it difficult to establish large databases. We therefore investigate training models on synthetic SAR data so that they can effectively recognize real SAR raw data.
13479-42
Author(s): Amir K. Saeed, Benjamin M. Rodriguez, Johns Hopkins Univ. Applied Physics Lab., LLC (United States)
16 April 2025 • 5:00 PM - 5:20 PM EDT
Design of Experiments (DoE) plays a vital role in optimizing defense modeling and simulation environments by establishing relevant bounds and enhancing the understanding of complex systems. This paper proposes a method that uses DoE techniques to systematically explore the parameter space of simulation models, identifying key factors and interactions that influence performance. By leveraging DoE, we can determine the most relevant experimental settings and boundaries for modeling and simulation, allowing for more accurate representation of real-world conditions.
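The DoE workflow the abstract describes — enumerate the parameter space, run the simulation at each design point, and identify which factors drive performance — can be sketched with a two-level full factorial design and main-effect estimates. The three-factor linear response below is a hypothetical stand-in for a simulation model, not one of the paper's defense scenarios.

```python
from itertools import product

def full_factorial(k):
    """All 2**k runs of a two-level design, factors coded -1/+1."""
    return list(product([-1, 1], repeat=k))

def main_effects(design, responses):
    """Main effect of each factor: mean response at +1 minus mean at -1."""
    k = len(design[0])
    effects = []
    for j in range(k):
        hi = [y for run, y in zip(design, responses) if run[j] == 1]
        lo = [y for run, y in zip(design, responses) if run[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Hypothetical simulation response: factor A dominates, B is weaker, C inert.
design = full_factorial(3)                      # 8 runs
responses = [3 * A + 1 * B + 0 * C for A, B, C in design]
effects = main_effects(design, responses)       # A ≈ 6, B ≈ 2, C ≈ 0
```

Ranking the effects identifies which parameters deserve tighter bounds or finer-grained follow-up designs, which is how DoE narrows the relevant region of the simulation space.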
13479-43
Author(s): Miguel A. Goenaga-Jimenez, Josue G. Cardona-Cotto, Manuel A. Millan-Chacon, Alcides Alvear-Suárez, Univ. Ana G. Méndez (United States)
16 April 2025 • 5:20 PM - 5:40 PM EDT
Color Vision Deficiency (CVD) affects an individual’s ability to perceive colors accurately due to malfunctioning cone cells in the retina. The most common form is red-green CVD, yet it often receives limited attention. Color plays a vital role in daily life, making accurate perception essential for tasks like interpreting information and distinguishing objects. Current solutions, such as colored lenses, are often expensive and inaccessible. To address these challenges, we introduce C.A.T. (Colorblind Accessibility Tool), a mobile application that utilizes the smartphone camera to identify colors and display their names on-screen. C.A.T. is designed not only for individuals with CVD but also for those with visual perception difficulties. Rigorous testing showed that C.A.T. significantly improved color recognition accuracy, especially for red-green and blue-yellow CVD users, and increased user confidence in color-related tasks. C.A.T. represents an accessible and effective solution to enhance the quality of life for those with color vision impairments.
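The core operation the abstract describes — sample a camera pixel and report the nearest named color — reduces to nearest-neighbor matching against a color palette. The palette and the plain RGB distance below are illustrative assumptions; the actual C.A.T. matching rule is not published in the abstract, and a perceptual space such as CIELAB would track human judgment more closely.

```python
# Hypothetical palette; the real application's color set is an assumption here.
PALETTE = {
    "red": (255, 0, 0), "green": (0, 128, 0), "blue": (0, 0, 255),
    "yellow": (255, 255, 0), "orange": (255, 165, 0),
    "purple": (128, 0, 128), "white": (255, 255, 255), "black": (0, 0, 0),
}

def name_color(rgb):
    """Return the palette name nearest to an RGB sample (Euclidean in RGB)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(rgb, c))
    return min(PALETTE, key=lambda name: dist2(PALETTE[name]))

print(name_color((250, 12, 8)))   # -> red
```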
Conference Chair
Interlink Systems Sciences, Inc. (United States)
Conference Chair
Air Force Research Lab. (United States)
Conference Chair
California State Univ., East Bay (United States)
Conference Co-Chair
Defence Research and Development Canada (Canada)
Conference Co-Chair
TrackGen Solutions Inc. (Canada)
Program Committee
Univ. of Connecticut (United States)
Program Committee
IBM United Kingdom Ltd. (United Kingdom)
Program Committee
DEVCOM Army Research Lab. (United States)
Program Committee
George Mason Univ. (United States)
Program Committee
General Dynamics Mission Systems (United States)
Program Committee
Independent Consultant (United States)
Program Committee
Raytheon Missiles & Defense (United States)
Program Committee
ONERA (France)
Program Committee
Air Force Research Lab. (United States)
Program Committee
DEVCOM Army Research Lab. (United States)
Program Committee
Aptima, Inc. (United States)
Program Committee
Univ. at Buffalo (United States)
Program Committee
National Geospatial-Intelligence Agency (United States)
Program Committee
Lamar Univ. (United States)
Program Committee
Virginia Commonwealth Univ. (United States)
Program Committee
Air Force Research Lab. (United States)
Program Committee
Air Force Research Lab. (United States)
Program Committee
Rochester Institute of Technology (United States)
Program Committee
Consultant (United States)
Program Committee
National Ctr. for Scientific Research "Demokritos" (Greece)
Program Committee
Air Force Research Lab. (United States)
Program Committee
Edward L. Waltz
Virginia Polytechnic Institute and State Univ. (United States)
Program Committee
Univ. of Connecticut (United States)
Program Committee
Plato Systems (United States)
Program Committee
Rochester Institute of Technology (United States)
Program Committee
The Univ. of Mississippi Medical Ctr. (United States)
Additional Information

POST-DEADLINE ABSTRACTS ACCEPTED UNTIL 17 FEBRUARY
New submissions considered for poster session, or oral session if space becomes available
Contact author will be notified of acceptance by 3 March
View Submission Guidelines and Agreement
View the Call for Papers PDF

Submit Post-Deadline Abstract

What you will need to submit

  • Presentation title
  • Author(s) information
  • Speaker biography (1000-character max including spaces)
  • Abstract for technical review (200-300 words; text only)
  • Summary of abstract for display in the program (50-150 words; text only)
  • Keywords used in search for your paper (optional)
Note: Only original material should be submitted. Commercial papers, papers with no new research/development content, and papers with proprietary restrictions will not be accepted for presentation.