13 - 17 April 2025
Orlando, Florida, US

Post-deadline submissions will be considered for poster presentation, or for oral presentation if space is available


Modern technology depends on the ubiquitous collection of data and the application of machine learning to derive insights and create knowledge. Often, machine learning methods are developed on curated, well-behaved datasets. However, real-world data are often collected under non-ideal conditions, with limited sensing, storage, processing, and labeling capabilities, environmental changes and interference, attacks, and policy restrictions. Accordingly, real-world data present significant challenges, such as corruptions, outliers, missing entries or labels, bias, distribution shifts, and security/privacy issues, to name just a few. Such challenges often limit the effectiveness of standard machine learning methods in real-world scenarios. The Machine Learning from Challenging Data Conference (MLCD) aims to bridge this gap by advancing practical, efficient, and effective machine learning solutions tailored to complex real-world data challenges.

We invite submissions on machine learning from challenging data. Contributors are encouraged to present highly novel methods, theoretical advancements, strategies for data collection and dataset optimization, and significant applications that demonstrate practical solutions to the complexities of real-world data scenarios.

Best paper award
One paper will be selected for the best paper award among the papers of this conference (accepted, presented, and published). The selection will be made by a designated award sub-committee, comprising three members of the conference program committee and/or chairs. All eligible papers will be evaluated for technical quality and merit. The criteria for evaluation will include: 1) innovation; 2) clarity and quality of the manuscript submitted for publication; and 3) the significance and impact of the work reported.

Best student paper award
One paper will be selected for the best student paper award among the papers of this conference (accepted, presented, and published). The selection will be made by a designated award sub-committee, comprising three members of the conference program committee and/or chairs. All eligible papers will be evaluated for technical quality and merit. The criteria for evaluation will include: 1) innovation; 2) clarity and quality of the manuscript submitted for publication; and 3) the significance and impact of the work reported.

In order to be considered for these awards, the presenter must make their oral presentation and submit their final manuscript as scheduled and according to the due date. There is no monetary prize for this award.
Conference 13460

Machine Learning from Challenging Data 2025

14 - 15 April 2025 | Ballroom Level, Osceola 2
  • Opening Remarks
  • 1: Robust Methods
  • 2: Federated Learning Tutorial
  • 3: Communications and Array Processing
  • 4: Key Applications
  • Symposium Plenary
  • Symposium Panel on Space Sensing: Emerging Topics, Needs, and Crossover Technology
  • 5: Compression and Augmentation
  • 6: Computer Vision and Remote Sensing
  • 7: Multiagent Processing and Key Applications
Information

Want to participate in this program?
Post-deadline abstract submissions accepted through 17 February. See "Additional Information" tab for instructions.

Opening Remarks
14 April 2025 • 9:30 AM - 9:40 AM EDT
Session Chair: Panagiotis P. Markopoulos, The Univ. of Texas at San Antonio (United States)
Opening remarks for Machine Learning from Challenging Data 2025.
Session 1: Robust Methods
14 April 2025 • 9:40 AM - 11:00 AM EDT
Session Chair: Panagiotis P. Markopoulos, The Univ. of Texas at San Antonio (United States)
13460-1
Author(s): Shruti Shukla, Dimitris A. Pados, Florida Atlantic Univ. (United States); Kavita Varma, Amazon.com, Inc. (United States); George Sklivanitis, Florida Atlantic Univ. (United States); Elizabeth S. Bentley, Air Force Research Lab. (United States); Michael J. Medley, SUNY Polytechnic Institute (United States)
14 April 2025 • 9:40 AM - 10:00 AM EDT
We develop and present in implementation detail novel theory and methods to curate training datasets for Artificial Intelligence/Machine Learning (AI/ML) classification applications. The curation method itself is AI/ML and operator hands-free. In particular, a conventional feedforward neural network architecture is designed to create a robust (L1-norm geometry) summary representation of the dataset that allows thereafter ML identification of likely faulty points followed by excision. While the process is applicable to any targeted form of AI/ML classification, in this paper we use as an example Support Vector Machines (SVMs), which are widely deployed and successful in solving pattern recognition problems but are known to be susceptible to faulty training data during support vector selection. Extensive experimentation on real-world datasets reported in this paper illustrates the technical developments and shows remarkable benefit even when the curated training dataset is presumably clean.
13460-2
Author(s): Garrett Cayce, Colleen P. Bailey, Univ. of North Texas (United States)
14 April 2025 • 10:00 AM - 10:20 AM EDT
The accuracy and reliability of machine learning models depend heavily on the quality of the datasets they are trained on. Poor data quality, such as outliers or mislabeling errors, can reduce model performance and lead to suboptimal results. To address this, we propose a method for refining datasets by identifying and removing outliers through feature extraction and variance measurement. This approach improves dataset consistency, leading to more reliable and robust model training. Our method is particularly valuable in domains where dataset quality is crucial, enhancing overall model accuracy and performance.
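A minimal sketch of the variance-based filtering idea described in this abstract (the z-score rule, threshold value, and function names are illustrative assumptions, not the authors' exact method):

```python
def zscore_outliers(scores, threshold=3.0):
    """Flag indices whose score deviates more than `threshold`
    standard deviations from the mean (a simple variance-based filter)."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((x - mean) ** 2 for x in scores) / n
    std = var ** 0.5
    if std == 0:
        return []
    return [i for i, x in enumerate(scores) if abs(x - mean) / std > threshold]

def refine(dataset, scores, threshold=3.0):
    """Return the dataset with flagged outliers excised."""
    bad = set(zscore_outliers(scores, threshold))
    return [d for i, d in enumerate(dataset) if i not in bad]
```

In practice the scores would come from an extracted feature representation rather than raw values.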
13460-3
Author(s): Jackson Lee, IBM Corp. (United States)
14 April 2025 • 10:20 AM - 10:40 AM EDT
The Adversarial Robustness Toolbox (ART), created in 2018 by IBM Research, detects and classifies threats against AI systems, identifies vulnerable models, and provides risk mitigation strategies. Widely trusted, with over 600 academic citations and 4.7K GitHub stars, it was donated to the Linux Foundation Data & AI in 2020 as part of the Trusted AI projects to establish a vendor-neutral adversarial machine learning standard. ART supports attack-defense operations, offering state-of-the-art methods for evaluating and defending AI models against evasion, poisoning, extraction, and inference attacks. It integrates with a variety of AI frameworks and platforms, allowing users to assess and improve model robustness. An extension, developed in collaboration with the Department of Defense (DoD)’s Chief Digital and AI Office (CDAO) for the Joint AI Test Infrastructure Capability (JATIC) program, focuses on computer vision, enhancing model performance against evasion attacks through standardized, interoperable protocols.
13460-4
Author(s): Dave Cook, The Training Data Project (United States), National Geospatial-Intelligence Agency (United States), Office of the Director of National Intelligence (United States); Tim Klawa, The Training Data Project (United States)
14 April 2025 • 10:40 AM - 11:00 AM EDT
Prepare AI for real-world challenges, not perfection. Our methods focus on improving AI performance in complex environments, particularly for computer vision, multi-modal systems, and large language models. AI models are often trained with idealized data, leaving them unprepared for real-world conditions like obstructed views or adversarial interference, which can hinder their effectiveness in critical applications. We present a risk-based framework that aligns training data with the challenging environments AI models will encounter. Using a dynamic similarity score, this approach continuously evaluates how well the data represents real-world scenarios. This is crucial for computer vision, multi-modal AI, and large language models, where diverse inputs such as images, text, and sensor data are combined. By strategically curating and labeling relevant data, our framework minimizes inefficiencies, mitigates risks, and ensures robust model performance, equipping AI systems for operational readiness in complex, unpredictable environments.
Break
Coffee Break 11:00 AM - 11:30 AM
Session 2: Federated Learning Tutorial
14 April 2025 • 11:30 AM - 12:30 PM EDT
Session Chair: George Sklivanitis, Florida Atlantic Univ. (United States)
13460-5
Author(s): Panagiotis Markopoulos, The Univ. of Texas at San Antonio (United States)
14 April 2025 • 11:30 AM - 12:30 PM EDT
Federated Learning (FL) is an innovative approach to machine learning where models are trained collaboratively across multiple decentralized devices or servers, without sharing raw data. This 1-hour-long tutorial will provide an in-depth introduction to FL, covering its key principles, architectures, and the advantages it offers in preserving privacy while enabling robust machine learning. We will explore practical applications in various domains, highlight key challenges such as communication efficiency and security, and discuss recent advances in federated optimization and aggregation techniques.
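The core aggregation step covered in such a tutorial can be sketched as federated averaging (FedAvg): clients train locally and share only model parameters, which the server averages weighted by local dataset size. The function below is a toy illustration (names and the weighting scheme are illustrative, not a specific framework's API):

```python
def fed_avg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (FedAvg).

    client_weights: list of parameter lists, one per client
    client_sizes:   number of local training samples per client
    Raw data never leaves the clients; only parameters are aggregated.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
        for j in range(dim)
    ]
```

A client holding more data pulls the global model further toward its local solution, which is one source of the communication and fairness challenges the tutorial discusses.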
Break
Lunch Break 12:30 PM - 2:00 PM
Session 3: Communications and Array Processing
14 April 2025 • 2:00 PM - 3:20 PM EDT
Session Chair: Panagiotis P. Markopoulos, The Univ. of Texas at San Antonio (United States)
13460-6
Author(s): Shelley Su, Fauzia Ahmad, Temple Univ. (United States)
14 April 2025 • 2:00 PM - 2:20 PM EDT
13460-7
Author(s): Bryce S. Hinkley, David Akopian, The Univ. of Texas at San Antonio (United States); Marius Necsoiu, DEVCOM Army Research Lab. (United States)
14 April 2025 • 2:20 PM - 2:40 PM EDT
Efficient sharing of narrowband communication channels among multiple signals is a critical challenge in modern systems. Accurate interference measurement is essential for optimizing channel use and maintaining signal integrity. This paper evaluates the performance of Convolutional Neural Networks (CNNs) and transformer models in quantifying interference levels in narrowband channels. Trained on synthetic datasets simulating interference, transformers showed a slight performance edge over CNNs. To enhance usability, explainable AI (XAI) techniques and a visual large language model were integrated, offering insights into model decision-making and improving transparency. Results demonstrate that AI models augmented with explainability improve channel management and communication reliability across critical sectors.
13460-8
Author(s): Syed Ali Hamza, Thomas Flear, Gillian Cruz, Dariel Mejia, Andres Ramirez, Widener Univ. (United States)
14 April 2025 • 2:40 PM - 3:00 PM EDT
In-cabin radar sensing aims to enhance vehicle safety and passenger comfort through an advanced radar-based system for real-time monitoring. By integrating radar technology, the system provides automated alerts and adaptive climate control based on occupancy and movement, improving safety and comfort. Designed with privacy in mind, the radar system ensures non-intrusive data collection while delivering actionable insights through a user-friendly interface. This paper explores deep learning and beamforming techniques to enhance the performance of the existing algorithms. We propose a MIMO radar system combined with Capon beamforming to accurately isolate passengers in the range-azimuth map. Additionally, filtering techniques are employed to remove clutter and enable precise passenger detection. Radar data is collected and the system’s performance is evaluated under different sensor array configurations, comparing conventional and Capon beamforming methods.
13460-9
Author(s): Syed Ali Hamza, John Kobak, Widener Univ. (United States)
14 April 2025 • 3:00 PM - 3:20 PM EDT
Sparse arrays offer enhanced spatial degrees of freedom through nonuniform sensor spacing across the array aperture, allowing both the array configuration and sensor weights to greatly influence beamforming performance. In this paper, we propose a deep learning-based approach to optimize sparse array configurations, assuming optimal sensor weight design. Our method focuses on wideband signal models, where the array configuration is optimized to maximize the signal-to-interference-plus-noise ratio (SINR) for a desired source in the presence of interfering signals arriving at variable angles. The deep learning model selects array configurations that optimize SINR, while the wideband signal is processed by decomposing it into narrowband components using fast Fourier transform (FFT). A sparse array configuration is then designed to enhance the average SINR across all frequency components, providing superior performance compared to conventional methods.
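The evaluation criterion described above can be sketched as follows: decompose the wideband signal into narrowband components, compute the optimal-beamformer SINR per component, and average. This is an illustrative sketch only (array geometry, interference-to-noise ratio, and function names are assumed for the example, not taken from the paper):

```python
import numpy as np

def steering(positions, theta, wavelength):
    """Narrowband steering vector for sensors at `positions` (meters)."""
    k = 2 * np.pi / wavelength
    return np.exp(1j * k * positions * np.sin(theta))

def avg_sinr(positions, theta_s, theta_i, wavelengths, inr=10.0, noise=1.0):
    """Average optimal-beamformer SINR over narrowband components.

    Per wavelength (FFT bin): interference-plus-noise covariance
    R = inr * a_i a_i^H + noise * I, and the optimal SINR is a_s^H R^{-1} a_s.
    """
    sinrs = []
    for lam in wavelengths:
        a_s = steering(positions, theta_s, lam)
        a_i = steering(positions, theta_i, lam)
        R = inr * np.outer(a_i, a_i.conj()) + noise * np.eye(len(positions))
        sinrs.append(np.real(a_s.conj() @ np.linalg.solve(R, a_s)))
    return float(np.mean(sinrs))
```

In the paper's setting, a learned model would select the sparse sensor `positions` that maximize this average rather than evaluating a fixed geometry.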
Break
Coffee Break 3:20 PM - 3:50 PM
Session 4: Key Applications
14 April 2025 • 3:50 PM - 5:10 PM EDT
Session Chair: Bing Ouyang, Harbor Branch Oceanographic Institute (United States)
13460-10
Author(s): Alisa Kunapinun, William Fairman, Paul Wills, Bing Ouyang, Harbor Branch Oceanographic Institute (United States)
14 April 2025 • 3:50 PM - 4:10 PM EDT
Predicting biomass growth is vital for optimizing aquaculture systems and ensuring sustainability. This study introduces a novel approach utilizing a Bidirectional Long Short-Term Memory (Bi-LSTM) model with physics constraint loss functions to forecast biomass in complex aquaculture systems like Integrated Multi-Trophic Aquaculture (IMTA) and pond farms. The Bi-LSTM captures bidirectional temporal dependencies, improving growth predictions over traditional unidirectional models. The integration of physics constraints helps the model respect the biological growth dynamics governing aquaculture ecosystems. Generalizing this method from seaweed growth to broader aquaculture systems improves biomass predictions across varying environmental conditions. Results show that, despite sparse training data, the Bi-LSTM model outperforms conventional models, maintaining predictive accuracy through environmental data inputs such as water temperature, nutrient levels, and solar radiation. This study highlights the potential of machine learning to optimize resource management and decision support in aquaculture, contributing to sustainable practices across the industry.
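One way such a physics-constraint loss can be structured (purely illustrative: the growth law, its parameters, and the weighting below are assumptions, not the authors' formulation) is to add a penalty for predictions that violate a logistic growth model alongside the usual data-fit term:

```python
def physics_constrained_loss(pred, target, dt=1.0, r=0.1, K=100.0, lam=0.5):
    """Data-fit MSE plus a penalty on deviations of the predicted sequence
    from logistic growth dB/dt = r * B * (1 - B / K)."""
    n = len(pred)
    data_loss = sum((p - t) ** 2 for p, t in zip(pred, target)) / n
    phys_loss = 0.0
    for i in range(n - 1):
        expected_rate = r * pred[i] * (1 - pred[i] / K)
        actual_rate = (pred[i + 1] - pred[i]) / dt
        phys_loss += (actual_rate - expected_rate) ** 2
    return data_loss + lam * phys_loss / max(n - 1, 1)
```

The physics term supplies a training signal even where measurements are sparse, which is why such constraints help under limited data.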
13460-11
Author(s): Muhammad Mahmudul M. Hasan, Nezih Pala, Florida International Univ. (United States)
14 April 2025 • 4:10 PM - 4:30 PM EDT
This work applies a neural network model integrated with Fourier Feature Networks (FFN) to accurately capture the terahertz frequency oscillations in plasma wave field-effect transistors (TeraFETs). Modeling high-frequency oscillations in these devices is challenging due to the complex dynamics of the hydrodynamic charge transport system. Our results show that the Fourier Feature Network effectively resolves the terahertz oscillations in the TeraFET channel, providing a better fit than standard neural networks. We used a numerical simulation dataset of Dyakonov-Shur instability in diamond TeraFET to train and test the model. Additionally, we compare the performance of this approach with Physics-Informed Neural Networks (PINNs), which were also tested with Fourier features. Despite this enhancement, the PINN struggled to accurately track the high-frequency solutions, exhibiting difficulties in both convergence and accuracy due to the limited dataset. This work demonstrates the effectiveness of utilizing Fourier features in neural networks for terahertz device modeling and highlights their advantages over PINNs in capturing rapid oscillations in TeraFETs. These findings offer valuable insights.
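The Fourier feature mapping at the heart of such networks lifts inputs to sinusoidal features before they reach the network, making rapid oscillations easier to fit. A minimal sketch (in full Fourier Feature Networks the frequencies would be drawn at random with a tuned scale, not fixed as here):

```python
import math

def fourier_features(x, freqs):
    """Map a scalar input to [cos(2*pi*f*x), sin(2*pi*f*x)] per frequency f.
    A downstream network trained on these features can represent
    high-frequency oscillations that raw-coordinate inputs struggle with."""
    feats = []
    for f in freqs:
        feats.append(math.cos(2 * math.pi * f * x))
        feats.append(math.sin(2 * math.pi * f * x))
    return feats
```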
13460-12
Author(s): Guoxi Huang, Nantheera Anantrasirichai, Ruirui Lin, David Bull, Univ. of Bristol (United Kingdom)
14 April 2025 • 4:30 PM - 4:50 PM EDT
Videos captured in low-light and underwater conditions often suffer from distortions such as noise, low contrast, color imbalance, and blur. These issues not only limit visibility but also degrade automatic tasks like detection. Post-processing is typically required but can be time-consuming. AI-based tools for video enhancement also demand significantly more computational resources compared to image-based methods. This paper introduces a novel framework, Visual Mamba, designed to reduce memory usage and computational time by leveraging the Visual State Space (VSS) model. The framework consists of two modules: (i) a feature alignment module, where spatio-temporal displacement between input frames is registered in the feature space, and (ii) an enhancement module, where noise removal and brightness adjustment are performed using a UNet-like architecture, with all convolutional layers replaced by VSS blocks. Experimental results show that the Visual Mamba technique outperforms Transformer and convolution-based models in both low-light and underwater video enhancement tasks. Code is available online at https://github.com/russellllaputa/BVI-Mamba.
13460-13
Author(s): Joanne Lin, Nantheera Anantrasirichai, David Bull, Univ. of Bristol (United Kingdom)
14 April 2025 • 4:50 PM - 5:10 PM EDT
Instance segmentation accurately delineates the precise boundaries of each distinct object in an image or video. However, performing this task in low-light conditions is challenging due to issues such as shot noise from low photon counts, color distortions, and reduced contrast. In this work, we present an end-to-end solution designed to address these complexities. Our approach integrates weighted non-local blocks (wNLB) into the feature extractor, facilitating an inherent denoising process at the feature level. This design eliminates the need for aligned ground truth images during training, making our method well-suited for real-world low-light scenarios. We have also added learnable weights at each layer to better adapt the network to the varying noise characteristics found across different feature scales. Experimental evaluations on several object detectors and trackers show that our method surpasses pretrained networks.
Symposium Plenary
14 April 2025 • 5:30 PM - 7:00 PM EDT | Ballroom Level, Osceola Ballroom C

View Full Details: spie.org/dcs/symposium-plenary

Chair welcome and introduction
14 April 2025 • 5:30 PM - 5:40 PM EDT

Bring the future faster (Plenary Presentation)
Presenter(s): Jason E. Bartolomei, Brigadier General, United States Air Force, Air Force Research Laboratory (United States)
14 April 2025 • 5:40 PM – 6:20 PM EDT

To be determined (Plenary Presentation)
Presenter(s): To be determined
14 April 2025 • 6:20 PM – 7:00 PM EDT

Symposium Panel on Space Sensing: Emerging Topics, Needs, and Crossover Technology
15 April 2025 • 8:30 AM - 10:00 AM EDT

View Full Details: spie.org/dcs/symposium-panel

Join our illustrious panelists and moderator as we discuss emerging topics, needs, and crossover technology at this symposium-wide panel on space sensing.

Break
Coffee and Exhibition Break 10:00 AM - 11:00 AM
Session 5: Compression and Augmentation
15 April 2025 • 11:00 AM - 12:00 PM EDT
Session Chair: George Sklivanitis, Florida Atlantic Univ. (United States)
13460-15
Author(s): Christian Newman-Sanders, Andres Ramirez-Jaime, Nestor Porras-Diaz, Univ. of Delaware (United States); Mark Stephen, NASA Goddard Space Flight Ctr. (United States); Gonzalo R. Arce, Univ. of Delaware (United States)
15 April 2025 • 11:00 AM - 11:20 AM EDT
This work introduces a novel method to optimize illumination patterns for satellite compressive LiDAR using generative adversarial networks (GANs). By employing both binary and m-ary illumination patterns, this approach enhances low-resolution LiDAR data reconstruction, specifically for NASA's adaptive wavelength system, CASALS. Traditional LiDAR systems capture redundant data, inflating storage needs. Our GAN-based technique identifies key regions for efficient sampling, dynamically adapting to terrain and environmental factors, resulting in reduced data volume without sacrificing resolution. This advancement offers promising applications in environmental monitoring and urban planning, streamlining satellite-based Earth observation missions.
13460-16
Author(s): Adrian Stern, Vladislav Kravets, Ben-Gurion Univ. of the Negev (Israel)
15 April 2025 • 11:20 AM - 11:40 AM EDT
The Partial Random Ensemble (PRE) is one of the two main approaches for designing Compressive Sensing (CS) matrices, along with the random modulations approach. In traditional CS literature, various methods for PRE have been proposed to generate CS sensing matrices using different random sampling schemes. Recently, we introduced LPTNet, which uses a model-based deep learning approach to jointly optimize the PRE matrix and a corresponding reconstruction deep neural network (DNN). LPTNet has demonstrated unprecedented CS performance. In this paper, we provide a review of LPTNet and present an interpretable scheme for its DNN.
13460-17
Author(s): Ali Cafer Gurbuz, North Carolina State Univ. (United States)
15 April 2025 • 11:40 AM - 12:00 PM EDT
This work proposes a deep compressed learning framework inferring classification directly from the compressive measurements. While classical approaches separately sense, reconstruct signals, and apply classification on these reconstructions, we jointly learn the sensing and classification schemes utilizing a deep neural network with a novel loss function. Our approach employs a data-driven reconstruction network within the compressed learning framework utilizing a weighted loss that combines both in-network reconstruction and classification losses. The proposed network structure also learns the optimal measurement matrices for the goal of enhancing classification performance. Quantitative results demonstrated on the CIFAR-10 image dataset show that the proposed framework provides better classification performance and robustness to noise compared to the tested state-of-the-art deep compressed learning approaches.
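The two ingredients described above, compressive sensing through a measurement matrix and a weighted training loss, can be sketched as follows (symbol names and the scalar loss form are illustrative; the paper's loss acts on network outputs):

```python
import numpy as np

def sense(x, phi):
    """Compressive measurement: y = phi @ x, with far fewer measurements
    than signal entries. In compressed learning, phi itself is learned."""
    return phi @ x

def weighted_loss(recon_loss, class_loss, alpha=0.3):
    """Weighted combination of in-network reconstruction and classification
    losses, used to train sensing and inference jointly."""
    return alpha * recon_loss + (1.0 - alpha) * class_loss
```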
Break
Lunch Break 12:00 PM - 1:30 PM
Session 6: Computer Vision and Remote Sensing
15 April 2025 • 1:30 PM - 2:50 PM EDT
Session Chair: George Sklivanitis, Florida Atlantic Univ. (United States)
13460-18
Author(s): Diana Velychko, Rochester Institute of Technology (United States); Panagiotis Markopoulos, The Univ. of Texas at San Antonio (United States); Eli Saber, Jamison Heard, Rochester Institute of Technology (United States)
15 April 2025 • 1:30 PM - 1:50 PM EDT
Object detection models often face performance degradation under domain distributional shifts due to variations in object size, resolution, and temporal differences (e.g., static images to video). In this work, we address these challenges using a YOLOv5 model trained on the DOTA (non-temporal) dataset and transferred to the VISO dataset (temporal). The baseline YOLOv5 model frequently encloses multiple objects within a single bounding box due to distributional shifts in object scale and image resolution. We propose a novel YOLO-based architecture with ConvLSTM layers to exploit temporal dependencies and apply a zoom-in/zoom-out data augmentation to simulate varying object scales, accounting for the variance within the remote sensing domain.
13460-19
Author(s): Chiranjibi Shah, Northern Gulf Institute, Mississippi State Univ. (United States); M M Nabi, The School of Engineering and Applied Sciences, Western Kentucky University (United States); Iffat Ara Ebu, Mississippi State Univ. (United States); Jack Prior, Matthew D. Grossi, Matthew D. Campbell, Ryan Caillouet, Timothy Rowell, National Marine Fisheries Service (United States); Farron Wallace, National Oceanic and Atmospheric Administration (United States); John E. Ball, Mississippi State Univ. (United States); Robert Moorhead, Northern Gulf Institute, Mississippi State Univ. (United States)
15 April 2025 • 1:50 PM - 2:10 PM EDT
Our SEAMAPD21 fish dataset consists of underwater images, where the class distribution is highly imbalanced, making fish identification particularly challenging. Additionally, tracking individual fish in this dataset presents further difficulties due to varying environmental conditions and fish behavior. YOLOv10 delivers enhanced detection accuracy, particularly for imbalanced datasets like ours, which contain underwater fish images. By integrating YOLOv10 for detection with ByteTrack for tracking, we significantly improve both the identification and tracking of fish species, leading to better overall performance in challenging underwater environments. The primary goal of this paper is to enhance the performance of the algorithm for fish tracking in this complex dataset, with a focus on accurately tracking and counting fish species. This improvement is crucial for effectively monitoring marine biodiversity and contributing to conservation efforts.
13460-20
Author(s): Dan Zimmerman, George Sklivanitis, Florida Atlantic Univ. (United States); Panagiotis Markopoulos, The Univ. of Texas at San Antonio (United States); Dimitris A. Pados, Florida Atlantic Univ. (United States)
15 April 2025 • 2:10 PM - 2:30 PM EDT
Accurate identification of fish species in ocean environments is increasingly important for marine biodiversity monitoring and conservation, yet challenging due to optical distortion, water turbidity, and changing illumination in underwater scenes. In this paper, we evaluate state-of-the-art deep learning object detection models trained and tested on images and video from the Fish4Knowledge dataset. We also optimize deep learning object detection models for low-footprint hardware devices to achieve an optimal trade-off between accuracy and latency. Our tests show a 73.6% improvement in the underwater image quality measure (UIQM) across the Fish4Knowledge dataset, with a mean processing time of only 5.7 milliseconds per image. We use YOLOv5 as a baseline model and implement layerwise relevance propagation to reduce computational complexity. Experiments show a 2.2 GFLOP reduction and a 27% decrease in multiply-accumulate operations, with a negligible 1.2% reduction in mean average precision (mAP).
13460-21
Author(s): Xinyang Mu, Michigan State Univ. (United States)
15 April 2025 • 2:30 PM - 2:50 PM EDT
This study presents a semi-supervised learning framework for blueberry detection in canopy images to improve fruit detection while reducing annotation effort. A teacher-student model is used, where the teacher generates pseudo-labels from unlabeled data based on high-confidence predictions, and the student is iteratively trained using both labeled and pseudo-labeled data. An adaptive confidence threshold ensures only reliable predictions contribute to training. The approach is applied to real-time detection models and compared to fully supervised baselines. After detection, fruit counting and blue fruit percentage estimation are performed to assess performance in precision orchard management.
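The pseudo-labeling step in such a teacher-student loop can be sketched as follows (the quantile-based threshold rule and function names are illustrative assumptions, not the paper's exact scheme):

```python
def select_pseudo_labels(predictions, threshold):
    """Keep only (box, label) predictions whose confidence clears the
    threshold; these join the labeled pool for the next student round."""
    return [(box, label) for box, label, conf in predictions if conf >= threshold]

def adapt_threshold(confidences, quantile=0.8):
    """Adaptive threshold: keep roughly the top (1 - quantile) fraction of
    the teacher's predictions by confidence."""
    ranked = sorted(confidences)
    idx = min(int(quantile * len(ranked)), len(ranked) - 1)
    return ranked[idx]
```

Raising the threshold as the teacher improves keeps only reliable pseudo-labels in the training pool, which is the point of the adaptive rule.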
Break
Coffee Break 2:50 PM - 3:20 PM
Session 7: Multiagent Processing and Key Applications
15 April 2025 • 3:20 PM - 4:40 PM EDT
Session Chair: Alisa Kunapinun, Harbor Branch Oceanographic Institute (United States)
13460-22
Author(s): Bhuvaneswari Ramachandran, Univ. of West Florida (United States); William Collins, SAIC (United States); Daniel Carvalho, Air Force Research Lab. (United States); Richard Martin, Air Force Institute of Technology (United States); Christian Keyser, Air Force Research Lab. (United States)
15 April 2025 • 3:20 PM - 3:40 PM EDT
Spectral analysis of LiDAR data contributes a unique approach to the classification process, and techniques and tools designed for multispectral imagery can be adapted to LiDAR analysis. Convolutional Neural Networks (CNNs), a class of deep neural networks, are among the most widely used networks for image classification. In this research, a 1D CNN was used to classify materials into the classes listed in the ASTER and KLUM datasets. We ensured that the ECOSTRESS/ASTER and KLUM datasets contained common wavelengths so that they could be aggregated uniformly. The samples used varying numbers and ranges of wavelengths to record reflectivities; to address this, pre-processing techniques were employed to retain only samples containing about 15 wavelengths in the range of 1.06–1.54 μm. Due to the imbalance observed in classes, synthetic data was added to underrepresented classes using the Synthetic Minority Oversampling Technique (SMOTE). We also compare several machine learning approaches for material classification.
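The SMOTE step mentioned above synthesizes minority-class samples by interpolating between a sample and a class neighbor. A minimal sketch (neighbor selection simplified to random pairing rather than the k-nearest-neighbor search of full SMOTE):

```python
import random

def smote_sample(x, neighbor, rng=random):
    """Synthesize a sample on the segment between a minority-class sample
    and a neighbor: x + lam * (neighbor - x), with lam drawn from [0, 1)."""
    lam = rng.random()
    return [xi + lam * (ni - xi) for xi, ni in zip(x, neighbor)]

def oversample(minority, target_count, rng=random):
    """Grow the minority class to `target_count` with synthetic samples."""
    samples = list(minority)
    while len(samples) < target_count:
        x, nb = rng.sample(minority, 2)
        samples.append(smote_sample(x, nb, rng))
    return samples
```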
13460-23
Author(s): Roger A. Hallman, CAT Labs., Inc. (United States), Thayer School of Engineering at Dartmouth (United States); Aditi Yadav, Northeastern Univ. (United States)
15 April 2025 • 3:40 PM - 4:00 PM EDT
This paper introduces Objective AI systems, which address complex tasks in dynamic environments by integrating multi-agent learning, retrieval-augmented generation (RAG), and direct preference optimization. Objective AI systems collaborate and adapt for operation within shared environments, aligning decision-making with user preferences. These systems achieve objectives in real time and are suited to deployment across a number of different environments, including Internet of Things/smart building management, financial trading, government services, law, and healthcare.
13460-24
Author(s): Mohammad M. Islam, Univ. of Maryland, Baltimore County (United States); Lavanya Elluri, Texas A&M Univ. - Central Texas (United States); Karuna Joshi, Univ. of Maryland, Baltimore County (United States)
15 April 2025 • 4:00 PM - 4:20 PM EDT
The rapid growth of Internet of Things (IoT) devices has led to the development of data protection regulations, but existing cybersecurity standards like NISTIR 8259A pose challenges for efficient retrieval and contextualization due to their lengthy, non-machine-readable format. This study introduces a knowledge graph to represent key concepts of NISTIR 8259A, facilitating structured data representation and automated rule retrieval for IoT security compliance. We evaluate the knowledge graph's effectiveness using Retrieval-Augmented Generation (RAG) techniques and compare its performance to traditional methods, including its treatment as a vector database. Our findings indicate that integrating RAG with graph data significantly enhances query precision and retrieval efficiency. Using various large language models (LLMs) like LLAMA2, Mistral-7B, and GPT-3.5, we provide a comparative analysis focused on performance metrics. This research offers insights for optimizing LLM integration within knowledge graph systems, advancing cybersecurity information retrieval in IoT networks.
13460-25
Author(s): Indu Shukla, Salhi Abderahim, James Ross, U.S. Army Engineer Research and Development Ctr. (United States)
15 April 2025 • 4:20 PM - 4:40 PM EDT
This study focuses on advancing Multi-Agent Reinforcement Learning (MARL) by exploring environments where multiple agents interact in both cooperative and competitive scenarios. The complexity of these interactions is enhanced through existing MARL scenarios to provide a robust testing framework. Explainability techniques are employed to make agent interactions transparent, and policy visualization methods, including heatmaps and decision trees, are applied to illustrate agent behavior across the state space.
Conference Chair
The Univ. of Texas at San Antonio (United States)
Conference Co-Chair
Florida Atlantic Univ. (United States), Harbor Branch Oceanographic Institute (United States)
Conference Co-Chair
Florida Atlantic Univ. (United States)
Program Committee
Temple Univ. (United States)
Program Committee
Univ. of Delaware (United States)
Program Committee
Univ. of North Texas (United States)
Program Committee
Mississippi State Univ. (United States)
Program Committee
Santa Clara Univ. (United States)
Program Committee
Florida Atlantic Univ. (United States)
Program Committee
Air Force Research Lab. (United States)
Program Committee
The Univ. of Texas Rio Grande Valley (United States)
Program Committee
Ben-Gurion Univ. of the Negev (Israel)
Additional Information

POST-DEADLINE ABSTRACTS ACCEPTED UNTIL 17 February
New submissions considered for poster session, or oral session if space becomes available
Contact author will be notified of acceptance by 3 March
View Submission Guidelines and Agreement
View the Call for Papers PDF

Submit Post-Deadline Abstract

What you will need to submit

  • Presentation title
  • Author(s) information
  • Speaker biography (1000-character max including spaces)
  • Abstract for technical review (200-300 words; text only)
  • Summary of abstract for display in the program (50-150 words; text only)
  • Keywords used in search for your paper (optional)
Note: Only original material should be submitted. Commercial papers, papers with no new research/development content, and papers with proprietary restrictions will not be accepted for presentation.