21 - 25 April 2024
National Harbor, Maryland, US
Modern ubiquitous sensing produces immense data collections (big data) that offer unprecedented opportunities for knowledge extraction, inference, and learning. However, due to their sheer size and complexity, big data also pose non-trivial challenges in storage, processing, and analysis. The objective of this conference is to provide a consolidated forum for exploring and promoting advances in the broader area of big data. Original papers are invited in the following areas.

  • Big data and machine learning foundations
  • Big data sensing, infrastructure, and resources
  • Challenging data
  • Big data applications (processing, learning, and analytics)
Best paper award
One paper will be selected for the best paper award among the papers of this conference (accepted, presented, and published). The selection will be made by a designated award sub-committee, comprising three members of the conference program committee and/or chairs. All eligible papers will be evaluated for technical quality and merit. The criteria for evaluation will include: 1) innovation; 2) clarity and quality of the manuscript submitted for publication; and 3) the significance and impact of the work reported.

In order to be considered for the award, the presenter must deliver their oral presentation and submit their final manuscript as scheduled and by the due date. There is no monetary prize for this award.
Conference 13036

Big Data VI: Learning, Analytics, and Applications

22 - 23 April 2024 | Potomac 2
  • 1: Machine Learning for Automatic Target Recognition I: Joint Session with Conferences 13036 and 13039
  • 2: Methods and Applications I
  • 3: Methods and Applications II
  • 4: Methods and Applications III
  • Symposium Plenary
  • Symposium Panel on Microelectronics Commercial Crossover
  • 5: Methods and Applications IV
  • 6: Methods and Applications V
Session 1: Machine Learning for Automatic Target Recognition I: Joint Session with Conferences 13036 and 13039
22 April 2024 • 8:10 AM - 10:10 AM EDT | National Harbor 5
Session Chair: Timothy L. Overman, Prime Solutions Group, Inc. (United States)
13039-1
Author(s): Raghuveer M. Rao, DEVCOM Army Research Lab. (United States)
22 April 2024 • 8:10 AM - 8:50 AM EDT | National Harbor 5
13039-2
Author(s): Sophia Abraham, Steve Cruz, Univ. of Notre Dame (United States); Suya You, DEVCOM Army Research Lab. (United States); Jonathan D. Hauenstein, Walter J. Scheirer, Univ. of Notre Dame (United States)
22 April 2024 • 8:50 AM - 9:10 AM EDT | National Harbor 5
The intricacies of visual scenes in Automatic Target Recognition (ATR) necessitate sophisticated models for nuanced interpretation. Vision-language models, notably CLIP (Contrastive Language-Image Pre-training), bridge visual perception and linguistic description. However, their effectiveness in ATR relies on targeted fine-tuning, challenged by the unimodal nature of datasets like the Defense Systems Information Analysis Center (DSIAC) ATR data. We propose a novel fine-tuning approach for CLIP, enriching DSIAC data with algorithmically generated captions for a multimodal training environment. Central to our innovation is a homotopy-based multi-objective optimization strategy, adept at balancing model accuracy, generalization, and interpretability—key factors for ATR success. Implemented in PyTorch Lightning, our approach propels the frontier of ATR model optimization while also effectively addressing the intricacies of real-world ATR requirements.
13039-3
Author(s): Rohan Putatunda, Kelvin U. Echenim, Univ. of Maryland, Baltimore County (United States)
22 April 2024 • 9:10 AM - 9:30 AM EDT | National Harbor 5
This study introduces a depth-aware approach for detecting small-scale camouflaged objects, leveraging the Swin Transformer and Ghost Convolution Layer. We employ multimodal depth maps to enhance spatial understanding, which is crucial for identifying camouflaged items. The Swin Transformer captures extensive contextual data, while the Ghost Convolution Layer boosts computational efficiency. We validate our method on unique quasi-synthetic and comparative synthetic datasets created for this study. An ablation study and GRAD-CAM visualization further substantiate the model's effectiveness. This research offers a novel framework for improving object detection in challenging camouflaged environments.
13039-4
Author(s): Scott G. Hodes, The Pennsylvania State Univ. (United States), Applied Research Lab. (United States); Kory J. Blose, The Applied Research Lab at The Pennsylvania State University (United States), The Pennsylvania State University Department of Agricultural and Biological Engineering (United States); Timothy J. Kane, The Pennsylvania State University School of Electrical Engineering and Computer Science (United States), The Applied Research Lab at The Pennsylvania State University (United States)
22 April 2024 • 9:30 AM - 9:50 AM EDT | National Harbor 5
This work involves performing black box adversarial attacks using light as a medium against image classifier neural networks. The method of generating these adversarial examples involves querying the target network to inform decisions on designing a pattern upon the Fourier plane. The shapes are designed to target regions of the Fourier domain effectively without being able to back-propagate loss toward said plane.
13039-5
Author(s): Khaled Obaideen, Univ. of Sharjah (United Arab Emirates); Yousuf Faroukh, Sharjah Academy for Astronomy, Space Sciences & Technology (United Arab Emirates); Mohammad AlShabi, Univ. of Sharjah (United Arab Emirates)
22 April 2024 • 9:50 AM - 10:10 AM EDT | National Harbor 5
In recent decades, there have been notable advancements in Automatic Target Recognition (ATR) systems. One technique that has played a crucial role in improving the accuracy and efficiency of these systems is dictionary-learning. This paper provides a thorough examination, documenting the evolutionary progression of dictionary-learning methodologies in the field of Automatic Target Recognition (ATR). Commencing with initial approaches such as K-SVD and MOD, we examine their fundamental influence and subsequent evolution towards more adaptable methodologies, such as online and convolutional dictionary learning. The focus is on comprehending the enhancements in target recognition achieved by dictionary-learning methods, particularly in demanding scenarios characterized by factors such as noise, occlusions, and diverse target orientations. In addition, we investigate the recent incorporation of deep learning principles into conventional dictionary-based frameworks, revealing a hybrid paradigm that holds the potential to significantly transform automatic target recognition (ATR) capabilities.
Break
Coffee Break 10:10 AM - 10:30 AM
Session 2: Methods and Applications I
22 April 2024 • 10:30 AM - 12:00 PM EDT | Potomac 2
Session Chair: Panagiotis Markopoulos, The Univ. of Texas at San Antonio (United States)
13036-1
Author(s): Andreas E. Savakis, Rochester Institute of Technology (United States)
22 April 2024 • 10:30 AM - 11:00 AM EDT | Potomac 2
The adaptation of deep network models to new environments, with significantly different distributions compared to the training data, has both theoretical interest and practical implications. Domain Adaptation (DA) aims to overcome the dataset bias problem by closing the gap in classification performance between the source domain used for training and the target domain where testing takes place. In this talk, we present a new framework for Continual Domain Adaptation, where the target domain samples are acquired in small batches over time and adaptation takes place continually in changing environments. Our Continual Domain Adaptation approach utilizes concepts from both DA and continual learning and demonstrates state-of-the-art results on various datasets under challenging conditions.
13036-2
Author(s): Adrian Stern, Shadi Kandalaft, Oren Bargan Lowte, Vladislav Kravtes, Ben-Gurion Univ. of the Negev (Israel)
22 April 2024 • 11:00 AM - 11:20 AM EDT | Potomac 2
In the past two decades, numerous Compressive Imaging (CI) techniques have been developed to reduce acquired data. Recently, these CI methods have incorporated Deep Learning (DL) tools to optimize both the reconstruction algorithm and the sensing model. However, most of these DL-based CI methods have been developed by simulating the sensing process without considering the limitations associated with the optical realization of the optimized sensing model. Since the merit of CI stands with the physical realization of the sensing process, we revisit the leading DL-based CI methods. We present a preliminary comparison of their performances while focusing on practical aspects such as the realizability of the sensing matrix and robustness to the measurement noise.
13036-3
Author(s): Arthur C. Depoian, Colleen P. Bailey, Parthasarathy Guturu, Univ. of North Texas (United States)
22 April 2024 • 11:20 AM - 11:40 AM EDT | Potomac 2
Multispectral imagery is instrumental across diverse domains, including remote sensing, environmental monitoring, agriculture, and healthcare, as it offers a treasure trove of data over various spectral bands, enabling profound insights into our environment. However, with the ever-expanding volume of multispectral data, the need for efficient compression methods is becoming increasingly critical. This paper explores the application of a novel neural network architecture for compressing 13-channel EuroSAT satellite imagery. The proposed method leverages the big data paradigm by training on a large and diverse dataset. The models offer a choice between high-fidelity compression for preserving image quality and a more aggressive compression ratio (3-pass) for storage efficiency, both maintaining good reconstruction quality. The results demonstrate that the proposed method can achieve significant compression ratios while maintaining good visual quality, paving the way for efficient storage, transmission, and analysis of Earth observation data in big data environments.
13036-4
Author(s): Ian Tomeo, Rochester Institute of Technology (United States); Panagiotis Markopoulos, The Univ. of Texas at San Antonio (United States); Andreas E. Savakis, Rochester Institute of Technology (United States)
22 April 2024 • 11:40 AM - 12:00 PM EDT | Potomac 2
Principal Component Analysis (PCA) is commonly used for dimensionality reduction, feature extraction, data denoising, and visualization. L1-PCA is known to confer robustness, i.e., resistance to outliers in the data. In this paper, a new method for L1-PCA using quantum annealing hardware is explored. Results for a fault detection scenario showcase performance gains over other PCA variants and demonstrate the speedup of L1-PCA on quantum annealing hardware. Additionally, L1-PCA achieves better fault detection rates than L2-PCA in the presence of outliers.
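The robustness contrast described in this abstract can be illustrated with a small numpy sketch (this is not the authors' quantum-annealing method, and the data are synthetic). For the rank-1 case, the exact L1 principal component can be obtained by exhaustive search over antipodal sign vectors, using the known identity q = Xb*/||Xb*||₂ with b* = argmax over b ∈ {±1}ᴺ of ||Xb||₂; the L2 component is the top eigenvector of XXᵀ:

```python
import itertools
import numpy as np

def l2_pc(X):
    # L2 principal component: top eigenvector of X X^T
    _, V = np.linalg.eigh(X @ X.T)
    return V[:, -1]

def l1_pc(X):
    # Exact rank-1 L1-PCA by exhaustive search over sign vectors:
    # b* = argmax_{b in {+-1}^N} ||X b||_2, then q = X b* / ||X b*||_2.
    # Feasible only for small N; annealers target exactly this kind of search.
    _, N = X.shape
    best_norm, best_v = -1.0, None
    for signs in itertools.product((-1.0, 1.0), repeat=N):
        v = X @ np.array(signs)
        n = np.linalg.norm(v)
        if n > best_norm:
            best_norm, best_v = n, v
    return best_v / best_norm

# Synthetic data: inliers along the x-axis plus one strong outlier (last column)
X = np.array([[3.0, 2.0, 1.0, -1.0, -2.0, -3.0, 0.5, 0.0],
              [0.0, 0.0, 0.0,  0.0,  0.0,  0.0, 0.2, 8.0]])
q_l1, q_l2 = l1_pc(X), l2_pc(X)
```

On this toy set the L2 component is dragged almost entirely onto the outlier's axis, while the L1 component stays close to the inlier direction.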
Break
Lunch Break 12:00 PM - 1:30 PM
Session 3: Methods and Applications II
22 April 2024 • 1:30 PM - 3:00 PM EDT | Potomac 2
Session Chair: Panagiotis Markopoulos, The Univ. of Texas at San Antonio (United States)
13036-5
Author(s): Gonzalo R. Arce, Andres Ramirez, Nestor Porras, Univ. of Delaware (United States)
22 April 2024 • 1:30 PM - 2:00 PM EDT | Potomac 2
Lidar remote sensing systems are utilized across different platforms such as satellites, airplanes, and drones. These platforms play a crucial role in determining the sampling characteristics of the imaging system they carry. For instance, low-altitude lidars offer high photon count and spatial resolution but are limited to small, localized areas. In contrast, satellite lidars cover larger areas globally but suffer from lower photon counts and sparse sampling along swath line trajectories. This paper addresses the limitations of satellite imaging systems using a novel class of satellite remote sensing lidars coined Compressive Satellite Lidars (CS-Lidars). CS-Lidars leverage compressive sensing and machine learning techniques to capture Earth's features from hundreds of kilometers above its surface. By doing so, they reconstruct 3D imagery with high resolution and coverage, akin to data collected from airborne platforms flying hundreds of meters above ground level. The paper also compares different machine learning methods used to reconstruct compressive lidar measurements, aiming for high-resolution, dense coverage, and broad field-of-view per swath pass.
13036-7
Author(s): Kristen Hallas, Md Shahriar Forhad, Tamer Oraby, Benjamin Peters, Jianzhi Li, The Univ. of Texas Rio Grande Valley (United States)
22 April 2024 • 2:00 PM - 2:20 PM EDT | Potomac 2
Lithium-ion batteries (LIBs) play a big part in the vision of a net-zero emission economy, yet it is commonly reported that only a small percentage of LIBs are recycled worldwide. An outstanding barrier to making LIB recycling economical throughout the supply chain is the uncertainty surrounding remaining useful life (RUL): how do operating conditions impact the initial useful life of a battery? We applied the sparse identification of nonlinear dynamics (SINDy) method to understand the life-cycle dynamics of LIBs with respect to sensor data observed for current, voltage, internal resistance, and temperature. A dataset of 124 commercial lithium iron phosphate/graphite (LFP) batteries was charged and cycled to failure under 72 unique policies. Charging policies were standardized, reduced to PC scores, and clustered by a k-means algorithm. Sensor data from the first cycle was averaged within clusters, characterizing a "good as new" state. The SINDy method was applied to discover the dynamics of this state and compared amongst clusters. This work contributes to the effort of defining a model that can predict the RUL of LIBs during degradation.
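The core of SINDy is a sparse regression, typically sequentially thresholded least squares, that selects a few terms from a candidate library to model the observed dynamics. A minimal sketch on a toy first-order decay (synthetic, standing in for a battery-fade state variable; not the authors' pipeline, for which the pysindy package offers a full implementation):

```python
import numpy as np

# Simulate a toy decay dynamic x' = -0.5 x (illustrative data only)
dt = 0.01
t = np.arange(0.0, 10.0, dt)
x = 2.0 * np.exp(-0.5 * t)
dx = np.gradient(x, dt)                 # numerical derivative of the state

# Candidate function library Theta(x) = [1, x, x^2]
Theta = np.column_stack([np.ones_like(x), x, x ** 2])

def stlsq(Theta, dx, threshold=0.1, iters=10):
    # Sequentially thresholded least squares: fit, zero out small
    # coefficients, refit on the surviving library terms.
    xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]
    return xi

xi = stlsq(Theta, dx)   # sparse model: only the x term should survive
```

The recovered coefficient vector should be close to [0, -0.5, 0], i.e., SINDy re-discovers the governing term from data alone.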
13036-8
Author(s): Shageenth Sandrakumar, Simon Khan, Air Force Research Lab. (United States)
22 April 2024 • 2:20 PM - 2:40 PM EDT | Potomac 2
Currently, the Android Team Awareness Kit (ATAK) has a plugin that analyzes sensor data from the ATAK device's onboard accelerometer and gyroscope to characterize the movement a person makes when they walk. The plugin uses machine learning (ML) algorithms to create a model of that person's gait, and then sends pertinent data through the associated human gait model to authenticate a user. The novelty of our effort lies in enhancing this human gait authentication with features extracted from the spectral information of the smartphone's accelerometer and gyroscope signals, using a public human activity recognition dataset (WISDM) as a proof of concept, marking a previously unexplored approach. By leveraging spectral data, we perform feature-level fusion using ML algorithms, and performance is promising for authentication across 51 users. The SVM-RBF classifiers achieved a mean Equal Error Rate (EER) of 2% and mean accuracy (ACC) of 97.3%, the GBM classifiers achieved a mean EER of 0.4% and ACC of 99.1%, and the CNN classifiers achieved a mean EER of 10% and ACC of 90.4%.
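For reference, the Equal Error Rate quoted in results like these is the operating point where the false-accept and false-reject rates coincide. A minimal sketch of computing it from verifier match scores (the scores below are made up for illustration, not from the WISDM experiment):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    # Sweep a decision threshold over all observed scores; the EER is the
    # point where false-accept rate (FAR) and false-reject rate (FRR) meet.
    genuine = np.asarray(genuine, float)
    impostor = np.asarray(impostor, float)
    best_gap, best_eer = np.inf, 1.0
    for th in np.unique(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= th)   # impostor scores wrongly accepted
        frr = np.mean(genuine < th)     # genuine scores wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

# Hypothetical similarity scores from a gait verifier
eer = equal_error_rate(genuine=[0.9, 0.7, 0.4], impostor=[0.5, 0.2, 0.1])
```

With one genuine and one impostor score on the wrong side of the best threshold, the EER here is 1/3; perfectly separated score sets yield an EER of 0.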
13036-28
Author(s): Evans Nyanney, The Univ. of Texas Rio Grande Valley (United States)
22 April 2024 • 2:40 PM - 3:00 PM EDT | Potomac 2
In our study, we examine a uniform distribution within a regular hexagon, using its vertices as a conditional set. We explore conditional optimal sets of n-points and their corresponding quantization errors for n≥6, along with the quantization dimension and coefficient in scenarios without constraints. Additionally, we assess optimal point sets and quantization errors under constraints—such as the hexagon's circumcircle, incircle, and diagonals—focusing on the same set of conditions. This approach allows us to understand how geometric constraints influence quantization in uniform distributions.
Break
Coffee Break 3:00 PM - 3:30 PM
Session 4: Methods and Applications III
22 April 2024 • 3:30 PM - 4:30 PM EDT | Potomac 2
Session Chair: Bing Ouyang, Harbor Branch Oceanographic Institute (United States)
13036-9
Author(s): Kevin Hwang, Sai Challagundla, Glenelg High School (United States), Univ. of Maryland, Baltimore County (United States); Maryam Alomair, Univ. of Maryland, Baltimore County (United States); Doug Janssen, St. Paul's School for Boys (United States); Kendall Morton, Glenelg High School (United States); Lujie Chen, Fow-Sen Choa, Univ. of Maryland, Baltimore County (United States)
22 April 2024 • 3:30 PM - 3:50 PM EDT | Potomac 2
Enhancing human learning speed with the aid of available AI tools becomes increasingly crucial as the volume of data for knowledge extraction, inference, and learning continues to grow. This study explores an AI-driven approach to creating and evaluating Multiple Choice Questions (MCQs) that can activate multilevel knowledge trees in human brains. The methodology involves generating Bloom's Taxonomy-aligned questions through zero-shot prompting with GPT-3.5; validating question alignment with Bloom's Taxonomy using RoBERTa, a transformer-based language model that employs self-attention to produce context-aware representations of individual words within a sentence; evaluating question quality using Item Writing Flaws (IWF), issues that can arise in the creation of test items or questions; and validating questions with subject matter experts.
13036-10
Author(s): Alisa Kunapinun, William Fairman, Paul S. Wills, Dennis Hanisak, Shagundeep Singh, Bing Ouyang, Harbor Branch Oceanographic Institute (United States)
22 April 2024 • 3:50 PM - 4:10 PM EDT | Potomac 2
Integrated Multi-Trophic Aquaculture (IMTA) co-farms species for efficiency. Harbor Branch Oceanographic Institute at Florida Atlantic University is pioneering an AI-driven IoT framework for IMTA at commercial scale. Their Pseudorandom Encoded Light for Evaluating Biomass (PEEB) sensor tracks sea lettuce growth and aligns with traditional measurements. However, replacing physical harvesting with PEEB faces challenges from environmental factors and inconsistencies, such as algae on the sensor. This paper delves into IMTA data collection, emphasizing iterative design and preprocessing. The aim is to leverage machine learning for predictions, ensuring a balance between sensor design and data integrity in aquaculture.
13036-11
Author(s): Prasad S. Thenkabail, Pardhasaradhi Teluguntla, Adam Oliphant, Itiya Aneece, Daniel Foley, U.S. Geological Survey (United States)
22 April 2024 • 4:10 PM - 4:30 PM EDT | Potomac 2
Global food and water security are threatened by events such as changing climate, ballooning populations, stress on land and water, demographic changes, pandemics, and wars. The need to grow sufficient food and nutrition to feed the populations of the twenty-first century and beyond requires us to carefully understand, model, map, and monitor cropland dynamics over time and space. To achieve this, we have proposed and established the Global Food Security Support Analysis Data (GFSAD) project to develop multiple high-resolution agricultural cropland products encompassing the entire world. In this presentation, we will demonstrate production of a Landsat-derived 30 m global cropland extent product, as well as an irrigated-versus-rainfed cropland product, using petabyte-scale big-data analytics and multiple machine learning algorithms coded and computed on the Google Earth Engine (GEE) cloud. Accuracies, errors, and uncertainties of the products will also be discussed.
Symposium Plenary
22 April 2024 • 5:00 PM - 6:30 PM EDT | Potomac A
Session Chairs: Tien Pham, The MITRE Corp. (United States), Douglas R. Droege, L3Harris Technologies, Inc. (United States)

View Full Details: spie.org/dcs/symposium-plenary

Chair welcome and introduction
22 April 2024 • 5:00 PM - 5:05 PM EDT

DoD's microelectronics for the defense and commercial sensing ecosystem (Plenary Presentation)
Presenter(s): Dev Shenoy, Principal Director for Microelectronics, Office of the Under Secretary of Defense for Research and Engineering (United States)
22 April 2024 • 5:05 PM - 5:45 PM EDT

NATO DIANA: a case study for reimagining defence innovation (Plenary Presentation)
Presenter(s): Deeph Chana, Managing Director, NATO Defence Innovation Accelerator for the North Atlantic (DIANA) (United Kingdom)
22 April 2024 • 5:50 PM - 6:30 PM EDT

Symposium Panel on Microelectronics Commercial Crossover
23 April 2024 • 8:30 AM - 10:00 AM EDT | Potomac A

View Full Details: spie.org/dcs/symposium-panel

The CHIPS Act Microelectronics Commons network is accelerating the pace of microelectronics technology development in the U.S. This panel discussion will explore opportunities for crossover from commercial technology into DoD systems and applications, discussing what emerging commercial microelectronics technologies could be most impactful on photonics and sensors and how the DoD might best leverage commercial innovations in microelectronics.

Moderator:
John Pellegrino, Electro-Optical Systems Lab., Georgia Tech Research Institute (retired) (United States)

Panelists:
Shamik Das, The MITRE Corporation (United States)
Erin Gawron-Hyla, OUSD (R&E) (United States)
Carl McCants, Defense Advanced Research Projects Agency (United States)
Kyle Squires, Ira A. Fulton Schools of Engineering, Arizona State Univ. (United States)
Anil Rao, Intel Corporation (United States)

Break
Coffee Break 10:00 AM - 10:30 AM
Session 5: Methods and Applications IV
23 April 2024 • 10:30 AM - 12:00 PM EDT | Potomac 2
Session Chair: Panagiotis Markopoulos, The Univ. of Texas at San Antonio (United States)
13036-13
CANCELED: The virtues of overfitting: computation on AI fabric (Invited Paper)
Author(s): Dimitris A. Pados, Florida Atlantic Univ. (United States)
23 April 2024 • 10:30 AM - 11:00 AM EDT | Potomac 2
To be determined.
13036-14
Author(s): Khandaker Mamun Ahmed, M. Hadi Amini, Naphtali Rishe, Florida International Univ. (United States)
23 April 2024 • 11:00 AM - 11:20 AM EDT | Potomac 2
Anomaly detection in surveillance videos is challenging because only the video label annotations are available for snippet-level predictions. In this work, we propose a transfer learning aided statistical approach to detect anomaly events within a video. We first use a pre-trained model, such as VGG-16, to extract features from the video data and perform the statistical analysis: segmentation, summation, and normalization to localize the anomaly snippet.
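The statistical localization step described here (summation and normalization over per-snippet features) can be sketched as a z-score test over feature magnitudes. This is an illustrative reconstruction with synthetic features, not the authors' code; in the paper the features would come from a pre-trained backbone such as VGG-16:

```python
import numpy as np

def localize_anomalies(features, k=3.0):
    # features: (n_snippets, d) array of per-snippet embeddings
    mag = np.linalg.norm(features, axis=1)        # "summation" of each snippet
    z = (mag - mag.mean()) / (mag.std() + 1e-8)   # normalization to z-scores
    return np.flatnonzero(z > k)                  # snippets flagged as anomalous

# 100 ordinary snippets plus one with an unusually energetic embedding
feats = np.ones((100, 16))
feats[42] *= 10.0
flagged = localize_anomalies(feats)
```

The single high-energy snippet (index 42) stands several standard deviations above the rest and is the only one flagged.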
13036-15
CANCELED: Reuse of words in text supervision
Author(s): Aparnaa Senthilnathan, Rochester Institute of Technology (United States); Erin E. Tripp, Nathan Inkawhich, Air Force Research Lab. (United States)
23 April 2024 • 11:20 AM - 11:40 AM EDT | Potomac 2
Foundation models are now available as off-the-shelf tools for a variety of downstream tasks. These large machine learning models are trained on vast amounts of data in a task-agnostic way and can be used with little to no fine-tuning. In this talk, we consider Contrastive Language-Image Pre-training (CLIP) models, which use images and captions as natural language supervision to learn text and image feature embeddings. These embeddings can then be used for zero-shot prediction by measuring the similarity between new image and text features. We study a potential vulnerability introduced by ambiguities in the English language – words which have more than one distinct semantic meaning. For example, a boxer may refer to a human athlete or a breed of dog. Experiments measure the effects of this ambiguity on the similarity between embeddings and performance in downstream tasks. Finally, we propose some mitigation strategies for downstream users.
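The "boxer" ambiguity can be illustrated with a toy cosine-similarity computation. The 3-d vectors below are hand-made stand-ins for CLIP embeddings (CLIP's real embeddings are learned and high-dimensional); the point is only that a polysemous word's text embedding can sit close to images of both senses:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hand-crafted stand-in "embeddings" (axis 0 ~ person-ness,
# axis 1 ~ dog-ness, axis 2 ~ sport-ness); illustrative only.
emb = {
    "photo of an athlete": np.array([1.0, 0.0, 0.9]),
    "photo of a dog":      np.array([0.0, 1.0, 0.1]),
    # "boxer" is ambiguous: its text embedding mixes both senses
    "boxer":               np.array([0.6, 0.6, 0.5]),
}
sims = {k: cosine(emb["boxer"], v) for k, v in emb.items() if k != "boxer"}
```

Both similarities come out high and close together, so zero-shot prediction with the bare word "boxer" cannot reliably prefer one sense over the other.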
13036-16
Author(s): Arthur C. Depoian, Colleen P. Bailey, Parthasarathy Guturu, Univ. of North Texas (United States)
23 April 2024 • 11:40 AM - 12:00 PM EDT | Potomac 2
Edge computing in remote sensing requires on-device learning due to resource constraints, and traditional approaches struggle with limited resources. This paper proposes a continual learning method using a feedback-based data sampling technique to improve performance. We explore the trade-off between accuracy and resource usage for real-world deployment on edge devices. This approach can enable better decision making, improved efficiency, and greater autonomy in remote sensing tasks.
Break
Lunch/Exhibition Break 12:00 PM - 1:30 PM
Session 6: Methods and Applications V
23 April 2024 • 1:30 PM - 3:10 PM EDT | Potomac 2
Session Chair: George Sklivanitis, Florida Atlantic Univ. (United States)
13036-17
Author(s): Kyle Juretus, Villanova Univ. (United States); Syed Ali Hamza, Widener Univ. (United States); Moeness Amin, Villanova Univ. (United States)
23 April 2024 • 1:30 PM - 1:50 PM EDT | Potomac 2
The ability of sparse arrays to significantly reduce the hardware cost and complexity over a uniform linear array (ULA) is advantageous for a variety of applications with large array sizes. While the hardware complexity is reduced, the optimum selection of active antennas for the sparse array involves iterative solutions of an optimization problem. In a dynamic environment, such a solution is deemed impractical. In this regard, replacing the traditional optimization algorithms with automatic data-driven learning techniques offers a means towards real time configuration design of sparse arrays and, as such, provides prompt response to sudden changes in the operating environment. This paper examines optimum sparse array design using deep learning. We consider the case of two sources which need to be separately isolated for corresponding signal recovery and classification on datasets varying from few to unlimited snapshots and incorporating various SNR values.
13036-18
Author(s): Henry Breaker, Syed Ali Hamza, Widener Univ. (United States)
23 April 2024 • 1:50 PM - 2:10 PM EDT | Potomac 2
Radar-based sensing emerges as a promising alternative to cameras and wearable devices for indoor human activity recognition. Unlike wearables, radar sensors offer non-contact and unobtrusive monitoring, while being insensitive to lighting conditions and preserving privacy as compared to cameras. This paper addresses continuous and sequential classification of daily life activities, in contrast to classifying distinct motions in isolation. Upon acquiring raw radar data containing sequences of motions, an event detection algorithm, the Short-Time-Average/Long-Time-Average (STA/LTA) algorithm, is used to detect individual motion segments. By recognizing breaks between transitions from one motion type to another, the STA/LTA detector isolates individual activity segments. To ensure consistent input shapes for activities of varying durations, image resizing and cropping techniques are employed. Furthermore, data augmentation techniques are applied to modify micro-Doppler signatures, enhancing the classification system's robustness and providing additional data for training.
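A minimal numpy sketch of an STA/LTA trigger on a synthetic energy trace (window lengths, the threshold, and the test signal are assumptions for illustration, not the paper's settings):

```python
import numpy as np

def sta_lta(x, n_sta, n_lta):
    # Ratio of short-term to long-term average of signal energy,
    # computed with cumulative sums; both windows end at the same sample.
    e = x ** 2
    c = np.concatenate(([0.0], np.cumsum(e)))
    sta = (c[n_sta:] - c[:-n_sta]) / n_sta
    lta = (c[n_lta:] - c[:-n_lta]) / n_lta
    return sta[len(sta) - len(lta):] / (lta + 1e-12)

# Quiet background with an "activity" burst at samples 400-499
x = np.zeros(1000)
x[400:500] = 1.0
n_sta, n_lta = 20, 200
ratio = sta_lta(x, n_sta, n_lta)
onset = int(np.argmax(ratio > 2.0))     # first index where the detector fires
# ratio[j] corresponds to the window ending at sample j + n_lta - 1,
# so the detected onset sample is onset + n_lta - 1.
```

Because the short window reacts to the burst much faster than the long window, the ratio jumps as soon as the first burst sample enters the short window, which localizes the segment boundary.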
13036-19
Author(s): Ajaya Dahal, Sabyasachi Biswas, Mississippi State Univ. (United States); Sevgi Z. Gurbuz, The Univ. of Alabama (United States); Ali C. Gurbuz, Mississippi State Univ. (United States)
23 April 2024 • 2:10 PM - 2:30 PM EDT | Potomac 2
Wi-Fi-based Human Activity Recognition (HAR) is an emerging research domain that leverages Wi-Fi signals to detect and categorize human activities. It has gained considerable attention due to its inherent advantages, including convenience, non-intrusiveness, and cost-effectiveness, as it utilizes the ubiquitous Wi-Fi infrastructure that is already in place. By analyzing Wi-Fi signals, this methodology enables the recognition of various human activities, opening the door to a wide range of applications, including health and wellness monitoring, smart home gesture control, commercial and industrial applications, etc.
13036-20
Author(s): Mason Calderbank, Michael Jensen, Daniel Creighton, Daniel Smith, Syed Ali Hamza, Widener Univ. (United States)
23 April 2024 • 2:30 PM - 2:50 PM EDT | Potomac 2
Traditional clinical vital sign measurement methods are often contact-based, causing discomfort for patients and practitioners and rendering them inconvenient for continuous monitoring. Additionally, close proximity during measurement poses the risk of disease transmission and allows only one patient to be monitored at a time. To address these challenges, contactless measurement methods are being explored, with radar technology emerging as a promising alternative for vital sign monitoring. The proposed design utilizes a MIMO radar system to remotely detect subtle chest movements caused by breathing and heartbeat. The primary challenge lies in separating weaker heartbeat movements from stronger breathing motions, in the presence of body movements which mask the chest movements due to vital signs. We employ filtering techniques and chirp averaging using slow-time oversampling to enable the precise estimation of breathing and heartbeat patterns. We collect radar vital sign data from various individuals with different resting heart rates in a controlled lab environment. The system's performance is evaluated by comparing it with ground truth information obtained from a pulse oximeter.
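The band-separation idea can be illustrated with a toy FFT-mask bandpass on a synthetic chest-displacement signal. The sample rate, band edges, and signal model below are assumptions for illustration, not the authors' radar processing chain:

```python
import numpy as np

fs = 50.0                       # assumed chest-displacement sample rate (Hz)
t = np.arange(0.0, 60.0, 1 / fs)
# Synthetic chest displacement: strong breathing plus a much weaker heartbeat
breath_hz, heart_hz = 0.25, 1.2
x = 5.0 * np.sin(2 * np.pi * breath_hz * t) + 0.3 * np.sin(2 * np.pi * heart_hz * t)

def bandpass_fft(x, fs, lo, hi):
    # Zero FFT bins outside [lo, hi] Hz: a crude but illustrative bandpass
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, len(x))

breathing = bandpass_fft(x, fs, 0.1, 0.6)   # typical respiration band
heartbeat = bandpass_fft(x, fs, 0.8, 3.0)   # typical cardiac band

def dominant_hz(sig, fs):
    # Frequency of the strongest spectral peak
    f = np.fft.rfftfreq(len(sig), 1 / fs)
    return f[np.argmax(np.abs(np.fft.rfft(sig)))]
```

After filtering, the dominant peak of each branch recovers the respective rate (0.25 Hz breathing, 1.2 Hz heartbeat) even though the heartbeat component is over an order of magnitude weaker in the raw signal.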
13036-21
Author(s): Mahmoud Seifallahi, Brennen Farrell, Florida Atlantic Univ. (United States); James E. Galvin, Comprehensive Ctr. for Brain Health, Univ. of Miami (United States); Behnaz Ghoraani, Florida Atlantic Univ. (United States)
23 April 2024 • 2:50 PM - 3:10 PM EDT | Potomac 2
Leveraging the advancements in human pose estimation (HPE) with convolutional neural networks, we address the pressing need for efficient Alzheimer's disease (AD) detection. Bypassing conventional diagnostic procedures, our approach employs regular cameras, including those in cell phones, combined with pose estimation and machine learning for early AD diagnosis. Using OpenPose, we analyzed the walking patterns of 107 subjects, extracting 48 distinct gait markers from 25 body joint positions. Notably, 39 markers exhibited significant variations between healthy individuals and AD patients. Our model, utilizing a Support Vector Machine, achieved a diagnostic accuracy of 90.01% and an F-score of 86.20%. Our findings underscore the potential of accessible camera technology and computational techniques for practical, non-invasive AD detection in everyday settings.
Conference Chair
The Univ. of Texas at San Antonio (United States)
Conference Co-Chair
Florida Atlantic Univ. (United States), Harbor Branch Oceanographic Institute (United States)
Conference Co-Chair
Florida Atlantic Univ. (United States)
Program Committee
Temple Univ. (United States)
Program Committee
Univ. of Delaware (United States)
Program Committee
Univ. of North Texas (United States)
Program Committee
Zois Boukouvalas
American Univ. (United States)
Program Committee
Mississippi State Univ. (United States)
Program Committee
Santa Clara Univ. (United States)
Program Committee
Florida Atlantic Univ. (United States)
Program Committee
Univ. of California, Riverside (United States)
Program Committee
Air Force Research Lab. (United States)
Program Committee
The Univ. of Texas Rio Grande Valley (United States)
Program Committee
Univ. of Toronto (Canada)
Program Committee
Ben-Gurion Univ. of the Negev (Israel)
Additional Information

View call for papers

 

What you will need to submit:

  • Presentation title
  • Author(s) information
  • Speaker biography (1000-character max including spaces)
  • Abstract for technical review (200-300 words; text only)
  • Summary of abstract for display in the program (50-150 words; text only)
  • Keywords used in search for your paper (optional)
  • Check the individual conference call for papers for additional requirements (e.g., extended abstract PDF upload for review or instructions for award competitions)
Note: Only original material should be submitted. Commercial papers, papers with no new research/development content, and papers with proprietary restrictions will not be accepted for presentation.