This conference provides a technical forum for members of both industry and academia to present their latest applications of machine learning. Machine learning has been applied to a broad range of image and vision systems, from medical imaging to consumer cameras. Learned tasks such as image recognition, noise reduction, and natural language processing are currently being applied in many common devices in consumer and industrial settings. Training datasets and training methods are critical for the success of a machine learning system. Studies demonstrating the deployment and benchmarking of machine learning algorithms on specialized computer hardware are highly valuable to many groups in this field. Sensor hardware design or selection as it pertains to machine learning tasks is also of interest; for example, an analysis of different camera designs and how each affects the performance of an image recognition task such as object detection. Analyses of full systems that include the sensor technology, data processing hardware, and results are welcome, as each area is critical for the successful application of machine learning.

Papers or tutorials reviewing the topics covered by this section are welcome. All abstracts will be reviewed by the program committee for originality and merit. Topics of interest include, but are not limited to, the following:

  • Algorithms
  • Consumer Application
  • Industrial Application
  • Security
  • Medicine
  • Big Data
  • Hardware
  • Big Experimental Facilities
Conference 12227

Applications of Machine Learning 2022

23 - 24 August 2022 | Conv. Ctr. Room 12
Sessions
  • Poster Session
  • 1: Remote Sensing
  • 2: Industry
  • 3: Optics I
  • 4: Optics II
  • Optical Engineering Plenary Session
  • 5: Physics
  • 6: Medical Imaging and Biology
Information

Timing will be finalized in early August


POST-DEADLINE ABSTRACT SUBMISSIONS DUE 5 JULY

Call for Papers Flyer
Poster Session
22 August 2022 • 5:30 PM - 7:30 PM PDT | Conv. Ctr. Exhibit Hall B1
Conference attendees are invited to view a collection of posters within the topics of Nanoscience + Engineering, Organic Photonics + Electronics, and Optical Engineering + Applications. Enjoy light refreshments, ask questions, and network with colleagues in your field. Authors of poster papers will be present to answer questions concerning their papers. Attendees are required to wear their conference registration badges to the poster session.

Poster authors, visit Poster Presentation Guidelines for set-up instructions.
12227-19
Author(s): Manoj Naick, Amitojdeep Singh, Mohammed Abdul Rasheed, Hoda Kheradfallah, Univ. of Waterloo (Canada); Jyothi Balaji, Medical Research Foundation (India); Vasudevan Lakshminarayanan, Univ. of Waterloo (Canada)
22 August 2022 • 5:30 PM - 7:30 PM PDT | Conv. Ctr. Exhibit Hall B1
Quantum machine learning predictive models are emerging, and in this study we developed a classifier to infer ophthalmic disease from optical coherence tomography (OCT) images. We used OCT images of the retina in vision-threatening conditions such as choroidal neovascularization (CNV) and diabetic macular edema (DME). PennyLane, an open-source software tool based on the concept of quantum differentiable programming, was used to train the quantum circuits. The training was tested on an IBM 5-qubit system ("ibmq_belem") and a 32-qubit simulator ("ibmq_qasm_simulator"). The results are promising.
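As a rough illustration of the kind of variational quantum classifier described above, the following PennyLane sketch trains entangling layers on angle-encoded image features; the embedding, circuit depth, optimizer, and feature preparation are assumptions for illustration, not the authors' configuration.

    # Minimal sketch of a variational quantum classifier in PennyLane
    # (illustrative only; feature encoding and circuit depth are assumptions).
    import pennylane as qml
    from pennylane import numpy as np

    n_qubits = 4
    dev = qml.device("default.qubit", wires=n_qubits)  # swap for an IBMQ backend if available

    @qml.qnode(dev)
    def circuit(weights, features):
        # Encode (pre-reduced) OCT image features as rotation angles.
        qml.AngleEmbedding(features, wires=range(n_qubits))
        # Trainable entangling layers form the variational classifier.
        qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        return qml.expval(qml.PauliZ(0))

    def cost(weights, X, y):
        # Square loss between circuit outputs and +/-1 labels (e.g., CNV vs. DME).
        preds = np.array([circuit(weights, x) for x in X])
        return np.mean((preds - y) ** 2)

    shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
    weights = np.random.random(size=shape, requires_grad=True)
    opt = qml.GradientDescentOptimizer(stepsize=0.1)
    # weights = opt.step(lambda w: cost(w, X_train, y_train), weights)  # one training step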
12227-27
Author(s): Leif Ole Harders, Vitali Czymmek, Stephan Hussmann, Fachhochschule Westküste (Germany); Andreas Wrede, Thorsten Ufer, Landwirtschaftskammer Schleswig-Holstein (Germany)
22 August 2022 • 5:30 PM - 7:30 PM PDT | Conv. Ctr. Exhibit Hall B1
Vision-based systems have great potential to support agricultural processes such as field monitoring and weed management. They form the foundation for automated weeding solutions that can reduce the amount of environmentally damaging chemicals. Unlike conventional agricultural robots, UAVs can also operate over wet soils. To work in a UAV-based system, the vision-based detection system has to meet specific requirements with regard to power consumption, overall detection performance, real-time capability, weight, and size. This paper evaluates early research results of a UAV-based deep learning approach for weed detection in horticulture, more specifically arboriculture.
12227-28
Author(s): Julia Diaz-Escobar, Instituto Tecnológico de Mexicali (Mexico); Vitaly Kober, Ctr. de Investigación Científica y de Educación Superior de Ensenada B.C. (Mexico)
22 August 2022 • 5:30 PM - 7:30 PM PDT | Conv. Ctr. Exhibit Hall B1
For several years, Computer-Aided Detection (CAD) systems have been used by radiologists as a second interpreter for breast cancer detection in digital mammography. However, for every true-positive cancer detected by a CAD system, there are many more false predictions that have to be reviewed by the expert to avoid an unnecessary biopsy. Nowadays, machine learning models are being used to analyze digital mammography; however, most of the proposed models are trained on a single database and do not achieve high reliability. In this work, a complete CAD system is proposed. A pre-processing stage is designed to remove noise and extract features using local image phase information. Then, a machine learning approach is utilized for digital mammography classification. Experimental results are presented using various digital mammography datasets and evaluated under different performance metrics.
12227-29
Author(s): Dillon Marquard, Kyle Wright, Roummel F. Marcia, Univ. of California, Merced (United States)
22 August 2022 • 5:30 PM - 7:30 PM PDT | Conv. Ctr. Exhibit Hall B1
Image classification is an important problem in machine learning and is widely used in many real-world applications, such as medicine, ecology, astronomy, and defense. Convolutional neural networks (CNNs) are machine learning techniques designed for inputs whose features are spatially correlated and have been demonstrated to be highly effective for many image classification problems. Data loss in the form of missing pixels poses a particular challenge for image classification. In this work, we investigate techniques for improving the performance of CNN models for image classification with missing data by training on a variety of data alterations that mimic data loss.
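A minimal sketch of one way such data-loss alterations could be mimicked during training: a hypothetical augmentation that zeroes out a random fraction of pixels before each step. The function name, fill value, and dropout fraction are illustrative assumptions, not the authors' procedure.

    # Hypothetical sketch of a training-time augmentation mimicking missing pixels
    # by zeroing out a random fraction of an image (assumed HxWxC numpy array).
    import numpy as np

    def random_pixel_dropout(image, missing_fraction=0.2, fill_value=0.0, rng=None):
        rng = rng or np.random.default_rng()
        mask = rng.random(image.shape[:2]) < missing_fraction  # per-pixel Bernoulli mask
        corrupted = image.copy()
        corrupted[mask] = fill_value  # "lost" pixels are replaced by a constant
        return corrupted

    # Example: augment a batch before each CNN training step.
    # batch_aug = np.stack([random_pixel_dropout(img, missing_fraction=0.3) for img in batch])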
12227-30
Author(s): Darwin Patricio Castillo Malla, Patricia Diaz, Yuliana Jiménez, Univ. Técnica Particular de Loja (Ecuador); Vasudevan Lakshminarayanan, Univ. of Waterloo (Canada)
22 August 2022 • 5:30 PM - 7:30 PM PDT | Conv. Ctr. Exhibit Hall B1
Diagnosis of distal radius fractures is usually made by physicians through visual inspection of radiography/X-ray images. However, identification by visual inspection alone can cause confusion, resulting in an increase in the number of false positives and false negatives. It is therefore necessary to develop systems with a high degree of classification accuracy and reproducibility. Here we present a computer-assisted diagnostic (CAD) system for fracture injuries of the distal radius, based on the analysis of X-ray images of the wrist and using deep learning to automatically classify the lesions as defined by the Orthopedic Trauma Association (OTA).
12227-31
Author(s): Richard Zheng, The Mississippi School for Mathematics and Science (United States); Yufeng Zheng, The Univ. of Mississippi Medical Ctr. (United States)
22 August 2022 • 5:30 PM - 7:30 PM PDT | Conv. Ctr. Exhibit Hall B1
In the billions of faces shaped by thousands of different cultures and ethnicities, one thing remains universal: the way we express our emotions. To take the next step in human-machine interaction, a machine must be able to read our emotions. Allowing machines to recognize microexpressions gives them deeper insight into a person's true feelings at a given instant, which allows us to create more empathetic machines that take human emotion into account while making decisions; such machines will be able to detect dangerous situations and take on more roles that require recognizing and responding to emotion. Microexpressions are involuntary and transient facial expressions capable of revealing genuine emotions. We propose to train a real-time microexpression classifier using a deep learning algorithm. A composite model will be created by combining a convolutional neural network (CNN) and a recurrent neural network (e.g., long short-term memory [LSTM]). The CNN will extract spatial features, whereas the LSTM will summarize temporal features. The inputs of the model are short facial videos, while the outputs are the microexpressions gleaned from the videos. The models will be trained and tested with publicly available facial microexpression datasets to recognize different microexpressions (e.g., happiness, fear, anger, surprise, disgust, sadness). The real-time microexpression classification can be implemented via web or mobile applications and will pioneer the way to a world filled with empathetic machines.
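To make the CNN+LSTM composition concrete, here is an illustrative PyTorch sketch in which a small CNN extracts per-frame spatial features that an LSTM then summarizes over time; the layer sizes, six-class output, and input resolution are assumptions, not the proposed architecture.

    # Illustrative PyTorch sketch of a CNN+LSTM composite for short facial videos
    # (layer sizes and the number of emotion classes are assumptions).
    import torch
    import torch.nn as nn

    class CnnLstmClassifier(nn.Module):
        def __init__(self, n_classes=6, hidden=128):
            super().__init__()
            # Small CNN extracts spatial features from each frame independently.
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
            )
            # LSTM summarizes the per-frame features over time.
            self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, video):          # video: (batch, time, 3, H, W)
            b, t = video.shape[:2]
            feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
            _, (h_n, _) = self.lstm(feats)
            return self.head(h_n[-1])      # logits per microexpression class

    # logits = CnnLstmClassifier()(torch.randn(2, 16, 3, 64, 64))  # shape (2, 6)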
12227-32
Author(s): Ankita Chatterjee, Lynbrook High School (United States)
22 August 2022 • 5:30 PM - 7:30 PM PDT | Conv. Ctr. Exhibit Hall B1
While they have similar symptoms, psoriasis and eczema have vastly different treatments, and prior research has not distinguished eczema from psoriasis with high accuracy. Our research explores deep learning techniques for distinguishing psoriasis and eczema. The dataset includes images of hand and foot samples. Using ResNet152, MobileNetV2, and hyperparameter tuning, we demonstrate that both architectures deliver substantial accuracy improvements over past work: on foot samples, MobileNetV2 achieved a predictive accuracy of 94.11%, while ResNet152 achieved a highest accuracy of 100%. The results are encouraging for using deep learning on skin images to assist in diagnosing these two conditions.
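A minimal transfer-learning sketch of the kind of fine-tuning described, using torchvision's pretrained MobileNetV2 with a new two-class head; the frozen backbone, weights enum, and learning rate are assumptions rather than the authors' training setup.

    # Sketch of MobileNetV2 transfer learning for two classes (psoriasis vs. eczema);
    # the frozen layers and optimizer settings are assumptions, not the paper's setup.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
    for p in model.parameters():          # freeze the pretrained backbone
        p.requires_grad = False
    model.classifier[1] = nn.Linear(model.last_channel, 2)  # new 2-class head

    optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)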
Session 1: Remote Sensing
23 August 2022 • 8:30 AM - 9:30 AM PDT | Conv. Ctr. Room 12
12227-2
Author(s): Amanda K. Ziemann, Zigfried Hampel-Arias, Los Alamos National Lab. (United States)
23 August 2022 • 8:30 AM - 8:50 AM PDT | Conv. Ctr. Room 12
Remote sensing change detection approaches typically compare images captured by the same sensor at different times. However, as airborne and spaceborne imaging platforms have become increasingly more accessible, the variety of sensor designs has grown in tandem. The ability to combine these disparate remote sensing images for change detection would provide a far more frequent view of the earth. The recently introduced multi-sensor anomalous change detection (MSACD) framework addresses this challenge by using a data-driven machine learning framework that can effectively account for differences in sensor modality and design. Here, we perform experiments to further evaluate and improve the efficacy of MSACD for highly disparate backgrounds.
12227-3
Author(s): Jacqueline Alvarez, Arnold Kim, Roummel F. Marcia, Chrysoula Tsogka, Univ. of California, Merced (United States)
23 August 2022 • 8:50 AM - 9:10 AM PDT | Conv. Ctr. Room 12
We address the reconstruction of synthetic aperture radar (SAR) images using machine learning. From previous work, we utilize a single, fully-connected layer to learn the sensing matrix of the forward scattering problem. We estimate the reflectivity of the SAR measurements by applying the conjugate transpose of the learned sensing matrix to the SAR measurements. We further improve the reconstructions of the reflectivity using convolutional layers. The model is trained to reconstruct images containing a single target but can be applied to data containing multiple targets without additional training. Resulting reconstructions are sharper images, where the background noise is significantly decreased.
12227-4
Author(s): Elena C. Reinisch, Lauren Castro, Los Alamos National Lab. (United States)
23 August 2022 • 9:10 AM - 9:30 AM PDT | Conv. Ctr. Room 12
Understanding rapid changes in sea ice is essential for Arctic navigation. While machine learning models derived from remote sensing imagery are ideal for this work, they are limited by the lack of high-fidelity labeled datasets needed for training. We address this by developing methods to derive labels directly from synthetic aperture radar (SAR) data using polarimetric analysis. We expand on existing polarimetric classification methods by incorporating additional parameters in our analysis. Using a decision tree classifier, we develop new rules for sea ice classification, training our model on Sentinel-1 data using labels from the National Snow & Ice Data Center.
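As a sketch of the decision-tree classification step described above, the following scikit-learn snippet trains a small tree on per-pixel polarimetric features; the features, labels, and tree depth are synthetic placeholders, not Sentinel-1 data or the authors' rules.

    # Minimal sketch of a decision-tree sea-ice classifier on polarimetric SAR
    # features; the feature columns and labels are hypothetical placeholders.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # X: per-pixel polarimetric parameters (e.g., HH/HV backscatter, ratios, texture);
    # y: ice/water labels derived from an external reference product.
    X = np.random.rand(1000, 5)
    y = np.random.randint(0, 2, size=1000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))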
Session 2: Industry
23 August 2022 • 9:30 AM - 10:10 AM PDT | Conv. Ctr. Room 12
12227-5
Author(s): Sarah P. Mantell, Los Alamos National Lab. (United States); Matthew A. Ryder, Diana A. Lados, Worcester Polytechnic Institute (United States); Adam J. Wachtor, Garrison S. Flynn, Los Alamos National Lab. (United States)
23 August 2022 • 9:30 AM - 9:50 AM PDT | Conv. Ctr. Room 12
Quantification of critical melt pool characteristics – such as width, depth, and surface morphology – is primarily performed through manual measurements on high resolution images. In this study, computer vision techniques, such as edge detection and texture segmentation, are used to automatically detect and quantify these characteristics. Clustering methods were then applied to these measurements to classify the processing regime for the melt pool geometry and associated operating parameters used in its creation. Operators can then use the classification information to select build parameters which reduce the likelihood of defects in constructed parts.
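A rough sketch in the spirit of the pipeline described: classical edge detection bounds the melt pool, and k-means clusters the resulting measurements into processing regimes. The image handling, feature columns, and cluster count are assumptions for illustration only.

    # Rough sketch: Canny edges to bound the melt pool, then k-means on the
    # extracted measurements; thresholds and cluster count are assumptions.
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def melt_pool_width(gray_image):
        edges = cv2.Canny(gray_image, 50, 150)          # detect melt-pool boundary
        cols = np.where(edges.any(axis=0))[0]           # columns containing edge pixels
        return float(cols.max() - cols.min()) if cols.size else 0.0

    # measurements: one row per image, e.g. [width, laser power, scan speed]
    measurements = np.array([[120.0, 200.0, 800.0],
                             [95.0, 180.0, 1000.0],
                             [150.0, 250.0, 600.0]])
    regimes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(measurements)
    print(regimes)   # cluster label = inferred processing regime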
12227-7
Author(s): Vincenzo Caro, Jorge E. E. Pezoa, Sergio N. Torres, Mauricio Urbina, Rosario Castillo, Escribano Rubén, Univ. de Concepción (Chile)
23 August 2022 • 9:50 AM - 10:10 AM PDT | Conv. Ctr. Room 12
In Chile, the National Fisheries and Aquaculture Service (SERNAPESCA) supervises, controls, and registers the extraction and compliance with fisheries' catch quotas. Chilean regulation states that all artisanal fishing landings must be inspected on the spot. However, the inspection process is not easy because the number of inspectors is limited, the number of artisanal vessels is large, the volumes of fish extracted are massive, and the number of landings per artisanal shipowner is high. In addition, the inspection process is dated, manual, and uses a tiny sample size. This paper presents SAFE, a prototype system for supporting the catch-control inspection process of small-scale fishing boats in Chile. SAFE is a modern solution for fisheries inspection that automatically discriminates species of interest using machine learning. Here we present a version of SAFE that classifies six pelagic fish species of interest in the Bío-Bío Region: anchovy, horse mackerel, hake, mote sculpin, mackerel scad, and sardine. The system operates in two stages; the first detects and segments all the fish appearing in an image, and the segmented images are used as inputs by the second stage to perform fish species classification. In addition, a database of approximately 1,700 images was constructed for training, validation, and testing purposes. For the fish detection stage, we exploited transfer learning to train SSD, Faster R-CNN, and Mask R-CNN deep learning models. For the fish species classification stage, we designed a simple convolutional neural network (CNN) architecture and exploited transfer learning to train ResNet50 and MobileNetV2 architectures. Results show that the SAFE software achieves a mean average precision (mAP) between 80% and 98% for classifying the fish species mentioned above. The best architecture, composed of a Mask R-CNN-based detector and a ResNet50-based classifier, achieves an mAP of 98%.
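A sketch of a two-stage detector-then-classifier pipeline with the same structure as the one described, built from torchvision's pretrained Mask R-CNN and ResNet50; the COCO weights, score threshold, and six-class head are assumptions, not SAFE's actual models.

    # Sketch of a two-stage pipeline (detector, then species classifier);
    # weights, threshold, and class count are assumptions for illustration.
    import torch
    import torch.nn as nn
    from torchvision import models
    from torchvision.models.detection import maskrcnn_resnet50_fpn

    detector = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()      # stage 1: find fish
    classifier = models.resnet50(weights="DEFAULT")
    classifier.fc = nn.Linear(classifier.fc.in_features, 6)         # stage 2: 6 species
    classifier.eval()

    def classify_fish(image):                                       # image: (3, H, W) in [0, 1]
        with torch.no_grad():
            dets = detector([image])[0]
            labels = []
            for box, score in zip(dets["boxes"], dets["scores"]):
                if score < 0.5:
                    continue
                x1, y1, x2, y2 = box.int().tolist()
                crop = image[:, y1:y2, x1:x2].unsqueeze(0)           # cropped fish
                labels.append(classifier(crop).argmax(dim=1).item())
        return labels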
Session 3: Optics I
23 August 2022 • 10:40 AM - 12:20 PM PDT | Conv. Ctr. Room 12
12227-8
Author(s): Aleksandar Haber, Rochester Institute of Technology (United States); John E. Draganov, Michael Krainak, Relative Dynamics, Inc. (United States)
23 August 2022 • 10:40 AM - 11:00 AM PDT | Conv. Ctr. Room 12
We investigate the possibility of using machine learning methods for estimating structural-thermal-optical performance (STOP) models. To generate the estimation data, we simulate transient STOP dynamics of a Newtonian telescope by using COMSOL Multiphysics software. The simulated data is used for training, validating, and testing neural network models. We test the estimation performance for feedforward and recurrent neural network model structures. We thoroughly investigate the ability of these model structures to accurately estimate the low-order dynamics of STOP models. Finally, we perform model-order selection by using Akaike’s and Bayesian Information Criteria methods.
12227-9
Author(s): Cailing Fu, Jochen Stollenwerk, Carlo Holly, RWTH Aachen Univ. (Germany)
23 August 2022 • 11:00 AM - 11:20 AM PDT | Conv. Ctr. Room 12
Nowadays, sophisticated ray-tracing software packages, including local and global optimization algorithms, are used for the design of optical systems. Nevertheless, the design process is still time-consuming with many manual steps, and it takes days or even weeks until an optical design is finished. To address this shortcoming, artificial intelligence is employed in this work to support the optical designer. With reinforcement learning, an agent can be trained to use state-of-the-art ray-tracing and optimization software to design an optical system in much the same way as a human optical designer. In this work, such an agent is presented for pre-selected optical systems.
12227-10
Author(s): Joeri Lenaerts, Vrije Univ. Brussel (Belgium); Vincent Ginis, Vrije Univ. Brussel (Belgium), Harvard John A. Paulson School of Engineering and Applied Sciences (United States)
23 August 2022 • 11:20 AM - 11:40 AM PDT | Conv. Ctr. Room 12
Neural networks have become a popular tool in optics and photonics over recent years, in particular in the context of inverse design. To address these tasks even more efficiently, it is helpful to learn how to describe a system using independent degrees of freedom. Here, we study a specific neural network architecture, called a beta-VAE, to serve this purpose. We show that this architecture can learn the degrees of freedom from time-series data. The results of our work can be applied to reduce the complexity of general modeling and engineering problems in optics and photonics.
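For readers unfamiliar with the beta-VAE, the core idea is an autoencoder objective whose KL term is up-weighted by a factor beta to encourage disentangled latent factors. A minimal PyTorch sketch of that objective follows; the loss form is the standard beta-VAE formulation, while the beta value is an arbitrary illustrative choice.

    # Sketch of the beta-VAE objective: reconstruction error plus a beta-weighted KL
    # term that encourages disentangled latent factors (beta value is an assumption).
    import torch
    import torch.nn.functional as F

    def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
        recon = F.mse_loss(x_recon, x, reduction="sum")              # fit the data
        # KL divergence between q(z|x) = N(mu, sigma^2) and the prior N(0, I).
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + beta * kl                                     # beta > 1 favors disentanglement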
12227-11
Author(s): Patrice Roulet, ImmerVision (Canada)
23 August 2022 • 11:40 AM - 12:00 PM PDT | Conv. Ctr. Room 12
For a camera dedicated to a computer vision task, it is crucial to adapt key performance indicators (such as the point spread function) to machine perception. Given a computer vision task, it is not mandatory to target the same image quality as for human vision. Using a faithful optical and imaging simulation pipeline for complex wide-angle systems, we study the isolated impact of image quality indicators on learning-based tasks (e.g., object detection). With the use case of 2D object detection for automotive applications, we show that we can estimate looser tolerances that will not affect neural network accuracy. This should widen the scope of possibilities for optical designers.
12227-12
Author(s): Gyuhyeong Kim, LG Display (Korea, Republic of)
23 August 2022 • 12:00 PM - 12:20 PM PDT | Conv. Ctr. Room 12
Currently, the external quantum efficiency (EQE) of OLED devices is approaching its limit. One cause of this limitation is that the performance of phosphorescent molecules designed by humans is at its limit. To overcome this limitation, we introduce a methodology for screening multiple molecules using reinforcement learning. We first set up an environment where phosphorescent molecules can be designed, and explore molecular space using reinforcement learning and density functional tight binding (DFTB) theory. Subsequently, we calculate the wavelength, bond dissociation energy (BDE), photoluminescence quantum yield (PLQY), and emitting dipole orientation (EDO) using time-dependent density functional theory (DFT) and molecular dynamics (MD). As a result, we successfully find new ways to design improved green cores.
Session 4: Optics II
23 August 2022 • 2:20 PM - 3:40 PM PDT | Conv. Ctr. Room 12
12227-13
Author(s): Shaun Comino, Univ. of Central Florida (United States)
23 August 2022 • 2:20 PM - 2:40 PM PDT | Conv. Ctr. Room 12
Atmospheric turbulence, as measured by the refractive index structure constant (Cn^2), is a crucial metric for determining the efficacy of a laser propagation system along its path. This paper provides a deep learning neural network that uses simple environmental variables such as temperature, wind speed, wind direction, and ground temperature to predict the refractive index structure constant without the need for prohibitively expensive equipment.
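As an illustration of this kind of regression from weather variables to Cn^2, a minimal scikit-learn sketch follows; the feature set, scaling, network size, and placeholder targets are assumptions, not the paper's model.

    # Minimal sketch of a feedforward regressor mapping weather variables to log10(Cn^2);
    # the feature columns and network size are illustrative assumptions.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # columns: air temperature, ground temperature, wind speed, wind direction, humidity
    X = np.random.rand(500, 5)
    y = -14.0 + np.random.randn(500)          # placeholder log10(Cn^2) targets

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                                       random_state=0))
    model.fit(X, y)
    print(model.predict(X[:3]))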
12227-14
Author(s): Page King, Ball Aerospace (United States); R. John Koshel, Wyant College of Optical Sciences (United States)
23 August 2022 • 2:40 PM - 3:00 PM PDT | Conv. Ctr. Room 12
Robustness to image quality degradations is critical for developing deep neural networks for real-world image classification. Previous efforts have pursued robustness by exploring how various types of blur, noise, contrast, compression, color, etc. degrade image quality and impact image classification performance. This paper extends this discussion to include optical aberrations, which are fundamental to the lens design of imaging systems and enable further discussion of DNN performance in the context of hardware design. In this paper, multiple state-of-the-art DNN models are evaluated for their image classification performance with imagery that has been degraded by various optical aberrations.
12227-15
Author(s): Philipp-Immanuel Schneider, Lin Zschiedrich, Martin Hammerschmidt, Lilli Kuen, Ivan Sekulic, JCMwave GmbH (Germany), Zuse Institute Berlin (Germany); Julien Kluge, Bastian Leykauf, Markus Krutzik, Humboldt-Univ. zu Berlin (Germany); Sven Burger, JCMwave GmbH (Germany), Zuse Institute Berlin (Germany)
23 August 2022 • 3:00 PM - 3:20 PM PDT | Conv. Ctr. Room 12
Manual optimization of experimental parameters can quickly become too complex and time-consuming if more than a few correlated parameters need to be adjusted. We discuss automating this process using Bayesian optimization. This machine learning-based method is particularly suitable because it can handle noisy measurements, performs a global search and requires relatively few experimental runs. We discuss the efficient, scalable implementation of Bayesian optimization, present practical applications for tuning experimental parameters, and compare it with other local and global heuristic methods to show its application range.
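A minimal sketch of Bayesian optimization of two correlated "experiment" parameters using scikit-optimize; the noisy synthetic objective and parameter names stand in for a real measurement and are not taken from the paper.

    # Sketch of Bayesian optimization with scikit-optimize; the objective is a
    # noisy synthetic stand-in for an experimental figure of merit.
    import numpy as np
    from skopt import gp_minimize
    from skopt.space import Real

    def run_experiment(params):
        x, y = params
        # Pretend measurement: minimum near (0.3, -0.7) plus measurement noise.
        return (x - 0.3) ** 2 + (y + 0.7) ** 2 + 0.01 * np.random.randn()

    result = gp_minimize(run_experiment,
                         dimensions=[Real(-2.0, 2.0, name="laser_power"),
                                     Real(-2.0, 2.0, name="detuning")],
                         n_calls=30, noise=1e-2, random_state=0)
    print("best parameters:", result.x, "best value:", result.fun)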
12227-16
Author(s): Da-in Choi, Taejin Kwon, Jeongtae So, Sunho Lim, KAIST (Korea, Republic of); Dongjun Woo, Nosung Lee, Jaewon Kim, SAMSUNG Display Co., Ltd. (Korea, Republic of); Seungryong Cho, KAIST (Korea, Republic of)
23 August 2022 • 3:20 PM - 3:40 PM PDT | Conv. Ctr. Room 12
During the production of translucent screen modules, used commonly for cellphone screens and monitors, small particulates may be trapped between the module components or microscopic scratches may occur. Such defects degrade the quality of the product. Thus, it is common practice for technicians to identify the defect region and to screen out the defective products. However, there is a large discrepancy in detection tendencies and detection times among technicians, so it is important to standardize and automate the detection process. In this research, a CNN-based binary classification algorithm is proposed to automatically distinguish defective products from normal products.
Optical Engineering Plenary Session
23 August 2022 • 4:00 PM - 4:45 PM PDT | Conv. Ctr. Room 6A
4:00 to 4:05 PM: Welcome and Opening Remarks
12221-501
Author(s): Lee D. Feinberg, NASA Goddard Space Flight Ctr. (United States)
23 August 2022 • 4:05 PM - 4:35 PM PDT | Conv. Ctr. Room 6A
After two decades of development, the James Webb Space Telescope launched on December 25th, 2021. This revolutionary telescope is the first ever 6.5-meter segmented telescope in space and was the work of thousands of engineers, technicians, and scientists. Once in space, there were over 50 successful deployments, followed by the first-ever alignment of a segmented telescope in space and instrument commissioning. This talk will review the history of the telescope's development through testing and on-orbit commissioning, with a special focus on the optical technologies that both enabled the observatory and were proven out through commissioning.
Session 5: Physics
24 August 2022 • 8:00 AM - 10:00 AM PDT | Conv. Ctr. Room 12
12227-21
Author(s): Steven Stetzler, Univ. of Washington (United States); Michael J. Grosskopf, Earl Lawrence, Los Alamos National Lab. (United States)
24 August 2022 • 8:00 AM - 8:20 AM PDT | Conv. Ctr. Room 12
Fitting a theoretical model to experimental data in a Bayesian manner using Markov chain Monte Carlo typically requires one to evaluate the model thousands (or millions) of times. When the model is a slow-to-compute physics simulation, Bayesian model fitting becomes infeasible. To remedy this, a second statistical model that predicts the simulation output -- an "emulator" -- can be used in lieu of the full simulation during model fitting. This work examines the accuracy-runtime tradeoff of several approximate Gaussian process models when emulating the predictions of density functional theory (DFT) models using parameterizations of different physical fidelity and cost/stability.
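To illustrate the emulator idea, the sketch below fits a Gaussian process to a handful of "expensive" simulator runs and then queries the cheap surrogate instead of the simulator; the toy function and kernel choice are assumptions standing in for the DFT models studied.

    # Sketch of a Gaussian process "emulator": fit GP regression to a few expensive
    # simulator evaluations, then predict (with uncertainty) in place of the simulator.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_simulation(theta):              # placeholder for the physics code
        return np.sin(3 * theta) + 0.5 * theta

    theta_train = np.linspace(0, 2, 15).reshape(-1, 1)   # small design of simulator runs
    y_train = expensive_simulation(theta_train).ravel()

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(theta_train, y_train)
    mean, std = gp.predict(np.array([[1.23]]), return_std=True)   # cheap surrogate call
    print(mean, std)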
12227-22
Author(s): Michael J. Grosskopf, Los Alamos National Lab. (United States); Rodrigo Navarro Perez, San Diego State Univ. (United States); Nicolas Schunck, Lawrence Livermore National Lab. (United States); Earl Lawrence, Los Alamos National Lab. (United States)
24 August 2022 • 8:20 AM - 8:40 AM PDT | Conv. Ctr. Room 12
Using statistics and machine learning to account for discrepancy between simulation and experiment is a common way to improve the predictive capability of simulation-driven science. Black-box predictors are useful for capturing complex discrepancies but obscure the information they convey about the physical system. We apply two approaches from ML interpretability to understand and communicate relationships between inputs and predictions from a discrepancy model in nuclear density functional theory. These relationships connect properties of simulated nuclides to the prediction bias of binding energies, providing deeper knowledge about model-form error and a way to assess whether the data-driven relationships are physically meaningful.
12227-23
Author(s): Kellin Rumsey, Michael J. Grosskopf, Earl Lawrence, Ayan Biswas, Los Alamos National Lab. (United States); Nathan Urban, Brookhaven National Lab. (United States)
24 August 2022 • 8:40 AM - 9:00 AM PDT | Conv. Ctr. Room 12
12227-24
Author(s): Rebecca A. Coles, Biays Bowerman, Martin Schoonen, Juergen Thieme, Brookhaven National Lab. (United States)
24 August 2022 • 9:00 AM - 9:20 AM PDT | Conv. Ctr. Room 12
We describe methods for automating the workflow for rapidly measuring and producing elemental maps of large-area samples at the Submicron Resolution X-ray Spectroscopy beamline (SRX) of the National Synchrotron Light Source II, Brookhaven National Laboratory, through a novel combination of supervised (support vector machine) and unsupervised (cluster analysis) machine learning algorithms. SRX has the capability to create centimeter-area, full-spectrum x-ray fluorescence (XRF) maps non-destructively for cotton swipe samples with special detector and beam configurations. To facilitate automation of this process, we discuss the development of the Synchrotron Network Automation Program in Python (SnapPy), a software package that automates measurements and controls everything from beamline machine control to data acquisition and analysis. The only intervention required of beamline staff will be to physically install and remove samples. This will allow measurements to run overnight or during times when beamline staff would not otherwise be available.
12227-25
Author(s): Emily S. Teti, Los Alamos National Lab. (United States); Sebastian Salazar, Los Alamos National Lab. (United States), Columbia Univ. (United States); Matthew Carpenter, Los Alamos National Lab. (United States)
24 August 2022 • 9:20 AM - 9:40 AM PDT | Conv. Ctr. Room 12
This work accomplishes (1) unique data mining to leverage limited domain knowledge when working with a small number of high-resolution spectra and (2) classification of oxidation state from x-ray emission spectra using a combination of Monte-Carlo simulation and deep neural networks. Novel methods were required for this application due to the lack of an underlying theory of the mechanisms responsible for the generation of x-ray emission spectra. Insights drawn from this work further the applicability of x-ray emission spectra for spectral imaging.
12227-26
Author(s): Zheng Kai Yang, Ya-Wen Ho, Chung-Yuan Chang, Albert Lin, National Yang Ming Chiao Tung Univ. (Taiwan)
24 August 2022 • 9:40 AM - 10:00 AM PDT | Conv. Ctr. Room 12
Machine learning (ML) based compact device modeling provides the opportunity for process-aware device modeling and thus process-aware circuit simulation. In contrast, incorporating semiconductor manufacturing parameters into compact models (CMs) and subsequent circuit simulations using purely physics-based CMs is difficult. We demonstrate process-aware circuit simulation where the effects of plasma treatment and thermal annealing can be directly reflected in the circuit output in SPICE transient simulation. The Verilog-A model inputs are voltage (V), frequency (f), area (A), and process conditions, i.e., plasma surface treatment (PST) and post-metal annealing (PMA). The MOSCAP capacitance-voltage (CV) characteristics under illumination are described by ML compact models.
Session 6: Medical Imaging and Biology
24 August 2022 • 10:30 AM - 12:10 PM PDT | Conv. Ctr. Room 12
12227-17
Author(s): M. Arshad Zahangir Chowdhury, Yanqi Luo, Si Chen, Zhengchun Liu, Aniket Tekawade, Rajkumar Kettimuthu, Argonne National Lab. (United States)
24 August 2022 • 10:30 AM - 10:50 AM PDT | Conv. Ctr. Room 12
The microscopy research at the Bionanoprobe of Argonne National Laboratory focuses on applying synchrotron X-ray fluorescence (XRF) techniques to obtain trace elemental mappings of cryogenic biological samples and gain insights into their role in critical biological activities. The elemental mappings and the morphological aspects of the biological samples extracted from XRF images serve as label-free biological fingerprints for training machine learning models for fast and accurate identification of biological samples and for quantification of unknown elemental mappings or morphological features. In this work, we demonstrate software consisting of clustering machine learning models that distinguish regions of interest (ROIs), or classes characterized by different biological states (for instance, live and dead biological cells), from XRF images with minimal manual annotation. The model's performance is analyzed, and strategies for real-time implementation in beamtime fluorescence experiments are discussed.
12227-33
Author(s): Danika Gupta, The Harker School (United States)
24 August 2022 • 10:50 AM - 11:10 AM PDT | Conv. Ctr. Room 12
The number of Americans with Alzheimer's is expected to triple to 14 million by 2060. We use multi-modal techniques to forecast Alzheimer's using OASIS3, a longitudinal study of 1,098 patients across 30 years with 2,168 MRIs and more than 6,000 clinician notes. We use ensemble methods to combine convolutional neural network (CNN) forecasts from MRIs with machine learning forecasts based on FreeSurfer-featurized brain volumetrics and clinical data to forecast the eventual Clinical Dementia Rating (CDR). We demonstrate that CDR can be forecasted with accuracies over 94% and that harmful false negatives can be reduced by 2x-15x depending on the ensemble method.
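One simple form such an ensemble could take is soft voting over the two models' class probabilities, with the decision threshold adjusted to trade false positives for fewer false negatives. The sketch below is a hypothetical illustration; the weights, threshold, and function name are not the paper's tuned values.

    # Sketch of a soft-voting ensemble that averages class probabilities from an
    # imaging model and a tabular (volumetrics + clinical) model; placeholders only.
    import numpy as np

    def ensemble_predict(p_image, p_tabular, w_image=0.5, threshold=0.5):
        """p_image, p_tabular: arrays of P(impaired CDR) from the two models."""
        p = w_image * p_image + (1.0 - w_image) * p_tabular
        # Lowering the threshold trades extra false positives for fewer false negatives.
        return (p >= threshold).astype(int), p

    labels, probs = ensemble_predict(np.array([0.2, 0.8, 0.6]),
                                     np.array([0.1, 0.9, 0.4]),
                                     threshold=0.4)
    print(labels, probs)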
12227-34
Author(s): Megan A. Witherow, Old Dominion University (United States); Manar D. Samad, Tennessee State University (United States); Khan M. Iftekharuddin, Old Dominion University (United States)
24 August 2022 • 11:10 AM - 11:30 AM PDT | Conv. Ctr. Room 12
Automatic classification of child facial expressions is challenging due to the scarcity of labeled facial expression images for children. The transfer learning of deep convolutional neural networks (CNNs), pretrained on adult facial expressions, is promising for the classification of child facial expressions through model fine-tuning. However, CNNs may fail to model the spatial relationships between the face and its components. The proposed transfer learning and feature fusion approach captures the spatial relationships between facial components by extracting facial features from landmarks and incorporating them into the model via feature fusion.
12227-35
Author(s): Cindy Gonzales, Lawrence Livermore National Laboratory (United States), Johns Hopkins University (United States)
24 August 2022 • 11:30 AM - 11:50 AM PDT | Conv. Ctr. Room 12
According to the Alzheimer's Association, roughly one in nine people age 65 and older suffers from the burden of Alzheimer's dementia, and one in three seniors dies with Alzheimer's or another dementia [1]. Recent advances have been made in the early diagnosis of Alzheimer's disease, including machine learning techniques that identify abnormalities associated with Alzheimer's disease in magnetic resonance imaging (MRI) data. In this paper, we explore how pre-processing two-dimensional (2D) slices of MRI data with digital signal processing techniques affects a machine learning classifier. This work differs from other studies in that it focuses on the methods used to pre-process the MRI data to highlight abnormalities, rather than on optimizing the machine learning approach for the available data. We show that applying digital signal processing techniques, specifically low-pass and high-pass filtering, to 2D slices of an MRI in the frequency domain can improve the performance of a basic machine learning classifier. This is a promising result with the potential to improve the performance of state-of-the-art machine learning classifiers simply by pre-processing the data with a digital signal/image filter. [1] Alzheimer's disease facts and figures, retrieved May 27, 2022 from https://www.alz.org/alzheimers-dementia/facts-figures
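A minimal numpy sketch of the kind of frequency-domain filtering described: transform a 2D slice with the FFT, keep or suppress frequencies inside a circular cutoff, and transform back. The cutoff radius and function name are illustrative assumptions, not the paper's parameters.

    # Sketch of frequency-domain low-pass / high-pass filtering of a 2D MRI slice;
    # the cutoff radius is an arbitrary illustrative value.
    import numpy as np

    def frequency_filter(slice_2d, cutoff=30, mode="lowpass"):
        F = np.fft.fftshift(np.fft.fft2(slice_2d))          # center zero frequency
        h, w = slice_2d.shape
        yy, xx = np.ogrid[:h, :w]
        dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
        mask = dist <= cutoff if mode == "lowpass" else dist > cutoff
        filtered = np.fft.ifft2(np.fft.ifftshift(F * mask))  # back to image domain
        return np.real(filtered)

    # smooth = frequency_filter(mri_slice, cutoff=30, mode="lowpass")
    # edges  = frequency_filter(mri_slice, cutoff=30, mode="highpass")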
12227-36
Author(s): Shadi Khan Baloch, Department of Mechatronic Engineering, Mehran UET Jamshoro, 76090, Sindh, Pakistan (Pakistan)
24 August 2022 • 11:50 AM - 12:10 PM PDT | Conv. Ctr. Room 12
The agriculture sector is an important pillar of a country's economy, and cotton is considered one of its prominent agricultural resources. It is widely cultivated in India, China, Pakistan, the USA, Brazil, and other countries around the world. Worldwide, cotton production is affected by numerous diseases such as cotton leaf curl virus (CLCV/CLCuV), bacterial blight, and boll rot. Image processing techniques together with machine learning algorithms have been successfully employed in numerous fields, including crop disease detection. In this study, we present a deep learning-based technique for categorizing cotton leaf diseases such as bacterial blight and cotton leaf curl virus (CLCV). The dataset of cotton leaves showing disease symptoms was collected from various hotspots in Sindh, Pakistan. We employ the Inception v4 architecture as a convolutional neural network to identify diseased plant leaves, in particular those affected by bacterial blight and CLCV. The accuracy of the developed model is 98.26%, which is higher than that of existing systems.
Conference Chair
Lawrence Livermore National Lab. (United States)
Conference Chair
Univ. of Dayton (United States)
Conference Chair
NVIDIA Corp. (United States)
Conference Co-Chair
Lawrence Livermore National Lab. (United States)
Conference Co-Chair
Old Dominion Univ. (United States)
Program Committee
St. Jude Children's Research Hospital (United States)
Program Committee
BeamIO (United States)
Program Committee
NVIDIA Corp. (United States)
Program Committee
Lawrence Livermore National Lab. (United States)
Program Committee
Lawrence Livermore National Lab. (United States)
Program Committee
Univ. of Dayton (United States)
Program Committee
Lawrence Livermore National Lab. (United States)
Program Committee
Univ. of Dayton (United States)
Program Committee
Etsy, Inc. (United States)
Program Committee
Lawrence Livermore National Lab. (United States)
Program Committee
Manar D. Samad
Tennessee State Univ. (United States)
Program Committee
Los Alamos National Lab. (United States)