This conference will provide a forum for researchers involved in the development and application of computer-aided detection and diagnosis (CAD) systems in medical imaging. Original papers are requested on all novel CAD methods and applications, including both ‘conventional’ and deep learning approaches. CAD has found increasing medical application since its inception a few decades ago, and it continues to be a hot topic, especially with the proliferation of artificial intelligence (AI) in many aspects of daily life. Thus, the CAD conference is soliciting papers in the broad sense of CAD-AI, including topics beyond detection and diagnosis, with an emphasis on novel methods, applications, learning paradigms, -omics integration, and performance evaluation. A detailed list of topics can be found below. Applications in all medical imaging modalities are encouraged, including but not limited to X-ray, computed tomography, magnetic resonance imaging, nuclear medicine, molecular imaging, optical imaging, ultrasound, endoscopy, macroscopic and microscopic imaging, and multi-modality technologies.

NEW FOR 2022 Joint Session with the Image Perception conference: “Translation of CAD-AI methods to clinical practice: are we there yet?” We invite papers on comparisons of performance between CAD-AI and humans, retrospective studies comparing CAD-AI output to original clinical decisions, reader studies, and studies of CAD-AI in clinical practice.

LIVE DEMONSTRATIONS WORKSHOP A workshop featuring real-time demonstrations of algorithms and systems will be held during the conference. This workshop is intended to be a forum for developers to exhibit their software, find new collaborators, and inspire the attendees. All participants of the SPIE Medical Imaging Symposium are invited to submit a proposal for a demonstration. More information will be provided at a later date.

TOPIC AREAS: FOR THIS CONFERENCE ONLY During the submission process, you will be asked to choose the 2-3 most appropriate keywords (one ‘application’ and up to two ‘topics’) from the following lists to assist in the review process.

Conference 12033

Computer-Aided Diagnosis

In person: 21 - 24 February 2022
  • 1: Breast I
  • 2: Translation of CAD-AI Methods to Clinical Practice: Are We There Yet?: Joint Session with Conferences 12033 and 12035
  • 3: COVID-19
  • Awards and Plenary Session
  • 4: Keynote and Novel Applications
  • Award Announcements
  • 5: Deep Learning I
  • Tuesday/Wednesday Poster Viewing
  • 6: Breast II
  • 7: Detection
  • Workshop: Live Demonstrations
  • 8: Neurology
  • 9: Deep Learning II
  • 10: Head and Neck, Musculoskeletal
  • 11: Radiomics, Radiogenomics, Multi-omics
  • Wednesday Poster Session
  • 12: Lung
  • 13: Abdomen
  • 14: Eye, Retina
  • 15: Segmentation
Information
This conference is not accepting post-deadline abstract submissions.
Session 1: Breast I
In person: 21 February 2022 • 8:00 AM - 9:40 AM
Session Chairs: Karen Drukker, The Univ. of Chicago Medicine (United States), Despina Kontos, Penn Medicine (United States)
12033-1
Author(s): Juhun Lee, Robert Nishikawa, Univ. of Pittsburgh (United States)
In person: 21 February 2022 • 8:00 AM - 8:20 AM
We developed a conditional generative adversarial network (CGAN) to simulate a contralateral breast mammogram with a normal appearance from a given mammogram exam. We then investigated whether using the CGAN to produce simulated images of the contralateral breast can provide additional information on the presence of a mammographically occult (MO) cancer, supplementing the left-right mammogram comparison. However, CGANs can suffer from various artifacts. We trained the CGAN on 1,366 normal screening mammograms and then tested it on 333 screening mammogram cases (97 with MO cancer). Although artifacts do occur in the simulated mammograms, they had minimal effect on MO cancer detection.
12033-2
Author(s): Jun Luo, Dooman Arefan, Univ. of Pittsburgh (United States); Margarita Zuley, Univ. of Pittsburgh (United States), Magee-Womens Hospital, Univ. of Pittsburgh Medical Ctr. (United States); Jules H. Sumkin, Univ. of Pittsburgh (United States), Magee-Womens Hospital, Univ. of Pittsburgh Medical Center (United States); Shandong Wu, Univ. of Pittsburgh (United States)
In person: 21 February 2022 • 8:20 AM - 8:40 AM
Mammography is the standard screening procedure for the early detection of breast cancer, and the application of deep learning to mammography is a topic of intense research interest. In this work, we propose an end-to-end curriculum learning (CL) strategy in task space for classifying the three categories of full-field digital mammography (FFDM): malignant, negative, and false recall. Specifically, our method treats this three-class classification as the "harder" task in CL terms and creates an "easier" sub-task of classifying false recall against the combined group of negative and malignant. We introduce a loss scheduler to dynamically weight the contributions of the losses from the two tasks. We conduct experiments on an FFDM dataset of 1,709 images. The results show that our curriculum learning strategy boosts performance for classifying the three categories of FFDM compared to baseline training strategies.
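The abstract does not give the scheduler's form; below is a minimal sketch, assuming a simple linear ramp, of how such a loss scheduler could shift weight from the "easier" binary sub-task to the "harder" three-class task over training (all names are hypothetical):

    import torch.nn as nn

    class CurriculumLossScheduler:
        """Hypothetical scheduler: starts by emphasizing the easy binary
        sub-task (false recall vs. rest) and gradually shifts weight to
        the hard three-class task."""

        def __init__(self, total_epochs: int):
            self.total_epochs = total_epochs
            self.ce = nn.CrossEntropyLoss()

        def __call__(self, epoch, logits_hard, y_hard, logits_easy, y_easy):
            w_hard = min(1.0, epoch / self.total_epochs)  # linear ramp 0 -> 1
            w_easy = 1.0 - w_hard
            return (w_hard * self.ce(logits_hard, y_hard)
                    + w_easy * self.ce(logits_easy, y_easy))

Here the easy-task labels would collapse negative and malignant into a single class opposed to false recall.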
12033-3
Author(s): Belayat Hossain, Robert M. Nishikawa, Juhun Lee, Univ. of Pittsburgh (United States)
In person: 21 February 2022 • 8:40 AM - 9:00 AM
We report an improved algorithm for detecting biopsy-proven lesions on digital breast tomosynthesis (DBT) using the small training set from our DBTex challenge participation. To tackle the small sample size, all top-ranked algorithms (1st–3rd) used large in-house datasets. We hypothesized that false-positive findings (FPs) produced by detection algorithms on non-biopsied samples in the training set could serve as an alternative to in-house datasets. We used the FPs for augmentation and proposed an ensemble approach that fuses multiple detection models by cross-validation and varying the model depth. On the challenge validation set, we achieved a mean sensitivity of 0.84, close to that of one of the top algorithms.
12033-4
Author(s): Warid Islam, Gopichandh Danala, Bin Zheng, The Univ. of Oklahoma (United States)
In person: 21 February 2022 • 9:00 AM - 9:20 AM
This study proposes a novel feature fusion method for developing a CAD scheme for mammograms to classify suspicious lesions. Using a dataset of 2,000 images, three support vector machines (SVMs) are trained using features computed from whole ROIs, from segmented lesions, and from the fusion of both, respectively. Each SVM is trained with 10-fold cross-validation, with a random projection algorithm applied for feature reduction, and the area under the ROC curve is used as the evaluation index. The results reveal that the SVM trained with fused features produces the highest performance. The study demonstrates that image features computed from lesions and from fixed ROIs carry complementary discriminatory information; thus, using the fused features significantly improves CAD performance.
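A hedged scikit-learn sketch of the described pipeline (feature-level fusion, random projection for feature reduction, 10-fold cross-validation with AUC as the index), using synthetic placeholder data in place of the study's features:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.random_projection import GaussianRandomProjection
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X_roi = rng.random((200, 150))      # features from whole ROIs (placeholder)
    X_lesion = rng.random((200, 150))   # features from segmented lesions (placeholder)
    y = rng.integers(0, 2, 200)         # benign (0) vs. malignant (1)

    X_fused = np.hstack([X_roi, X_lesion])   # feature-level fusion

    # Random projection reduces dimensionality before training the SVM.
    clf = make_pipeline(GaussianRandomProjection(n_components=30, random_state=0),
                        SVC())
    auc = cross_val_score(clf, X_fused, y, cv=10, scoring="roc_auc").mean()
    print(f"10-fold CV AUC, fused features: {auc:.3f}")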
12033-5
Author(s): Simona Rabinovici-Cohen, Tal Tlusty, IBM Research - Haifa (Israel); Xose M. Fernandez, Beatriz Grandal Rejo, Institut Curie (France)
In person: 21 February 2022 • 9:20 AM - 9:40 AM
Women with locally advanced breast cancer are generally given neoadjuvant chemotherapy (NAC). We explore the use of tumor thickness features computed from MRI to predict the risk of post-treatment metastasis. We performed a retrospective study on a cohort of 1,738 patients who were administered NAC; of these, 551 had an MRI before treatment started. We analyzed the multimodal clinical and MRI data, achieving 0.747 AUC and 0.379 specificity at a sensitivity operating point of 0.99. We also use interpretability methods to explain the models and identify features important for the early prediction of metastasis.
Session 2: Translation of CAD-AI Methods to Clinical Practice: Are We There Yet?: Joint Session with Conferences 12033 and 12035
In person: 21 February 2022 • 10:10 AM - 12:10 PM
Session Chairs: Khan M. Iftekharuddin, Old Dominion Univ. (United States), Claudia R. Mello-Thoms, Univ. Iowa Carver College of Medicine (United States)
12035-28
Author(s): Sian Taylor-Phillips, Karoline Freeman, Chris Stinton, Julia Geppert, Aileen Clarke, Dan Todkill, Samantha Johnson, The Univ. of Warwick (United Kingdom)
In person: 21 February 2022 • 10:10 AM - 10:30 AM
This systematic review reports the accuracy of artificial intelligence (AI) for detection of breast cancer in digital mammography. AI to replace or augment the radiologist performed well in smaller studies (1,086 women, 520 cancers, 5/5 AI systems more accurate than a single radiologist in the laboratory), but this has not yet translated to larger studies (79,910 women, 1,878 cancers, 94% of AI systems evaluated less accurate than a single radiologist). AI to triage out normal cases screened out 53%, 45%, and 50% of low-risk women, but also 10%, 4%, and 0% of cancers detected by radiologists. Prospective studies are needed.
12033-6
Author(s): Di Sun, Lubomir Hadjiiski, Univ. of Michigan (United States); Rohan Garje, Yousef Zakharia, The Univ. of Iowa (United States); Lauren Pomerantz, Monika Joshi, The Pennsylvania State Univ. (United States); Ajjai Alva, Heang-Ping Chan, Richard Cohan, Elaine Caoili, Univ. of Michigan (United States); Kenny Cha, U.S. Food and Drug Administration (United States); Galina Kirova-Nedyalkova, Tokuda Hospital Sofia (Bulgaria); Matthew Davenport, Prasad Shankar, Isaac Francis, Kimberly Shampain, Nathaniel Meyer, Daniel Barkmeier, Sean Woolen, Phillip Palmbos, Alon Weizer, Ravi Samala, Chuan Zhou, Martha Matuszak, Univ. of Michigan (United States)
In person: 21 February 2022 • 10:30 AM - 10:50 AM
We evaluated the effect of a computerized decision support system for bladder cancer treatment response assessment (CDSS-T) on the performance of 16 multi-institutional observers. They estimated the likelihood of stage T0 disease after treatment without and with CDSS-T aid. The average AUC of the 16 observers increased from 0.73 without CDSS-T to 0.77 with CDSS-T (p = 0.003), and average AUC performance with CDSS-T was similar across institutions. Individual observers' performance improved significantly in both the original and repeated evaluations and was more consistent with CDSS-T. This study demonstrates that CDSS-T has the potential to improve treatment response assessment by physicians from different specialties and institutions and to reduce inter- and intra-observer variability in the assessments.
12033-7
Author(s): Anshul Ratnaparkhi, Bilwaj Gaonkar, David Zarrin, Ien Li, Kirstin Cook, Bayard R. Wilson, Azim Laiwalla, Mark Attiah, Christine Ahn, Diane Villaroman, Bryan Y. Yoo, Banafsheh Salehi, Joel S. Beckett, Luke Macyszyn, Univ. of California, Los Angeles (United States)
In person: 21 February 2022 • 10:50 AM - 11:10 AM
In practice, deep-learning algorithms perform best in the setting in which they were trained, a phenomenon known as the domain effect. Our study confirms the presence of a scanner-specific domain effect for a deep U-Net trained to segment spinal canals on axial MR images acquired from a specific scanner. It then demonstrates that an ensemble of the aforementioned U-Nets reduces this domain effect. Finally, we demonstrate that ensembling narrows the gap between in-domain and out-of-domain performance.
12035-29
Author(s): Katharina V. Hoebel, Athinoula A. Martinos Ctr. for Biomedical Imaging (United States); Christopher Bridge, Athinoula A. Martinos Ctr. for Biomedical Imaging (United States), Massachusetts General Hospital (United States), BWH Ctr. for Clinical Data Science (United States); Sara Ahmed, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States); Oluwatosin Akintola, Massachusetts General Hospital (United States); Caroline Chung, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States); Raymond Huang, Brigham and Women's Hospital, Harvard Medical School (United States); Jason Johnson, The Univ. of Texas M.D. Anderson Cancer Ctr. (United States); Albert Kim, K. Ina Ly, Ken Chang, Jay Patel, Athinoula A. Martinos Ctr. for Biomedical Imaging (United States); Marco Pinho, The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States); Tracy T. Batchelor, Massachusetts General Hospital (United States); Bruce Rosen, Elizabeth Gerstner, Jayashree Kalpathy-Cramer, Athinoula A. Martinos Ctr. for Biomedical Imaging (United States)
In person: 21 February 2022 • 11:10 AM - 11:30 AM
Metrics routinely used to evaluate the performance of Deep Learning (DL) segmentation algorithms show a low concordance with human segmentation quality perception. Here, we present the results of a study on expert quality perception of brain tumor segmentations generated by a DL segmentation algorithm. The quality of 60 segmentations was rated by four expert medical professionals. We observed a low inter-rater agreement among all raters and varying levels of pairwise agreement between raters. The highest correlation between selected segmentation quality metrics and ratings could be observed for Hausdorff distance with a high variability in the correlations of individual raters.
12035-30
Author(s): Yee Lam Elim Thompson, U.S. Food and Drug Administration (United States); Gary Levine, Weijie Chen, Berkman Sahiner, Nicholas Petrick, Qin Li, Frank Samuelson, Ctr. for Device and Radiological Health, U.S. Food and Drug Administration (United States)
In person: 21 February 2022 • 11:30 AM - 11:50 AM
A Computer-Aided Triage and Notification (CADt) device uses artificial intelligence (AI) to prioritize radiological medical images and speed up review of diseased cases in life-threatening conditions such as stroke, intracranial hemorrhage, and pneumothorax. However, questions remain about quantitative assessment of the clinical effectiveness of CADt devices in speeding the review of patients with such conditions. This work presents an analytical method based on queueing theory to quantify wait-time savings and to study the impact of CADt in various clinical settings. The theoretical results are consistent with clinical intuition and are verified by Monte Carlo simulations.
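A toy Monte Carlo sketch of the wait-time-savings question, assuming a single reading radiologist, Poisson arrivals, exponential reading times, and a perfectly accurate CADt flag (simplifications made here for illustration only):

    import random

    def mean_diseased_wait(lam=0.8, mu=1.0, prevalence=0.1, n=20000,
                           priority=True, seed=0):
        """Single-server reading queue; with priority=True, flagged
        (diseased) cases are read first, non-preemptively."""
        rng = random.Random(seed)
        t, arrivals = 0.0, []
        for _ in range(n):
            t += rng.expovariate(lam)
            arrivals.append((t, rng.random() < prevalence))
        queue, waits, now, i = [], [], 0.0, 0
        while i < len(arrivals) or queue:
            while i < len(arrivals) and arrivals[i][0] <= now:
                queue.append(arrivals[i]); i += 1
            if not queue:
                now = arrivals[i][0]
                continue
            k = next((j for j, a in enumerate(queue) if a[1]), 0) if priority else 0
            arr, diseased = queue.pop(k)
            if diseased:
                waits.append(now - arr)         # wait until reading starts
            now += rng.expovariate(mu)          # reading time
        return sum(waits) / len(waits)

    print("FIFO:    ", mean_diseased_wait(priority=False))
    print("Priority:", mean_diseased_wait(priority=True))

The gap between the two numbers is the wait-time savings that the paper quantifies analytically with queueing theory.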
12033-8
Author(s): Alessa Hering, Fraunhofer-Institut für Digitale Medizin MEVIS (Germany); Felix Peisen, Universitätsklinikum Tübingen (Germany); Annika Hänsch, Fraunhofer-Institut für Digitale Medizin MEVIS (Germany); Thomas Eigentler, Universitätsklinikum Tübingen (Germany); Ahmed Othman, Dept of Diagnostic and Interventional Radiology (Germany); Jan Moltz, Fraunhofer-Institut für Digitale Medizin MEVIS (Germany)
In person: 21 February 2022 • 11:50 AM - 12:10 PM
Session 3: COVID-19
In person: 21 February 2022 • 1:20 PM - 3:40 PM
Session Chairs: Samuel G. Armato, The Univ. of Chicago (United States), Nicholas A. Petrick, U.S. Food and Drug Administration (United States)
12033-9
Author(s): Fakrul Islam Tushar, Dept. of Electrical & Computer Engineering, Pratt School of Engineering, Duke University (United States), Center for Virtual Imaging Trials, Duke University (United States), Department of Radiology, Duke University School of Medicine (United States); Ehsan Abadi, Maciej A. Mazurowski, William P. Segars, Ehsan Samei, Joseph Y. Lo, Dept. of Electrical & Computer Engineering, Center for Virtual Imaging Trials, and Department of Radiology, Duke University (United States)
In person: 21 February 2022 • 1:20 PM - 1:40 PM
12033-10
Author(s): Idil Aytekin, Onat Dalmaz, Bilkent Univ. (Turkey); Haydar Ankishan, Baskent Üniv. (Turkey); Emine U. Saritas, Bilkent Univ. (Turkey); Ulas Bagci, Northwestern Univ. (United States); Tolga Cukur, Bilkent Univ. (Turkey); Haydar Celik, Children's National Health System (United States)
In person: 21 February 2022 • 1:40 PM - 2:00 PM
In this work, we propose a novel deep learning technique to automatically detect COVID-19 patients based on audio recordings of their cough and breathing sounds. The proposed technique leverages a vision transformer model to discriminate between patients and healthy subjects based on spectrogram features of their respiratory sounds. Our model achieves performance on par with or superior to baselines employing state-of-the-art convolutional and transformer architectures, as well as traditional machine-learning classifiers, and it can distinguish COVID-19 patients from healthy subjects with over 95% AUC.
12033-11
Author(s): Sourajit Saha, Univ. of Maryland, Baltimore (United States); Yaacov Yesha, Yelena Yesha, Aryya Gangopadhyay, David Chapman, Univ. of Maryland, Baltimore County (United States); Michael Morris, Babak Saboury, National Institutes of Health Clinical Ctr. (United States); Phuong Nguyen, Univ. of Maryland, Baltimore County (United States)
In person: 21 February 2022 • 2:00 PM - 2:20 PM
The purpose of this study is to devise a computer-aided diagnosis (CAD) system that detects COVID-19 abnormalities on chest radiographs with increased efficiency and accuracy. We investigate a novel deep-learning-based ensemble model to classify the category of pneumonia from chest X-ray images, using a labeled chest radiograph dataset provided by the Society for Imaging Informatics in Medicine for a Kaggle competition. The task of our proposed CAD system is to categorize each radiograph as negative for pneumonia or as typical, indeterminate, or atypical for COVID-19. The training set of this dataset (with publicly available labels) contains 6,334 images belonging to the 4 classes. We further examine the efficacy of our proposed ensemble method: we perform an ablation study to confirm that our pipeline drives classification accuracy higher, and we compare our ensemble technique with existing ones quantitatively and qualitatively.
12033-12
Author(s): Aishik Konwer, Prateek Prasanna, Stony Brook University (United States)
In person: 21 February 2022 • 2:20 PM - 2:40 PM
In this work we present a self-supervised transformer-based approach, trained on an unlabeled chest radiograph dataset (N=27,499), to predict clinical outcomes such as mortality and the need for mechanical ventilation on a small chest radiograph dataset (N=530). The vision transformer is used as an autoencoder to extract features for the downstream task. We utilize both contrastive and reconstruction loss functions to generate robust embeddings. Experimental results demonstrate that our approach outperforms a ResNet50 feature-based supervised baseline.
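One common way to combine the two objectives is a weighted sum of an NT-Xent-style contrastive loss on two augmented views and an L2 reconstruction loss; a hypothetical sketch (the paper's exact losses and weighting are not stated in the abstract):

    import torch
    import torch.nn.functional as F

    def combined_ssl_loss(z1, z2, recon, target, temperature=0.1, alpha=0.5):
        """z1, z2: embeddings of two augmented views of the same batch;
        recon: autoencoder output; target: original images."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature             # cosine similarity matrix
        labels = torch.arange(z1.size(0), device=z1.device)
        contrastive = F.cross_entropy(logits, labels)  # matching views are positives
        reconstruction = F.mse_loss(recon, target)
        return alpha * contrastive + (1 - alpha) * reconstruction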
12033-13
Author(s): Ben Frey, Univ. of St. Thomas (United States); Lingyi Zhao, Muyinatu A. Lediju Bell, Johns Hopkins Univ. (United States)
In person: 21 February 2022 • 2:40 PM - 3:00 PM
Multiple groups have demonstrated the potential of deep learning to aid COVID-19 diagnosis using lung ultrasound B-mode images. However, no previous work considers applying these deep learning models to the signal processing stages that occur prior to traditional ultrasound B-mode image formation. Considering the multiple signal processing stages required to achieve ultrasound B-mode images, our research objective is to investigate the most appropriate stage for our deep learning approach to COVID-19 B-line feature detection, starting with raw channel data received by an ultrasound transducer. The results support proceeding with future COVID-19 B-line feature detection using B-mode images.
12033-14
Author(s): Masahiro Oda, Tong Zheng, Yuichiro Hayashi, Nagoya University (Japan); Yoshito Otake, Nara Institute of Science and Technology (Japan); Masahiro Hashimoto, Keio University School of Medicine (Japan); Toshiaki Akashi, Shigeki Aoki, Juntendo University (Japan); Kensaku Mori, Nagoya University (Japan)
In person: 21 February 2022 • 3:00 PM - 3:20 PM
12033-15
Author(s): Catalin Fetita, Mathilde Maury, Télécom SudParis (France); Aurélien Justet, Caen Univ. Hospital (France); Juliette Dindart, Avicenne Hospital, AP-HP (France); Jean Richeux, Caen Univ. Hospital (France); Lucile Sese, Avicenne Hospital, AP-HP (France), Univ. Sorbonne (France); Nicolas Aide, Caen Univ. Hospital (France); Thomas Gille, Hilario Nunes, Jean-François Bernaudin, Pierre-Yves Brillet, Avicenne Hospital, AP-HP (France), Univ. Sorbonne (France)
In person: 21 February 2022 • 3:20 PM - 3:40 PM
In this study we investigate residual vascular alteration in post-acute sequelae of COVID-19 (PASC) by examining possible associations between vascular remodeling biomarkers extracted from CT and clinical and morphological parameters. The vascular biomarkers used are the blood volume ratio BV5/BV50, an index of local peripheral vascular density, and a peripheral composite vascular remodeling index, both measured in the antero-postero-lateral lung periphery (excluding the mediastinal region). The associations between these biomarkers and clinical and morphological parameters (DLCO, CT attenuation, lung deformation, perfusion scintigraphy) show moderate to strong correlations, highlighting the ability of the proposed vascular biomarkers to capture the persistence of vascular alterations in PASC in relation to the development of fibrotic patterns, a promising direction for future research.
Awards and Plenary Session
In person: 21 February 2022 • 4:00 PM - 5:15 PM
Session Chairs: Metin N. Gurcan, Wake Forest Baptist Medical Ctr. (United States), Robert M. Nishikawa, Univ. of Pittsburgh (United States)
4:00 pm: Symposium Chair Welcome and Best Student Paper Award Announcement
The first-place winner and runner-up of the Robert F. Wagner All-Conference Student Paper Award will be announced.
4:15 pm: SPIE 2022 President's Welcome and New SPIE Fellows Acknowledgements
4:20 pm: SPIE Harrison H. Barrett Award in Medical Imaging
This award will be presented in recognition of outstanding accomplishments in medical imaging.
12032-300
Author(s): Jennifer N. Avari Silva, Washington Univ. in St. Louis (United States)
In person: 21 February 2022 • 4:30 PM - 5:15 PM
With the increased availability of extended reality (XR) devices in the marketplace, there has been rapid development of medical XR applications spanning education, training, rehabilitation, pre-procedural planning, and intra-procedural use. We will explore various use cases to understand the importance of matching technology to use case, focusing on intra-procedural uses, which generally carry the highest risk to patient and medical provider but may have the most sizable benefit to patient and procedure.
Session 4: Keynote and Novel Applications
In person: 22 February 2022 • 8:00 AM - 9:40 AM
Session Chairs: Karen Drukker, The Univ. of Chicago Medicine (United States), Khan M. Iftekharuddin, Old Dominion Univ. (United States)
12033-500
TBD (Keynote Presentation)
Author(s): Jayashree Kalpathy-Cramer, Athinoula A. Martinos Ctr. for Biomedical Imaging (United States)
In person: 22 February 2022 • 8:00 AM - 8:40 AM
12033-16
Author(s): Shaojie Chang, Yongfeng Gao, Marc Jason Pomeroy, Zhengrong Liang, Stony Brook Univ. (United States)
In person: 22 February 2022 • 8:40 AM - 9:00 AM
12033-17
Author(s): Xiaohong W. Gao, Middlesex Univ. (United Kingdom)
In person: 22 February 2022 • 9:00 AM - 9:20 AM
One of the challenges in developing an AI-enhanced system is that performance degrades considerably when the system is tested on an independent cohort dataset obtained from different research centres. This paper improves detection performance by increasing the colour contrast between lesioned and surrounding mucosa regions, especially for early-onset squamous cancer in endoscopic oesophagus videos. Significant benefit for early detection of oesophageal cancer is realised not only for visual inspection during the endoscopy procedure but also for training a deep learning system for detection, delineation, and classification, improving sensitivity, specificity, and accuracy by 11%, 4%, and 6%, respectively.
12033-18
Author(s): Kai Jiang, Masahiro Oda, Nagoya Univ. (Japan); Hironari Shiwaku, Fukuoka Univ. (Japan); Masashi Misawa, Showa Univ. Northern Yokohama Hospital (Japan); Kensaku Mori, Nagoya Univ. (Japan)
In person: 22 February 2022 • 9:20 AM - 9:40 AM
This paper presents an automated real-time esophageal achalasia detection method for esophagoscopy assistance. Achalasia is a well-recognized primary esophageal motor disorder of unknown etiology. To diagnose achalasia, endoscopic evaluation of the esophagus and stomach is recommended. However, esophagoscopy has low sensitivity in early-stage achalasia; only about half of patients with early-stage achalasia can be identified. Thus, a quantitative detection system for real-time esophagoscopy video is required to assist in the diagnosis of achalasia. This paper presents the use of a convolutional neural network (CNN) to detect all achalasia frames in esophagoscopy video. We trained and evaluated our network with an original dataset extracted from several esophagoscopy videos of achalasia patients. Furthermore, we developed a real-time achalasia detection computer-aided diagnosis (CAD) system with the trained network.
Award Announcements
In person: 22 February 2022 • 9:40 AM - 9:45 AM
Session Chairs: Karen Drukker, The Univ. of Chicago Medicine (United States), Khan M. Iftekharuddin, Old Dominion Univ. (United States)
The Computer-Aided Diagnosis Paper Award winners, conference Robert F. Wagner Award finalists, and poster award recipients will be recognized, and certificates will be distributed.
Session 5: Deep Learning I
In person: 22 February 2022 • 10:10 AM - 12:10 PM
Session Chairs: Catalin Fetita, Télécom SudParis (France), Zhengrong Liang, Stony Brook Univ. (United States)
12033-19
Author(s): Amir Reza Sadri, Thomas DeSilvio, Case Western Reserve Univ. (United States); Andrei Purysko, Cleveland Clinic (United States); Rajmohan Paspulati, Kenneth Friedman, Univ. Hospitals Cleveland Medical Ctr. (United States); Smitha S. Krishnamurthi, David Liska, Cleveland Clinic (United States); Sharon L. Stein, Univ. Hospitals Cleveland Medical Ctr. (United States); Satish E. Viswanath, Case Western Reserve Univ. (United States)
In person: 22 February 2022 • 10:10 AM - 10:30 AM
We present a novel tool that integrates wavelet networks into a convolutional neural network (CNN), termed a deep hybrid convolutional wavelet network (DHCWN). The proposed model uses wavelons, built on the shift and scale parameters of a mother wavelet, as its basic units. Whereas the activation functions in a typical CNN are fixed and monotonic (e.g., ReLU), the activation functions of the DHCWN are wavelets, which are flexible and more stable during optimization. The DHCWN was evaluated using a multi-institutional cohort of 95 pre-treatment rectal cancer MRI scans to predict pathologic response to neoadjuvant chemoradiation. Compared to a CNN and a multilayer wavelet perceptron, the DHCWN yielded significantly better performance in predicting treatment response in the training and hold-out validation sets, with 90.67% and 91.17% accuracy, respectively. The DHCWN thus offers a significantly more extensible and effective solution for characterizing predictive signatures from routine imaging data.
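A minimal PyTorch sketch of what a "wavelon" activation with learnable shift and scale might look like, assuming a Mexican-hat mother wavelet for illustration (the abstract does not name the mother wavelet used):

    import torch
    import torch.nn as nn

    class WaveletActivation(nn.Module):
        """Per-channel wavelet activation: (1 - u^2) * exp(-u^2 / 2),
        with u = (x - shift) / scale; shift and scale are learned."""

        def __init__(self, num_channels: int):
            super().__init__()
            self.shift = nn.Parameter(torch.zeros(num_channels))
            self.log_scale = nn.Parameter(torch.zeros(num_channels))  # scale = exp(.) > 0

        def forward(self, x):  # x: (N, C, H, W)
            a = self.log_scale.exp().view(1, -1, 1, 1)
            b = self.shift.view(1, -1, 1, 1)
            u = (x - b) / a
            return (1.0 - u ** 2) * torch.exp(-0.5 * u ** 2)

Unlike ReLU, such an activation is non-monotonic and smooth everywhere, which is the flexibility the abstract refers to.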
12033-20
Author(s): Ghada Zamzmi, Tochi Oguguo, Sivaramakrishnan Rajaraman, Sameer Antani, National Institutes of Health (United States)
In person: 22 February 2022 • 10:30 AM - 10:50 AM
Existing work on automated echocardiography view classification assumes that views in the testing set must belong to the limited set of views present in the training set (closed world). This assumption does not hold in real-world environments, which are open and contain unseen examples, and it may drastically weaken the robustness of conventional closed-world view classification approaches. In this work, we develop an open-world active learning approach for echocardiography view classification in which the network classifies images of known views into their respective classes and detects images of "unknown" views. Our results on an echocardiography dataset containing known and unknown views showed the superiority of the proposed approach (up to a 10% increase in accuracy) compared to conventional closed-world classification approaches.
12033-21
Author(s): Sai Kiran R. Maryada, The Univ. of Oklahoma (United States); William Lee Booker, Babel Analytics (United States); Gopichandh Danala, The Univ. of Oklahoma (United States); Catherine An Ha, Babel Analytics (United States); Dean F. Hougen, Bin Zheng, The Univ. of Oklahoma (United States)
In person: 22 February 2022 • 10:50 AM - 11:10 AM
Building a robust AI model requires a large and diverse dataset for training and validation. While a large number of retinal fundus photos are available online, collecting them into a clean, well-structured dataset is a difficult and manually intensive process. In this work, we propose a two-stage deep-learning system that automatically identifies clean retinal fundus images and discards images with severe artifacts. The two stages are transfer learning models based on the ResNet-50 architecture, pre-trained on ImageNet, with raised softmax threshold values to reduce false positives. The first-stage classifier identifies "easy" images, and the remaining "difficult" images are passed to the second-stage classifier. The two-stage model yields a positive predictive value (PPV) of 98.56% for the target class, compared to a PPV of 95.74% for a single-stage model, and reduces false positives by two-thirds.
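A hedged sketch of the two-stage routing just described: a raised softmax confidence threshold sends "difficult" images to the second-stage classifier (the threshold value and function names here are illustrative, not the study's):

    import torch

    @torch.no_grad()
    def two_stage_predict(stage1, stage2, images, threshold=0.9):
        p1 = torch.softmax(stage1(images), dim=1)
        conf, pred = p1.max(dim=1)
        hard = conf < threshold                        # "difficult" images
        if hard.any():                                 # route them to stage 2
            p2 = torch.softmax(stage2(images[hard]), dim=1)
            pred[hard] = p2.argmax(dim=1)
        return pred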
12033-22
Author(s): Ravi K. Samala, Nicholas Petrick, Berkman Sahiner, Gene Pennello, Kenny H. Cha, Mohammad Mehdi Farhangi, U.S. Food and Drug Administration (United States)
In person: 22 February 2022 • 11:10 AM - 11:30 AM
We investigated the workings of ensembles of deep convolutional neural networks (DCNNs) for the classification of true and false lung nodule candidates in thoracic CT. We show that an ensemble approach results in improved detection performance and potentially improved robustness to out-of-distribution data. We analyzed the training trajectories of six DCNNs using the uniform manifold approximation and projection (UMAP) of the output scores at different training checkpoints and compared rank-biased overlap measures to better understand the diversity in model training and output scores. Our future work includes incorporating these analyses to develop an ensemble model that can handle imbalances among different subgroups in the training data.
12033-23
Author(s): Samuel Robertson, Anup Tuladhar, Deepthi Rajashekar, Nils D. Forkert, Univ. of Calgary (Canada)
In person: 22 February 2022 • 11:30 AM - 11:50 AM
The efficacy of stroke treatments is highly time-sensitive, and accelerated diagnosis may improve patient outcomes. Lesion identification in MRI datasets is time consuming and challenging. Automatic lesion localization can expedite diagnosis by flagging images and corresponding regions of interest for visual assessment. In this work, we propose a deep reinforcement learning model to localize and detect ischemic stroke lesions in fluid attenuated inversion recovery MRI images, combining advances in computer vision to sequentially localize multiple lesions. The results show that the model learns to successfully localize lesions in challenging hybrid data from multiple studies.
12033-24
Author(s): Álvaro García Faura, XLAB d.o.o. (Slovenia); Dejan Štepec, Tomaž Martinčič, XLAB d.o.o. (Slovenia), Univ. of Ljubljana (Slovenia); Danijel Skočaj, Univ. of Ljubljana (Slovenia)
In person: 22 February 2022 • 11:50 AM - 12:10 PM
A key component of improved cancer diagnosis is the development of computer-assisted tools. In this article, we present the solution that won the SegPC-2021 competition for the segmentation of multiple myeloma plasma cells in microscopy images. The labels in the competition dataset were generated semi-automatically and contained noise. To deal with this, new labels were generated from the existing ones, heavy image augmentation was applied, and predictions were combined by a custom ensemble strategy. These techniques, along with state-of-the-art feature extractors and instance segmentation architectures, resulted in a mean intersection-over-union of 0.9389 on the SegPC-2021 final test set.
Tuesday/Wednesday Poster Viewing
In person: 22 February 2022 • 12:00 PM - 7:00 PM
Posters will be on display Tuesday and Wednesday with extended viewing until 7:00 pm on Tuesday. The poster session with authors in attendance will be Wednesday evening from 5:30 to 7:00 pm. Award winners will be identified with ribbons during the reception. Award announcement times are listed in the conference schedule.
Session 6: Breast II
In person: 22 February 2022 • 1:20 PM - 3:00 PM
Session Chairs: Susan M. Astley, The Univ. of Manchester (United Kingdom), Maryellen L. Giger, The Univ. of Chicago (United States)
12033-25
Author(s): Natalie M. Baughan, Lindsay Douglas, Maya Ballard, Esther Lee, Alexandra Edwards, Li Lan, Hui Li, Maryellen Giger, The Univ. of Chicago (United States)
In person: 22 February 2022 • 1:20 PM - 1:40 PM
We investigated associations between mammographic texture features and DCE-MRI background parenchymal enhancement (BPE) using Kendall’s tau-b and a two-sample t-test. BPE levels were provided from the radiology report and texture features were calculated from corresponding mammograms using a region of interest selected from the central region behind the nipple on the unaffected breast. Kendall test results indicated a statistically significant correlation of BPE with 5 selected texture features and t-test results indicated a significant difference in one feature between levels of BPE after multiple comparisons correction. Results indicate an association between coarse, low spatial frequency mammographic patterns and increased BPE.
12033-26
Author(s): Heather M. Whitney, Wheaton College (United States), The Univ. of Chicago (United States); Yu Ji, Tianjin Medical Univ. Cancer Institute & Hospital (China); Hui Li, The Univ. of Chicago (United States); Peifang Liu, Tianjin Medical Univ. Cancer Institute & Hospital (China); Maryellen Giger, The Univ. of Chicago (United States)
In person: 22 February 2022 • 1:40 PM - 2:00 PM
A collection of radiomic features extracted from DCE-MR images of breast cancers was investigated for statistically significant differences in feature values when designations of Luminal A or Luminal B agreed or disagreed between immunohistochemical staining alone and the St. Gallen reference standards. The impact on classification performance in identifying lesions as Luminal A or Luminal B was assessed through review of feature selection and classification performance (area under the receiver operating characteristic curve) when using agreement or disagreement lesions. The results demonstrate differences in feature value distributions and AI classification performance between the two types of lesions according to their molecular subtyping under the two reference standards.
12033-27
Author(s): Shannon Doyle, Francesco Dal Canton, Jelle Wesseling, The Netherlands Cancer Institute (Netherlands); Clara I. Sánchez, Univ. of Amsterdam (Netherlands); Jonas Teuwen, The Netherlands Cancer Institute (Netherlands)
In person: 22 February 2022 • 2:00 PM - 2:20 PM
Duct detection in hematoxylin and eosin-stained whole-slide images (WSIs), along with downstream analysis, is necessary for the diagnosis and treatment planning of ductal carcinoma in situ (DCIS). This process can be facilitated by deep learning methods. We used novel self-supervised learning methods to produce feature encodings and compared their performance with ImageNet features and random initialisation on the downstream task of duct detection.
12033-28
Author(s): Yen Nhi T. Vu, Brent Mombourquette, Thomas P. Matthews, Jason Su, Sadanand Singh, Whiterabbit.ai (United States)
In person: 22 February 2022 • 2:20 PM - 2:40 PM
Regular breast screening mammography allows for early detection and treatment of cancer, reducing breast cancer mortality. Computer-aided detection (CAD) software has been available to radiologists for decades to support these goals. However, CAD has failed to improve interpretation of full-field digital mammography (FFDM) images due to its low sensitivity. Deep learning models have shown promise in improving the performance of radiologists; unfortunately, the lack of large finding-level annotated datasets has posed a challenge for training deep-learning-based CAD systems. In this work, we propose a simple and intuitive two-stage detection framework, named WRDet, and define a new criterion for matching predicted proposals with loose bounding box annotations. At just one false-positive prediction per image, WRDet achieves a sensitivity of 0.94. Our results demonstrate the possibility of a CAD system that could improve the accuracy of screening mammography.
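The matching criterion itself is not spelled out in the abstract; one plausible loose-matching rule, shown purely as an illustration, accepts a proposal whose center falls inside the annotated box:

    def center_hit(pred_box, gt_box):
        """Illustrative loose match (not necessarily WRDet's criterion):
        boxes are (x1, y1, x2, y2)."""
        cx = (pred_box[0] + pred_box[2]) / 2.0
        cy = (pred_box[1] + pred_box[3]) / 2.0
        return gt_box[0] <= cx <= gt_box[2] and gt_box[1] <= cy <= gt_box[3]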
12033-29
Author(s): Yue Li, Sun Yat-Sen Univ. (China); Zilong He, Nanfang Hospital, Southern Medical Univ. (China); Xiangyuan Ma, Shantou Univ. (China); Weixiong Zeng, Jialing Liu, Weimin Xu, Zeyuan Xu, Sina Wang, Chanjuan Wen, Hui Zeng, Jiefang Wu, Weiguo Chen, Nanfang Hospital, Southern Medical Univ. (China); Yao Lu, Sun Yat-Sen Univ. (China)
In person: 22 February 2022 • 2:40 PM - 3:00 PM
This study compared a deep-learning-based computer-aided detection (CADe) model for architectural distortion (AD) in digital breast tomosynthesis (DBT) and full-field digital mammography (FFDM). The experimental results showed that the model in DBT achieved significantly better detection performance than the model in FFDM (p=0.002). Qualitative analysis illustrated that DBT had the ability to overcome the problem of tissue superimposition in FFDM and could help the CADe model improve its detection performance. This conclusion was consistent with the clinical experience of radiologists.
Session 7: Detection
In person: 22 February 2022 • 3:30 PM - 4:50 PM
Session Chairs: Weijie Chen, U.S. Food and Drug Administration (United States), Horst Karl Hahn, Fraunhofer-Institut für Digitale Medizin MEVIS (Germany)
12033-30
Author(s): Axel Wismüller, Univ. of Rochester Medical Ctr. (United States); M. Ali Vosoughi, Adora DSouza, Anas Zainul Abidin, Univ. of Rochester (United States)
In person: 22 February 2022 • 3:30 PM - 3:50 PM
Unveiling causal relationships among time series in multivariate observational data is a challenging research topic, and conventional methods are of limited value due to ill-posedness when the number of nodes exceeds the number of temporal observations. We have proposed the large-scale Augmented Granger Causality (lsAGC) algorithm, which augments a dimensionality-reduced representation of the system's state space with data from the conditional source time series taken from the original input space. We apply lsAGC to synthetic fMRI data with known ground truth and compare its performance to state-of-the-art methods. Our results suggest that lsAGC significantly outperforms existing methods in diagnostic accuracy, measured by area under the receiver operating characteristic curve, demonstrating its potential for large-scale observations in neuroimaging studies of the human brain.
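A simplified single-lag sketch in the spirit of lsAGC (not the authors' exact formulation): reduce the multivariate series to a low-dimensional state space, then test whether augmenting that state with a candidate source series improves prediction of a target series:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    def lsagc_score(X, src, tgt, n_comp=5):
        """X: (n_timepoints, n_series). Returns a log variance ratio;
        values > 0 suggest an influence of series src on series tgt."""
        Z = PCA(n_components=n_comp).fit_transform(X)      # reduced state space
        past = Z[:-1]
        past_aug = np.column_stack([Z[:-1], X[:-1, src]])  # augmented state
        future = X[1:, tgt]
        err = lambda A: np.var(future - LinearRegression().fit(A, future).predict(A))
        return np.log(err(past) / err(past_aug))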
12033-31
Author(s): Indrani Bhattacharya, Wei Shao, Simon J. C. Soerensen, Richard E. Fan, Jeffrey B. Wang, Christian A. Kunder, Pejman Ghanouni, Geoffrey A. Sonn, Mirabela Rusu, Stanford Univ. (United States)
In person: 22 February 2022 • 3:50 PM - 4:10 PM
The primary objective of prostate cancer care is identifying and treating aggressive cancer while reducing over-treatment of indolent cancer. Magnetic resonance imaging (MRI) can help detect prostate cancer, but radiologist interpretations suffer from false positives, false negatives, and high inter-reader variability. Existing automated methods suffer from a sensitivity-specificity trade-off, making them unsuitable for clinical use. We present an automated approach that integrates, for the first time, a radiology-pathology fusion model with the zonal distribution of prostate cancer to selectively identify aggressive and indolent cancer on prostate MRI. This can help reduce unnecessary biopsies and target aggressive cancer components during biopsy.
12033-32
Author(s): Kento Nishihira, Hidenobu Suzuki, Yoshiki Kawata, Tokushima Univ. (Japan); Noboru Niki, Medical Science Institute, Inc. (Japan)
In person: 22 February 2022 • 4:10 PM - 4:30 PM
12033-33
Author(s): Omer Shmueli, Chen Solomon, Noam Ben-Eliezer, Hayit Greenspan, Tel Aviv Univ. (Israel)
In person: 22 February 2022 • 4:30 PM - 4:50 PM
In this work, we suggest a new network architecture, based on Y-Net and EfficientNet models, with attention layers to improve network performance and reduce overfitting. The attention layers also allow us to extract lesion locations. In addition, we introduce an innovative regularization scheme on the attention weight mask that makes it focus on lesions while still letting it search different areas. Finally, we explore adding synthetic lesions during training: following recent work, we generate artificial lesions in healthy brain MRI scans to augment our training data. Our system achieves 90% accuracy in identifying cases that contain lesions (vs. healthy), an improvement of more than 12% over an equivalent system without the attention layers and the added data.
Session WK3: Workshop: Live Demonstrations
In person: 22 February 2022 • 5:00 PM - 7:00 PM
CALL FOR PARTICIPATION
The goal of this workshop is to provide a forum for systems and algorithms developers to show off their creations. The intent is for the audience to be inspired to conduct derivative research, for the demonstrators to receive feedback and find new collaborators, and for all to learn about the rapidly evolving field of medical imaging.

The Live Demonstration Workshop invites participation from all of the conferences that comprise the SPIE Medical Imaging symposium. We encourage the CAD, Digital Pathology, Image Processing, Imaging Informatics, Image Perception, Image-Guided Procedures, Modeling, Physics, and Robotic Interventions conferences to participate.

This workshop features interactive demonstrations that are complementary to the topics of SPIE Medical Imaging. Workshop demonstrations include samples, systems, and software demonstrations that depict the implementation, operation, and utility of cutting-edge as well as mature research. Having an accepted SPIE Medical Imaging paper is not required for giving a Live Demonstration; however, authors of SPIE Medical Imaging papers are encouraged to submit demonstrations that are complementary to their oral and poster presentations.

The session will include a Certificate of Merit Award and $500 prize sponsored by Siemens Healthineers presented to one demonstration considered to be of exceptional interest. We invite all workshop visitors to vote for three of their favorite demonstrations, with the final winner chosen from the top scorers by a group of appointed judges.

IMPORTANT DATES
  • January 14, 2022: Deadline for submission
  • January 21, 2022: Notification of acceptance
  • February 4, 2022: Deadline for two-slide summary
JOIN THE WORKSHOP
If you would like to demonstrate at the SPIE Medical Imaging Live Demonstrations Workshop, please send an e-mail before the submission deadline to Horst Hahn, Karen Drukker, and Lubomir Hadjiiski:

horst.hahn@mevis.fraunhofer.de
kdrukker@uchicago.edu
lhadjisk@umich.edu

In the e-mail, supply the following information:
  • Title of the demo
  • Names and affiliations (name of institute, city, country) of the demonstrators
  • Short description of the demo, one paragraph minimum. Make sure it clearly describes the technology and application area of the demo. You may cite or include a paper describing the demo.
  • Optionally, describe the public data used in the development or evaluation of the system. Include a link to the data or to a page that describes how to access that data.
  • Optionally, include a link to a video showing the system in action.
NOTES
Please note the following rules and requirements:
  • Teams from academia (universities, university medical centers, research organizations), government, and industry are invited to participate in this year’s workshop. Demonstrations should be scientific and not commercial in nature; demonstration of research prototypes is highly encouraged.
  • After you submit a description of your proposed demonstration, you will receive a confirmation by e-mail.
  • The organizers will accept teams for demonstrations based on the quality of the provided description. If there are more proposals than presentation slots in the workshop, organizers will also strive to select a representative mix of applications.
  • Notification of acceptance/rejection of your demonstration for the Workshop will be emailed about 3 weeks before the conference (see ‘important dates’ above).
  • For demonstrations accepted for presentation at the Live Demonstration Workshop:
    • The accepted demonstrations will be listed online in the workshop program.
    • All teams need to provide one or two slides describing their system before the conference (see ‘important dates’ above) from which the opening presentation will be compiled.
    • In the case of an in-person SPIE Medical Imaging meeting, each team is responsible for bringing its own equipment. The organizers will provide a table and power supply for each demonstration. Demos should run on a single laptop; one external monitor of at most 25″ is allowed.
    • In the case of a virtual SPIE Medical Imaging meeting, each team will demonstrate its tool over the internet. More details will be provided in the future.
    • Participation in the workshop is free of charge, but all demonstrators (those present during the workshop) must be registered to attend the SPIE Medical Imaging Conference.
Session 8: Neurology
In person: 23 February 2022 • 8:00 AM - 9:40 AM
Session Chairs: Lubomir M. Hadjiiski, Michigan Medicine (United States), Hongbing Lu, PLA Air Force Military Medical Univ. (China)
12033-34
Author(s): David DeVries, Gerald C. Baines Ctr. for Translational Cancer Research (Canada), Western Univ. (Canada); Frank Lagerwaard, Amsterdam UMC (Netherlands); Jaap Zindler, Haaglanden Medical Ctr. (Netherlands), Holland Proton Therapy Ctr. (Netherlands); Timothy Yeung, RefleXion Medical (United States); George Rodrigues, George Hajdok, Western Univ. (Canada); Aaron Ward, Gerald C. Baines Ctr. for Translational Cancer Research (Canada), Western Univ. (Canada)
In person: 23 February 2022 • 8:00 AM - 8:20 AM
The prediction of brain metastasis (BM) response to stereotactic radiosurgery could assist clinicians when choosing BM treatments. Pre-treatment clinical and magnetic resonance imaging (MRI) radiomic features were used with a random forest classifier and bootstrap experimental design to investigate predicting treatment endpoints. It was found that in-field progression, out-of-field progression, and 1-year overall survival could be predicted with respective AUC estimates of 0.70, 0.57 and 0.66. The effect of incorporating data from multiple MR scanners was also investigated. MR scanner variability was found to decrease classifier AUC, though pre-processing methods were found to counteract this effect for some scanner models.
12033-35
Author(s): Marcel Bengs, Finn Behrendt, Max-Heinrich Laves, Technische Univ. Hamburg (Germany); Julia Krüger, Roland Opfer, jung diagnostics GmbH (Germany); Alexander Schlaefer, Technische Univ. Hamburg (Germany)
In person: 23 February 2022 • 8:20 AM - 8:40 AM
Anomaly detection in brain Magnetic Resonance Images (MRIs) is a challenging task. Unsupervised anomaly detection (UAD) in brain MRI with deep learning has shown promising results to provide a quick, initial assessment. So far, these methods only rely on the visual appearance of healthy brain anatomy for UAD. We propose deep learning for UAD in 3D brain MRI considering additional age information. We analyze the value of additional age information during training, as an additional anomaly score, and systematically study several architecture concepts. Our approach significantly improves UAD performance compared to deep learning approaches without age information.
12033-36
Author(s): Emma Stanley, Deepthi Rajashekar, Pauline Mouches, Matthias Wilms, Kira Plettl, Nils D. Forkert, Univ. of Calgary (Canada)
In person: 23 February 2022 • 8:40 AM - 9:00 AM
In this study, a fully convolutional neural network (CNN) is proposed to classify attention deficit/hyperactivity disorder (ADHD) from T1-weighted brain magnetic resonance imaging (MRI) of youth aged 9-11 in the Adolescent Brain Cognitive Development Study. Saliency voxel attribution maps were generated, which allowed for identification of the brain regions that were highly influential in model prediction. The identified brain regions are known to show structural differences in youth with ADHD. This study demonstrates the feasibility of using a CNN for detection of adolescent ADHD, while providing explanations of the brain regions involved in classification.
12033-37
Author(s): Abdullah Thabit, Australian e-Health Research Ctr. (Australia), CSIRO Health & Biosecurity (Australia); ShenPeng Li, Australian e-Health Research Ctr. (Australia); Rob Williams, The Univ. of Melbourne (Australia); Victor L. Villemagne, Univ. of Pittsburgh (United States); Christopher C. Rowe, The Univ. of Melbourne (Australia); Vincent Dore, Pierrick Bourgeat, Australian e-Health Research Ctr. (Australia)
In person: 23 February 2022 • 9:00 AM - 9:20 AM
Alzheimer’s disease (AD) is the most common cause of dementia. Amyloid PET imaging has been used in the diagnosis of AD to measure the amyloid burden in the brain, but amyloid imaging is sensitive to the scanner model and therefore requires harmonization when different scanner models are used. In this work, we propose an automatic approach that harmonizes PET images through unsupervised learning. We propose Smoothing-CycleGAN, a modified CycleGAN that uses a 3D smoothing kernel to learn the optimal point spread function (PSF) for bringing PET images to a common spatial resolution. We validate our approach using two datasets and analyze SUVR agreement before and after PET image harmonization. Our results show that the PSF of PET images with different spatial resolutions can be estimated automatically using Smoothing-CycleGAN, resulting in better SUVR agreement after image translation.
12033-38
Author(s): Alejandro Gutierrez, Anup Tuladhar, Deepthi Rajashekar, Nils D. Forkert, Univ. of Calgary (Canada)
In person: 23 February 2022 • 9:20 AM - 9:40 AM
Follow-up imaging is fundamental for the diagnosis, treatment, and rehabilitation of acute ischemic stroke patients. However, multiple imaging techniques are used for this purpose, such as FLAIR and NCCT, which creates a challenge for medical image analysis methods. A simple solution to overcome this problem is using the unpaired CycleGAN image-to-image translation method, which can unify the data by translating one modality to another. This method, however, fails to translate the stroke lesions correctly. In this work, two additions are made (attention-guided learning, gradient-consistency loss), which help to successfully preserve the lesions. Ablation evaluations reveal the significance of each addition.
Session 9: Deep Learning II
In person: 23 February 2022 • 10:10 AM - 12:10 PM
Session Chairs: Ronald M. Summers, National Institutes of Health Clinical Ctr. (United States), Kenji Suzuki, Tokyo Institute of Technology (Japan)
12033-39
Author(s): Zhiyang Zheng, Georgia Institute of Technology (United States); Yi Su, Kewei Chen, David A. Weidman, Banner Alzheimer's Institute (United States); Teresa Wu, Arizona State Univ. (United States); Shihchung Lo, Fleming Lure, MS Technologies Corp. (United States); Jing Li, Georgia Institute of Technology (United States)
In person: 23 February 2022 • 10:10 AM - 10:30 AM
Multi-modality images often exist for the diagnosis/prognosis of a disease, but with different levels of accessibility and accuracy. We propose Cross-Modality Transfer Learning (CMTL) for accurate diagnosis/prognosis based on a standard imaging modality with high accessibility (mod_HA), with a novel training strategy that uses not only data of mod_HA but also knowledge transferred from a model based on an advanced imaging modality with low accessibility (mod_LA). We applied CMTL to predict conversion of individuals with Mild Cognitive Impairment (MCI) to Alzheimer’s Disease (AD), demonstrating improved performance of the MRI (mod_HA)-based model by leveraging knowledge transferred from the tau-PET (mod_LA)-based model.
12033-40
Author(s): Degan Hao, Dooman Arefan, Shandong Wu, Univ. of Pittsburgh (United States)
In person: 23 February 2022 • 10:30 AM - 10:50 AM
We design an intelligent tool that imitates radiologists' reading behavior and knowledge for lesion localization on radiographs using deep reinforcement learning (DRL). In detail, we developed a Q-network to simulate a radiologist's succession of saccades and fixations, iteratively choosing the next ROI of the radiograph to attend to while reading images. Guided by a novel rewarding scheme, our algorithm learns to iteratively zoom in for a close-up assessment of potentially abnormal sub-regions until a termination condition is met. We train and test our model with 80% and 20%, respectively, of the ChestX-ray8 dataset with pathologically confirmed bounding boxes (B-Boxes). Localization accuracy is measured at different thresholds of intersection over union (IoU) between the DRL-generated and ground-truth B-Boxes. At an IoU threshold of 0.1, the proposed method achieves accuracies of 0.996, 0.412, 0.667, and 0.650 for cardiomegaly, mass, pneumonia, and pneumothorax, respectively.
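For reference, the IoU-thresholded localization accuracy used in such evaluations can be computed as in this short sketch:

    def iou(a, b):
        """Intersection over union of two boxes (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    def localization_accuracy(pred_boxes, gt_boxes, threshold=0.1):
        hits = sum(iou(p, g) >= threshold for p, g in zip(pred_boxes, gt_boxes))
        return hits / len(gt_boxes)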
12033-41
Author(s): Ansh Roge, Amogh Hiremath, Michael Sobota, Case Western Reserve Univ. (United States); Sree Harsha Tirumani, Leonardo Kayat Bittencourt, Univ. Hospitals of Cleveland (United States); Justin Ream, Ryan Ward, The Cleveland Clinic Foundation (United States); Halimat Olaniyan, Sadhna Verma, Univ. of Cincinnati (United States); Andrei Purysko, The Cleveland Clinic Foundation (United States); Anant Madabhushi, Rakesh Shiradkar, Case Western Reserve Univ. (United States)
In person: 23 February 2022 • 10:50 AM - 11:10 AM
In this study, we analyzed the sensitivity of CNNs to radiologist delineations of prostate cancer (PCa) ROIs with regard to distinguishing clinically significant from insignificant PCa. Five radiologists delineated PCa ROIs on bi-parametric MRI (bpMRI) in the training set (n1=112 lesions). Patches were extracted using the ROI delineations and used to train individual CNNs with a SqueezeNet architecture to classify significant PCa. The resulting networks were relatively consistent, with no significant difference in AUCs (0.82 ± 0.02). These models were evaluated on independent test sets (n2=85 lesions, n3=29 lesions); however, the predictions were relatively inconsistent, with ICC(2,1) scores across D2 and D3 of 0.74 and 0.54, respectively. Closer agreement in ROI overlap produced higher correlation in predictions on the external test sets (R = 0.89, p < 0.05), suggesting that CNNs are influenced by inter-reader differences in ROI delineations.
12033-42
Author(s): Yongjian Yu, Axon Connected, LLC (United States); Jue Wang, Union College (United States)
In person: 23 February 2022 • 11:10 AM - 11:30 AM
Cell characterization is key to studying the medical signaling of cancer-derived cells in peripheral blood samples under a high-resolution fluorescence microscope. The task has been challenging for traditional image processing and machine learning techniques due to imaging artifacts, noise, debris, defocusing, shallow depth of field, and high variability in cell morphotypes and fluorescence. We present a compact deep learning method that combines cell component segmentation/grouping with guided feature learning for categorizing circulating tumor cells from lung cancer liquid biopsies. The method demonstrates promising performance with a small training dataset and is effective, efficient, and valuable in low-cost clinical applications.
12033-43
Author(s): Mina Rezaei, Ludwig Maximilian University (Germany); Janne Nappi, Massachusetts General Hospital / Harvard Medical School (United States); Christoph Meinel, Potsdam University (Germany); Hiro Yoshida, Massachusetts General Hospital / Harvard Medical School (United States)
In person: 23 February 2022 • 11:30 AM - 11:50 AM
There is an unmet need for deep learning that can automatically adapt to the real-world conditions of imbalanced medical imaging data. We applied uncertainty estimation to the representation learning of long-tailed and out-of-distribution samples. By estimating the relative uncertainties of the input data with dynamic Monte-Carlo dropout and a combination of losses, our Bayesian framework is able to adapt to the imbalanced data and learn generalizable classifiers. Our evaluation on two public semantic segmentation datasets with different class-imbalance ratios showed that the proposed framework generalizes across datasets better than existing state-of-the-art models.
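A minimal sketch of Monte-Carlo dropout uncertainty estimation in PyTorch; the stand-in classifier, sample count, and use of predictive entropy are illustrative assumptions rather than the authors' exact configuration:

```python
import torch
import torch.nn as nn

model = nn.Sequential(  # stand-in classifier; the paper's network differs
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 3)
)

def mc_dropout_predict(model, x, n_samples=20):
    """Keep dropout active at inference and average softmax outputs."""
    model.train()  # enables dropout; freeze any batch-norm layers in practice
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean = probs.mean(dim=0)
    # Predictive entropy as a simple per-sample uncertainty score.
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy

x = torch.randn(8, 16)
mean_probs, uncertainty = mc_dropout_predict(model, x)
```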
12033-44
Author(s): Anees Kazi, Viktoria Markova, Beiyan Liu, Prabhat Reddy Kondamadugula, Ahmed Adly, Shahrooz Faghihroohi, Nassir Navab, Technische Univ. München (Germany)
In person: 23 February 2022 • 11:50 AM - 12:10 PM
Session 10: Head and Neck, Musculoskeletal
In person: 23 February 2022 • 1:20 PM - 3:00 PM
Session Chairs: Chuan Zhou, Michigan Medicine (United States), Marius George Linguraru, Children's National Medical Ctr. (United States)
12033-45
Author(s): Tricia Chinnery, Pencilla Lang, Anthony Nichols, Sarah Mattonen, Western Univ. (Canada)
In person: 23 February 2022 • 1:20 PM - 1:40 PM
We developed a machine learning classifier to predict feeding tube insertion in patients with oropharyngeal cancer (OPC). Primary tumor volumes were contoured on CT images, and PyRadiomics was used to compute radiomic features from these volumes of interest. Feature selection was performed, and machine learning classifiers were built using the selected features on the training dataset. Model performance was assessed on the testing dataset. Seven features were selected, and the top-performing classifier achieved an AUC of 0.69 on the testing dataset. This model could guide physicians in identifying patients with OPC who may benefit from prophylactic feeding tube insertion.
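For context, a minimal sketch of radiomic feature extraction with PyRadiomics as described above; the file paths and the feature-class selection are placeholders, not the authors' settings:

```python
from radiomics import featureextractor

# Placeholder paths to a CT volume and its tumor contour mask.
image_path, mask_path = "ct_volume.nrrd", "tumor_mask.nrrd"

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")  # intensity statistics
extractor.enableFeatureClassByName("shape")       # volume/shape descriptors

features = extractor.execute(image_path, mask_path)
# Keep only the computed feature values, dropping diagnostic metadata.
radiomic_values = {k: v for k, v in features.items() if k.startswith("original_")}
```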
12033-46
Author(s): David Zarrin, Anshul Ratnaparkhi, Bayard Wilson, Kirstin Cook, Ien Li, Azim Laiwalla, Mark Attiah, Joel Beckett, Bilwaj Gaonkar, Luke Macyszyn, Univ. of California, Los Angeles (United States)
In person: 23 February 2022 • 1:40 PM - 2:00 PM
We sought to test the accuracy of a state-of-the-art machine learning algorithm for segmenting the cervical spinal cord and neural foramina against expert clinician raters. A deep U-Net ensemble was trained on 50 MRI series, then evaluated qualitatively and quantitatively using Sørensen-Dice coefficients, Hausdorff distances, and average surface distances on a separate testing set of 50 MRI series. We conclude that automated deep learning methods segment cervical cords more accurately than cervical neural foramina, and that further technical work is necessary to improve automated segmentation of cervical anatomy.
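A minimal sketch of the Sørensen-Dice coefficient on binary segmentation masks (NumPy arrays are an assumed representation):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sørensen-Dice overlap between two binary masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

a = np.zeros((64, 64), dtype=bool); a[10:30, 10:30] = True
b = np.zeros((64, 64), dtype=bool); b[15:35, 15:35] = True
print(f"Dice = {dice(a, b):.3f}")
```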
12033-47
Author(s): Shinji Nakazawa, LPIXEL Inc. (Japan); Changhee Han, Saitama Prefectural Univ. (Japan); Joe Hasei, Okayama City Hospital (Japan); Yuichi Nakahara, Toshifumi Ozaki, Okayama Univ. (Japan)
In person: 23 February 2022 • 2:00 PM - 2:20 PM
Convolutional Neural Networks play a key role in bone age assessment for investigating endocrine, genetic, and growth disorders across various modalities and body regions. However, no researcher has tackled bone age progression/regression despite its valuable potential applications: bone-related disease diagnosis, clinical knowledge acquisition, and museum education. Therefore, we propose the Bone Age Progression Generative Adversarial Network (BAPGAN) to progress/regress both femur and phalange X-ray images while preserving identity and realism. We exhaustively confirm BAPGAN's clinical potential via the Fréchet Inception Distance, a Visual Turing Test by two expert orthopedists, and t-Distributed Stochastic Neighbor Embedding.
12033-48
Author(s): Larissa Schudlo, Yiting Xie, IBM Watson Health (United States); Kirstin Small, Brigham and Women's Hospital (United States); Ben Graf, IBM Watson Health (United States)
In person: 23 February 2022 • 2:20 PM - 2:40 PM
Spine fractures are often missed on chest X-rays, especially in the frontal view. Artificial intelligence could reduce the rate of these missed findings. To detect fractures, radiologists make relative comparisons of vertebrae along the spine. We designed a time-distributed CNN+LSTM model that encapsulates both spatial and sequential information to classify vertebral sequences in frontal chest X-rays. The proposed method achieved an AUC of 83.17 +/- 3.83% in differentiating X-rays with and without fractures, an improvement of 3.74% and 6.74% relative to ResNet-152 and non-time-distributed CNN+LSTM models, respectively. These findings suggest the importance of exploiting sequential and spatial information for fracture detection.
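A minimal Keras sketch of a time-distributed CNN+LSTM of the kind described; the sequence length, patch size, and layer widths are illustrative assumptions:

```python
from tensorflow.keras import layers, models

SEQ_LEN, PATCH = 12, 64  # assumed: 12 vertebra patches of 64x64 pixels

# Small CNN applied identically to every vertebra patch in the sequence.
cnn = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(PATCH, PATCH, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

model = models.Sequential([
    layers.TimeDistributed(cnn, input_shape=(SEQ_LEN, PATCH, PATCH, 1)),
    layers.LSTM(32),                         # aggregates features along the spine
    layers.Dense(1, activation="sigmoid"),   # fracture vs. no fracture
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.summary()
```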
12033-49
Author(s): Fidan Mammadli, Technische Univ. Eindhoven (Netherlands); Fons van der Sommen, Technische Univ. Eindhoven (Netherlands), Eindhoven Artificial Intelligence Systems Institute (Netherlands); Tim Boers, Joost A. van der Putten, Technische Univ. Eindhoven (Netherlands); Kiki N. Fockens, Jelmer B. Jukema, Martijn de Jong, Jacques J. Bergman, Amsterdam UMC (Netherlands); Peter H. N. de With, Technische Univ. Eindhoven (Netherlands)
In person: 23 February 2022 • 2:40 PM - 3:00 PM
The majority of encouraging results published for endoscopic Computer-Aided Detection (CAD) algorithms employ datasets adhering to high-quality standards that cannot be guaranteed at peripheral medical centres. Several Frame Informativeness Assessment (FIA) systems have been proposed in the literature to address low-quality endoscopic frames. However, the current state of the art implies sequential use of FIA and CAD, affecting the time performance of both algorithms. Since these algorithms process similar images, we hypothesize that part of the learned features can be leveraged by both systems, enabling an optimized implementation. We explore this case for early Barrett's cancer detection by integrating the FIA within the CAD system.
Session 11: Radiomics, Radiogenomics, Multi-omics
In person: 23 February 2022 • 3:30 PM - 5:30 PM
Session Chairs: Prateek Prasanna, Stony Brook Univ. (United States), Letícia Rittner, Univ. of Campinas (Brazil)
12033-50
Author(s): Can Cui, Zuhayr Asad, William F. Dean, Isabelle T. Smith, Christopher Madden, Shunxin Bao, Bennett A. Landman, Joseph T. Roland, Lori A. Coburn, Keith T. Wilson, Jeffrey P. Zwerner, Shilin Zhao, Lee E. Wheless, Yuankai Huo, Vanderbilt Univ. (United States)
In person: 23 February 2022 • 3:30 PM - 3:50 PM
Multi-view learning (e.g., integrating pathological images with genomic features) tends to improve the accuracy of cancer diagnosis and prognosis compared to learning with a single view. However, missing data is a common problem in clinical practice: not every patient has all views available. Most previous works directly discarded samples with missing views, losing information and increasing the likelihood of overfitting. In this work, we generalize multi-view learning in cancer diagnosis with the capacity to deal with missing data, using histological images and genomic data. Our integrated model can utilize all available data from patients with both complete and partial views. Experiments on the public TCGA-GBMLGG dataset show that data with missing views can contribute to multi-view learning and improve model performance in grade classification of glioma.
12033-51
Author(s): Walia Farzana, Zeina A. Shboul, Ahmed Temtam, K. M. Iftekharuddin, Old Dominion University (United States)
In person: 23 February 2022 • 3:50 PM - 4:10 PM
This work demonstrates a machine learning-based framework for MGMT classification with uncertainty analysis, utilizing imaging features extracted from multimodal magnetic resonance imaging. The imaging features include conventional texture, volumetric, fractal, and multi-resolution fractal texture features. The proposed method is evaluated on publicly available BraTS-TCGA-GBM pre-operative scans and TCGA datasets with 114 patients. Experiments with 10-fold cross-validation suggest that the fractal and multi-resolution fractal texture features offer improved prediction of MGMT status. The uncertainty ensemble model offers an accuracy of 71.74% and an area under the curve of 0.76.
12033-52
Author(s): Apurva Singh, Bardia Yousefi, Eric A. Cohen, Babak Haghighi, Sharyn Katz, Peter B. Noël, Russell T. Shinohara, Despina Kontos, Univ. of Pennsylvania (United States)
In person: 23 February 2022 • 4:10 PM - 4:30 PM
ComBat is a promising harmonization method for removing variation due to technical factors in radiomic features, but it is limited by its inability to harmonize by multiple batch effects and by its assumptions that residuals are normally distributed and that all clinical/batch variables are known. We developed Nested ComBat to handle multiple batch effects and Nested+GMM ComBat to handle bimodal feature distributions, and evaluated these approaches on radiomic features extracted from lung CT images. While Nested ComBat performed comparably to standard ComBat, Nested+GMM ComBat demonstrated the best harmonization performance. These approaches enable more general ComBat usage and show promise for better radiomic feature standardization.
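To make the harmonization idea concrete, a much-simplified location/scale sketch: per-batch standardization applied sequentially over several batch variables. Real ComBat additionally uses empirical-Bayes shrinkage and covariate preservation, so this is illustrative only:

```python
import pandas as pd

def simple_harmonize(features: pd.DataFrame, batch: pd.Series) -> pd.DataFrame:
    """Align each batch to the pooled mean/std, feature by feature."""
    out = features.copy()
    grand_mean, grand_std = features.mean(), features.std(ddof=0)
    for level in batch.unique():
        idx = batch == level
        mu = out.loc[idx].mean()
        sd = out.loc[idx].std(ddof=0).replace(0, 1.0)
        out.loc[idx] = (out.loc[idx] - mu) / sd * grand_std + grand_mean
    return out

def nested_harmonize(features: pd.DataFrame, batches: pd.DataFrame) -> pd.DataFrame:
    """'Nested' use: harmonize one batch variable after another."""
    for col in batches.columns:  # e.g., scanner model, kernel, contrast use
        features = simple_harmonize(features, batches[col])
    return features
```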
12033-53
Author(s): Ka'Toria Leitch, Maysam Shahedi, James D. Dormer, The Univ. of Texas at Dallas (United States); Quyen N. Do, Yin Xi, Matthew A. Lewis, Christina L. Herrera, Catherine Y. Spong, Ananth J. Madhuranthakam, Diane M. Twickler, The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States); Baowei Fei, The Univ. of Texas at Dallas (United States), The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States)
In person: 23 February 2022 • 4:30 PM - 4:50 PM
In women with placenta accreta spectrum (PAS), patient management may involve cesarean hysterectomy at delivery. Magnetic resonance imaging (MRI) has been used for further evaluation of PAS and surgical planning. This work tackles two prediction problems: predicting the presence of PAS and predicting hysterectomy using MR images of pregnant patients. First, we extracted approximately 2,500 radiomic features from MR images with two regions of interest: the placenta and the uterus. In addition to analyzing the two regions of interest, we dilated the placenta and uterus masks by 5, 10, 15, and 20 mm to gain insights from the myometrium, where the uterus and placenta overlap in the case of PAS. The study cohort includes 241 pregnant women; 89 underwent hysterectomy while 152 did not, and 141 had suspected PAS while 100 did not. We obtained an accuracy of 0.88 for predicting hysterectomy and an accuracy of 0.92 for classifying suspected PAS.
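A minimal sketch of the millimeter-scale mask dilation step described above, using SciPy; the voxel spacing and mask are placeholders:

```python
import numpy as np
from scipy import ndimage

def dilate_mask_mm(mask: np.ndarray, spacing_mm, radius_mm: float) -> np.ndarray:
    """Dilate a binary mask by a physical radius, given voxel spacing in mm."""
    # Ellipsoidal structuring element sized per axis by the voxel spacing.
    half = [int(np.ceil(radius_mm / s)) for s in spacing_mm]
    grids = np.ogrid[tuple(slice(-h, h + 1) for h in half)]
    dist2 = sum((g * s) ** 2 for g, s in zip(grids, spacing_mm))
    structure = dist2 <= radius_mm ** 2
    return ndimage.binary_dilation(mask, structure=structure)

mask = np.zeros((40, 40, 40), dtype=bool); mask[18:22, 18:22, 18:22] = True
for r in (5, 10, 15, 20):  # the dilation radii used in the study
    dilated = dilate_mask_mm(mask, spacing_mm=(2.0, 1.0, 1.0), radius_mm=r)
```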
12033-54
Author(s): Marco Caballo, Wendelien Sanderink, Radboud Univ. Medical Ctr. (Netherlands); Luyi Han, Yuan Gao, Radboud Univ. Medical Ctr. (Netherlands), The Netherlands Cancer Institute (Netherlands); Alexandra Athanasiou, MITERA Hospital (Greece); Ritse M. Mann, Radboud Univ. Medical Ctr. (Netherlands), The Netherlands Cancer Institute (Netherlands)
In person: 23 February 2022 • 4:50 PM - 5:10 PM
A four-dimensional (4D) radiomics approach for the analysis of dynamic contrast-enhanced (DCE) MRI images of breast cancer is proposed. It quantifies imaging features related to kinetics, enhancement heterogeneity, and time-dependent textural variation from tumors and their peritumoral regions, leveraging both spatial and temporal image information. The potential of this approach was studied for two clinical applications: the prediction of pathological complete response (pCR) to neoadjuvant chemotherapy (NAC), and of systemic recurrence (SR), in pretreatment images of triple-negative (TN) breast cancers. The approach yielded promising results for both applications (AUC=0.81 and AUC=0.83 for pCR and SR prediction, respectively).
12033-55
Author(s): Hidenobu Suzuki, Mikio Matsuhiro, Yoshiki Kawata, Tokushima Univ. (Japan); Issei Imoto, Aichi Cancer Ctr. Research Institute (Japan); Yasutaka Nakano, Shiga Univ. of Medical Science (Japan); Masahiko Kusumoto, National Cancer Ctr. Hospital (Japan); Masahiro Kaneko, Tokyo Health Service Association (Japan); Noboru Niki, Medical Science Institute, Inc. (Japan)
In person: 23 February 2022 • 5:10 PM - 5:30 PM
We performed visualization and unsupervised clustering of emphysema progression using t-distributed stochastic neighbor embedding (t-SNE) analysis of longitudinal CT images and single nucleotide polymorphisms (SNPs). The procedure is as follows: (1) automatic segmentation of lung lobes using a 3D U-Net, (2) quantitative image analysis of emphysema progression in lung lobes, and (3) visualization and unsupervised clustering of emphysema progression using t-SNE. We demonstrate that visualization and clustering using t-SNE can help explain the factors associated with emphysema progression through the integration of SNPs, smoking history, and imaging features.
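A minimal sketch of the t-SNE embedding and clustering step on combined imaging and SNP features, using scikit-learn; the feature matrix and parameters are placeholders:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Placeholder: rows = subjects, columns = imaging features + encoded SNPs.
features = rng.normal(size=(200, 30))

# Embed into 2D for visualization, then cluster the embedded points.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embedding)
```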
Wednesday Poster Session
In person: 23 February 2022 • 5:30 PM - 7:00 PM
All symposium attendees are invited to attend the evening Wednesday Poster Session to view the high-quality posters and engage the authors in discussion. Attendees are required to wear their conference registration badges to access the Poster Session. Authors may set up their posters starting Tuesday 22 February.*

*In order to be fully considered for a Poster Award, it is recommended to have your poster set up by 12:00pm on Tuesday 22 February 2022. Posters should remain on display until the end of the Poster Session on Wednesday.
12033-78
Author(s): Hirohisa Oda, Yuichiro Hayashi, Nagoya Univ. (Japan); Takayuki Kitasaka, Aichi Institute of Technology (Japan); Aitaro Takimoto, Akinari Hinoki, Hiroo Uchida, Nagoya Univ. (Japan); Kojiro Suzuki, Aichi Medical Univ. (Japan); Masahiro Oda, Kensaku Mori, Nagoya Univ. (Japan)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
This paper proposes a method for segmenting the intestines (both the small and large intestines) from CT volumes. Although our previous method introduced 2D distance-map estimation to prevent incorrect shortcuts between adjacent regions, incorrect shortcuts between air-filled regions were often generated. Furthermore, regions generated by the watershed algorithm were sometimes tiny. We solve these problems with multi-class segmentation and a 3D distance transformation. Experiments using 110 CT volumes showed that the proposed method successfully prevents both problems.
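A minimal sketch of the 3D distance-transform-plus-watershed idea using SciPy and scikit-image; the synthetic volume and marker choice are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

# Placeholder binary foreground: two touching spheres standing in for
# adjacent intestine segments in a CT volume.
z, y, x = np.ogrid[:64, :64, :64]
fg = ((z - 24) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 144) | \
     ((z - 40) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 144)

# 3D Euclidean distance to the background; its peaks seed the watershed.
dist = ndimage.distance_transform_edt(fg)
markers, _ = ndimage.label(dist > 0.7 * dist.max())
labels = watershed(-dist, markers, mask=fg)  # splits the touching regions
```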
12033-79
Author(s): Huong Pham, Meredith Jones, Tiancheng Gai, Warid Islam, Gopichandh Danala, Javier Jo, Bin Zheng, The Univ. of Oklahoma (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
This study aims to identify an optimal approach for developing a new quantitative image marker to predict survival of gastric cancer patients using CT images. After tumor segmentation and computation of optimal radiomics features, two logistic regression models (LRM), using image features computed from one CT slice and from multiple adjacent CT slices, are trained and tested using a leave-one-case-out cross-validation method. The two LRMs yield case prediction-based AUC values of 0.70 and 0.77, respectively. The study demonstrates that (1) radiomics features carry information discriminative of patient survival and (2) fusion of quasi-3D image features yields higher prediction accuracy than 2D image features.
12033-80
Author(s): Ralph Saber, Polytechnique Montréal (Canada); David Henault, Ctr. de Recherche du Ctr. Hospitalier de l'Univ. de Montréal (Canada); Eugene Vorontsov, Polytechnique Montréal (Canada); Emmanuel Montagnon, An Tang, Simon Turcotte, Ctr. de Recherche du Ctr. Hospitalier de l'Univ. de Montréal (Canada); Samuel Kadoury, Polytechnique Montréal (Canada), Ctr. de Recherche du Ctr. Hospitalier de l'Univ. de Montréal (Canada)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
In an effort to improve the understanding of tumor biology in colorectal cancer (CRC), CD3+ tumor-infiltrating lymphocytes (TIL) were found to have strong prognostic value in primary CRC as well as in colorectal liver metastases (CLM). However, quantification of TILs remains labor-intensive and requires tissue samples. In this study, we propose a radiomics-based pipeline to predict CD3 T-cell infiltration status in CLM from pre-operative computed tomography (CT) images. Our findings demonstrate a relationship between CT radiomic features and CD3 tumor-infiltration status, with the potential of noninvasively determining CD3 status from pre-operative CT images.
12033-81
Author(s): Alvaro Fernandez-Quilez, Trygve Eftestøl, Ketil Oppedal, Univ. of Stavanger (Norway); Svein Reidar Kjosavik, Omer Parvez, Stavanger Univ. Hospital (Norway)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Prostate Cancer (PCa) is the third most commonly diagnosed cancer worldwide. Despite this, its diagnosis is hampered by substantial overdiagnosis. Magnetic Resonance Imaging (MRI) has proven reliable for differentiating between clinically significant (cS) and non-clinically significant (nCs) cases, but it requires specialized training. Deep learning (DL) has arisen as an alternative to automate manual MRI analysis, but it requires large amounts of annotated data. Standard augmentation techniques such as image translation have become the default option to increase data availability. However, the correlation between transformed data and the original limits the amount of information such methods provide. Generative Adversarial Networks (GAN) present an alternative. We explore cGAN and DCGAN architectures to generate ADC MRI prostate samples and show how their addition improves final results, measured by area under the curve (AUC), in a prostate cancer triage application.
12033-82
Author(s): Alvaro Fernandez-Quilez, Trygve Eftestøl, Ketil Oppedal, Univ. of Stavanger (Norway); Svein Reidar Kjosavik, Habib Ullah, Stavanger Univ. Hospital (Norway)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Prostate Cancer (PCa) is the second most common cancer diagnosed among men worldwide. Current diagnostic practices come at the cost of substantial overdiagnosis. Magnetic Resonance Imaging (MRI) has proven to add value in detecting lesions and differentiating between clinically significant (cS) and non-clinically significant (nCs) cases, but it relies on specialized training and can be time-intensive. Deep Learning (DL) holds promise for automating such tasks, but large amounts of annotated data are commonly required. On the other hand, an experienced clinician is typically able to discern between a healthy and an abnormal case by mere comparison. This work exploits that ability by making use of control cases at training time and learning their distribution through autoencoders. We then propose a simple threshold approach to discriminate between healthy and abnormal cases, as well as to detect lesions in cases deemed abnormal, in T2w and ADC MRI.
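A minimal PyTorch sketch of the threshold idea: train an autoencoder on healthy controls only, then flag cases whose reconstruction error exceeds a percentile of the training errors. The architecture, data shapes, and percentile are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Tiny autoencoder over flattened patches; the paper's network differs.
ae = nn.Sequential(
    nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 16),
    nn.ReLU(), nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 256),
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

healthy = torch.randn(512, 256)  # placeholder control-case data
for _ in range(100):             # fit the distribution of controls only
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(healthy), healthy)
    loss.backward()
    opt.step()

with torch.no_grad():
    train_err = ((ae(healthy) - healthy) ** 2).mean(dim=1)
    threshold = train_err.quantile(0.95)   # assumed operating point
    test = torch.randn(10, 256)
    is_abnormal = ((ae(test) - test) ** 2).mean(dim=1) > threshold
```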
12033-83
Author(s): Farina Kock, Grzegorz Chlebus, Felix Thielke, Fraunhofer-Institut für Digitale Medizin MEVIS (Germany); Hans Meine, Fraunhofer-Institut für Digitale Medizin MEVIS (Germany), Univ. Bremen (Germany); Andrea Schenk, Fraunhofer-Institut für Digitale Medizin MEVIS (Germany)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
The segmentation of liver vessels is a crucial task for liver surgical planning. In selective internal radiotherapy, a catheter has to be placed into the hepatic artery to inject radioactive beads that internally destroy tumor tissue. Based on a set of 146 abdominal CT datasets with expert segmentations, we trained three-level 3D U-Nets with loss-sensitive re-weighting. They are evaluated with respect to different measures, including the Dice coefficient and mutual skeleton coverage. The best model incorporates a loss masked to the liver area and achieves a mean Dice coefficient of 0.56, a sensitivity of 0.69, and a precision of 0.66.
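A minimal PyTorch sketch of a loss masked to an organ region, as described above; binary cross-entropy is an assumed base loss, since the abstract does not state one:

```python
import torch
import torch.nn.functional as F

def masked_bce_loss(logits, target, liver_mask):
    """Average voxel-wise BCE only over voxels inside the liver mask."""
    per_voxel = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    masked = per_voxel * liver_mask
    return masked.sum() / liver_mask.sum().clamp_min(1.0)

logits = torch.randn(1, 1, 32, 32, 32)              # network output
target = torch.randint(0, 2, logits.shape).float()  # vessel labels
liver = torch.zeros_like(target); liver[..., 8:24, 8:24, 8:24] = 1.0
loss = masked_bce_loss(logits, target, liver)
```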
12033-84
Author(s): Koen C. Kusters, Thom Scheeve, Nikoo Dehghani, Technische Univ. Eindhoven (Netherlands); Quirine E. van der Zander, Maastricht Univ. Medical Ctr. (Netherlands), GROW, School for Oncology and Developmental Biology (Netherlands); Ramon-Michel Schreuder, Catharina Hospital (Netherlands); Ad A. Masclee, Maastricht Univ. Medical Ctr. (Netherlands); Erik J. Schoon, Catharina Hospital (Netherlands); Fons van der Sommen, Technische Univ. Eindhoven (Netherlands), Eindhoven Artificial Intelligence Systems Institute (Netherlands); Peter H. N. de With, Technische Univ. Eindhoven (Netherlands)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Reliable in-vivo Convolutional Neural Network (CNN)-based computer-aided diagnosis tools for characterization of colorectal polyps, precursor lesions of colorectal cancer, could assist clinicians with diagnosis and better-informed decision-making during colonoscopy procedures. Model confidence calibration is an essential step towards reliable CNNs. Well-calibrated models produce classification confidence scores that reflect the actual correctness likelihood, providing reliable predictions through trustworthy and informative confidence scores. In this work, we diverge from the previously explored calibration approach, which needs an explicit training round on the validation set, by exploring two recently proposed trainable methods that are more convenient and facilitate the calibration process.
12033-85
Author(s): Wen Gu, Shenghui Wang, Beijing Jiaotong Univ. (China); Shuaihua Zhao, Institute of Automation (China); Lili Wan, Beijing Jiaotong Univ. (China)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
To automatically segment gland regions, fully supervised segmentation algorithms require labor-intensive, time-consuming labeling at the pixel level. Furthermore, the CAMs generated by weakly supervised segmentation algorithms still suffer from under-activation and over-activation. In this paper, we propose a weakly supervised learning method, HistoSegResT (HSRT), which uses only image-level labels (i.e., malignant and benign) to perform histopathology image segmentation. The HSRT method consists mainly of feature extraction, long-range dependency construction, and feature reconstruction. A series of experiments shows that the proposed HSRT method outperforms existing state-of-the-art methods with the same level of supervision on the GlaS dataset and can effectively relieve under-activation and over-activation of the generated CAMs.
12033-86
Author(s): Sven Kuckertz, Jan Klein, Christiane Engel, Benjamin Geisler, Stefan Kraß, Stefan Heldmann, Fraunhofer Institute for Digital Medicine MEVIS (Germany)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
We present a novel approach for handling the complex information of lesion segmentation in CT follow-up studies. The backbone of our approach is the computation of a longitudinal tumor tree. We perform deep learning-based segmentation of all lesions at each time point in CT follow-up studies. Subsequently, follow-up images are registered to establish correspondence between the studies and trace tumors among time points, yielding tree-like relations. The tumor tree encodes the complexity of the individual disease progression. In addition, we present novel descriptive statistics and tools for correlating tumor volumes and RECIST diameters to analyze the significance of various markers.
12033-87
Author(s): Alvaro Fernandez-Quilez, Trygve Eftestøl, Univ. of Stavanger (Norway); Svein Reidar Kjosavik, Stavanger Univ. Hospital (Norway); Ketil Oppedal, Univ. of Stavanger (Norway)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Prostate cancer (PCa) is the second most commonly diagnosed cancer among men worldwide. Despite this, its current diagnostic pathway is substantially hampered by overdiagnosis. Imaging techniques such as magnetic resonance imaging (MRI) have proven to add value to current diagnostic procedures, but they rely on specialized training. Deep Learning (DL) holds promise for automating tasks such as MRI analysis, but large amounts of annotated data are commonly required. Existing work commonly relies on ImageNet pre-training, which is sub-optimal due to the domain gap. We propose to apply self-supervised learning (SSL) based on a generative approach to alleviate these issues. We show how, by making use of an autoencoder architecture and applying transformations such as pixel intensity transformation or occlusion, we are able to learn robust, domain-specific medical visual representations that can be used as an initialization method.
12033-88
Author(s): Chisako Muramatsu, Shiga Univ. (Japan); Mikinao Oiwa, Nagoya Medical Ctr. (Japan); Tomonori Kawasaki, International Medical Ctr., Saitama Medical Univ. (Japan); Hiroshi Fujita, Gifu Univ. (Japan)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
When breast cancer is found, the best treatment is selected based on the cancer's characteristics. We investigated a method to classify lesions into four molecular subtypes to assist diagnosis and treatment planning. Because of the limited number of samples and imbalanced types, lesions were classified based on sample similarities using contrastive learning. The network takes two images and outputs same-class and different-class probabilities. The proposed model was tested on 189 cases using 4-fold cross-validation. The results indicate the potential usefulness of the proposed method; computerized subtype classification may support prompt treatment planning.
12033-89
Author(s): Jennie Karlsson, Jennifer Ramkull, Ida Arvidsson, Kalle Åström, Lund Univ. (Sweden); Kristina Lång, Lund Univ. (Sweden), Skåne University Hospital (Sweden)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
A cheaper method for diagnosing breast cancer in low- and middle-income countries is needed. A portable ultrasound device in combination with machine learning (ML) could be a solution. The aim of this work was to develop such an ML algorithm. Different convolutional neural network approaches were investigated, including transfer learning and deep feature networks based on combinations of transfer networks. Two datasets were used for development and external evaluation. Gradient-weighted Class Activation Mapping (Grad-CAM) was used to generate heatmaps. The best result was achieved by the deep feature combination of InceptionV3, Xception, and VGG19, with an AUC of 0.93.
12033-90
Author(s): Hui Meng, Qingfeng Li, Xuefeng Liu, Beihang Univ. (China); Yong Wang, Chinese Academy of Medical Sciences & Peking Union Medical College (China); Jianwei Niu, Beihang Univ. (China)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Computer-aided diagnosis has been widely used for breast ultrasound images, and many deep learning-based models have emerged. However, the datasets used for breast ultrasound classification face the problem of category imbalance, which limits the accuracy of breast cancer classification. In this work, we propose a novel dual-branch network (DBNet) to alleviate the imbalance problem and improve classification accuracy. DBNet is constructed from a conventional learning branch and a re-balancing branch in parallel, which take uniformly sampled data and reverse-sampled data as inputs, respectively. Through the design of the loss function, DBNet first learns classification patterns from the uniformly sampled data and then gradually shifts its focus to the reverse-sampled data. The experimental results demonstrate that DBNet achieves an accuracy of 0.863, outperforming ResNet-18 and BBN by 3.6% and 3.0%, respectively.
12033-91
Author(s): David Bermejo-Pelaez, Univ. Politécnica de Madrid (Spain); Raúl San José Estépar, Brigham and Women's Hospital (United States); Maria Fernández-Velilla, Hospital Universitario La Paz (Spain); Carmelo Palacios Miras, Hospital Universitario Fundación Jiménez Díaz (Spain); Guillermo Gallardo Madueño, Clínica Univ. de Navarra (Spain); Mariana Benegas, Hospital Clínic de Barcelona (Spain); Miguel Luengo Oroz, Spotlab (Spain); Jacobo Sellares, Marcelo Sánchez, Hospital Clínic de Barcelona (Spain); Gorka Bastarrika, Clínica Univ. de Navarra (Spain); Germán Peces-Barba, Hospital Universitario Fundación Jiménez Díaz (Spain); Luis M. Seijo, Clínica Univ. de Navarra (Spain); Maria J. Ledesma Carbayo, Univ. Politécnica de Madrid (Spain)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
In this work we present a deep learning algorithm based on a CNN to automatically identify and quantify COVID-19 pneumonia patterns. A Dense-efficient CNN architecture is presented to automatically segment the different lesion subtypes. The proposed technique has been independently tested on a multicentric cohort of 100 patients, showing Dice coefficients of 0.948 ± 0.053 for consolidations, 0.948 ± 0.053 for ground glass opacities, and 0.988 ± 0.016 for healthy tissue with respect to radiologists' reference segmentations, and high correlations with respect to radiologist severity visual scores.
12033-92
Author(s): Jianfei Liu, Joanne Li, Amday Wolde, Catherine Cukras, Johnny Tam, National Eye Institute (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
12033-93
Author(s): Hoda Kheradfallah, Vasudevan Lakshminarayanan, Univ. of Waterloo (Canada); Jothi Balaji, Sankara Nethralaya (India); Varadharajan Jayakumar, Mohammed Abdul Rasheed, Univ. of Waterloo (Canada)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Diabetic Retinopathy (DR) is a major cause of visual impairment among the working-age population, with a high prevalence rate. The disease is characterized by 10 major lesions on fundus examination according to the International Clinical Diabetic Retinopathy Scale (ICDRS). DR can be diagnosed with computer-aided diagnosis methods such as deep neural networks (DNN). The approach of this study is to segment DR-associated lesions with DNN models and predict severity grades using the segmented lesions. A dataset of 143 images was used to produce lesion-annotated masks. The proposed toolbox will include task-based DNN models for segmenting lesions; these models will then be combined to predict disease grade according to the ICDRS.
12033-94
Author(s): Suhev Shakya, Columbus State Univ. (United States); Mariana Vasquez, Univ. of California, Berkeley (United States); Yiyang Wang, Jacob Furst, Roselyne Tchoua, Daniela Raicu, DePaul Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Age-related Macular Degeneration (AMD) is a significant health burden that can lead to irreversible vision loss in the elderly population. Accurately classifying Optical Coherence Tomography (OCT) images is vital in computer-aided diagnosis (CAD) of AMD. Most CAD studies focus on improving classification results but ignore the fact that a classifier may predict a correct image label for the wrong reasons. To address this limitation, we propose a human-in-the-loop OCT image classification scheme that allows users to provide feedback on model explanations during the training process. We integrate a custom loss function with our expert's annotations of the OCT images along with the model's explanation. Our results indicate that the proposed method improves model explanation correction over the baseline model by 85% while maintaining a high classification accuracy of over 95%.
12033-95
Author(s): Yifan Gao, Haomin Chen, Catalina Caballero, Sophie Cai, Craig Jones, Adrienne Scott, Mathias Unberath, Johns Hopkins Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
12033-96
Author(s): Anna Breger, Univ. Wien (Austria); Felix Goldbach, Bianca S. Gerendas, Ursula Schmidt-Erfurth, Medizinische Univ. Wien (Austria); Martin Ehler, Univ. Wien (Austria)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Optical coherence tomography angiography (OCTA) is a noninvasive imaging modality for visualizing retinal blood flow in the human retina. Using specific OCTA retinal imaging biomarkers, automated blood vessel segmentation can improve subsequent image analysis and disease diagnosis. We present a novel method based on frequency representations of the image, using so-called Gabor filter banks, and evaluate it on an OCTA image dataset acquired by a Cirrus HD-OCT device. The segmentations yield accurate visualizations and coincide well with device-specific values for vessel density. Moreover, we suggest the computation of novel adaptive local vessel-density maps for easy analysis of retinal blood flow.
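A minimal sketch of a Gabor filter bank response for vessel-like structures using scikit-image; the frequencies, orientations, and threshold are illustrative choices, not the paper's parameters:

```python
import numpy as np
from skimage.filters import gabor

rng = np.random.default_rng(0)
image = rng.random((128, 128))  # placeholder for an OCTA en-face image

# Filter bank: sweep orientations and spatial frequencies, keeping the
# maximum magnitude response per pixel as a vesselness-like map.
responses = []
for frequency in (0.1, 0.2, 0.3):
    for theta in np.linspace(0, np.pi, 8, endpoint=False):
        real, imag = gabor(image, frequency=frequency, theta=theta)
        responses.append(np.hypot(real, imag))
vessel_map = np.max(responses, axis=0)
segmentation = vessel_map > np.percentile(vessel_map, 90)  # simple threshold
```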
12033-97
Author(s): Yabo Fu, Yang Lei, Zhen Tian, Tonghe Wang, Xianjin Dai, Jun Zhou, Mark McDonald, Jeffrey Bradley, Tian Liu, Xiaofeng Yang, Emory Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
12033-98
Author(s): Ka'Toria Leitch, The Univ. of Texas at Dallas (United States); Martin Halicek, Augusta Univ. (United States); James V. Little, Amy Y. Chen, Emory Univ. School of Medicine (United States); Baowei Fei, The Univ. of Texas at Dallas (United States), The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Radiomics and hyperspectral imaging (HSI) have the potential to improve the accuracy of tumor malignancy prediction and assessment. In this work, we extracted radiomic features from fresh surgical papillary thyroid carcinoma (PTC) specimens that were imaged with HSI. A total of 107 unique radiomic features were extracted. This study includes 72 ex-vivo tissue specimens from 44 patients with pathology-confirmed PTC. We found that one radiomic feature, least axis length, contributed most to determining PTC tumor aggression, or high-risk classification. Least axis length, a shape-based feature, was so significant that we achieved an accuracy of 100% when analyzing the dilated images. Using the methods outlined here, the possibility of using radiomic features to arrive at conclusions for PTC treatment is becoming clearer. Faster, more accurate predictions will lead to better patient outcomes and reduced PTC recurrence.
12033-99
Author(s): Lin Chai, Institute of Automation (China); Yaping Wang, Univ. of Chinese Academy of Sciences (China), Institute of Automation (China); Weiyang Shi, Institute of Automation (China); Bing Liu, Beijing Normal Univ. (China); Lingzhong Fan, Tianzi Jiang, Institute of Automation (China), Ctr. for Excellence in Brain Science & Intelligence Technology (China), Univ. of Chinese Academy of Sciences (China)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Numerous studies have demonstrated substantial inter-individual symptom heterogeneity among patients with schizophrenia, which seriously affects the quantification of diagnosis and treatment schema. Here, we explored individual-specific associations between morphologic deviations from normative ranges of brain structure and specific symptomatology structure on three different dimensions, independent of general disease effects. Specifically, we employed an exploratory bi-factor model for the PANSS scale and built normative models for cortical measurements including cortical area and thickness. Significant correlations among the cortical measurements and latent symptom groups were observed, which could provide evidence for understanding the pathophysiology of schizophrenia symptoms.
12033-100
Author(s): Xiangjun Chen, Zhaohui Wang, Hainan Univ. (China); Faouzi Alaya Cheikh, Norwegian Univ. of Science and Technology (Norway); Yuefu Zhan, Hainan Women and Children's Medical Ctr. (China)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
In this paper, a multi-modal fusion strategy is proposed that combines different modalities of MRI with clinical phenotypic data for classifying acute bilirubin encephalopathy. We trained/tested (80%/20%) the approach on a database of 800 patients; each sample is composed of three modalities of 3D brain MRI and corresponding clinical phenotype data. Further, we designed different comparative experiments to explore the best fusion strategy. The results demonstrate that the method achieves an accuracy of 0.78, a sensitivity of 0.46, and a specificity of 0.99, outperforming methods that use purely MRI or clinical phenotypes as input.
12033-101
Author(s): Hidenobu Suzuki, Mikio Matsuhiro, Yoshiki Kawata, Tokushima Univ. (Japan); Toshihiko Sugiura, Nobuhiro Tanabe, Chiba Univ. (Japan); Masahiko Kusumoto, National Cancer Ctr. Hospital (Japan); Masahiro Kaneko, Tokyo Health Service Association (Japan); Noboru Niki, Medical Science Institute, Inc. (Japan)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Diameters of the aorta and main pulmonary artery (MPA) are useful for predicting the presence of pulmonary hypertension. We use a U-Net to segment the aorta and MPA in non-contrast CT images of normal and chronic thromboembolic pulmonary hypertension (CTEPH) cases and evaluate segmentation performance in terms of robustness to contacts between blood vessels. We present the details of segmentation performance for CTEPH cases and show how to improve stability against contacts between blood vessels.
12033-102
Author(s): Claire Weissman, Whitman College (United States); Lilly Roelofs, Univ. of Houston (United States); Jacob Furst, Daniela Stan Raicu, Roselyne Tchoua, DePaul Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
12033-103
Author(s): Karem D. Marcomini, Institute of Mathematics and Computer Science, Escola de Engenharia de São Carlos (Brazil); Diego A. C. Cardenas, Instituto do Coração do Hospital das Clínicas (Brazil); Agma J. M. Traina, Univ. de São Paulo (Brazil); Marco A. Gutierrez, Instituto do Coração do Hospital das Clínicas (Brazil)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
We propose a deep learning-based approach that can simultaneously classify and localize COVID-19 lesions in CXR images. We used a public dataset containing 5,639 CXR images. The EfficientNetB4 architecture was used for classification, and a YOLOv5 model pre-trained on the COCO dataset for detection. Classification achieved an average accuracy of 0.83 (±0.01) and an AUC of 0.88 (±0.02) over 5 folds on the test dataset. Positive results were evaluated by the opacity detector, which achieved a mAP of 59.51%. The good performance and rapid diagnostic prediction make the system a promising means of assisting radiologists in diagnosis.
12033-104
Author(s): Apurva Singh, Florian Holzl, Sharyn Katz, Despina Kontos, Univ. of Pennsylvania (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Studies that involve radiomic or genomic feature descriptors of tumor regions usually incorporate a feature selection method to reduce their high-dimensional descriptors and ensure low collinearity among features. However, it is important to explore various feature selection methods to identify optimal radiogenomic feature sets, which are then used to develop prognostic radiogenomic phenotypes. In this study, we explore three methods to select optimal radiomic and genomic features and identify statistically significant radiogenomic phenotypes. These phenotypes are subsequently integrated with stage, sex, and histology in a multivariate Cox proportional hazards model to predict overall survival in 85 NSCLC patients. The prognostic performance of the multivariate models derived from the three methods is compared to evaluate the efficacy of the feature selection methods.
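A minimal sketch of the multivariate Cox proportional hazards step using the lifelines package; the DataFrame columns and synthetic data are placeholders for the phenotype and clinical covariates described above:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 85  # cohort size from the abstract
df = pd.DataFrame({
    "phenotype": rng.normal(size=n),          # radiogenomic phenotype score
    "stage": rng.integers(1, 5, size=n),
    "sex": rng.integers(0, 2, size=n),
    "histology": rng.integers(0, 3, size=n),
    "survival_months": rng.exponential(24, size=n),
    "event": rng.integers(0, 2, size=n),      # 1 = death observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="event")
cph.print_summary()  # hazard ratios and p-values per covariate
```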
12033-105
Author(s): Yifan Wang, Chuan Zhou, Heang-Ping Chan, Lubomir M. Hadjiiski, Aamer Chughtai, The University of Michigan (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
We designed two groups of fusion methods to combine the output information from our previously developed shallow and deep U-shape-based deep learning models (U-DL) for the segmentation of nodules with a wide variety of sizes, shapes, and margins. 683 patient cases (2,287 nodules) and 200 patient cases (318 nodules) collected from LIDC-IDRI were used as the training and independent test sets, respectively. Our newly developed late fusion method (LF-4) and early fusion methods (EF-2 and EF-3) achieved significantly (p<0.05) better performance, with average Dice coefficients of 0.745±0.135, 0.745±0.142, and 0.747±0.142, respectively, compared to 0.718±0.159 for the baseline method (LF-1), which used pre-defined thresholds.
12033-106
Author(s): Tianyu Han, RWTH Aachen Univ. (Germany); Sven Nebelung, Uniklinik RWTH Aachen (Germany); Christoph Haarburger, ARISTRA GmbH (Germany); Christiane Kuhl, Uniklinik RWTH Aachen (Germany); Fabian Kiessling, The Institute for Experimental Molecular Imaging, RWTH Aachen Univ. (Germany), Fraunhofer-Institut für Digitale Medizin MEVIS (Germany), Comprehensive Diagnostic Ctr. Aachen, Uniklinik RWTH Aachen (Germany); Volkmar Schulz, RWTH Aachen Univ. (Germany), Fraunhofer-Institut für Digitale Medizin MEVIS (Germany), Comprehensive Diagnostic Ctr. Aachen, Uniklinik RWTH Aachen (Germany); Daniel Truhn, Uniklinik RWTH Aachen (Germany)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
12033-107
Author(s): Jonathan Burkow, Gregory Holste, Michigan State Univ. (United States); Jeffrey Otjen, Seattle Children's Hospital (United States); Francisco Perez, Seattle Children's Hospital (United States); Joseph Junewick, Spectrum Health (United States); Adam Alessio, Michigan State Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Rib fractures in young children are ~80-100% the result of child abuse and can be challenging to detect on pediatric radiographs. This work presents our efforts to develop an object detection method for rib fracture detection on pediatric chest radiographs. We propose an "avalanche decision" method motivated by the domain knowledge that pediatric patients with rib fractures commonly present with multiple fractures. The approach uses dynamically decreasing decision thresholds and is applied at inference to two leading architectures, RetinaNet and YOLOv5. On our curated dataset, RetinaNet and YOLOv5 both saw 12-18% improvements in F2 scores compared to their base configurations.
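A minimal sketch of an avalanche decision rule of the kind described: each accepted detection lowers the confidence threshold for subsequent candidates. The multiplicative decay schedule is an illustrative assumption, not the authors' exact rule:

```python
def avalanche_filter(candidates, start_thresh=0.5, decay=0.85, floor=0.1):
    """Accept detections with a threshold that drops after each acceptance.

    candidates: list of (confidence, box) pairs from the detector.
    """
    accepted, thresh = [], start_thresh
    for conf, box in sorted(candidates, key=lambda c: -c[0]):
        if conf >= thresh:
            accepted.append((conf, box))
            # Each fracture found makes further fractures more plausible.
            thresh = max(floor, thresh * decay)
    return accepted

dets = [(0.9, "box1"), (0.48, "box2"), (0.40, "box3"), (0.2, "box4")]
print(avalanche_filter(dets))  # box2 and box3 pass the lowered thresholds
```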
12033-108
Author(s): Qi Qiu, Kai Sun, Institute of Automation (China); Jing Zhang, Panpan Liu, The Second Hospital of Lanzhou Univ. (China); Liang Wang, Junting Zhang, Beijing Tiantan Hospital (China); Junlin Zhou, The Second Hospital of Lanzhou Univ. (China); Zhenyu Liu, Jie Tian, Institute of Automation (China)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
12033-109
Author(s): Xiangjun Chen, Zhaohui Wang, Hainan Univ. (China); Faouzi Alaya Cheikh, Norwegian Univ. of Science and Technology (Norway); Yuefu Zhan, Hainan Women and Children's Medical Ctr. (China)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
In this paper, a 3D-ResNet with an attention subnet for ASD diagnosis is proposed. The model is trained and tested on sMRI from the Autism Brain Imaging Data Exchange (ABIDE), and multiple sets of controlled experiments demonstrate the superiority of this method. Crucially, Grad-CAM was further applied to display the parts of the input the model emphasized in classification, giving insight into the decision-making process. The class activation maps of multiple slices of representative sMRI were visualized, and the results showed high signals in regions near the hippocampus, corpus callosum, thalamus, and amygdala.
12033-110
Author(s): Kimberley M. Timmins, Irene C. Schaaf, Ynte M. Ruigrok, Birgitta K. Velthuis, Hugo J. Kuijf, Univ. Medical Ctr. Utrecht (Netherlands)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Early detection of unruptured intracranial aneurysms is important for assessing rupture risk. Automatic methods could aid radiologists in detecting aneurysms. Most automatic methods are modality-dependent voxel-based deep learning methods. We propose modality-independent aneurysm detection by deep learning using mesh surface representations of full brain vasculature. A mesh convolutional neural network was trained on labelled vessel surface meshes extracted automatically from TOF-MRAs. The trained model can detect aneurysms in both TOF-MRAs and CTAs with comparable performance to state-of-the-art aneurysm detection algorithms. This may aid radiologists in aneurysm detection without requiring the same image modality or protocol for follow-up imaging.
12033-111
Author(s): Alvaro Fernandez-Quilez, Petter Mine, Miguel Germán Borda, Univ. of Stavanger (Norway); Dag Aarsland, King's College London (United Kingdom); Daniel Ferreira, Eric Westman, Karolinska Institute (Sweden); Afina W. Lemstra, Mara Ten Kate, Vrije Univ. Amsterdam (Netherlands); Alessandro Padovani, Irene Rektorova, Masaryk Univ. (Czech Republic); Laura Bonanni, Univ. degli Studi G. d'Annunzio Chieti-Pescara (Italy); Flavio Mariano Nobili, IRCCS Ospedale Policlinico San Martino (Italy); Milica G. Kramberger, Univ. of Ljubljana (Slovenia); John-Paul Taylor, Newcastle Univ. (United Kingdom); Jakub Hort, Charles Univ. (Czech Republic); Jón Snædal, Landspítali Univ. Hospital (Iceland); Fréderic Blanc, Les Hôpitaux Univs. de Strasbourg (France); Angelo Antonini, Univ. degli Studi di Padova (Italy); Ketil Oppedal, Stavanger Univ. Hospital (Norway)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
12033-112
Author(s): Priscilla Cho, Emory Univ. (United States); Sajal Dash, Aristeides Tsaris, Hong-Jun Yoon, Oak Ridge National Lab. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Leukemia is a form of blood cancer that originates in the bone marrow and accounts for one-third of pediatric cancers. Acute lymphoblastic leukemia is the most prevalent leukemia type found in children. To diagnose acute lymphoblastic leukemia, pathologists often conduct a morphological bone marrow assessment. These manual processes require well-trained personnel and medical professionals, making them costly in time and expense. Computerized decision support via machine learning can accelerate the diagnosis process and reduce costs. We adopted the Vision Transformer model to classify white blood cells. The Vision Transformer achieved superb classification performance compared to state-of-the-art convolutional neural networks while requiring fewer computational resources for training. We applied the Vision Transformer model to an acute lymphoblastic leukemia classification dataset of 12,528 samples and achieved an accuracy of 88.4%.
12033-113
Author(s): Yoon Jo Kim, Jinseo An, Helen Hong, Seoul Women's Univ. (Korea, Republic of)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Recently, deep learning-based pneumonia classification has shown excellent performance on chest X-ray images, but visualization of the classification results reveals a limitation: models often classify by observing areas outside the lungs. In this study, we propose a deep ensemble model with multi-scale lung-focused patches for pneumonia classification. The proposed method consists of three steps: contrast enhancement, multi-scale lung-focused patch generation, and a deep ensemble model with a Convolutional Block Attention Module. The model trained on large and middle-sized patches improved classification performance, with an accuracy of 92%, and Grad-CAM visualization showed that the model focused properly on the lung region.
12033-114
Author(s): Mohammadreza Salmanpour, The Univ. of British Columbia (Canada), BC Cancer Research Institute (Canada), Technological Virtual Collaboration Co. (Canada); Mahdi Hosseinzadeh, Technological Virtual Collaboration Co. (Canada), Tarbiat Modares Univ. (Iran, Islamic Republic of); Azizeh Akbari, Technological Virtual Collaboration Co. (Canada), Hakim Sabzevari Univ. (Iran, Islamic Republic of); Kasra Borazjani, Technological Virtual Collaboration Co. (Canada), Univ. of Tehran (Iran, Islamic Republic of); Kasra Mojallal, Technological Virtual Collaboration Co. (Canada), Amirkabir Univ. of Technology (Iran, Islamic Republic of); Dariush Askari, Technological Virtual Collaboration Co. (Canada), Shahid Beheshti Univ. of Medical Sciences (Iran, Islamic Republic of); Masoad Rezaei, Tarbiat Modares Univ. (Iran, Islamic Republic of); Ghasem Hajianfar, Technological Virtual Collaboration Co. (Canada), Rajaie Cardiovascular Medical and Research Ctr. (Iran, Islamic Republic of); Mohammad M. Ghaemi, Kerman Univ. of Medical Sciences (Iran, Islamic Republic of), Technological Virtual Collaboration Co. (Canada); Amir Hossein Nabizadeh, Kerman Univ. of Medical Sciences (Iran, Islamic Republic of), Univ. de Lisboa (Portugal), Technological Virtual Collaboration Co. (Canada); Arman Rahmim, The Univ. of British Columbia (Canada), BC Cancer Research Institute (Canada)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
12033-115
Author(s): Md Shibly Sadique, Ahmed Temtam, Old Dominion Univ. (United States); Erik S. Lappinen, Eastern Virginia Medical School (United States); Khan M. Iftekharuddin, Old Dominion Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
12033-116
Author(s): Fakrul Islam Tushar, Duke Univ. (United States), Duke Univ. School of Medicine (United States); Vincent M. D'Anniballe, Duke Univ. School of Medicine (United States); Geoffrey D. Rubin, The Univ. of Arizona (United States); Ehsan Samei, Joseph Y. Lo, Duke Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Despite the potential of weakly supervised learning to automatically annotate massive amounts of data, little is known about its limitations for use in computer-aided diagnosis (CAD). For CT specifically, interpreting the performance of CAD algorithms can be challenging given the large number of co-occurring diseases. This paper examines the effect of co-occurring diseases when training classification models by weakly supervised learning, specifically by comparing multi-label and multiple binary classifiers using the same training data. Our results demonstrated that the binary model outperformed the multi-label classification in every disease category in terms of AUC. However, this performance was heavily influenced by co-occurring diseases in the binary model, suggesting it did not always learn the correct appearance of the specific disease.
12033-117
Author(s): Jing Ni, Qilei Chen, Univ. of Massachusetts Lowell (United States); Ping Liu, Central South Univ. (China); Yu Cao, Benyuan Liu, Univ. of Massachusetts Lowell (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
We present a simple and generic approach, named the Spotlight Scheme, for leveraging knowledge of pathology location in image classification. In particular, in addition to the whole-image classification stream, we add a spotlighted image stream by blacking out the non-suspicious regions. We then introduce a hybrid two-stage intermediate fusion module comprising shallow tutoring and deep ensembling. The shallow tutoring module allows the whole-image classification stream to focus on the pathological area with the help of the spotlight stream; it can be placed in any backbone architecture multiple times, and thus penetrates the entire feature extraction procedure. At a later point, a deep ensemble network is adopted to aggregate the two streams and learn a joint representation. The experimental results show state-of-the-art or competitive performance on two medical tasks, Retinopathy of Prematurity and glaucoma.
12033-118
Author(s): Weiguo Cao, Marc J. Pomeroy, Yongfeng Gao, Stony Brook Univ. (United States); Perry J. Pickhardt, Univ. of Wisconsin School of Medicine and Public Health (United States); Almas F. Abbasi, Jela Bandovic, Zhengrong Liang, Stony Brook Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
This work explores a novel vector representation of local image contrast patterns. We generate a matrix from the first ring of surrounding voxels and perform a Karhunen-Loève transform on this matrix. Using the eigenvectors associated with the three largest eigenvalues, we then generate a series of textures based on a vector representation of this matrix. Experiments were performed to classify colorectal polyps using the learned features and a Random Forest classifier to differentiate malignant from benign lesions. The outcomes show dramatic improvement in lesion classification compared to seven existing classification methods.
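A minimal sketch of a Karhunen-Loève (PCA-style) transform over ring-neighborhood samples; the ring extraction and matrix layout are illustrative assumptions about the construction described above:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder: each row holds the first ring of 26 neighbors around a voxel.
rings = rng.normal(size=(1000, 26))

# Karhunen-Loève transform = eigendecomposition of the covariance matrix.
centered = rings - rings.mean(axis=0)
cov = centered.T @ centered / (len(rings) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
top3 = eigvecs[:, -3:][:, ::-1]                  # three largest eigenvalues

# Project each ring onto the top eigenvectors to obtain texture features.
texture_features = centered @ top3               # shape (1000, 3)
```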
12033-119
Author(s): Meredith Jones, Huong Pham, Tiancheng Gai, Bin Zheng, The Univ. of Oklahoma (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
This study aims to demonstrate the feasibility of fusing optimally selected handcrafted and automated features to build a machine learning classifier with improved performance in classifying breast lesions. A retrospective dataset of 1,535 mammograms depicting 740 malignant and 795 benign lesions is used. 41 handcrafted features and 25,088 automated features extracted from a pre-trained VGG16 are initially computed. After applying relief-based algorithms to select optimal features, three linear SVMs are trained using a 10-fold cross-validation method. The three SVMs, trained on handcrafted features, automated features, and the fusion of both feature types, yield AUCs of 0.621, 0.668, and 0.710, respectively.
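A minimal Keras sketch of extracting deep features from a pre-trained VGG16, of the kind used above; the flattened 7x7x512 convolutional output gives the 25,088 features mentioned, while the preprocessing details are assumptions:

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

# Convolutional base only: final feature map is 7x7x512 = 25,088 values.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

def deep_features(images: np.ndarray) -> np.ndarray:
    """images: (n, 224, 224, 3) array of lesion patches in RGB order."""
    feats = base.predict(preprocess_input(images.astype("float32")), verbose=0)
    return feats.reshape(len(images), -1)  # (n, 25088)

patches = np.random.rand(4, 224, 224, 3) * 255.0
X_auto = deep_features(patches)
```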
12033-120
Author(s): Daniel C. Elton, Andy Chen, National Institutes of Health (United States); Perry J. Pickhardt, Univ. of Wisconsin School of Medicine and Public Health (United States); Ronald M. Summers, National Institutes of Health (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
In this work we explore utilizing a convolutional neural network (CNN) to predict all-cause mortality and cardiovascular risk over a 5-year horizon from abdominal CT scans taken for routine CT colonography in otherwise healthy patients aged 50-65. We find that adding a variational autoencoder (VAE) to the CNN classifier improves its accuracy for five-year survival prediction (AUC 0.792 vs. 0.775). Our VAE-based method performs significantly better than the Framingham Risk Score and slightly better than the method demonstrated in Pickhardt et al. (2020), which utilized a combination of five CT-derived biomarkers.
12033-121
Author(s): Yang Lei, Tonghe Wang, Justin Roper, Sibo Tian, Pretesh Patel, Jeffrey D. Bradley, Ashesh B. Jani, Tian Liu, Xiaofeng Yang, Emory Univ. School of Medicine (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
12033-122
Author(s): Erikson Júlio de Aguiar, Karem Daiane Marcomini, Felipe Antunes Quirino, Univ. de São Paulo (Brazil); Marco A. Gutierrez, Instituto do Coração do Hospital das Clínicas (Brazil), Univ. de São Paulo (Brazil); Caetano Traina, Agma Juci M. Traina, Univ. de São Paulo (Brazil)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
This paper investigates the impact of adversarial examples (AEs) on convolutional neural network (CNN) models classifying COVID-19 versus normal cases in chest X-ray images. We evaluated the accuracy of several models with and without AEs. In an attack-free environment, the CNNs achieved an accuracy of 99%. However, when the CNNs were attacked by the Fast Gradient Sign Method (FGSM), their performance dropped. MobileNetV2 was the most affected model (specificity decreased from 98.61% to 67.73%) and VGG16 the least affected. Our findings show that FGSM is able to fool the models into misclassifying labels.
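FGSM itself is compact enough to show. The following PyTorch sketch is the standard formulation, x_adv = x + ε·sign(∇_x L), not code from the paper:

```python
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=0.03):
    """Perturb each pixel by eps in the direction that increases the
    classification loss; clamp to keep a valid [0, 1] pixel range."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    return (images + eps * images.grad.sign()).clamp(0.0, 1.0).detach()
```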
12033-123
Author(s): Yuzhe Lu, Aadarsh Jha, Yuankai Huo, Vanderbilt Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Annotated medical images are typically rarer than labeled natural images because they are limited by domain knowledge and privacy constraints. Recent advances in transfer and contrastive learning have provided effective ways to tackle this issue from different perspectives. State-of-the-art transfer learning (e.g., Big Transfer (BiT)) and contrastive learning (e.g., Simple Siamese Contrastive Learning (SimSiam)) approaches have been investigated independently, without considering the complementary nature of the two techniques. It would be appealing to accelerate contrastive learning with transfer learning, given that slow convergence is a critical limitation of modern contrastive learning approaches. In this paper, we investigate the feasibility of aligning BiT with SimSiam. The results suggest that BiT models accelerate the convergence of SimSiam, and that the combined model outperforms both of its counterparts.
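The SimSiam objective being accelerated here is short; the sketch below shows the standard symmetric stop-gradient loss (initializing the encoder from a BiT checkpoint only changes the starting weights). Function and argument names are illustrative:

```python
import torch.nn.functional as F

def simsiam_loss(p1, p2, z1, z2):
    """Negative cosine similarity between the predictor output of one view
    and the stop-gradient projection of the other, symmetrized."""
    def neg_cos(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()  # stop-grad on z
    return 0.5 * neg_cos(p1, z2) + 0.5 * neg_cos(p2, z1)
```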
12033-124
Author(s): Rachel Madhogarhia, Ctr. for Biomedical Image Computing and Analytics, Univ. of Pennsylvania (United States), The Children’s Hospital of Philadelphia (United States); Anahita Fathi Kazerooni, Ctr. for Biomedical Image Computing and Analytics, Univ. of Pennsylvania (United States), Perelman School of Medicine, Univ. of Pennsylvania (United States); Sherjeel Arif, Ctr. for Biomedical Image Computing and Analytics, Univ. of Pennsylvania (United States), Perelman School of Medicine, Univ. of Pennsylvania (United States), The Children’s Hospital of Philadelphia (United States); Jeffrey B. Ware, Perelman School of Medicine, Univ. of Pennsylvania (United States); Ariana M. Familiar, Lorenna Vidal, The Children's Hospital of Philadelphia (United States); Sina Bagheri, Perelman School of Medicine, Univ. of Pennsylvania (United States), The Children's Hospital of Philadelphia (United States); Hannah Anderson, Perelman School of Medicine, Univ. of Pennsylvania (United States), The Children’s Hospital of Philadelphia (United States); Debanjan Haldar, The Children's Hospital of Philadelphia (United States), Perelman School of Medicine, Univ. of Pennsylvania (United States); Sophie Yagoda, Weill Cornell Medicine (United States); Erin Graves, Temple Univ. Hospital (United States), The Children’s Hospital of Philadelphia (United States); Michael Spadola, Univ. of Pennsylvania (United States), The Children’s Hospital of Philadelphia (United States); Rachel Yan, Nadia Dahmane, Weill Cornell Medicine (United States); Chiharu Sako, Ctr. for Biomedical Image Computing and Analytics, Univ. of Pennsylvania (United States), Perelman School of Medicine, Univ. of Pennsylvania (United States); Arastoo Vossough, Perelman School of Medicine, Univ. of Pennsylvania (United States), The Children’s Hospital of Philadelphia (United States); Phillip Storm, Adam Resnick, The Children's Hospital of Philadelphia (United States); Christos Davatzikos, Ctr. for Biomedical Image Computing and Analytics, Univ. of Pennsylvania (United States), Perelman School of Medicine, Univ. of Pennsylvania (United States); Ali Nabavizadeh, Perelman School of Medicine, Univ. of Pennsylvania (United States), The Children’s Hospital of Philadelphia (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Tumor segmentation is essential for surgical and treatment planning and radiomics studies, but manual segmentation is time-consuming and has high inter-operator variability. This study presents a deep learning-based method for automated segmentation of pediatric brain tumors from multi-parametric MRI scans (T1, T1w-Gd, T2, and FLAIR). DeepMedic, a three-dimensional convolutional neural network, was trained on a training set (n=67) and evaluated on an independent test set (n=30). The model displayed strong performance on segmentation of the whole tumor region (mean±SD Dice of 0.82±0.18), indicating that it can facilitate detection of abnormal regions for further clinical measurements.
12033-125
Author(s): Young Jae Kim, Gachon University (Korea, Republic of); Sohyun Byun, Chung Il Ahn, Sang Wook Cho, Infinitt Healthcare (Korea, Republic of); Kwang Gi Kim, Gachon University (Korea, Republic of)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Colonoscopy has reduced colorectal cancer mortality through the detection and removal of polyps. However, the missed polyp rate has been reported to be approximately 24%. In this paper, we propose a deep learning-based colorectal polyp detection network called SmartEndo-Net. ResNet-50 is used as the backbone, and extra mix-up edges are added at all levels of the fusion feature pyramid network (FPN). Fusion features are fed to a class and box network to produce object class and bounding-box predictions. SmartEndo-Net recorded a sensitivity of 92.17%, which is 7.96%, 6.78%, and 10.05% higher than Yolo-V3, SSD, and Faster R-CNN, respectively.
12033-126
Author(s): Lujia Wang, Zhiyang Zheng, Georgia Institute of Technology (United States); Yi Su, Kewei Chen, David A. Weidman, Banner Alzheimer's Institute (United States); Teresa Wu, Arizona State Univ. (United States); Ben Lo, Fleming Lure, MS Technologies Corp. (United States); Jing Li, Georgia Institute of Technology (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Alzheimer’s disease (AD) is a devastating neurodegenerative disease. Recent advances in tau-PET imaging allow quantifying and mapping the regional distribution of one important hallmark of AD across the brain, and machine learning (ML) algorithms are needed to interrogate the utility of this new imaging modality. We developed a tau-PET-based early AD risk predictor for subjects with Mild Cognitive Impairment (MCI). Our ML algorithms achieved high accuracy in predicting the risk of conversion to AD for a given MCI subject.
12033-127
Author(s): Youngwon Choi, UCLA Ctr. for Computer Vision & Imaging Biomarkers (United States), UCLA David Geffen School of Medicine (United States); Marlena Garcia, UCLA Ctr. for Computer Vision & Imaging Biomarkers (United States); Steven S. Raman, Dieter Enzmann, UCLA David Geffen School of Medicine (United States); Matthew S. Brown, UCLA Ctr. for Computer Vision & Imaging Biomarkers (United States), UCLA David Geffen School of Medicine (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
We propose an AI-human interactive pipeline with batch feedback to accelerate medical image annotation of large datasets. The pipeline iterates continuously over three steps. First, an AI system provides initial automated annotations to image analysts. Second, the analysts edit the annotations. Third, the AI system is upgraded with the analysts' feedback, enabling more efficient annotation. To develop this pipeline, we propose an AI system and upgrading workflow focused on reducing annotation time while maintaining accuracy. The feedback loop demonstrated its ability to accelerate prostate MRI segmentation: with the initial iterations on small batch sizes, annotation time was reduced substantially.
12033-128
Author(s): Huiqiao Xie, Yang Lei, Tonghe Wang, Justin Roper, Jeffrey D. Bradley, Tian Liu, Hui Mao, Xiaofeng Yang, Emory Univ. School of Medicine (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
In radiotherapy, magnetic resonance (MR) images are usually acquired with sacrificed longitudinal resolution in order to keep high in-plane resolution and confine the MR scanning time while still allowing enough body coverage. This practice, however, degrades the accuracy of diagnosis, contouring, and treatment planning. In this work, a deep learning-based workflow is proposed to synthesize high-resolution (HR) MR images using parallel cycle-consistent generative adversarial networks (CycleGANs) trained in a self-supervised manner. MR images from the multimodal brain tumor segmentation challenge 2020 (BraTS2020) dataset were evaluated. Preliminary results show that the proposed workflow outperforms conventional cubic interpolation in generating HR MR images. The proposed method eliminates the need for ground-truth HR MR images collected in clinics and is feasible for synthesizing HR MR images in routine radiotherapy practice.
12033-129
Author(s): Tomoharu Kiyuna, NEC Corp. (Japan); Noriko Motoi, National Cancer Ctr. Research Institute (Japan); Hiroshi Yoshida, Hidehito Horinouchi, Tatsuya Yoshida, National Cancer Ctr. Hospital (Japan); Takashi Kohno, National Cancer Ctr. Research Institute (Japan); Shun-ichi Watanabe, Yuichiro Ohe, National Cancer Ctr. Hospital (Japan); Atsushi Ochiai, National Cancer Ctr. (Japan)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
We applied deep-neural-network-based multiple instance learning (MIL) to predict drug response from histological images and obtained higher accuracy (76%) and AUC (0.77) than the existing PD-L1 IHC test (accuracy 58.0%, AUC 0.636). The concordance of predictions between MIL and the PD-L1 IHC test was low, suggesting that it could be more efficient to use both methods together in clinical applications. We also found that the selected positive patch locations provided important insights into the histological features associated with drug response.
12033-130
Author(s): Samantha Seymour, Ryan A. Rava, Dennis Swetz, Canon Stroke and Vascular Research Ctr. (United States); Andre Montiero, Ammad Baig, Univ. at Buffalo (United States); Kurt Schultz, Canon Medical Systems USA, Inc. (United States); Kenneth Snyder, Muhammad Waqas, Elad Levy, Univ. at Buffalo (United States); Adnan Siddiqui, Univ. at Buffalo (United States); Jason Davies, Univ. at Buffalo (United States); Ciprian Ionita, Canon Stroke and Vascular Research Ctr. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
This study investigated the feasibility of a radiomics-based hematoma expansion prediction model using NCCT images of 200 ICH patients. The model was built using Support Vector Machine (SVM), Naïve Bayes (NB), Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), and Multilayer Perceptron (MLP) classifiers with extracted shape-based radiomic features. The SVM and LR classifiers achieved significantly higher sensitivity than the other classifiers. This study indicates that these classifiers are better predictors of hematoma expansion owing to their more cautious approach, as reflected by the higher sensitivity metric.
12033-131
Author(s): Ipsa Singh Yadav, Marwa Ismail, Case Western Reserve Univ. (United States); Volodymyr Statsevych, Cleveland Clinic (United States); Virginia Hill, Northwestern Univ. (United States); Ramon Correa, Case Western Reserve Univ. (United States); Manmeet Ahluwalia, Miami Cancer Institute (United States); Pallavi Tiwari, Case Western Reserve Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Glioblastoma (GBM) is an aggressive cancer with a heterogeneous tumor microenvironment that extends beyond the visible tumor margin. For instance, it is difficult to distinguish infiltrating non-contrast-enhancing tumor (nCET), the primary cause of recurrence, from vasogenic edema, due to their similar appearance on T2w/FLAIR MRI. Histopathologically, nCET has a higher content of viable tumor cells than vasogenic edema. We hypothesize that these histopathological changes could be reflected in radiomic analysis of MRI scans. Our analysis was conducted on 55 GBM cases, each having two identified nCET and edema regions of interest. Analysis of multi-parametric MRI (Gd-T1w, T2w, FLAIR) showed that the FLAIR sequence yielded the highest classification accuracy between nCET and edema (91.3% and 78.5% for the training and test sets). Combining radiomic features from all three sequences further improved accuracies to 92.3% and 89.3% for the two sets, respectively.
12033-132
Author(s): Linnea E. Kremer, The Univ. of Chicago (United States); Natalie Perri, DePaul Univ. (United States); Eliza Sorber, Clemson Univ. (United States); Arlene Chapman, Samuel G. Armato, The Univ. of Chicago (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
12033-133
Author(s): Can Cui, Samuel R. Johnson, Cullen P. Moran, Katherine S. Hajdu, Joanna Shechtel, John J. Block, Brian Bingham, David Smith, Leo Y. Luo, Hakmook Kang, Jennifer L. Halpern, Herbert S. Schwartz, Ginger E. Holt, Joshua M. Lawrenz, Benoit M. Dawant, Vanderbilt Univ. (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Myxofibrosarcoma is a rare, malignant myxoid soft tissue tumor that can be challenging to distinguish from myxoma, a benign tumor, in clinical practice. Needle biopsy is frequently indeterminate because of tissue sampling errors, while the imaging patterns shared between myxomas and myxofibrosarcomas further increase the diagnostic difficulty. Some previous works used radiomics features of T1-weighted MRI to differentiate myxoid tumors, but few have used multi-modality data. In this project, we collect a dataset of 20 myxomas and 20 myxofibrosarcomas, each with a T1-weighted image, a T2-weighted image, and clinical features. Radiomics features from the multi-modality images and the clinical features are used to train multiple machine learning models.
12033-134
Author(s): Manu Goyal, Junyu Guo, Lauren Hinojosa, Keith Hulsey, Ivan Pedrosa, The Univ. of Texas Southwestern Medical Ctr. at Dallas (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Despite recent advances of deep learning algorithms in medical imaging, automatic segmentation algorithms for kidneys in MRI exams are still scarce. Automated kidney segmentation in magnetic resonance imaging (MRI) exams is important for enabling radiomics and machine learning analysis of renal disease. In this work, we use the popular Mask R-CNN for automatic segmentation of kidneys in coronal T2-weighted Fast Spin Echo slices of 100 MRI exams, and we apply morphological operations as post-processing to further improve its performance on this task. With 5-fold cross-validation, the Mask R-CNN is trained and validated on 70 and 10 MRI exams, respectively, and then evaluated on the remaining 20 exams in each fold. Our proposed method achieved a Dice score of 0.904 and an IoU of 0.822.
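The morphological post-processing step might look like the following SciPy sketch; this is an illustration, since the paper's exact operations and structuring elements are not specified here:

```python
import numpy as np
from scipy import ndimage

def postprocess_kidney_mask(mask: np.ndarray) -> np.ndarray:
    """Clean a binary kidney mask from Mask R-CNN: opening removes small
    spurious islands, closing fills small holes, and only the two largest
    connected components (left and right kidney) are kept."""
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    labels, n = ndimage.label(mask)
    if n <= 2:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1            # labels of the two largest
    return np.isin(labels, keep)
```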
12033-135
Author(s): Yiting Xie, Benedikt Graf, Parisa Farzam, IBM Watson Health (United States); Brian Baker, Christine Lamoureux, vRad (United States); Arkadiusz Sitek, IBM Watson Health (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
A fully automated multi-stage deep learning-based algorithm to detect aortic aneurysms on contrast and non-contrast CT was developed. Manual annotations of aorta centerlines and cross-sectional aorta boundaries were created to train the algorithm. Aorta segmentation and aneurysm detection performance was evaluated on 2263 CT scans by comparing the automatically detected aneurysm status to the aneurysm status reported in the radiology reports. The approach yielded an AUC of 0.95 for the task of aneurysm detection.
12033-136
Author(s): Shuheng Cao, Ethan Yu, Aidan Clarke, Ward Melville High School (United States); Yongfeng Gao, Stony Brook Univ. (United States); Lihong Li, College of Staten Island (United States)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
In this paper, we propose an image segmentation method that merges multiple channels defined in the Hessian domain. Each channel has individual properties and is sensitive only to a few specific tissues, making it an essential complement to the other channels. The contrast between two neighboring tissues in the merged result provides more sensitive boundary information than the original images. We also note that weak boundaries are a major barrier in image segmentation; an unsupervised local segmentation scheme combats this challenge by dividing the whole volume into small, mutually overlapping patches. Our method is tested on five different organs with two major modalities and three noise levels and yields very promising results, outperforming two state-of-the-art methods with more accurate contours and boundaries.
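A minimal sketch of building such Hessian-domain channels with scikit-image follows; it illustrates the idea rather than reproducing the authors' pipeline, and the scale choices are assumptions:

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

def hessian_channels(image: np.ndarray, sigmas=(1.0, 2.0, 4.0)) -> np.ndarray:
    """Stack Hessian eigenvalue maps at several scales: each eigenvalue
    channel responds to different local structures (blob-, sheet-, or
    tube-like), giving complementary boundary contrast."""
    channels = []
    for sigma in sigmas:
        H = hessian_matrix(image, sigma=sigma, order="rc")
        channels.extend(hessian_matrix_eigvals(H))   # one map per eigenvalue
    return np.stack(channels, axis=0)                # (n_channels, *image.shape)
```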
12033-137
Author(s): Marlin Siebert, Philipp Rostalski, Univ. zu Lübeck (Germany)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
Visualizing intermediate prediction results can improve the transparency of diagnostic deep learning systems. For the detection of eye diseases like diabetic retinopathy (DR), this could be implemented by providing disease-related lesion segmentations, which raises the need for performant but lightweight segmentation models when deployed on edge devices. We therefore instantiate the U²-Net at different complexity levels and assess its potential for edge-device deployment using feature scaling, depthwise separable convolutions, dual-task, multi-task, and ensemble learning. Experimental results show segmentation performance for DR-related lesions on par with state-of-the-art results while having fewer parameters and maintaining reasonable computational cost.
12033-138
Author(s): Marjaneh Taghavi, Femke Staal, The Netherlands Cancer Institute (Netherlands), Maastricht Univ. (Netherlands); Monique Maas, The Netherlands Cancer Institute (Netherlands); Regina Beets-Tan, The Netherlands Cancer Institute (Netherlands), Maastricht Univ. (Netherlands); Sean Benson, The Netherlands Cancer Institute (Netherlands)
In person: 23 February 2022 • 5:30 PM - 7:00 PM
For patients with advanced colorectal cancer, the liver is the most common site of distant spread. Resection of the metastasis is often not possible due to tumor size and location, so thermal ablation is now part of international guidelines. However, a significant number of patients experience regrowth, which is currently identified four months after treatment at the earliest. We trained a CNN model that predicts regrowth using CT scans at baseline and directly after ablation therapy, achieving an area under the receiver operating characteristic curve of 0.72 on a dataset of 120 lesions.
Session 12: Lung
In person: 24 February 2022 • 8:00 AM - 9:40 AM
Session Chairs: Axel Wismüller, Univ. of Rochester Medical Ctr. (United States), Chisako Muramatsu, Shiga Univ. (Japan)
12033-56
Author(s): Apurva Singh, Hannah Horng, Leonid Roshkovan, Michelle Hershman, Russell Shinohara, Sharyn Katz, Despina Kontos, Univ. of Pennsylvania (United States)
In person: 24 February 2022 • 8:00 AM - 8:20 AM
Recent studies have highlighted the impact of variation in image acquisition protocols on the reproducibility of radiomic features, especially in datasets containing scans from multiple vendors. We assess approaches to mitigate the impact of image acquisition heterogeneity: resampling the images to minimum/maximum voxel spacing parameters, harmonization using nested ComBat (with voxel spacing and/or image acquisition parameters as batch effects), and combinations of the resampling and harmonization methods. The features derived from these methods are subjected to a novel phenotyping approach based on unsupervised hierarchical clustering that identifies statistically significant radiomic phenotypes. These phenotypes are integrated with PD-L1 expression, ECOG status, BMI, and smoking status in a multivariate Cox proportional hazards model to predict progression-free survival in 124 stage 4 NSCLC patients treated at our institution with first-line pembrolizumab monotherapy or combination chemotherapy.
12033-57
Author(s): Jordan D. Fuhrman, The Univ. of Chicago (United States); Yeqing Zhu, Rowena Yip, Icahn School of Medicine at Mount Sinai (United States); Feng Li, The Univ. of Chicago (United States); Artit C. Jirapatnakul, Claudia I. Henschke, David F. Yankelevitz, Icahn School of Medicine at Mount Sinai (United States); Maryellen L. Giger, The Univ. of Chicago (United States)
In person: 24 February 2022 • 8:20 AM - 8:40 AM
Opportunistic disease detection on low-dose CT (LDCT) scans is desirable due to expanded use of LDCT scans for lung cancer screening. In this study, a machine learning paradigm called multiple instance learning (MIL) is investigated for emphysema detection in LDCT scans. The top performing method was able to achieve an area under the ROC curve of 0.93 +/- 0.04 in the task of detecting emphysema in the LDCT scans through a combination of MIL and transfer learning. These results suggest that there is strong potential for the use of MIL in automatic, opportunistic LDCT scan assessment.
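A minimal sketch of the MIL idea follows (a max-pooling variant for illustration; the study's exact MIL formulation and backbone are not reproduced here, and all names are hypothetical):

```python
import torch
import torch.nn as nn

class MaxPoolingMIL(nn.Module):
    """Treat an LDCT scan as a bag of section-level instances: the bag
    score is the maximum instance score, so a single emphysematous
    section can flag the whole scan."""

    def __init__(self, instance_encoder: nn.Module, feat_dim: int):
        super().__init__()
        self.encoder = instance_encoder          # e.g. a transfer-learned CNN
        self.scorer = nn.Linear(feat_dim, 1)

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(bag)                # (n_instances, feat_dim)
        scores = self.scorer(feats).squeeze(-1)  # one logit per instance
        return scores.max()                      # bag-level logit
```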
12033-58
Author(s): Cindy McCabe, Mojtaba Zarei, William P. Segars, Ehsan Samei, Ehsan Abadi, Duke University (United States)
In person: 24 February 2022 • 8:40 AM - 9:00 AM
This study aimed to evaluate and optimize imaging parameters for lung lesion radiomics using the new NAEOTOM Siemens Healthineers photon-counting CT prototype. Due to the limited supply of patient data, the prototype was optimized through virtual clinical trials. Virtual patients were modeled at three BMIs, each including three lesions with varying spiculation levels. The DukeSim software was used to simulate imaging of the virtual patients under varying acquisition and reconstruction parameters. The segmented lesion results showed that dose had an insignificant effect on radiomics accuracy, while increasing matrix size increased the accuracy of radiomics measurements.
12033-59
Author(s): Jingnan Jia, Marius Staring, Irene Hernández Girón, Lucia J. Kroft, Anne A. Schouffoer, Berend C. Stoel, Leiden Univ. Medical Ctr. (Netherlands)
In person: 24 February 2022 • 9:00 AM - 9:20 AM
Visually scoring lung involvement in systemic sclerosis (SSc) from CT scans plays an important role in monitoring progression, but its labor intensiveness hinders practical application. We therefore propose an automatic scoring framework consisting of two cascaded deep regression neural networks. The first network predicts the craniocaudal position of five anatomically defined scoring levels on the 3D CT scans. The second network receives the resulting 2D axial slices and predicts the scores. Results showed that our network has performance comparable with human experts and the potential to be an objective alternative for the visual scoring of SSc.
12033-60
Author(s): Jun Keun Choi, North London Collegiate School Jeju (Korea, Republic of); Jae-Hun Kim, SAMSUNG Medical Ctr. (Korea, Republic of), Sungkyunkwan University School of Medicine (Korea, Republic of); Minsu Park, Chungnam National Univ. (Korea, Republic of); Chin A. Yi, Sungkyunkwan Univ. (Korea, Republic of)
In person: 24 February 2022 • 9:20 AM - 9:40 AM
Radiologists predict whether nodules on chest CT are malignant or benign by observing radiologic features. Although deep learning models can be trained on 3-dimensional images alone, more accurate prediction may be achieved by also extracting radiologic features. Using the LIDC-IDRI database, three versions of a model are trained; Model 1: malignancy prediction based on the image only; Model 2: joint radiologic-feature and malignancy prediction (multi-task model); and Model 3: malignancy prediction using radiologic features in addition to the image (feature fusion model). Model 3 produced the most accurate malignancy prediction, based on the radiologic-feature data transferred from Model 2.
Session 13: Abdomen
In person: 24 February 2022 • 10:10 AM - 12:10 PM
Session Chairs: Hiroyuki Yoshida, Massachusetts General Hospital (United States), Ravi K. Samala, U.S. Food and Drug Administration (United States)
12033-61
Author(s): Seung Yeon Shin, Sungwon Lee, Ronald M. Summers, National Institutes of Health Clinical Ctr. (United States)
In person: 24 February 2022 • 10:10 AM - 10:30 AM
We present a novel graph-theoretic method for small bowel path tracking, formulated as finding the minimum-cost path between given start and end nodes on a graph constructed from bowel wall detection. We also include must-pass nodes in finding the path to better cover the entire course of the small bowel. The proposed method showed clear improvements on several metrics compared to the baseline method; the maximum length of path tracked without error is above 1.4 m per scan on average.
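One simple way to realize a minimum-cost path with must-pass nodes is to chain shortest-path searches, as in this NetworkX sketch; the paper's actual formulation may order or select the must-pass nodes differently:

```python
import networkx as nx

def path_through_must_pass(G, start, end, must_pass):
    """Chain Dijkstra searches through the must-pass nodes in order,
    concatenating the segments into one start-to-end path."""
    waypoints = [start, *must_pass, end]
    path = [start]
    for u, v in zip(waypoints, waypoints[1:]):
        segment = nx.dijkstra_path(G, u, v, weight="weight")
        path.extend(segment[1:])        # skip the duplicated joint node
    return path
```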
12033-62
Author(s): Tejas Sudharshan Mathai, Sungwon Lee, Daniel C. Elton, National Institutes of Health (United States); Thomas C. Shen, Yifan Peng, National Institutes of Health (United States); Zhiyong Lu, Ronald M. Summers, National Institutes of Health (United States)
In person: 24 February 2022 • 10:30 AM - 10:50 AM
12033-63
Author(s): Jayasree Chakraborty, Memorial Sloan-Kettering Cancer Ctr. (United States); Joshua S. Jolissaint, Memorial Sloan-Kettering Cancer Ctr. (United States), Brigham and Women's Hospital (United States); Tiegong Wang, Kevin C. Soares, Memorial Sloan-Kettering Cancer Ctr. (United States); Linda M. Pak, Brigham and Women's Hospital (United States); Mithat Gonen, Thomas Boerner, Richard K. G. Do, Vinod P. Balachandran, Michael I. D'Angelica, Jeffrey A. Drebin, T. Peter Kingham, Alice C. Wei, William R. Jarnagin, Memorial Sloan-Kettering Cancer Ctr. (United States)
In person: 24 February 2022 • 10:50 AM - 11:10 AM
Intrahepatic cholangiocarcinoma (IHC) is an aggressive liver cancer. Although surgery is the only curative treatment, most disease recurs within 2 years of resection. Early hepatic recurrence within a short period after surgery is common and eventually leads to death, yet there is currently no way to assess its risk. Methods to predict this risk would help physicians select the best treatment plan; patients at high risk could be treated early, or at the time of surgery, with chemotherapy or radiation. We propose a CT radiomics-based approach to predict early hepatic recurrence prior to surgery, which obtained an AUC of 0.78 using an AdaBoost classifier.
12033-64
Author(s): Tarun Mattikalli, National Institutes of Health (United States); Tejas Sudharshan Mathai, Ronald M. Summers, National Institutes of Health (United States)
In person: 24 February 2022 • 11:10 AM - 11:30 AM
We designed a universal lesion detection method to localize lesions suspicious for metastasis in the NIH DeepLesion dataset. We first studied the performance of state-of-the-art (SOTA) detection neural networks on the lesion detection task and proposed a weighted boxes fusion algorithm to reduce the false positive rate. To emulate clinical use, we developed an ensemble of the best-performing detection networks that achieves a clinically acceptable detection precision of 65.17% and sensitivity of 91.67% at 4 FP per image. Our results improve upon previously published lesion detection methods on the challenging NIH DeepLesion CT scans.
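The box-fusion idea can be illustrated with a deliberately simplified version; the published weighted boxes fusion algorithm handles per-model weights and more edge cases, so everything below is an assumption-laden sketch:

```python
import numpy as np

def box_iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union if union > 0 else 0.0

def simple_weighted_fusion(boxes, scores, iou_thr=0.55):
    """Greedily cluster overlapping detections from several models and
    replace each cluster by its score-weighted average box."""
    clusters = []                                   # (box list, score list) pairs
    for i in np.argsort(scores)[::-1]:              # highest score first
        for boxes_c, scores_c in clusters:
            centroid = np.average(boxes_c, axis=0, weights=scores_c)
            if box_iou(boxes[i], centroid) > iou_thr:
                boxes_c.append(boxes[i]); scores_c.append(scores[i])
                break
        else:                                       # no cluster matched
            clusters.append(([boxes[i]], [scores[i]]))
    return [(np.average(b, axis=0, weights=s), float(np.mean(s)))
            for b, s in clusters]
```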
12033-65
Author(s): Gyeong Woo Cheon, NVIDIA Corporation (United States); So-Hyun Nam, Dong-A University Medical Center (Korea, Republic of); Jaepyeong Cha, Children's National Hospital (United States)
In person: 24 February 2022 • 11:30 AM - 11:50 AM
Bowel ischemia is caused by insufficient blood flow to the intestine, and surgical intervention is the definitive treatment to remove non-viable tissue and restore blood flow to viable tissue. Current clinical practice relies primarily on the individual surgeon's visual inspection and clinical experience, which can be subjective and unreproducible; a more consistent and objective method is therefore required to improve surgical performance and clinical outcomes. In this work, we present a new optical method combined with unsupervised learning using conditional variational encoders to enable quantitative and objective assessment of tissue perfusion.
12033-66
Author(s): Debayan Bhattacharya, Technische Univ. Hamburg-Harburg (Germany)
In person: 24 February 2022 • 11:50 AM - 12:10 PM
We propose a self-supervised approach to segment flat and sessile polyps, which are particularly difficult to segment and are one of the main reasons polyps are missed. We pre-train our self-supervised U-Net on the Kvasir-SEG dataset, followed by supervised training on the small Kvasir-Sessile dataset. Compared against fully supervised U-Net, Attention U-Net, R2U-Net, R2AU-Net, and ResUNet++, self-supervision increases the Dice coefficient by 0.29, 0.31, 0.32, 0.36, and 0.30, precision by 0.31, 0.39, 0.35, 0.36, and 0.35, and recall by 0.21, 0.18, 0.28, 0.35, and 0.20, respectively.
Session 14: Eye, Retina
In person: 24 February 2022 • 1:20 PM - 3:00 PM
Session Chair: Karen Drukker, The Univ. of Chicago Medicine (United States)
12033-67
Author(s): Michael H. Udin, Univ. at Buffalo (United States), Canon Stroke and Vascular Research Ctr. (United States), Roswell Park Comprehensive Cancer Ctr. (United States); Ciprian N. Ionita, Univ. at Buffalo (United States), Canon Stroke and Vascular Research Ctr. (United States); Saraswati Pokharel, Univ. at Buffalo (United States), Roswell Park Comprehensive Cancer Ctr. (United States); Umesh C. Sharma, Univ. at Buffalo (United States), Canon Stroke and Vascular Research Ctr. (United States)
In person: 24 February 2022 • 1:20 PM - 1:40 PM
Ischemic myocardial scarring carries a high risk of death and thus urgently needs an accurate and timely method of identification. To this end, machine learning can be used to differentiate cardiac magnetic resonance imaging scans of patients with and without ischemic myocardial scarring. However, this process requires extensive manual processing of the images to ensure the most accurate result possible. This study proposes an image data selection algorithm that automatically chooses the best image data and produces results comparable to manual processing.
12033-68
Author(s): Souvick Mukherjee, Tharindu De Silva, Gopal Jayakar, Peyton Grisso, Henry Wiley, Tiarnan Keenan, Alisa Thavikulwat, Emily Chew, Catherine Cukras, National Eye Institute, National Institutes of Health (United States)
In person: 24 February 2022 • 1:40 PM - 2:00 PM
Purpose: Spectralis and Cirrus are two of the most widely used SD-OCT instruments for capturing retinal images. Due to the stark difference in intensities, a model trained on images from one instrument performs poorly on images from the other. Methods: We use an unpaired CycleGAN-based domain adaptation network to transform Cirrus volumes into Spectralis volumes before applying our Spectralis-only segmentation network. Results: The segmentation model performs significantly better on the domain-translated volumes (total retinal volume error: 0.17±0.27 mm³; RPEDC volume error: 0.047±0.05 mm³) than on the raw volumes (total retinal volume error: 0.26±0.36 mm³; RPEDC volume error: 0.13±0.15 mm³) from the Cirrus domain. Conclusions: Both our qualitative and quantitative results show that a CycleGAN domain adaptation network is an efficient technique for unpaired domain adaptation between SD-OCT images generated by different devices.
12033-69
Author(s): Tharindu S. De Silva, Kristina Hess, Cameron Duic, Souvick Mukherjee, Hector Sandoval, Jessica Aduwo, Tiarnan Keenan, Emily Chew, Catherine Cukras, National Eye Institute (United States)
In person: 24 February 2022 • 2:00 PM - 2:20 PM
This work investigates a semi-supervised approach for automatic detection of hyperreflective foci (HRF) in spectral-domain optical coherence tomography (SD-OCT). A Faster R-CNN object detection model was trained in a semi-supervised manner, with high-confidence detections from the current iteration added to the training set for subsequent iterations. With each iteration, the training set grew and the model's knowledge was transferred via new detections. The model incrementally improved HRF detection performance and provides an objective, time- and cost-effective alternative to the laborious manual inspection of B-scans for HRF.
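The iteration scheme reduces to a short loop. The sketch below is framework-agnostic Python: train_fn, detect_fn, and the .score attribute on detections are placeholders, not APIs from the paper.

```python
def semi_supervised_rounds(model, labeled, unlabeled, train_fn, detect_fn,
                           conf_thr=0.9, n_rounds=5):
    """Each round: train on the current set, then promote scans with
    high-confidence detections from the unlabeled pool into the training set."""
    train_set = list(labeled)
    for _ in range(n_rounds):
        model = train_fn(model, train_set)
        still_unlabeled = []
        for image in unlabeled:
            boxes = [b for b in detect_fn(model, image) if b.score >= conf_thr]
            if boxes:
                train_set.append((image, boxes))   # pseudo-labeled example
            else:
                still_unlabeled.append(image)
        unlabeled = still_unlabeled                # shrink the unlabeled pool
    return model
```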
12033-70
Author(s): Souvick Mukherjee, Tharindu De Silva, Gopal Jayakar, Peyton Grisso, Henry Wiley, Tiarnan Keenan, Alisa Thavikulwat, Emily Chew, Catherine Cukras, National Eye Institute, National Institutes of Health (United States)
In person: 24 February 2022 • 2:20 PM - 2:40 PM
Purpose: Diseases of the outer retina cause changes to the retinal layers that are evident on spectral-domain optical coherence tomography images, revealing disease etiology and risk factors for disease progression. Manually labeling these layers is extremely laborious, time consuming, and costly. While retinal volumes are inherently 3-dimensional, state-of-the-art automatic segmentation approaches have made limited use of the 3-dimensional structural information. Methods: We train our proposed 3D regularization network using 150 retinal volumes and test on 191 retinal volumes. The 3D deep features learned by our model capture spatial information simultaneously from all three volumetric dimensions. Results and Conclusion: Both our qualitative and quantitative results (error of ±2.60 pixels with respect to the ground-truth locations, for AMD severities 9 and 10) are better than the publicly available OCT-Explorer and deep learning-based 2D U-Net algorithms.
12033-71
Author(s): Hristina Uzunova, German Research Ctr. for Artificial Intelligence (Germany); Leonie Basso, Jan Ehrhardt, Univ. zu Lübeck (Germany); Heinz Handels, Univ. zu Lübeck (Germany), German Research Ctr. for Artificial Intelligence (Germany)
In person: 24 February 2022 • 2:40 PM - 3:00 PM
In this work, a GAN-based pipeline for the generation of realistic retinal OCTs with available pathological structures and ground truth anatomical and pathological annotations is established. The emphasis of the proposed image generation approach lies especially on the simulation of the pathology-induced deformations of the retinal layers around a pathological structure. Our experiments demonstrate the realistic appearance of the images as well as their applicability for the training of neural networks.
Session 15: Segmentation
In person: 24 February 2022 • 3:30 PM - 5:30 PM
Session Chair: Karen Drukker, The Univ. of Chicago Medicine (United States)
12033-72
Author(s): Shaoyan Pan, Zhen Tian, Yang Lei, Tonghe Wang, Jun Zhou, Mark McDonald, Jeffrey Bradley, Tian Liu, Xiaofeng Yang, Emory Univ. (United States)
In person: 24 February 2022 • 3:30 PM - 3:50 PM
12033-73
Author(s): Zhou Zheng, Masahiro Oda, Kensaku Mori, Nagoya Univ. (Japan)
In person: 24 February 2022 • 3:50 PM - 4:10 PM
12033-74
Author(s): Shadab Momin, Yang Lei, Neal S. McCall, Jiahan Zhang, Sibo Tian, Joseph Harms, Michael Lloyd, Jeffrey D. Bradley, The Winship Cancer Institute of Emory Univ. (United States); Tian Liu, The Winship Cancer Institute of Emory Univ. (United States); Kristin Higgins, Xiaofeng Yang, The Winship Cancer Institute of Emory Univ. (United States)
In person: 24 February 2022 • 4:10 PM - 4:30 PM
12033-75
Author(s): Abhi Lad, Adithya Narayan, Hari Shankar, Jagruthi Atada, Origin Health India (India); Jens Thang, Origin Health Singapore (Singapore); Shefali Jain, Bangalore Fetal Medicine Ctr. (India); Pooja Vyas, Jaslok Hospital & Research Ctr. (India); Divya Singh, Prime Imaging and Prenatal Diagnostics (India); Nivedita Hegde, Kasturba Medical College (India); Saw Shier Nee, Univ. of Malaya (Malaysia); Arunkumar Govindarajan, Aarthi Scans & Labs (India); Roopa PS, Muralidhar V. Pai, Akhila Vasudeva, Kasturba Medical College (India); Prathima Radhakrishnan, Bangalore Fetal Medicine Ctr. (India); Sripad Krishna Devalla, Origin Health Singapore (Singapore)
In person: 24 February 2022 • 4:30 PM - 4:50 PM
Access to quality prenatal ultrasonography (USG) is limited by the number of well-trained fetal sonographers. By leveraging deep learning (DL), we can assist even novice users in delivering the standardized, quality prenatal USG examinations necessary for timely screening and specialist referrals in case of fetal anomalies. We propose a DL framework to segment 10 key fetal brain structures across the 2 axial views necessary for the standardized USG examination. Despite training on images from only 1 center (2 USG devices), our DL model generalized well even to unseen devices from other centers. The use of domain-specific data augmentation significantly improved segmentation performance across test sets and across other benchmark DL models as well. We believe our work opens doors for the development of device-independent and robust models, a necessity for seamless clinical translation and deployment.
12033-76
Author(s): Yichao Li, Leiden Univ. (Netherlands); Mohamed S. Elmahdy, Leiden Univ. Medical Ctr. (Netherlands); Michael S. K. Lew, Leiden Univ. (Netherlands); Marius Staring, Leiden Univ. Medical Ctr. (Netherlands)
In person: 24 February 2022 • 4:50 PM - 5:10 PM
Deep supervised models often require a large amount of labelled data, which is difficult to obtain in the medical domain. Semi-supervised learning (SSL) has therefore been an active area of research due to its promise of minimizing training costs by leveraging unlabelled data. Previous research has shown that SSL is especially effective in low labelled-data regimes; we show that this outperformance extends to high data regimes by applying Stochastic Weight Averaging (SWA), which incurs zero additional training cost. Our model was trained on a prostate CT dataset and achieved improvements of 0.12 mm, 0.14 mm, 0.32 mm, and 0.14 mm for the prostate, seminal vesicles, rectum, and bladder, respectively, in median test-set mean surface distance (MSD) compared to the supervised baseline in our high data regime.
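SWA's zero-extra-cost property is visible in code: the weight average is maintained alongside ordinary training, with no extra forward or backward passes. A minimal PyTorch sketch using torch.optim.swa_utils follows; the hyperparameters are illustrative, not those of the paper.

```python
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

def train_with_swa(model, optimizer, loader, loss_fn, epochs=100, swa_start=75):
    """Ordinary training, but after `swa_start` each epoch's weights are
    folded into a running average."""
    swa_model = AveragedModel(model)
    swa_scheduler = SWALR(optimizer, swa_lr=0.05)
    for epoch in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        if epoch >= swa_start:
            swa_model.update_parameters(model)   # update the weight average
            swa_scheduler.step()
    update_bn(loader, swa_model)                 # refresh BatchNorm statistics
    return swa_model
```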
12033-77
Author(s): Hyeon Dham Yoon, Hyeonjin Kim, Helen Hong, Seoul Women's Univ. (Korea, Republic of)
In person: 24 February 2022 • 5:10 PM - 5:30 PM
Pancreas segmentation is very challenging due to the uncertain areas arising from variability in the location and morphology of the pancreas. The purpose of this study is to improve pancreas segmentation by raising the level of confidence in areas of high uncertainty through a multi-scale prediction network (MP-Net). First, the pancreas is localized using 2D U-Nets on the three orthogonal planes, combined through majority voting. Second, pancreas segmentation is performed in the localized area using a 2D MP-Net. Our deep pancreas segmentation can be used to reduce intra- and inter-patient variation in understanding the shape of the pancreas.
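The localization vote is simple to state in NumPy (an illustrative sketch; the mask names and the resampling of the three plane-wise predictions to a common grid are assumed):

```python
import numpy as np

def majority_vote(axial, coronal, sagittal):
    """Combine three binary plane-wise U-Net masks, resampled to a common
    3D grid, by voxel-wise majority vote (at least 2 of 3 agree)."""
    votes = axial.astype(np.uint8) + coronal.astype(np.uint8) + sagittal.astype(np.uint8)
    return votes >= 2
```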
Conference Chair
The Univ. of Chicago (United States)
Conference Chair
Old Dominion Univ. (United States)
Program Committee
U.S. National Library of Medicine (United States)
Program Committee
The Univ. of Chicago (United States)
Program Committee
Susan M. Astley
The Univ. of Manchester (United Kingdom)
Program Committee
Univ. of Central Florida (United States)
Program Committee
Erasmus MC (Netherlands)
Program Committee
Matthew S. Brown
Univ. of California, Los Angeles (United States)
Program Committee
U.S. Food and Drug Administration (United States)
Program Committee
Univ. of Michigan (United States)
Program Committee
U.S. Food and Drug Administration (United States)
Program Committee
Technische Univ. Braunschweig (Germany)
Program Committee
Univ. zu Lübeck (Germany)
Program Committee
Télécom SudParis (France)
Program Committee
The Univ. of Chicago (United States)
Program Committee
Hayit Greenspan
Tel Aviv Univ. (Israel)
Program Committee
Univ. of Michigan (United States)
Program Committee
Fraunhofer MEVIS (Germany), Jacobs Univ. Bremen (Germany)
Program Committee
Gifu Univ. School of Medicine (Japan)
Program Committee
Seoul Women's Univ. (Korea, Republic of)
Program Committee
Radboud Univ. Nijmegen Medical Ctr. (Netherlands)
Program Committee
Seoul National Univ. Hospital (Korea, Republic of)
Program Committee
Penn Medicine (United States)
Program Committee
Stony Brook Univ. (United States)
Program Committee
Children's National Medical Ctr. (United States)
Program Committee
Fourth Military Medical Univ. (China)
Program Committee
Duke Univ. (United States)
Program Committee
Univ. de Bourgogne (France)
Program Committee
Nagoya Univ. (Japan)
Program Committee
Shiga Univ. (Japan)
Program Committee
Massachusetts General Hospital (United States), Harvard Medical School (United States)
Program Committee
Univ. of Tokushima (Japan)
Program Committee
Siemens Healthineers (United States)
Program Committee
U.S. Food and Drug Administration (United States)
Program Committee
Stony Brook Univ. (United States)
Program Committee
Univ. Estadual de Campinas (Brazil)
Program Committee
U.S. Food and Drug Administration (United States)
Program Committee
Univ. of Amsterdam (Netherlands)
Program Committee
National Institutes of Health (United States)
Program Committee
Tokyo Institute of Technology (Japan)
Program Committee
Netherlands Cancer Institute (Netherlands), Radboud Univ. Medical Ctr. (Netherlands)
Program Committee
Case Western Reserve Univ. (United States)
Program Committee
Philips Research (Germany)
Program Committee
Univ. of Rochester Medical Ctr. (United States)
Program Committee
Univ. of Pittsburgh (United States)
Program Committee
Emory Univ. (United States)
Program Committee
Massachusetts General Hospital (United States), Harvard Medical School (United States)
Program Committee
Univ. of Michigan Health System (United States)