
Proceedings Paper

Active semi-supervised expectation maximization learning for lung cancer detection from Computerized Tomography (CT) images with minimally labeled training data
Author(s): Phuong Nguyen; David Chapman; Sumeet Menon; Michael Morris; Yelena Yesha

Paper Abstract

Artificial intelligence (AI) has great potential in medical imaging to augment the clinician as a virtual radiology assistant (vRA) by enriching information and providing clinical decision support. Deep learning is a type of AI that has shown promising performance for Computer Aided Diagnosis (CAD) tasks. A current barrier to implementing deep learning for clinical CAD tasks in radiology is that it requires a large, representative training set in order to generalize appropriately and achieve highly accurate predictions. There is a lack of available, reliable, discretized and annotated labels for computer vision research in radiology, despite the abundance of diagnostic imaging examinations performed in routine clinical practice. Furthermore, the process of creating reliable labels is tedious, time consuming and requires expertise in clinical radiology. We present an Active Semi-supervised Expectation Maximization (ASEM) learning model for training a Convolutional Neural Network (CNN) for lung cancer screening using Computed Tomography (CT) imaging examinations. Our learning model is novel because it combines semi-supervised learning via the Expectation-Maximization (EM) algorithm with active learning via Bayesian experimental design for use with 3D CNNs for lung cancer screening. ASEM simultaneously infers image labels as a latent variable, while predicting which images, if additionally labeled, are likely to improve classification accuracy. The performance of this model has been evaluated using three publicly available chest CT datasets: Kaggle2017, NLST, and LIDC-IDRI. Our experiments showed that ASEM-CAD can identify suspicious lung nodules and detect lung cancer cases with an accuracy of 92% (Kaggle2017), 93% (NLST), and 73% (LIDC-IDRI) and an Area Under the Curve (AUC) of 0.94 (Kaggle2017), 0.88 (NLST), and 0.81 (LIDC-IDRI). These results are comparable to fully supervised training, but use only slightly more than 50% of the training data labels.
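To illustrate the training loop the abstract describes, the sketch below alternates an EM-style E-step (inferring labels of unlabeled scans as latent variables), an M-step (refitting the classifier on labeled plus pseudo-labeled data), and an active-learning step that queries annotations for the most uncertain cases. This is a hypothetical illustration only: a scikit-learn logistic regression on toy features stands in for the paper's 3D CNN on CT volumes, and predictive entropy stands in for the Bayesian experimental-design criterion; all names, data, and parameters are assumptions, not the authors' implementation.

```python
# Hedged sketch of an ASEM-style loop: EM self-labeling + uncertainty-based
# active queries. Stand-ins: LogisticRegression for the 3D CNN, Gaussian blobs
# for CT-derived features, entropy for the Bayesian design criterion.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy "image features": two classes standing in for cancer / non-cancer scans.
X = np.vstack([rng.normal(0.0, 1.0, (200, 16)), rng.normal(1.5, 1.0, (200, 16))])
y = np.array([0] * 200 + [1] * 200)

# Semi-supervised setting: only a small fraction of labels is revealed at first.
labeled = rng.choice(len(X), size=40, replace=False)
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

model = LogisticRegression(max_iter=1000)
model.fit(X[labeled], y[labeled])          # warm start on the small labeled pool
oracle_budget = 20                          # labels an annotator adds per round

for round_idx in range(5):
    # E-step: infer labels of unlabeled images as latent variables.
    proba = model.predict_proba(X[unlabeled])[:, 1]
    pseudo = (proba > 0.5).astype(int)

    # M-step: refit on truly labeled data plus pseudo-labeled data.
    X_train = np.vstack([X[labeled], X[unlabeled]])
    y_train = np.concatenate([y[labeled], pseudo])
    model.fit(X_train, y_train)

    # Active step: query true labels for the most uncertain unlabeled images
    # (highest predictive entropy), mimicking "which images, if additionally
    # labeled, are likely to improve classification accuracy".
    entropy = -(proba * np.log(proba + 1e-12)
                + (1.0 - proba) * np.log(1.0 - proba + 1e-12))
    query = unlabeled[np.argsort(entropy)[-oracle_budget:]]
    labeled = np.concatenate([labeled, query])
    unlabeled = np.setdiff1d(unlabeled, query)

print("accuracy on all data:", model.score(X, y))
```

In this toy setup, roughly half of the labels are ever revealed to the model, which mirrors the abstract's claim of reaching near fully supervised accuracy with slightly more than 50% of the training labels; the specific budget, number of rounds, and uncertainty measure here are illustrative choices.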

Paper Details

Date Published: 16 March 2020
PDF: 12 pages
Proc. SPIE 11314, Medical Imaging 2020: Computer-Aided Diagnosis, 113142E (16 March 2020); doi: 10.1117/12.2549655
Author Affiliations:
Phuong Nguyen, Univ. of Maryland, Baltimore County (United States)
David Chapman, Univ. of Maryland, Baltimore County (United States)
Sumeet Menon, Univ. of Maryland, Baltimore County (United States)
Michael Morris, Mercy Medical Ctr. (United States)
Yelena Yesha, Univ. of Maryland, Baltimore County (United States)


Published in SPIE Proceedings Vol. 11314:
Medical Imaging 2020: Computer-Aided Diagnosis
Horst K. Hahn; Maciej A. Mazurowski, Editor(s)

© SPIE