
Proceedings Paper

A multidimensional scaling and sample clustering to obtain a representative subset of training data for transfer learning-based rosacea lesion identification
Author(s): Hamidullah Binol; M. Khalid Khan Niazi; Alisha Plotner; Jennifer Sopkovich; Benjamin H. Kaffenberger; Metin N. Gurcan

Paper Abstract

Rosacea is a common cutaneous disorder characterized by facial redness, swelling, and flushing, and it is usually diagnosed by a dermatologist after a visual examination. Qualitative human assessment often results in relatively high intra- and inter-observer variability, which can negatively affect patient outcomes. Computer-assisted image analysis may improve visual assessment by human observers because it enables quantitative, consistent, and accurate analysis. Here, we combine classical multidimensional scaling (MDS) with deep convolutional neural networks (CNNs) to create an efficient framework for identifying rosacea lesions. MDS is used to select a representative subset of the training data, which is then used to train Inception-ResNet-v2 to classify facial image patches into rosacea and non-rosacea regions. Using a leave-one-patient-out cross-validation scheme with 128 × 128 non-overlapping image patches, the method achieved a class-weighted average Dice coefficient (DC) of 82.1% ± 2.4% and an accuracy of 85.0% ± 0.6%. While this average performance is almost identical to our previous results (81.7% ± 2.7% DC and 84.9% ± 0.6% accuracy), the new scheme uses approximately 90% less data to train the system. We also report the results of quantitative experiments with overlapping patches at a stride of 50 pixels. With the same experimental setup, speedups of 25.6× (128 × 128), 23.4× (192 × 192), and 23.2× (256 × 256) were observed relative to the baseline of training the network on the entire training set. For 192 × 192 overlapping patches, the class-weighted average DC of the proposed method is 83.9% ± 2.1%, compared with 84.4% ± 2.2% when the network is trained on the entire set at each fold. We conclude that the proposed method is an efficient way to train deep neural networks using only a small subset of the training data.
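As a rough illustration of the subset-selection idea described in the abstract, the sketch below embeds patch features with MDS, clusters the embedding, and keeps the patch nearest each cluster centre; the class-weighted Dice coefficient used for evaluation is also sketched. The function names, the k-means clustering step, and all parameter values are illustrative assumptions, not the authors' implementation (note also that scikit-learn's MDS uses SMACOF stress minimization rather than the classical eigendecomposition).

```python
# Hedged sketch of MDS-based representative subset selection and a
# class-weighted Dice score. Names and parameters are assumptions for
# illustration only, not the paper's actual pipeline.
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

def select_representative_subset(features, n_clusters=100, n_components=2, seed=0):
    """Embed patch features with MDS, cluster the embedding with k-means,
    and keep the index of the patch nearest each cluster centre."""
    embedding = MDS(n_components=n_components, random_state=seed).fit_transform(features)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(embedding)
    subset_idx = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embedding[members] - km.cluster_centers_[c], axis=1)
        subset_idx.append(members[np.argmin(dists)])
    return np.array(subset_idx)

def class_weighted_dice(y_true, y_pred, classes=(0, 1)):
    """Per-class Dice = 2|T∩P| / (|T|+|P|), averaged with weights equal
    to each class's frequency in the ground truth."""
    dices, weights = [], []
    for c in classes:
        t, p = (y_true == c), (y_pred == c)
        denom = t.sum() + p.sum()
        dices.append(2.0 * np.logical_and(t, p).sum() / denom if denom else 1.0)
        weights.append(t.sum())
    return np.average(dices, weights=weights)

# Example: keep roughly 10% of 1,000 patches as the training subset.
# subset = select_representative_subset(features, n_clusters=100)
```

Picking one patch per cluster is one plausible way to realize the roughly 90% reduction in training data the abstract reports; the paper itself should be consulted for the authors' exact selection criterion.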

Paper Details

Date Published: 16 March 2020
PDF: 7 pages
Proc. SPIE 11314, Medical Imaging 2020: Computer-Aided Diagnosis, 1131415 (16 March 2020); doi: 10.1117/12.2549392
Author Affiliations:
Hamidullah Binol, Wake Forest School of Medicine (United States)
M. Khalid Khan Niazi, Wake Forest School of Medicine (United States)
Alisha Plotner, The Ohio State Univ. (United States)
Jennifer Sopkovich, The Ohio State Univ. (United States)
Benjamin H. Kaffenberger, The Ohio State Univ. (United States)
Metin N. Gurcan, Wake Forest School of Medicine (United States)


Published in SPIE Proceedings Vol. 11314:
Medical Imaging 2020: Computer-Aided Diagnosis
Horst K. Hahn; Maciej A. Mazurowski, Editor(s)

© SPIE.