
Proceedings Paper

Semi-supervised noise distribution learning for low-dose CT restoration

Paper Abstract

Fully supervised deep learning (DL) methods have been widely used in the low-dose CT (LDCT) imaging field and can usually achieve highly accurate results. These methods require a large labeled training set consisting of pairs of LDCT images and their corresponding high-dose CT (HDCT) counterparts. They successfully learn intermediate feature representations of important components in CT images, such as the noise distribution and structural details, which is important for capturing the mapping from LDCT images to HDCT ones. However, obtaining such a large set of labeled CT images is quite time-consuming and costly, especially since HDCT images are limited in clinics. In contrast, large numbers of unlabeled LDCT images are usually easily accessible, and the critical information latent in them can be leveraged to further boost restoration performance. Therefore, in this work we present a semi-supervised noise distribution learning network to suppress noise-induced artifacts in LDCT images. For simplicity, the presented network is termed "SNDL-Net". SNDL-Net consists of two sub-networks: a supervised network and an unsupervised network. In the supervised network, LDCT/HDCT image pairs are used for training. The unsupervised network considers the complex noise distribution in the LDCT images, models the noise with a Gaussian mixture framework, and then learns the proper gradient of the LDCT images in a purely unsupervised manner. Similar to the supervised training, the gradient information in a large number of unlabeled LDCT images can be used for unsupervised network training. Moreover, to learn the noise distribution accurately, the discrepancy between the noise distributions learned by the supervised and unsupervised networks is modeled by a Kullback-Leibler (KL) divergence.
Experiments on the Mayo Clinic dataset verify that the method is effective for low-dose CT image restoration with only a small amount of labeled data compared to previous supervised deep learning methods.
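The abstract's key coupling is the KL divergence between the noise distributions learned by the two branches. As a minimal sketch, the snippet below computes the closed-form KL term for the case where each branch's noise is summarized by a single Gaussian (the paper itself uses a Gaussian mixture, whose KL divergence has no closed form and is typically approximated), and combines it with the two branch losses. All function and parameter names here (`sndl_loss`, `lam`, etc.) are illustrative assumptions, not taken from the paper.

```python
import math

def kl_gaussian(mu_p, sigma_p, mu_q, sigma_q):
    """Closed-form KL( N(mu_p, sigma_p^2) || N(mu_q, sigma_q^2) )."""
    return (math.log(sigma_q / sigma_p)
            + (sigma_p ** 2 + (mu_p - mu_q) ** 2) / (2.0 * sigma_q ** 2)
            - 0.5)

def sndl_loss(sup_loss, unsup_grad_loss, noise_sup, noise_unsup, lam=0.1):
    """Hypothetical combined objective: supervised fidelity term +
    unsupervised gradient term + KL coupling between the two learned
    noise distributions, weighted by lam."""
    mu_p, s_p = noise_sup     # (mean, std) estimated by the supervised branch
    mu_q, s_q = noise_unsup   # (mean, std) estimated by the unsupervised branch
    return sup_loss + unsup_grad_loss + lam * kl_gaussian(mu_p, s_p, mu_q, s_q)
```

When both branches agree on the noise statistics the KL term vanishes, so the penalty activates only when the unsupervised noise model drifts away from the supervised one.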

Paper Details

Date Published: 16 March 2020
PDF: 5 pages
Proc. SPIE 11312, Medical Imaging 2020: Physics of Medical Imaging, 1131244 (16 March 2020); doi: 10.1117/12.2548944
Author Affiliations:
Lei Wang, Southern Medical Univ. (China)
Qi Gao, Southern Medical Univ. (China)
Mingqiang Meng, Southern Medical Univ. (China)
Sui Li, Southern Medical Univ. (China)
Manman Zhu, Southern Medical Univ. (China)
Danyang Li, Southern Medical Univ. (China)
Gaofeng Chen, Southern Medical Univ. (China)
Dong Zeng, South China Univ. of Technology (China)
Qi Xie, Xi'an Jiaotong Univ. (China)
Qian Zhao, Xi'an Jiaotong Univ. (China)
Zhaoying Bian, Southern Medical Univ. (China)
Deyu Meng, Xi'an Jiaotong Univ. (China)
Jianhua Ma Sr., Southern Medical Univ. (China)


Published in SPIE Proceedings Vol. 11312:
Medical Imaging 2020: Physics of Medical Imaging
Guang-Hong Chen; Hilde Bosmans, Editor(s)

© SPIE. Terms of Use