
Proceedings Paper

CT artifact reduction via U-net CNN
Author(s): C. Zhang; Y. Xing

Paper Abstract

Purpose: Our preliminary study demonstrated the capability of a deep learning neural network (DLNN) based method to eliminate a specific type of artifact in CT images. This work comprehensively studies the applicability of a U-net CNN architecture to improving the image quality of CT reconstructions by testing its performance on various artifact removal tasks. Methods: A U-net architecture is trained on a large dataset of contaminated and expected image pairs. The expected images, known as reference images, are acquired from ground truths or with a superior imaging system. A proper initialization of the network parameters, a careful normalization of the original data, and a residual learning objective are incorporated into the framework to boost training convergence. Both numerical and real data studies are conducted to validate this method. Results: In the numerical studies, we found that DLNN-based artifact reduction is powerful: it can reduce nearly all types of artifacts and recover detailed structural information in low-quality images (e.g., plain FBP reconstructions) when the network is trained with ground truths. In real situations where ground truth is not available, the proposed method can characterize the discrepancy between contaminated data and higher-quality reference labels produced by other techniques, mimicking their capability of reducing artifacts. Generalization to disjoint data is also examined using testing data. All results show that the DLNN framework can be applied to various artifact reduction tasks and outperforms conventional methods with shorter runtime. Conclusion: This work obtained promising results, with the U-net architecture successfully characterizing both global and local artifact patterns. By forward propagating a contaminated CT image through the trained network, undesired artifacts can be greatly reduced while structural information is maintained. It should be noted that the proposed deep network must be trained independently for each specific case.
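To make the residual-learning idea in the abstract concrete, the following is a minimal sketch (not the authors' code): a small U-net predicts the artifact component of a contaminated image, which is then subtracted from the input. The network depth, channel counts, normalization scheme, L2 loss, and PyTorch framework are all illustrative assumptions, not details given in the paper.

# Minimal sketch of U-net-based residual artifact reduction (assumed details).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with ReLU, the basic U-net building block.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class ResidualUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)   # 32 (skip) + 32 (upsampled) channels in
        self.out = nn.Conv2d(32, 1, 1)   # predicted artifact map

    def forward(self, x):
        e1 = self.enc1(x)                                     # encoder level 1
        e2 = self.enc2(self.pool(e1))                         # encoder level 2
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        artifact = self.out(d1)
        return x - artifact              # residual learning: clean = input - artifact

# Training-loop sketch on contaminated/reference pairs, with a simple
# normalization to zero mean and unit scale (one plausible reading of
# "careful normalization of the original data").
net = ResidualUNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

contaminated = torch.rand(4, 1, 64, 64)  # stand-in for plain FBP reconstructions
reference = torch.rand(4, 1, 64, 64)     # stand-in for ground truth / higher-quality labels
mean, std = contaminated.mean(), contaminated.std()
x = (contaminated - mean) / std
y = (reference - mean) / std

for step in range(10):
    optimizer.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    optimizer.step()

At inference time, a contaminated reconstruction would simply be normalized, forward propagated through the trained network, and denormalized, matching the abstract's description of reducing artifacts by a single forward pass.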

Paper Details

Date Published: 5 March 2018
PDF: 6 pages
Proc. SPIE 10574, Medical Imaging 2018: Image Processing, 105741R (5 March 2018); doi: 10.1117/12.2293903
Author Affiliations:
C. Zhang, Tsinghua Univ. (China)
Y. Xing, Tsinghua Univ. (China)


Published in SPIE Proceedings Vol. 10574:
Medical Imaging 2018: Image Processing
Elsa D. Angelini; Bennett A. Landman, Editor(s)
