Proceedings Volume 11072

15th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine

Samuel Matej, Scott D. Metzler
Purchase the printed version of this volume at proceedings.com or access the digital version at SPIE Digital Library.

Volume Details

Date Published: 4 November 2019
Contents: 14 Sessions, 120 Papers, 0 Presentations
Conference: Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine 2019
Volume Number: 11072

Table of Contents

  • Front Matter: Volume 11072
  • Deep Learning within CT Reconstruction
  • Iterative Reconstruction in CT
  • CT Corrections
  • PET Reconstruction
  • Deep Learning within PET Reconstruction
  • CT Reconstruction and Imaging
  • Spectral CT / Material Decomposition
  • SPECT Imaging
  • Other Novel Applications and Approaches
  • Deep Learning for Image Denoising and Characterization
  • Quantitative Methods in PET
  • Poster Session I
  • Poster Session II
Front Matter: Volume 11072
This PDF file contains the front matter associated with SPIE Proceedings Volume 11072, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
Deep Learning within CT Reconstruction
A hierarchical approach to deep learning and its application to tomographic reconstruction
Lin Fu, Bruno De Man
Deep learning (DL) has been successfully applied to many image analysis and image enhancement tasks, but applying DL to inverse problems such as tomographic reconstruction remains challenging due to the high dimensionality and non-local spatial relationships involved. This paper introduces a hierarchical network architecture that enables purely DL-based tomographic reconstruction for full-size computed tomography (CT) datasets. The proposed method recursively decomposes the reconstruction problem into hierarchical subproblems that can each be solved by a neural network. Overall, the hierarchical approach requires exponentially fewer parameters than a generic network would and, in theory, has a lower order of computational complexity than analytical filtered-backprojection (FBP) reconstruction. As an example, we built a hierarchical network to reconstruct 2D CT images directly from sinograms without relying on conventional analytical or iterative reconstruction components. The hierarchical approach is extensible to three dimensions and to other applications such as emission and magnetic resonance reconstruction. Such DL-based reconstruction opens the door to an entirely new type of reconstruction, which could potentially lead to a better tradeoff between image quality and computational complexity.
Quality-guided deep reinforcement learning for parameter tuning in iterative CT reconstruction
Chenyang Shen, Min-Yu Tsai, Yesenia Gonzalez, et al.
Tuning the parameters of a reconstruction model is of central importance to iterative CT reconstruction, since it critically affects the resulting image quality. Manual parameter tuning is not only tedious, but becomes impractical when there are a number of parameters. In this paper, we develop a novel deep reinforcement learning (DRL) framework to train a parameter-tuning policy network (PTPN) to automatically adjust parameters in a human-like manner. A quality assessment network (QAN) is trained together with PTPN to learn how to judge CT image quality, serving as a reward function to guide the reinforcement learning. We demonstrate our idea in an iterative CT reconstruction problem with pixel-wise total-variation regularization. Experimental results demonstrate the effectiveness of both PTPN and QAN, in terms of parameter tuning and image-quality evaluation, respectively.
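The control loop described here lends itself to a toy sketch. Below is a minimal tabular Q-learning analogue of the PTPN idea, not the authors' networks: an agent adjusts a single regularization weight over a discrete grid, guided by a reward standing in for the QAN. `reconstruct` and `image_quality` are hypothetical placeholders.

```python
# Toy sketch (not the authors' PTPN/QAN): tabular Q-learning that tunes a
# TV weight beta by choosing among {decrease, keep, increase}.
import numpy as np

rng = np.random.default_rng(0)
betas = np.geomspace(1e-4, 1.0, 9)           # discrete parameter grid (states)
actions = np.array([-1, 0, 1])               # move down / stay / move up
Q = np.zeros((len(betas), len(actions)))     # state-action values

def reconstruct(beta):
    """Hypothetical stand-in for one block of iterative reconstruction."""
    return np.zeros((64, 64))

def image_quality(img, beta):
    """Hypothetical QAN-like reward; a toy peak at beta = 1e-2."""
    return -abs(np.log10(beta) + 2.0)

eps, alpha, gamma = 0.2, 0.5, 0.9
for episode in range(200):
    s = int(rng.integers(len(betas)))
    for step in range(10):
        a = int(rng.integers(3)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = int(np.clip(s + actions[a], 0, len(betas) - 1))
        r = image_quality(reconstruct(betas[s2]), betas[s2])
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # Q-learning update
        s = s2

print("selected beta:", betas[int(np.argmax(Q.max(axis=1)))])
```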
A machine learning approach to construct a tissue-specific texture prior from previous full-dose CT for Bayesian reconstruction of current ultralow-dose CT images
Bayesian theory lays down a sound framework for ultralow-dose computed tomography (ULdCT) image reconstruction, with two terms modeling the statistical properties of the data and incorporating a priori knowledge of the to-be-reconstructed image. This study investigates the feasibility of using a machine learning strategy, particularly the convolutional neural network (CNN), to construct a tissue-specific texture prior from previous full-dose CT (FdCT), and integrates the prior with the pre-log shifted-Poisson (SP) data property for Bayesian reconstruction of ULdCT images. The Bayesian reconstruction was implemented by an algorithm called SP-CNN-T and compared with our previous Markov random field (MRF) based tissue-specific texture prior algorithm, called SP-MRF-T. Both the training performance and the image reconstruction results showed the feasibility of constructing a CNN texture prior model and the potential of improving the structure preservation of nodules compared to our previous regional tissue-specific MRF texture prior model. The quantitative structural similarity index (SSIM) and texture Haralick features (HF) were used to measure the performance difference between the SP-CNN-T and SP-MRF-T algorithms, demonstrating the feasibility and potential of the investigated machine learning approach.
Low-dose CT reconstruction assisted by a global CT image manifold prior
Chenyang Shen, Guoyang Ma, Xun Jia
The use of X-ray computed tomography (CT) leads to concern about lifetime cancer risk. A low-dose CT scan with reduced mAs can reduce the radiation exposure, but the image quality is usually degraded by excessive image noise. Numerous studies have been conducted to regularize the CT image during reconstruction for better image quality. In this paper, we propose a fully data-driven manifold learning approach. An auto-encoder-decoder convolutional neural network is established to map an entire CT image to an inherent low-dimensional manifold, and then to restore the CT image from its manifold representation. A novel reconstruction algorithm assisted by the learned manifold prior is developed to achieve high-quality low-dose CT reconstruction. We performed comprehensive simulation studies using patient abdomen CT images. The trained network is capable of restoring high-quality CT images with an average error of ~20 HU. The manifold prior assisted reconstruction scheme achieves high-quality low-dose CT reconstruction, with an average reconstruction error of ~38.5 HU, 4.6 times and 3 times lower than that of the filtered back-projection method and the total-variation based iterative reconstruction method, respectively.
Learned primal-dual reconstruction for dual energy computed tomography with reduced dose
Dufan Wu, Kyungsang Kim, Mannudeep K. Kalra, et al.
Dual energy computed tomography (DECT) usually uses 80 kVp and 140 kVp for patient scans. Due to high attenuation, the 80 kVp image may become too noisy in reduced-photon-flux scenarios such as low-dose protocols or large patients, further leading to unacceptable decomposed image quality. In this paper, we propose a deep-neural-network-based reconstruction approach to compensate for the increased noise in low-dose DECT scans. The learned primal-dual network structure was used in this study, where the input and output of the network consist of both low- and high-energy data. The network was trained on 30 patients who underwent normal-dose chest DECT scans, with simulated noise inserted into the raw data. It was further evaluated on another 10 patients undergoing half-dose chest DECT scans. Image quality close to the normal-dose scan was achieved, and no significant bias was found in Hounsfield unit (HU) values or iodine concentrations.
Iterative Reconstruction in CT
Statistical iterative reconstruction for spectral phase contrast CT
Korbinian Mechlem, Thorsten Sellerer, Julia Herzen, et al.
Recently, we have investigated a new algorithm for combining grating-based differential phase contrast radiography and spectral radiography. The algorithm extracts two basis material images and a dark-field image by simultaneously using the spectral and the phase contrast information. Numerical simulations have shown that the combination of these two imaging methods benefits from the strengths of the individual methods while the weaknesses are mitigated. Quantitatively accurate basis material images are obtained and the additional phase shift information leads to highly reduced basis material image noise levels compared to conventional spectral material decomposition. In this work, we extend our approach to spectral phase contrast CT by developing a one-step statistical iterative reconstruction algorithm. In numerical simulations, we demonstrate the potential for further dose reductions as well as the possibility of eliminating the time-consuming phase stepping procedure which is incompatible with a continuously rotating gantry.
Application of Proximal Alternating Linearized Minimization (PALM) and inertial PALM to dynamic 3D CT
Nargiza Djurabekova, Andrew Goldberg, Andreas Hauptmann, et al.
The foot and ankle is a complex structure consisting of 28 bones and 30 joints that changes from being completely mobile when positioning the foot on the floor to a rigid close-packed position during propulsion, such as when running or jumping. An understanding of this complex structure has largely been derived from cadaveric studies. In vivo studies have largely relied on skin surface markers and multi-camera systems that are unable to differentiate small motions between the bones of the foot. MRI- and CT-based studies have struggled to interpret functional weight-bearing motion, as imaging is largely static and non-load-bearing. Arthritic diseases of the foot and ankle are treated either by fusion of the joints to remove motion, or by joint replacement to retain motion, until a better understanding of the biomechanics of these joints can be achieved.
Convergence criterion for MBIR based on the local noise-power spectrum: Theory and implementation in a framework for accelerated 3D image reconstruction with a morphological pyramid
A. Sisniega, J. W. Stayman, S. Capostagno, et al.
Model-based iterative reconstruction (MBIR) offers improved noise-resolution tradeoffs and artifact reduction in cone-beam CT compared to analytical reconstruction, but carries an increased computational burden. An important consideration in minimizing computation time is reliable selection of the stopping criterion, to perform the minimum number of iterations required to obtain the desired image quality. Most MBIR methods rely on a fixed number of iterations or on relative metrics of image or cost-function evolution, and it would be desirable to use metrics that are more representative of the underlying image properties. A second front for reducing computation time is the use of acceleration techniques (e.g., subsets or momentum). However, most of these techniques do not strictly guarantee convergence of the resulting MBIR method. A data-dependent analytical model of the noise-power spectrum (NPS) for penalized weighted least-squares (PWLS) reconstruction is proposed as an absolute metric of the image properties of the fully converged volume. Distance to convergence is estimated as the root mean squared error (RMSE) between the estimated NPS and an NPS measured in a uniform region of interest (ROI) in the evolving volume. Iterations are stopped when the RMSE falls below a threshold directly related to the properties of the target image. Further acceleration was achieved by combining the spectral stopping criterion with a morphological pyramid (mPyr), in which the minimization of the PWLS cost-function is divided into a cascade of stages. The algorithm parameters (voxel size in this work) change between stages to achieve faster evolution in early stages, with a final stage at the target parameters to guarantee convergence. Transition between stages is governed by the spectral stopping criterion.
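The spectral stopping rule can be summarized in a few lines. The sketch below is an assumption-laden illustration, not the authors' implementation: `target_nps` stands in for the analytical PWLS NPS prediction, and the local NPS is estimated from a stack of uniform ROIs by a periodogram.

```python
# Minimal sketch of the spectral stopping criterion (toy stand-ins).
import numpy as np

def local_nps(roi_stack):
    """Periodogram NPS estimate from a stack of 2D ROIs, shape (n, h, w)."""
    rois = roi_stack - roi_stack.mean(axis=(1, 2), keepdims=True)
    ft = np.fft.fft2(rois)                    # FFT over the last two axes
    return (np.abs(ft) ** 2).mean(axis=0) / rois[0].size

def converged(roi_stack, target_nps, tol):
    """Stop when the RMSE between measured and predicted NPS falls below tol."""
    rmse = np.sqrt(np.mean((local_nps(roi_stack) - target_nps) ** 2))
    return rmse < tol

# Usage inside an MBIR loop (hypothetical helpers):
#     while not converged(extract_rois(volume), target_nps, tol):
#         volume = pwls_update(volume)
```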
Contrast-medium anisotropy-aware tensor total variation model for robust cerebral perfusion CT reconstruction with weak radiation: a preliminary study
Yuanke Zhang, Dong Zeng, Sui Li, et al.
In this study we present a novel contrast-medium anisotropy-aware tensor total variation (Cute-TTV) model to reflect the intrinsic sparsity configurations of a cerebral perfusion computed tomography (PCT) object. We also propose a PCT reconstruction scheme via the Cute-TTV model to improve the performance of PCT reconstruction in weak-radiation tasks (referred to as CuteTTV-RECON). An efficient optimization algorithm is developed for CuteTTV-RECON. Preliminary simulation studies demonstrate that it can achieve significant improvements over existing state-of-the-art methods in terms of artifact suppression, structure preservation, and parametric-map accuracy under weak radiation.
Clinical study of soft-tissue contrast resolution in cone-beam CT of the head using multi-resolution PWLS with multi-motion correction and an electronic noise model
Purpose: Improving soft-tissue contrast resolution beyond the capability of current cone-beam CT (CBCT) systems is essential to a growing range of image guidance and diagnostic imaging scenarios. We present a framework for CBCT model-based image reconstruction (MBIR) combining artifact corrections with multi-resolution reconstruction and multi-region motion compensation, and apply the method for the first time in a clinical study of CBCT for high-quality imaging of head injury. Methods: A CBCT prototype was developed for mobile point-of-care imaging in the neuro-critical care unit (NCCU). Projection data were processed via poly-energetic gain correction and an artifact correction pipeline treating scatter, beam hardening, and motion. The scatter correction was modified to use a penalized weighted least-squares (PWLS) image in the Monte Carlo (MC) object model for better uniformity in truncated data. The PWLS method included: (1) multi-resolution reconstruction to mitigate lateral truncation from the head-holder; (2) multi-motion compensation allowing separate motion of the head and head-holder; and (3) modified statistical weights to account for electronic noise and fluence modulation by the bowtie filter. Imaging performance was evaluated in simulation and in the first clinical study (N = 54 patients) conducted with the system. Results: Using a PWLS object model in the final iteration of the MC scatter estimate improved image uniformity by 40.4% for truncated datasets. The multi-resolution, multi-motion PWLS method greatly reduced streak artifacts and nonuniformity both in simulation (RMSE reduced by 65.5%) and in the clinical study (visual image quality assessed by a neuroradiologist). Up to 15% reduction in variance was achieved using statistical weights modified according to a model of electronic noise from the detector. Each component was important for improved contrast resolution in the patient data. Conclusion: An integrated pipeline for artifact correction and PWLS reconstruction mitigated artifacts and noise to a level supporting visualization of low-contrast brain lesions, warranting future studies of diagnostic performance in the NCCU.
Adaptive smoothing algorithms for MBIR in CT applications
Jingyan Xu, Frederic Noo
Many model-based image reconstruction (MBIR) methods for x-ray CT are formulated as convex minimization problems. If the objective function is nonsmooth, primal-dual algorithms are applicable, with the drawback of an increased memory cost due to the dual variables. Some algorithms recently developed for large-scale nonsmooth convex programs use adaptive smoothing techniques and are of the primal type; that is, they achieve convergence without introducing the dual variables, hence without the increased memory. We discuss one such algorithm with an O(1/k) convergence rate, where k is the iteration number. We then present an extension of it to handle strongly convex objective functions. This new algorithm has the optimal convergence rate of O(1/k^2) for its problem class. Our preliminary numerical studies demonstrate competitive performance with respect to an alternative method.
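As a concrete illustration of the adaptive smoothing idea (not the paper's algorithm), the sketch below runs plain gradient descent on a Huber-smoothed 1D TV denoising objective while shrinking the smoothing parameter with the iteration count, so no dual variables are ever introduced.

```python
# Gradient descent on min_x 0.5||x - y||^2 + lam * TV(x), with TV replaced by
# its Huber smoothing h_mu; mu_k shrinks so iterates approach the nonsmooth
# problem. Illustrative only; schedules differ from the paper's algorithm.
import numpy as np

def huber_tv_grad(x, mu):
    """Gradient of sum_i h_mu(x[i+1] - x[i])."""
    d = np.diff(x)
    g = np.clip(d / mu, -1.0, 1.0)            # h_mu'(d)
    grad = np.zeros_like(x)
    grad[:-1] -= g                            # d/dx_i of h(x_{i+1} - x_i)
    grad[1:] += g                             # d/dx_{i+1}
    return grad

y = np.repeat([0.0, 1.0, 0.2], 50)
y += 0.1 * np.random.default_rng(1).standard_normal(y.size)

x, lam = y.copy(), 0.5
for k in range(1, 501):
    mu = 1.0 / k                              # adaptive smoothing schedule
    step = 1.0 / (1.0 + 4.0 * lam / mu)       # 1/L for the smoothed objective
    x -= step * ((x - y) + lam * huber_tv_grad(x, mu))
```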
CT Corrections
Motion gradients for epipolar consistency
Alexander Preuhs, Michael Manhart, Elisabeth Hoppe, et al.
Enforcing geometric consistency of an acquired cone-beam computed tomography scan has been shown to be a promising approach for online geometry calibration and the compensation of rigid patient motion. The approach estimates the motion parameters by solving an optimization problem, where the cost function is the accumulated consistency based on Grangeat's theorem. In all previous work, this is performed with zero-order optimization methods like the Nelder-Mead algorithm or grid search. We present a derivation of motion gradients enabling the use of more efficient first-order optimization algorithms for the estimation of rigid patient motion or geometry misalignment. We first present a general formulation of the gradients, and then explicitly compute the gradient for the longitudinal patient axis. To verify our results, we compare the presented analytic gradient with finite differences. In a second experiment we compare the computational demand of the presented gradient with that of the finite differences. The analytic gradient clearly outperforms the finite differences, with a speedup of ~35%.
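The verification step described here, comparing an analytic gradient against finite differences, follows a standard pattern. A generic sketch with toy stand-ins for the consistency cost and its derivative:

```python
# Generic gradient check: analytic derivative vs. central finite differences
# for one motion parameter. `cost` and `grad_tz` are toy stand-ins, not the
# epipolar-consistency metric.
import numpy as np

def cost(tz):
    return np.sin(3 * tz) + 0.5 * tz ** 2     # toy cost in the axial shift tz

def grad_tz(tz):
    return 3 * np.cos(3 * tz) + tz            # its analytic derivative

tz, h = 0.37, 1e-5
fd = (cost(tz + h) - cost(tz - h)) / (2 * h)  # central difference
print("analytic:", grad_tz(tz), " finite difference:", fd)  # agree to ~1e-9
```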
A motion estimation and compensation algorithm for 4D CBCT of the abdomen
Seongjin Yoon, Alexander Katsevich, Michael Frenkel, et al.
We propose an algorithm for periodic motion estimation and compensation in the case of a slowly rotating gantry, e.g., as in cone-beam CT. The main target application is abdomen imaging, which is quite challenging because of the absence of high-contrast features. The algorithm is based on minimizing a cost functional consisting of a data fidelity term, an optical flow constraint term, and regularization terms. To find an appropriate solution, we change the constraint-strength and regularization-strength parameters during the minimization. Results of experiments with simulated and clinical data demonstrate promising performance.
A preliminary study on explicit compensation for the non-linear-partial-volume effect in CT
In a standard data model for CT, a single ray is often assumed between a detector bin and the X-ray focal spot, even though both are of finite size. Due to their finite sizes, each pair of detector bin and X-ray focal spot necessarily involves multiple rays, resulting in the non-linear partial volume (NLPV) effect. When an algorithm developed for the standard data model is applied to data with the NLPV effect, it may engender NLPV artifacts in the reconstructed images. In the presence of the NLPV effect, the data necessarily relate non-linearly to the image of interest, and image reconstruction free of NLPV artifacts is thus tantamount to appropriately inverting the non-linear data model. In this work, we develop an optimization-based algorithm for solving the non-linear data model in which the NLPV effect is included, and use the algorithm to investigate the characteristics and reduction of NLPV artifacts in reconstructed images. The algorithm, motivated by our previous experience with a non-linear data model in multispectral CT reconstruction, compensates for the NLPV effect by numerically inverting the non-linear data model through solving a non-convex optimization program. The algorithm, referred to as the non-convex Chambolle-Pock (ncCP) algorithm, is used in simulation studies for numerically characterizing the inversion of the non-linear data model and the compensation for the NLPV effect.
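The NLPV effect itself is easy to reproduce numerically: a finite-width bin averages intensities over sub-rays before the log, so the measured value differs from the mean path integral wherever the sub-rays disagree. A minimal illustration with toy numbers, not the authors' model:

```python
# Toy NLPV demonstration: four sub-rays straddling a sharp edge.
import numpy as np

p = np.array([0.0, 0.0, 2.0, 2.0])     # sub-ray path integrals
linear = p.mean()                      # single-ray (linear) model: 1.0
measured = -np.log(np.exp(-p).mean())  # finite-bin measurement: ~0.57
print(linear, measured)                # the gap is the NLPV bias to be inverted
```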
Reduction of irregular view-sampling artifacts in a stationary gantry CT scanner
Alexander Katsevich, Seongjin Yoon, Michael Frenkel, et al.
We propose an FBP reconstruction algorithm for a stationary-gantry CT scanner with distributed sources. The sources are fired in quasi-random order to improve data completeness across the field of view (FOV). The downsides of this are two-fold. First, neighboring sources are fired non-sequentially, so the view derivative should be avoided. Second, the angular distribution of rays through each voxel is non-uniform and varies across the FOV. To overcome these challenges we incorporate a weight function into an FDK-type reconstruction algorithm, and integrate by parts to avoid view differentiation. Results of experiments with simulated data confirm that a properly selected weight significantly reduces irregular view-sampling streaks.
Reduction of metal artefacts in CBCT caused by needles crossing the FOV border
Dirk Schäfer, Christian Haase, William van der Sterren, et al.
Cone-beam CT reconstructions are an accurate and efficient intra-procedural method for assessing the positioning of biopsy or ablation needles inside the human body. A commonly encountered issue is the metal artefacts produced by a needle that crosses the border of the field of view (FOV). We combine two approaches for metal artefact reduction (MAR) by exchanging information between the two methods. The first method performs a second-pass reconstruction after segmenting the metal inside the FOV volume and subtracting the metal shadow from the sinogram. The second method operates on the sinogram data to suppress objects outside the reconstruction FOV. Superior performance of this combined method is demonstrated on simulated and clinical data.
PET Reconstruction
Simultaneous micro-PET imaging of F-18 and I-124 with correction for triple-random coincidences
Stephen C. Moore, Srilalan Krishnamoorthy, Eric Blankemeyer, et al.
Positron emission tomographic (PET) images of two radiopharmaceuticals may be obtained simultaneously if one tracer is a standard β+ emitter while the other also emits prompt gammas, which can be used to separate the images of the two tracers. We developed and tested an approach to correct simultaneously acquired F-18 and I-124 μPET images for triple-random events that contaminate the desired β+γ triples from I-124. The background compartment of a NEMA-NU4 phantom was filled with approximately equal F-18 and I-124 activities. One of the two smaller cylinders contained a 4.8-times higher concentration of I-124 only; the other contained a 2.8-times higher concentration of F-18 only. List-mode data were acquired on a Molecubes β-CUBE scanner at 0.34 h, 4.3 h, and 72.2 h after phantom filling. After triple-random and cross-contamination correction, the absolute bias of the activity concentration measured in images of the background compartment was ≤2.1% for F-18 and ≤6.8% for I-124. The ratios of hot-tube-to-background I-124 concentration underestimated the true ratios by 16.2%, 10.4%, and 9.5% for the three scans; the F-18 ratios overestimated the true ratios by 10.4% and 3.9% for the two scans with F-18. Triple-random correction is important and useful for simultaneous F-18 + I-124 μPET imaging.
Application of the pseudoinverse for real-time 3D PET image reconstruction
Real-time positron emission tomography (PET) has the potential to become a new imaging tool providing useful information, such as first-shot images, medical intervention guidance, and information about patient position and motion, and to perform PET image guided biopsy. Fully-3D iterative reconstruction methods in PET provide the highest-quality images, but they are still not suitable for real-time imaging due to their large computational time requirements. On the other hand, analytical methods are much faster, but they yield lower-quality images and artifacts when using noisy or incomplete data. We propose an alternative reconstruction method based on the pseudoinverse of the system response matrices (SRM), which can be very fast while yielding good quality images. The reconstruction problem is separated into two independent ones. First, the axial part of the SRM is pseudoinverted and used to rebin the 3D data in the axial direction into 2D datasets with resolution recovery. The resulting 2D datasets can be reconstructed with standard analytical methods such as filtered back-projection (FBP), or with another in-plane pseudoinverse algorithm. Pseudoinverse rebinning is as fast as standard single-slice rebinning (SSRB), but with image quality comparable to Fourier rebinning (FORE). With regard to the transaxial image reconstruction, the pseudoinverse approach is as fast as FBP, but obtains improved resolution recovery and uniformity. Overall, the two-step pseudoinverse reconstruction yields much more acceptable images than SSRB+FBP, at a rate of several frames per second, compatible with real-time applications.
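The axial step can be illustrated on a toy model. In the sketch below, a Gaussian axial blur matrix stands in for the scanner's axial system response (an assumption for illustration); the pseudoinverse is computed once per geometry and then applied to rebin blurred data back to slices with resolution recovery.

```python
# Toy pseudoinverse rebinning: pinv of a stand-in axial SRM applied to data.
import numpy as np

n = 32
z = np.arange(n)
H = np.exp(-0.5 * ((z[:, None] - z[None, :]) / 1.5) ** 2)  # toy axial SRM
H /= H.sum(axis=1, keepdims=True)
H_pinv = np.linalg.pinv(H, rcond=1e-3)   # computed once per geometry

truth = np.zeros(n); truth[12], truth[20] = 1.0, 0.5
data = H @ truth                         # axially blurred measurements
rebinned = H_pinv @ data                 # rebinned slices, resolution recovered
```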
Non-TOF Fourier-based analytic reconstruction from TOF histo-projections for high-resolution TOF scanners
Vladimir Y. Panin, Samuel Matej
We have extended Fourier-based reconstruction approaches for histo-projection TOF data to include non-TOF reconstruction. TOF information is used here for Fourier-domain interpolation and backward extension of the available histo-projection data to a larger number of azimuthal views, essential for artifact-free non-TOF reconstruction. Only part of the available data (the selection was defined by the zero TOF frequency location) is used in the final reconstruction. We demonstrate, using experimental data, that the proposed approach is insensitive to time-calibration errors, while it preserves the spatial resolution of the Fourier-based analytic TOF algorithm (3D DIFTOF), although with the higher noise levels given (as a trade-off) by the non-TOF reconstruction.
Preliminary investigation of optimization-based image reconstruction for TOF PET with sparse configurations
Zheng Zhang, Buxin Chen, Amy E. Perkins, et al.
In this work, we investigate and characterize optimization-based image reconstruction from list-mode TOF-PET data collected by a digital TOF-PET scanner with reduced detectors, while seeking to maintain image quality and volume coverage. In particular, we focus on two patterns of sparse configurations, in both of which the total number of crystals is reduced to 50% of the corresponding clinical TOF-PET scanner. The reconstruction problem for data from the two sparse configurations is formulated as the solution to an image-TV-constrained, data-KL-minimization optimization problem, and the image is reconstructed by use of an algorithm tailored from the Chambolle-Pock (CP) algorithm for solving the optimization problem. The characteristics of each sparse configuration were investigated by assessing the corresponding reconstructions visually and quantitatively. Results of the study suggest that certain sparse TOF-PET configurations may yield images with quality and volume coverage comparable to those obtained with a current clinical TOF-PET scanner with densely populated detectors.
Rapid construction of system response matrix based on geometric symmetries for the quad-head PET system
Jian Cheng, Fanzhen Meng, Yu Shi, et al.
The quad-head PET system has a compact structure, which leads to depth-of-interaction (DOI) blurring. Monte Carlo (MC) simulation can significantly reduce the DOI effect, and it has been utilized in dual-head PET systems. The quad-head PET system has fewer geometric symmetries, which makes MC simulation difficult. The multi-ray method combined with a DOI model can also relieve the DOI blurring, but it is time-consuming. In this study, we focus on the rapid construction of the system response matrix (SRM) based on geometric symmetries for the multi-ray method. The SRM is divided into two parts: the SRM of the opposite detectors and the SRM of the adjacent detectors. A graphics processing unit (GPU) is utilized to improve the computation speed. The results show that the computation time is greatly decreased when the geometric symmetries are used. The simulation experiments indicate that the data of adjacent detector heads and the DOI model are helpful for improving the quality of quad-head PET reconstruction.
Extension of emission EM look-alike algorithms to Bayesian algorithms
Larry Zeng
Recently we developed a family of image reconstruction algorithms that look like the emission maximum-likelihood expectation-maximization (ML-EM) algorithm. In this paper, we extend these algorithms to Bayesian algorithms. The family of emission-EM-lookalike algorithms uses a multiplicative update scheme. The extension to Bayesian algorithms is achieved by introducing a new, simple factor that contains the Bayesian information. One of the extended algorithms can be applied to emission tomography, and another can be applied to transmission tomography. Computer simulations are performed and compared with the corresponding un-extended algorithms, using the total-variation (TV) norm as the Bayesian constraint. The newly developed algorithms demonstrate stable performance. For any noise variance function, a simple Bayesian algorithm can be derived. The proposed algorithms have properties such as multiplicative updates, non-negativity, a faster convergence rate for bright objects, and ease of implementation. Our algorithms are inspired by Green's one-step-late (OSL) algorithm, but they do not have the undesirable one-step-late feature.
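For context, a minimal sketch of Green's OSL MAP-EM update, the algorithm that inspired this work, is given below on a toy emission problem. The quadratic smoothness prior is illustrative, and the paper's own multiplicative Bayesian factor (which avoids the one-step-late evaluation) is different.

```python
# Green's one-step-late (OSL) MAP-EM on a toy problem; the prior gradient is
# evaluated at the previous iterate x^k (the "one-step-late" feature).
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((100, 30))                         # toy emission system matrix
y = rng.poisson(A @ (rng.random(30) + 0.1)).astype(float)

x = np.ones(30)
sens = A.T @ np.ones(A.shape[0])                  # sensitivity image
beta = 0.05
for it in range(100):
    em = A.T @ (y / np.maximum(A @ x, 1e-12))     # ML-EM back-projected ratio
    dU = x - np.convolve(x, np.ones(3) / 3.0, mode="same")  # illustrative prior grad
    x = x * em / np.maximum(sens + beta * dU, 1e-12)        # OSL denominator
```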
Deep Learning within PET Reconstruction
MAPEM-Net: an unrolled neural network for Fully 3D PET image reconstruction
Kuang Gong, Dufan Wu, Kyungsang Kim, et al.
PET image reconstruction is challenging due to the ill-posedness of the inverse problem and the limited number of detected photons. Recently, deep neural networks have been widely applied to medical image denoising. In this work, based on the MAPEM algorithm, we propose a novel unrolled neural network framework for fully 3D PET image reconstruction. In this framework, a convolutional neural network is combined with the MAPEM update steps so that data consistency can be enforced. Both simulation and clinical datasets were used to evaluate the effectiveness of the proposed method. Quantification results show that our proposed MAPEM-Net can outperform neural-network and Gaussian denoising methods.
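The unrolling pattern can be sketched compactly in PyTorch. The module below alternates an EM-like data-consistency update with a small residual CNN per stage; the architecture, shapes, and CNN block are assumptions for illustration, not the MAPEM-Net design.

```python
# Sketch of an unrolled EM + CNN network (assumed shapes, illustrative only).
import torch
import torch.nn as nn

class UnrolledMAPEM(nn.Module):
    """Each stage: EM data-consistency step followed by a CNN prior step."""
    def __init__(self, A, side, n_iters=5):
        super().__init__()
        self.A, self.side = A, side               # A: (n_bins, side*side)
        self.sens = A.t() @ torch.ones(A.shape[0])
        self.cnns = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
            for _ in range(n_iters)])

    def forward(self, y, x0):
        x = x0                                    # flattened image, (side*side,)
        for cnn in self.cnns:
            em = x * (self.A.t() @ (y / (self.A @ x).clamp_min(1e-8))) / self.sens
            img = em.view(1, 1, self.side, self.side)
            x = (img + cnn(img)).clamp_min(0).view(-1)   # residual CNN prior
        return x
```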
Generative adversarial networks based regularized image reconstruction for PET
Zhaoheng Xie, Reheman Baikejiang, Kuang Gong, et al.
Image reconstruction in positron emission tomography (PET), especially from low-count projection data, is challenging due to the ill-posed nature of the inverse problem. Prior information can substantially improve the quality of reconstructed PET images. Previously, a PET image reconstruction method using a convolutional neural network (CNN) representation was proposed. In this work, we replace the original network with a generative adversarial network (GAN) to improve the network performance when training data are limited. We also introduce an additional likelihood function into the objective function, which acts as a soft constraint on the network input. An evaluation study using real patient data with artificially inserted lesions demonstrated noticeable improvements in the trade-off between lesion contrast recovery and background noise.
Motion correction of respiratory-gated PET image using deep learning based image registration framework
Tiantian Li, Mengxi Zhang, Wenyuan Qi, et al.
Artifacts caused by patient breathing and movement during PET data acquisition affect image quality. Respiratory gating has been proposed to gate the list-mode PET data into multiple bins over a respiratory cycle. Non-rigid registration of respiratory-gated PET images can reduce the motion artifacts and preserve the count statistics, but it is time-consuming. In this work, we propose an unsupervised non-rigid image registration framework using deep learning. We use a differentiable spatial transformer layer to warp the source image to the target image, and use a stacked structure for deformation field refinement. The estimated deformation fields were incorporated into an iterative image reconstruction algorithm to perform motion-compensated PET image reconstruction. We validated the proposed method using simulation and clinical data and showed its ability to reduce motion artifacts in PET images.
Direct Patlak reconstruction from dynamic PET using unsupervised deep learning
Kuang Gong, Ciprian Catana, Jinyi Qi, et al.
Direct reconstruction methods have been developed to estimate parametric images directly from the measured sinogram by combining the PET imaging model and tracer kinetics in an integrated framework. Due to the limited counts received, especially in low-dose scenarios, the SNR and resolution of parametric images produced by direct reconstruction frameworks are still limited. Recently, supervised deep learning methods have been successfully applied to medical imaging denoising/reconstruction when large numbers of high-quality training labels are available. For static PET imaging, high-quality training labels can be acquired by extending the scanning time. However, this is not feasible for dynamic PET imaging, where the scanning time is already long. In this work, we present a novel unsupervised deep learning method for direct Patlak reconstruction from low-dose dynamic PET. The training label is the measured sinogram itself, and the only requirement is the patient's own anatomical prior image, which is readily available from PET/CT or PET/MR scans. An evaluation based on a low-dose dynamic dataset shows that the proposed method can outperform Gaussian post-smoothing and anatomically-guided direct reconstruction using the kernel method.
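For reference, the Patlak model underlying such reconstructions is linear in its parameters after the equilibration time: C_T(t) ≈ Ki ∫₀ᵗ Cp dτ + Vb Cp(t). A per-voxel least-squares fit, the conventional indirect approach, is shown below only to fix notation; the sampling and input function are toy assumptions.

```python
# Conventional (indirect) Patlak fit on a toy time-activity curve.
import numpy as np

t = np.linspace(0, 60, 25)                 # minutes (assumed sampling)
Cp = np.exp(-0.1 * t) + 0.1                # toy plasma input function
int_Cp = np.cumsum(Cp) * (t[1] - t[0])     # running integral of Cp

Ki_true, Vb_true = 0.02, 0.05
CT = Ki_true * int_Cp + Vb_true * Cp       # toy tissue time-activity curve
CT += 0.002 * np.random.default_rng(0).standard_normal(t.size)

X = np.column_stack([int_Cp, Cp])          # linear Patlak design matrix
Ki, Vb = np.linalg.lstsq(X, CT, rcond=None)[0]
```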
On the impact of input feature selection in deep scatter estimation for positron emission tomography
Deep scatter estimation (DSE) for X-ray computed tomography or positron emission tomography (PET) uses convolutional neural networks (CNNs) to estimate scatter distributions. We investigate the impact of physically motivated transformations and combinations of emission and attenuation input features on PET-DSE performance. To this end, we decompose the analytical expression of a convolutional scatter model into different feature sets as a function of measured prompts and attenuation correction factors, and propose to use individual attenuation sinograms of central slabs and peripheral regions. Data from 20 patients (71 bed positions, 17,892 direct views) were collected and used to train CNNs to estimate the single scatter simulation (SSS) from various feature sets. Adding redundant attenuation features improved the convergence of validation metrics. Slab-wise attenuation sinograms improved training mean absolute errors by 10% and early-epoch validation metrics, although without improvement in later epochs. In conclusion, physically motivated transformation of input features can help improve training and estimation performance in PET-DSE.
CT Reconstruction and Imaging
Theoretically-exact filtered-backprojection reconstruction from real data on the line-ellipse-line trajectory
Zijia Guo, Günter Lauritsch, Andreas Maier, et al.
C-arm CT imaging can be improved in terms of axial coverage and cone-beam artifacts using advanced data acquisition geometries such as the extended line-ellipse-line trajectory. Previously, we showed that such a geometry can be robustly implemented on a clinical system. Here, we demonstrate that imperfections in the trajectory realization can be addressed so as to achieve accurate high-contrast imaging with a theoretically-exact filtered-backprojection algorithm. The performance of the proposed algorithm is evaluated using the FORBILD head phantom as well as real data of an anthropomorphic head phantom.
Optimization of cone-beam CT scan orbits for cervical spine imaging
Purpose: We investigate cone-beam CT (CBCT) imaging protocols and scan orbits for 3D cervical spine imaging on a twin-robotic x-ray imaging system (Multitom Rax). Tilted circular scan orbits are studied to assess potential benefits in the visualization of the lower cervical vertebrae, in particular in low-dose imaging scenarios. Methods: The Multitom Rax system enables flexible scan orbit design by using two robotic arms to independently move the x-ray source and detector. We investigated horizontal and tilted circular scan orbits (up to 45° tilt) for 3D imaging of the cervical spine. The studies were performed using an advanced CBCT simulation framework involving GPU-accelerated x-ray scatter estimation and accurate modeling of the x-ray source, detector, and noise. For each orbit, the x-ray scatter and scatter-to-primary ratio (SPR) were evaluated; cervical spine image quality was characterized by analyzing the contrast-to-noise ratio (CNR) for each vertebra. Performance evaluation was performed for a range of scan exposures (263 mAs/scan to 2.63 mAs/scan) with standard and dedicated low-dose reconstruction protocols. Results: The tilted orbit reduces scatter and increases the primary detector signal for the lower cervical vertebrae because it avoids ray paths crossing through both shoulders. An orbit tilt angle of 35° was found to achieve balanced performance in visualization of the upper and lower cervical spine. Compared with a flat orbit, the optimized 35° tilted orbit reduces the lateral-projection SPR at the C7 vertebra by ~40%, and increases CNR by 220% for C6 and 76% for C7. Adequate visualization of the vertebrae (CNR > 1) was achieved for scan exposures as low as 13.2 mAs/scan, corresponding to ~3 mGy absorbed spine dose. Conclusion: Optimized tilted scan orbits are advantageous for CBCT imaging of the cervical spine. The simulation studies presented here indicate that CBCT image quality sufficient for evaluation of spine alignment and intervertebral joint spaces might be achievable at spine doses below 5 mGy.
Low frequency recovery in 16cm coverage axial multi-detector computed tomography
Stanislav Žabić, Zhicong Yu, Wenjing Cao, et al.
Multiple CT vendors have released clinical multi-detector (MD) computed tomography (CT) systems with 16 cm coverage. Axial CT for voxels outside the acquisition plane does not satisfy a fundamental completeness condition, which leads to so-called cone-beam artifacts. This paper revisits the iterative filtered back-projection (FBP) algorithm from 2008 and analyzes it in the context of Bregman iterations. We also propose one application of this algorithm, along with a recently published filtering orthogonal to the acquisition plane, as a pragmatic way to considerably reduce the cone-beam artifacts in axial CT scans with high coverage.
Performance analysis for nonlinear tomographic data processing
Image quality analysis of nonlinear algorithms is challenging due to numerous dependencies on the imaging system, algorithmic parameters, object, and stimulus. In particular, traditional notions of linearity and local linearity are of limited utility when the system response is dependent on the stimulus itself. In this work, we analyze the performance of nonlinear systems using the perturbation response, the difference between the mean output with and without a stimulus, and introduce a new metric to examine the variation of the responses in individual images. We applied the analysis to four algorithms with different degrees of nonlinearity for a spherical stimulus of varying contrast. For model-based reconstruction methods [penalized-likelihood (PL) reconstruction with a quadratic penalty and a Huber penalty], perturbation response analysis reaffirmed known trends in terms of object- and location-dependence. For a CNN denoising network, the response exhibits highly nonlinear behavior as the contrast increases: from the stimulus completely disappearing, to appearing at the right contrast but smaller in size, to being fully admitted by the algorithm. Furthermore, the variation metric for PL reconstruction with a Huber penalty and for the CNN network reveals high variation at the edge of the stimulus; i.e., the perturbation response computed from the mean images is a smoothed version of the individual responses due to "jitter" in the edges. This behavior suggests that the mean response alone may not be representative of performance in individual images, and that image quality metrics traditionally defined based on the mean response may be inappropriate for certain nonlinear algorithms. This work demonstrates the potential utility of the perturbation response and response variation in the analysis and optimization of nonlinear imaging algorithms.
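The perturbation-response measurement generalizes to any reconstruction or denoising operator. A minimal sketch, assuming paired noise realizations so that a per-image response is well defined (`recon` is any callable):

```python
# Mean perturbation response and per-realization variation for operator `recon`.
import numpy as np

def perturbation_response(recon, background, stimulus, noise_std, n=100, seed=0):
    rng = np.random.default_rng(seed)
    resps = []
    for _ in range(n):
        noise = noise_std * rng.standard_normal(background.shape)
        resps.append(recon(background + stimulus + noise)
                     - recon(background + noise))        # paired response
    resps = np.array(resps)
    return resps.mean(axis=0), resps.std(axis=0)         # response, variation map
```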
Simulating lower-dose scans from an available CT scan
Masoud Elhamiasl, Johan Nuyts
Low-dose CT scans can be obtained by reducing the radiation dose to the patient; however, lowering the dose results in a lower signal-to-noise ratio and therefore in reduced image quality. In this research, we aim to develop a tool to simulate a reduced-dose scan from an existing standard-dose scan. The motivation for simulating a reduced-dose scan is to determine how much the dose can be reduced without losing the information relevant for proton treatment planning. The method estimates the noise-equivalent number of photons in the sinogram and applies a thinning to reduce that number. The method accounts for the bowtie filter, for the noise correlation between neighboring detector elements, and for the fact that, for the same image intensity, a harder beam has fewer photons and therefore a higher variance. The proposed model shows close agreement between the variance in the observed and in the simulated lower-dose scans. Simulations of low-dose scans of a 21 cm and a 6 cm water phantom over a range from 300 to 20 mAs show that the noise variance of the reconstructed images matches that of reconstructions from the real scans with less than 5% error.
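The core thinning step can be sketched in a few lines. The version below omits the paper's bowtie, detector-correlation, and beam-hardening corrections and simply applies binomial thinning to toy noise-equivalent counts; thinning a Poisson count with probability p yields a Poisson count with mean scaled by p.

```python
# Simplified dose reduction via binomial thinning of sinogram counts.
import numpy as np

rng = np.random.default_rng(0)
I0 = 1e5                                                    # unattenuated counts
N_full = rng.poisson(I0 * np.exp(-np.linspace(0, 4, 512)))  # toy sinogram counts
frac = 80.0 / 300.0                                         # e.g., 300 mAs -> 80 mAs
N_low = rng.binomial(N_full, frac)                          # binomial thinning
low_dose_sino = -np.log(np.maximum(N_low, 1) / (I0 * frac)) # back to line integrals
```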
Optimized conversion from CT numbers to proton relative stopping power based on proton radiography and scatter corrected cone-beam CT images
Nils Krah, Simon Rit
We propose a method to generate accurate proton relative stopping power (RSP) maps from patient cone-beam CT (CBCT) images. The scatter-polluted low-frequency component of the CBCT projections is replaced by an analytically calculated estimate of the scatter-free component. This is obtained by forward projecting the segmented CBCT image overridden with reference materials (air, soft tissue, bone). The projection model accounts for polychromaticity and uses an estimate of the combined source-detector spectral function. High- and low-frequency image components are automatically matched. The accurate conversion curve from CT numbers to RSP is obtained by comparing a proton radiography and a proton digitally reconstructed radiography. CBCT images were acquired on a clinical scanner and proton images were simulated by Monte Carlo. Results show a clearly reduced cupping effect and overall better RSP accuracy when the CBCT images are scatter-corrected.
Spectral CT / Material Decomposition
Local response prediction in model-based CT material decomposition
Spectral CT is an emerging modality that permits material decomposition and density estimation through the use of energy-dependent information in measurements. Direct model-based material decomposition algorithms have been developed that incorporate statistical models and advanced regularization schemes to improve density estimates and lower exposure requirements. However, understanding and control of the relationship between regularization and image properties is complex with interactions between spectral channels and material bases. In particular, regularization in one material basis can affect the image properties of other material bases, and vice versa. In this work, we derived a closed-form set of local impulse responses for the solutions to a general, regularized, model-based material decomposition (MBMD) objective. These predictors quantify both the spatial resolution in each material image as well as the influence of regularization of one material basis on other material images. This information can be used prospectively to tune regularization parameters for specific imaging goals.
Image-domain multi-material decomposition using a union of cross-material models
Zhipeng Li, Saiprasad Ravishankar, Yong Long
Penalized weighted least-squares (PWLS) estimation with learned material priors is a promising way to achieve high-quality basis material images using dual-energy CT (DECT). We propose a new image-domain multi-material decomposition (MMD) method that combines PWLS estimation with regularization based on a union of learned cross-material transforms (CULTRA) model. Numerical experiments with the XCAT phantom show that the proposed method significantly improves the basis materials' image quality over direct matrix inversion and over PWLS decomposition with regularization involving a total nuclear variation (TNV) term and an ℓ0 norm term (PWLS-TNV-ℓ0).
Optimized spatial-spectral CT for multi-material decomposition
Spectral CT is an emerging modality that uses a data acquisition scheme with varied spectral responses to provide enhanced material discrimination in addition to the structural information of conventional CT. Existing clinical and preclinical designs with this capability include kV-switching, split-filtration, and dual-layer detector systems that provide two spectral channels of projection data. In this work, we examine an alternate design based on a spatial-spectral filter. This source-side filter is made up of a linear array of materials that divide the incident x-ray beam into spectrally varied beamlets. This design allows for any number of spectral channels; however, each individual channel is sparse in the projection domain. Model-based iterative reconstruction methods can accommodate such sparse spatial-spectral sampling patterns and allow for the incorporation of advanced regularization. With the goal of an optimized physical design, we characterize the effects of design parameters, including filter tile order and filter tile width, on material decomposition performance. We present results of numerical simulations that characterize the impact of each design parameter using a realistic CT geometry and noise model to demonstrate feasibility. Results for filter tile order show little change, indicating that filter order is a low-priority design consideration. We observe improved performance for narrower filter widths; however, the performance drop-off is relatively flat, indicating that wider filter widths are also feasible designs.
Photon-counting spectral CT with de-noised principal component analysis (PCA)
Huiqiao Xie, Yufei Liu, Thomas Thuering, et al.
While energy-integration spectral CT, with its capability for material decomposition, has been providing added value to diagnostic CT imaging in the clinic, photon-counting spectral CT is gaining momentum in research and development, with the potential of overcoming more clinically relevant challenges. In practice, photon-counting spectral CT provides the opportunity for principal component analysis (PCA) to effectively extract information from the raw data. However, PCA in spectral CT may suffer from high noise induced by photon starvation, especially in energy bins at the high-energy end. Via phantom and small-animal studies, we investigate the feasibility of PCA in photon-counting spectral CT and the benefit offered by de-noising with the Content-Oriented Sparse Representation method.
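For orientation, PCA of the energy-bin images reduces to an SVD of the mean-centered bin-by-pixel matrix. A minimal sketch on toy Poisson data (the paper's Content-Oriented Sparse Representation de-noising is a separate step not shown):

```python
# PCA of photon-counting energy-bin images via SVD.
import numpy as np

bins = np.random.default_rng(0).poisson(50, (6, 128, 128)).astype(float)
X = bins.reshape(6, -1)                       # one row per energy bin
Xc = X - X.mean(axis=1, keepdims=True)        # center each bin image
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                         # keep dominant components
loadings = U[:, :k] * S[:k]                   # per-bin loadings
components = Vt[:k].reshape(k, 128, 128)      # principal-component images
```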
Known-component model-based material decomposition for dual energy imaging of bone compositions in the presence of metal implant
Dual energy computed tomography (DE CT) is a promising technology for the assessment of bone composition. One potential application involves the evaluation of fracture healing using longitudinal measurements of callus mineralization. However, imaging of fractures is often challenged by the presence of metal fixation hardware. In this work, we report on a new simultaneous DE reconstruction-decomposition algorithm that integrates the previously introduced Model-Based Material Decomposition (MBMD) with a Known-Component (KC) framework to mitigate metal artifacts. The algorithm was applied to DE data obtained on a dedicated extremity cone-beam CT (CBCT) system with capability for weight-bearing imaging. To acquire DE projections in a single gantry rotation, we exploited the unique multisource design of the system, in which three X-ray sources are mounted parallel to the axis of rotation. The central source provided high-energy (HE) data at 120 kVp, while the two remaining sources were operated at a low energy (LE) of 60 kVp. This novel acquisition trajectory further motivates the use of MBMD to accommodate the complex DE sampling pattern. The algorithm was validated in a simulation study using a digital extremity phantom. The phantom consisted of a water background with an insert containing varying concentrations of calcium (50-175 mg/mL). Two configurations of titanium implants were considered: a fixation plate and an intramedullary nail. The accuracy of calcium-water decompositions obtained with the proposed KC-MBMD algorithm was compared to MBMD without a metal component model. Metal artifacts were almost completely removed by KC-MBMD. Relative absolute errors of calcium concentration in the vicinity of metal were 6%-31% for KC-MBMD (depending on the calcium insert and implant configuration), comparing favorably to 48%-273% for MBMD. Moreover, the accuracy of concentration estimates for KC-MBMD in the presence of a metal implant approached that of MBMD in a configuration without an implant (6%-23%). The proposed algorithm achieved accurate DE material decomposition in the presence of metal implants using a non-conventional, axial multisource DE acquisition pattern.
SPECT Imaging
Investigation of a Monte Carlo simulation and an analytic-based approach for modeling the system response for clinical I-123 brain SPECT imaging
Benjamin Auer, Navid Zeraatkar, Jan De Beenhouwer, et al.
Accurate system response modeling has been proven to be an essential component of SPECT image reconstruction, with its usage leading to overall improvement of image quality. The aim of this work was to investigate, using an XCAT brain perfusion phantom, the imaging performance of two modeling strategies, one based on analytic techniques and the other based on GATE Monte Carlo simulation. In addition, an efficient forced-detection approach to improve the overall simulation efficiency was implemented and its performance evaluated. We demonstrate that accurate modeling of the system matrix generated by Monte Carlo simulation for iterative reconstruction leads to superior performance compared to analytic modeling in the case of clinical 123I brain imaging. It was also shown that the forced-detection approach provided quantitative and qualitative enhancement of the reconstruction.
Preliminary investigation of AdaptiSPECT-C designs with square or square and hexagonal detectors employing direct and oblique apertures
We report our investigation of system designs and 3D reconstruction for a dedicated brain-imaging SPECT system using multiple square, or square and hexagonal, detector modules. The system employs shuttering to vary which of multiple pinhole apertures are enabled to pass photons through to irradiate the detectors. Both multiplexed and non-multiplexed irradiation by the pinholes are investigated. Sampling is assessed by simulated imaging of a uniform activity concentration in a spherical tub filling the VOI, and of a tailored Defrise phantom consisting of a series of axially aligned activity-containing slabs. Potential image quality for clinical imaging is assessed through simulated imaging of an XCAT brain phantom with an activity distribution simulating perfusion imaging.
GPU-accelerated generic analytic simulation and image reconstruction platform for multi-pinhole SPECT systems
We introduce a generic analytic simulation and image reconstruction software platform for multi-pinhole (MPH) SPECT systems. The platform is capable of modeling common as well as sophisticated MPH designs and complex data acquisition schemes. Graphics processing unit (GPU) acceleration was utilized to achieve high-performance computing. Herein, we describe the software platform and provide verification studies of the simulation and image reconstruction software.
Other Novel Applications and Approaches
Exact inversion of an integral transform arising in passive detection of gamma-ray sources with a Compton camera
In this paper, we address exact inversion of the integral transform, called the Compton (or cone) transform, that maps a function on R^3 to its integrals over conical surfaces. The Compton transform arises in passive detection of gamma-ray sources with a Compton camera, which has promising applications in medical and industrial imaging as well as in homeland security imaging and astronomy. We present a two-step method that uses the full set of available projections for inverting the Compton transform: first the recovery of the Radon transform from the Compton transform, and then the Radon transform inversion. The first step can be done in various ways by means of a generalization of a previously obtained result relating the Compton and Radon transforms. This leads to a variety of Compton inversion formulas that are independent of the geometry of the detectors as long as a generous admissibility condition is met. We formulate one inversion formula that is simpler and performs well in the case of noisy data, as demonstrated by a numerical simulation.
Task-driven acquisition in anisotropic x-ray dark-field tomography
Anisotropic X-ray Dark-field Tomography (AXDT) is a novel imaging modality aimed at the reconstruction of spherical scattering functions in every three-dimensional volume element, based on the directional X-ray dark-field contrast as measured by an X-ray grating interferometer. In this work, we re-derive a detectability index for the AXDT forward model directly using the spherical function formulation, and use it to compute optimized acquisition trajectories using a greedy algorithm. The results demonstrate that the optimized trajectories can represent task-specific features in AXDT accurately using only a fraction of the data.
A step toward a clinically viable ABI phase-contrast imaging: double emission line artifacts correction
Oriol Caudevilla, Wei Zhou, Jovan G. Brankov
Analyzer-based phase-contrast imaging (ABI) is a promising X-ray imaging technique with huge potential for soft tissue imaging. Unfortunately, ABI requires a quasi-monochromatic beam, which limits the beam photon budget, so imaging requires a long exposure time. In classical ABI, only one K-alpha emission line is permitted. Relaxing this requirement by utilizing both K-alpha emission lines for imaging can significantly reduce the exposure time. However, accepting both emission lines introduces a double-image artifact due to the energy-angular difference between the emission lines. In this paper we introduce a method to correct for such artifacts and thereby overcome one of the main design limitations of analyzer-based systems, toward achieving high-quality phase-contrast mammograms in a clinically relevant time.
Registration methods to enable augmented reality-assisted 3D image-guided interventions
Augmented reality (AR) can be used to visualize virtual 3D models of medical imaging in actual 3D physical space. Accurate registration of these models onto patients will be essential for AR-assisted image-guided interventions. In this study, registration methods were developed, and registration times for aligning a virtual 3D anatomic model of patient imaging onto a CT grid commonly used in CT-guided interventions were compared. The described methodology enabled automated and accurate registration within seconds using computer vision detection of the CT grid as compared to minutes using user-interactive registration methods. Simple, accurate, and near instantaneous registration of virtual 3D models onto CT grids will facilitate the use of AR for real-time procedural guidance and combined virtual/actual 3D navigation during image-guided interventions.
Deep Learning for Image Denoising and Characterization
Feature aware deep learning CT image reconstruction
In conventional CT, it is difficult to generate consistent organ-specific noise and resolution with a single reconstruction kernel. Therefore, it is in principle necessary to reconstruct a single scan multiple times using different kernels in order to obtain clinical diagnostic information for different anatomies. In this paper, we provide a deep learning solution which can obtain an organ-specific noise and resolution balance with a single reconstruction. We propose image reconstruction using a deep convolutional neural network (DCNN) trained with a feature-aware reconstruction target, which integrates desirable features from multiple reconstructions, each of which provides the optimal noise and resolution tradeoff for one specific anatomy. The performance of our proposed method has been verified with actual clinical data. The results show that our method can outperform standard model-based iterative reconstruction (MBIR) by offering consistent noise and resolution properties across different organs using only a single image reconstruction.
Low-dose CT image denoising without high-dose reference images
Reducing the radiation dose of computed tomography (CT), and thereby the potential risk to patients, is desirable in CT imaging. Deep neural networks have been proposed to reduce noise in low-dose CT images. However, the conventional way to train a neural network requires high-dose CT images as the reference. Recently, a noise-to-noise (N2N) training method was proposed, which showed that a neural network can be trained with only noisy images. In this work, we applied N2N training to low-dose CT denoising. Our results show that N2N training works in both the count and image domains without using any high-dose reference images.
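The core of N2N training is that one noisy realization serves as the input and a second, independent realization of the same object serves as the target; a minimal PyTorch sketch with random stand-in tensors:

```python
# Minimal Noise2Noise-style training sketch (PyTorch). `noisy_a` and
# `noisy_b` stand in for two independently noisy realizations of the same
# CT slice; no clean or high-dose reference image is used anywhere.
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    noisy_a = torch.randn(4, 1, 64, 64)   # stand-in noisy realization 1
    noisy_b = torch.randn(4, 1, 64, 64)   # stand-in noisy realization 2
    optimizer.zero_grad()
    # One noisy realization is the input, the other serves as the target.
    loss = loss_fn(denoiser(noisy_a), noisy_b)
    loss.backward()
    optimizer.step()
```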
Deep learning based adaptive filtering for projection data noise reduction in x-ray computed tomography
In conventional x-ray CT imaging, noise reduction is often applied to raw data to remove noise and improve reconstruction quality. Adaptive data filtering is one noise reduction method that suppresses data noise using a local smoothing kernel. The design of the local kernel is important and can greatly affect the reconstruction quality. In this report we develop a deep learning convolutional neural network that predicts the local kernel automatically and adaptively to the data statistics. The proposed network is trained to directly generate kernel parameters and hence allows fast data filtering. We compare our method to the existing filtering method. The results show that our deep-learning-based method is more efficient and robust across a variety of scan conditions.
Population and individual information guided PET image denoising using deep neural network
Jianan Cui, Kuang Gong, Ning Guo, et al.
Positron emission tomography (PET) images still suffer from low signal-to-noise ratio (SNR) due to various physical degradation factors. Recently, deep neural networks (DNNs) have been successfully applied to medical image denoising tasks when a large number of training pairs is available. Previously, the deep image prior framework [1] showed that individual information can be enough to train a denoising network, with the noisy image itself as the training label. In this work, we propose to improve PET image quality by jointly employing population and individual information in a DNN. The population information is utilized by pre-training the network on a group of patients. The individual information is introduced during the testing phase by fine-tuning the population-information-trained network. Unlike traditional DNN denoising, fine-tuning during the testing phase is possible in this framework because the noisy PET image itself is treated as the training label. Quantification results based on clinical PET/MR datasets containing thirty patients demonstrate that the proposed framework outperforms Gaussian, non-local mean, and deep image prior denoising methods.
Comparison of deep learning and human observer performance for lesion detection and characterization
The detection and characterization of abnormalities in clinical imaging is of the utmost importance for patient diagnosis and treatment. In this paper, we present a comparison of convolutional neural network (CNN) and human observer performance on a simulated lesion detection and characterization task. We apply both conventional performance metrics, including accuracy, and non-conventional metrics such as lift charts to perform qualitative and quantitative comparison of each type of observer. We find that the CNN generally outperforms the human observers, particularly at high noise levels. However, high noise correlation reduces the relative performance of the CNN, and human observer performance is comparable to the CNN under these conditions. These findings extend into the field of diagnostic radiology, where the adoption of deep learning is starting to become widespread. Considering the applications for which deep learning is most effective is therefore critical to this development.
Quantitative Methods in PET
A linear estimator for timing calibration in time-of-flight PET
Michel Defrise
We study the performance of a method for the timing calibration of a TOF scanner using arbitrary phantom or patient data. The method uses an initial non-TOF reconstruction and estimates the timing offsets of the detectors by weighted least-squares fitting. Assuming bias-free and noise-free estimates of scatter and randoms, the method is shown to yield an unbiased estimate. In addition, a theorem and numerical results show that this simple method is close to optimal for the type of phantoms typically used for calibration.
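The estimation idea can be sketched with synthetic data: after the non-TOF reconstruction predicts the geometric time difference for each LOR, the leftover residual between detectors (i, j) is modeled as o_i minus o_j and the offsets are fit by weighted least squares (all names, sizes, and noise levels below are illustrative).

```python
# Hedged sketch of the linear estimation idea with synthetic stand-ins.
import numpy as np

n_det, n_events = 50, 20000
rng = np.random.default_rng(0)
true_offsets = rng.normal(0.0, 0.1, n_det)   # ns; unknown in practice
i = rng.integers(0, n_det, n_events)
j = rng.integers(0, n_det, n_events)
resid = true_offsets[i] - true_offsets[j] + rng.normal(0.0, 0.3, n_events)
weights = np.full(n_events, 1.0)             # e.g. inverse variance per event

# Each event contributes a design-matrix row e_i - e_j (sparse in practice).
A = np.zeros((n_events, n_det))
A[np.arange(n_events), i] += 1.0
A[np.arange(n_events), j] -= 1.0
w = np.sqrt(weights)
sol, *_ = np.linalg.lstsq(w[:, None] * A, w * resid, rcond=None)
sol -= sol.mean()                            # fix the global-shift ambiguity
```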
Joint reconstruction of activity and attenuation with autonomous scaling for time-of-flight PET
Yusheng Li, Samuel Matej, Joel S. Karp
Recent research showed that the attenuation can be determined from emission data, jointly with the reconstructed activity images, up to a scaling constant when utilizing the time-of-flight (TOF) information. We aim to develop practical joint reconstruction for clinical TOF PET scanners, with autonomous scaling determination and joint TOF scatter estimation from TOF PET data, to obtain quantitatively accurate activity and attenuation images. In this work, we present a joint reconstruction of activity and attenuation based on MLAA with autonomous scaling determination. Our idea is to use a segmented region in a reconstructed attenuation image with known attenuation, e.g., the liver in patient imaging. First, we construct a unit attenuation medium which has a support similar (though not necessarily identical) to that of the imaged patient. All detectable LORs intersecting the unit medium have an attenuation factor of e^(-1) ≈ 0.3679, i.e., the line integral is one. The scaling factor can then be determined by comparing the reconstructed attenuation image and the unit attenuation medium within the segmented known region(s). A three-step iterative joint reconstruction algorithm is developed. In each iteration, first the activity is updated using TOF OSEM from TOF list-mode data; then the attenuation image is updated using XMLTR, a modified MLTR, from non-TOF LOR sinograms; finally, a scaling factor is determined based on the segmented region(s), and both activity and attenuation images are updated using the estimated scaling. We implement the joint reconstruction with autonomous scaling and evaluate it using 3-D simulations. The joint reconstructions are also compared with a reference reconstruction using the true attenuation image. In summary, we present a joint reconstruction of activity and attenuation with autonomous scaling. The scaling determination at each iteration allows the joint reconstruction to obtain a unique and faithful solution of activity and attenuation.
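A minimal sketch of the scaling step, assuming a segmented region with known attenuation (the liver-like value of 0.096 cm^-1 at 511 keV and all array names are illustrative assumptions):

```python
# Hedged sketch: map the reconstructed mean attenuation inside a segmented
# region with known attenuation onto the known value, then rescale the map.
import numpy as np

def autonomous_scale(mu_recon, region_mask, mu_known=0.096):  # 1/cm, 511 keV
    """Scale factor mapping the region's reconstructed mean to mu_known."""
    return mu_known / mu_recon[region_mask].mean()

mu_recon = np.random.rand(64, 64, 64) * 0.05 + 0.07  # stand-in attenuation map
region = np.zeros(mu_recon.shape, dtype=bool)
region[20:40, 20:40, 20:40] = True                   # stand-in segmentation
s = autonomous_scale(mu_recon, region)
mu_scaled = s * mu_recon
```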
Dynamic PET imaging with the generalized method of moments
Joaquín L. Herraiz, Miguel Angel Morcillo, Jose Manuel Udias
Dynamic PET imaging is usually performed by dividing the acquired data into time frames which are reconstructed independently and then fitted using a kinetic model. This approach requires many image reconstructions and data corrections, and the use of short frames usually produces noisy images with significant positive bias. In this work we propose to use a generalized version of the method of moments (MoM), already in use in other fields such as fluorescence decay studies, to address these problems. In the MoM, the events of the list-mode data are weighted based on the time they were detected and stored in sinograms. These sinograms are reconstructed with standard algorithms, and the dynamic parameters of interest are derived from the resulting images using algebraic relations, which depend on the specific dynamic model and the selected set of weights. The method was evaluated with data from preclinical and clinical scanners on several dynamic studies, such as a decaying 13N phantom acquired with the Biograph TP scanner and a Patlak analysis of the myocardium region of a mouse injected with 18F-FDG, in all cases reaching results similar to those obtained using frames. We also successfully tested the MoM with more complex dynamic models on simulated data obtained with dPETSTEP. In summary, the MoM applied to dynamic PET has the potential to be a very effective way to reduce the computational cost and bias in many different studies.
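The weighting step can be sketched directly: each list-mode event deposits t^k into the sinogram bin of its LOR, giving one moment sinogram per order k (k = 0 reproduces the ordinary counts sinogram); the arrays below are synthetic stand-ins.

```python
# Hedged sketch of moment-sinogram accumulation from list-mode events.
import numpy as np

n_bins, n_events, orders = 1000, 50000, 3
rng = np.random.default_rng(1)
lor_bin = rng.integers(0, n_bins, n_events)   # sinogram bin of each event
t = rng.uniform(0.0, 600.0, n_events)         # detection time in seconds

moment_sinos = np.zeros((orders, n_bins))
for k in range(orders):
    # np.add.at accumulates t**k into each event's bin (k=0 gives counts).
    np.add.at(moment_sinos[k], lor_bin, t**k)
```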
Multiresolution spatiotemporal mechanical model of the heart as a prior to constrain the solution for 4D models of the heart
Grant T. Gullberg, Alexander I. Veress, Uttam M. Shrestha, et al.
In several nuclear cardiac imaging applications (SPECT and PET), images are formed by reconstructing tomographic data using an iterative reconstruction algorithm with corrections for physical factors involved in the imaging detection process and for cardiac and respiratory motion. The physical factors are modeled as coefficients in the matrix of a system of linear equations and include attenuation, scatter, and spatially varying geometric response. The solution to the tomographic problem involves inverting this system matrix, which requires designing an iterative reconstruction algorithm with a statistical model that best fits the data acquisition; the most appropriate model is based on a Poisson distribution. Using Bayes' theorem, an iterative reconstruction algorithm is designed to determine the maximum a posteriori estimate of the reconstructed image, with constraints that maximize the Bayesian likelihood function for the Poisson statistical model. The a priori distribution is formulated as the joint entropy (JE), measuring the similarity between the gated cardiac PET image and the cardiac MRI cine image modeled as an FE mechanical model. The developed algorithm shows the potential of using an FE mechanical model of the heart derived from a cardiac MRI cine scan to constrain solutions of gated cardiac PET images.
Poster Session I
Analysis of scatter artifacts in cone-beam CT due to scattered radiation of metallic objects
Domenico Iuso, Robert Frysch, Tim Pfeiffer, et al.
Cone-beam computed tomography (CBCT) is a widely used technique for diagnostic or monitoring purposes. Compared to traditional CT, CBCT is more affected by scatter artifacts because of the large volume irradiated by the beam. This paper assesses the extent to which metallic implants degrade CBCT image quality through scattered radiation. The evaluation method is based on Monte-Carlo (MC) simulations of the physical processes that X-ray photons undergo in typical CBCT setups, in the presence and absence of highly scattering metallic implants (coils used for treatment of aneurysms, and pacemakers). The results show that the scattered radiation caused by metallic objects reaching the detector produces only slight degradation of CBCT image quality; moreover, the intrinsic absorption and beam-hardening effect of these implants are demonstrated to have a bigger impact on the overall image fidelity.
CTL: modular open-source C++-library for CT-simulations
Tim Pfeiffer, Robert Frysch, Richard N. K. Bismark, et al.
Simulated data can play an important role in many research topics in the field of X-ray computed tomography (CT). Most existing tools lack the flexibility, ease of use, or extensibility with custom routines needed to fulfill all researchers' needs. We propose a novel, modular C++ open-source simulation toolkit that provides full flexibility for system setups, acquisition geometry, forward projection models, and the physical effects to be considered in the simulation. All mentioned aspects are freely customizable (and extendable), granting users full control to tailor the toolkit to their specific needs. Here, we present an early version which is under active development; with this, we want to encourage the community to provide feedback and suggestions at an early stage of development.
Photon-counting CBCT iterative reconstruction for adaptive proton therapy
Takashi Yamaguchi, Kiyotaka Akabori
Cone beam computed tomography (CBCT) is used to determine the patient position in proton therapy, but its image quality is low compared to that of conventional CT because data measured by the two-dimensional detector used in CBCT contain scattered X-ray components. Correcting for scattered X-rays using the Klein-Nishina formula can improve CBCT image quality, but the formula requires the atomic number and number density of the substances. In this work, we developed a photon-counting image reconstruction method for estimating the atomic number and number density using the energy information of the X-rays. When the developed method was applied to an X-ray energy spectrum of a gantry-mounted CBCT simulated with a Monte Carlo code, it was possible to distinguish soft tissues from water in the simulated object, which was not possible without the energy information. The atomic number and number density obtained with our method allow the proton stopping power to be calculated more accurately, which can contribute to improving dose calculation accuracy.
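For reference, the Klein-Nishina differential cross-section used in such scatter corrections can be evaluated directly; the snippet below is the textbook formula, not the authors' estimation procedure.

```python
# Klein-Nishina differential cross-section per solid angle, for a photon of
# energy E (keV) Compton-scattering through angle theta (radians).
import numpy as np

R_E = 2.8179403262e-13        # classical electron radius in cm
M_E_C2 = 510.998950           # electron rest energy in keV

def klein_nishina(E_keV, theta):
    """d(sigma)/d(Omega) in cm^2/sr."""
    k = E_keV / M_E_C2
    ratio = 1.0 / (1.0 + k * (1.0 - np.cos(theta)))   # E'/E
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta)**2)

# Example: 80 keV photon scattered through 30 degrees.
dsdo = klein_nishina(80.0, np.deg2rad(30.0))
```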
A fast gradient-based algorithm for image reconstruction in inverse geometry CT architecture with sparse distributed sources
Frédéric Jolivet, Clarisse Fournier, Andrea Brambilla
Conventional cone beam CT (CBCT) uses a single source and a large detector to acquire a full sinogram of the object. A multi-source inverse geometry CT (IGCT) system instead uses several sources and a small detector to acquire several partial sinograms of the object. For technological, financial, and medical reasons, reducing the number of sources and the detector size is attractive, but it requires solving an ill-posed and ill-conditioned problem. We propose a regularized iterative algorithm that reconstructs the object volume from partial sinograms acquired with an optimized multi-source IGCT system; we demonstrate the performance of the proposed algorithm as the detector size and the number of sources are reduced. Realistically simulated CT data are reconstructed with the proposed algorithm, and the results are compared to those obtained by filtered backprojection (FBP) and by maximum likelihood estimation to show the impact of the regularization.
Clipping-induced bias correction for low-dose CT imaging
Ultra-low-dose CT scanning produces non-ideal data with many problems when the number of photons reaching the detector is very small. One such problem is the bias introduced by clipping negative measurement values prior to the log operation. This paper proposes a correction method for this clipping-induced bias, in particular for the case when the original un-clipped measurements are no longer accessible.
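A small numerical illustration of the effect (synthetic numbers, not from the paper): with additive electronic noise at very low counts, clipping negatives at a small floor shifts the pre-log mean upward, and this shift propagates through the log into a bias in the line integrals.

```python
# Hedged demo of clipping-induced bias at very low counts.
import numpy as np

rng = np.random.default_rng(2)
true_mean = 3.0                                    # very few detected photons
raw = rng.poisson(true_mean, 200000) + rng.normal(0.0, 2.0, 200000)

clipped = np.clip(raw, 0.1, None)                  # clip negatives before log
# The clipped mean exceeds the true mean; the log step inherits this bias.
print("true mean:", true_mean, " mean after clipping:", clipped.mean())
```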
Multislice anthropomorphic model observer for detectability evaluation on breast cone beam CT images
We predict human observer performance for lesion detection on breast cone beam computed tomography (CBCT) images using single-slice and multislice model observers with a constant internal noise level. We evaluate human observer performance on single-slice and multislice simulated breast CBCT images with a 1 mm signal, and predict the performance using model observers. We use a channelized Hotelling observer (CHO) and a nonprewhitening observer with eye-filter (NPWE). We employ dense difference-of-Gaussian (D-DOG) channels for the CHO, and an eye-filter with peak value at 7 cyc/deg for the NPWE. We include channel internal noise for the CHO and decision-variable internal noise for the NPWE. For single-slice images, D-DOG CHO and NPWE predict human observer performance well. For multislice images, D-DOG CHO overestimates human observer performance, while NPWE predicts human observer performance successfully.
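A hedged sketch of the CHO computation with additive channel internal noise; the random channel matrix, stand-in images, and internal-noise scaling below are illustrative substitutes for the D-DOG channels and calibrated noise level used in the paper.

```python
# Hedged channelized Hotelling observer sketch with channel internal noise.
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_ch, n_samp = 64 * 64, 10, 500
U = rng.normal(size=(n_pix, n_ch))            # stand-in channel matrix
signal = np.zeros(n_pix); signal[2080] = 5.0  # tiny stand-in lesion

absent = rng.normal(size=(n_samp, n_pix))     # signal-absent images
present = absent + signal                     # signal-known-exactly pairs

va, vp = absent @ U, present @ U              # channel outputs
K = 0.5 * (np.cov(va.T) + np.cov(vp.T))       # pooled channel covariance
K += 0.2 * np.diag(np.diag(K))                # internal noise (alpha assumed)
dv = vp.mean(0) - va.mean(0)
d_prime = float(np.sqrt(dv @ np.linalg.solve(K, dv)))
```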
Low-dose photon counting CT reconstruction bias reduction with multi-energy alternating minimization algorithm
Jingwei Lu, Shuangyue Zhang, David G. Politte, et al.
Photon counting CT (PCCT) is an x-ray imaging technique that has undergone great development in the past decade. PCCT has the potential to improve dose efficiency and low-dose performance. In this paper, we propose a statistics-based iterative algorithm to perform a direct reconstruction of material-decomposed images. Compared with the conventional sinogram-based decomposition method, which has degraded performance in low-dose scenarios, the multi-energy alternating minimization algorithm for photon counting CT (MEAM-PCCT) can generate accurate material-decomposed images with much smaller biases.
Noise reduction in photon-counting CT using frequency-dependent optimal weighting
Mats Persson, Norbert J. Pelc
Spectral computed tomography (CT) allows optimizing image quality by combining the data in several energy channels with optimal weighting factors. In an improvement of this technique, the weighting factors are made dependent on spatial frequency, and previous work has shown that this can improve detectability for a simple detector model. In this work, we investigate the achievable detectability improvement from frequency-dependent weighting for realistic models of photon-counting detectors. We use a Monte-Carlo based simulation model to obtain point-spread functions and autocovariances for two detector models with 0.5 × 0.5 mm2 pixels, one CdTe-based with five energy bins and one silicon-based with eight energy bins. We generated noise-only images for two different energy weighting schemes: one where optimal weights were selected individually for each spatial frequency, and one where the weights optimal for zero frequency were applied to all frequencies. The modulation transfer function was set equal in both schemes. Results show that frequency-based weighting can decrease noise variance by 11% for Si and by 38% for CdTe, for an edge-enhancing MTF, demonstrating that optimal frequency-dependent weighting has the capability of reducing noise in high-resolution CT images.
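The per-frequency optimization can be sketched as follows: at each spatial frequency, with per-bin signal responses m(f) and bin noise covariance C(f), the noise-minimizing weights for a fixed signal are proportional to C(f)^-1 m(f); the spectra below are synthetic stand-ins, not detector simulations.

```python
# Hedged sketch of per-frequency optimal energy-bin weighting.
import numpy as np

n_freq, n_bins = 128, 5
rng = np.random.default_rng(4)
m = rng.uniform(0.5, 1.5, (n_freq, n_bins))        # signal response per bin
C = np.stack([np.diag(rng.uniform(0.5, 2.0, n_bins)) for _ in range(n_freq)])

w_opt = np.empty((n_freq, n_bins))
for f in range(n_freq):
    w = np.linalg.solve(C[f], m[f])                 # C(f)^-1 m(f)
    w_opt[f] = w / (m[f] @ w)                       # unit signal response
```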
Reduction of beam hardening induced metal artifacts using consistency conditions
Shiras Abdurahman, Robert Frysch, Georg Rose
Metal artifacts are increasingly prevalent in CT reconstructed volumes due to the presence of implants in the aging population. A high degree of beam hardening in severely attenuating objects is one of the main contributors to strong metal artifacts. In this paper, we propose a method to reduce metal artifacts due to beam hardening by enforcing consistency conditions on the uncorrected polychromatic projections. Our results from clinical datasets show a reduction of artifacts after the proposed correction.
Beam hardening correction using pair-wise fan beam consistency conditions
Shiras Abdurahman, Robert Frysch, Steffen Melnik, et al.
The polychromatic X-ray spectrum and the energy-dependent material attenuation coefficients generate beam hardening artifacts in CT reconstructed images. The artifacts can be corrected by projection linearization using polynomials. Recently, consistency conditions derived from Grangeat’s fundamental relation have been successfully employed for estimating the correction polynomials without calibration or prior knowledge. In this paper, we show that the polynomials can also be computed by enforcing pair-wise fan beam consistency conditions on cone beam projections. Our preliminary results from simulation and real data experiments show the significant reduction of first-order artifacts after correction with the proposed method.
Bone sparsity model for computed tomography image reconstruction
Emil Y. Sidky, Holly L. Stewart, Christopher E. Kawcak, et al.
Gradient sparsity regularization is an effective way to mitigate artifacts due to sparse-view sampling or data noise in computed tomography (CT) image reconstruction. The effectiveness of this type of regularization relies on the scanned object being approximately piecewise constant. Trabecular bone tissue is also technically piecewise constant, but the fine internal structure varies at a spatial scale that is smaller than the resolution of a typical CT scan; thus it is not clear what form of sparsity regularization is most effective for this type of tissue. In this conference submission, we develop a pixel-sparsity regularization model, which is observed to be effective at reducing streak artifacts due to sparse-view sampling and noise. Comparison with gradient sparsity regularization is also shown.
Edge-masked CT image reconstruction from limited data
Victor Churchill, Anne Gelb
This paper presents a preliminary investigation of an iterative inversion algorithm for computed tomography image reconstruction which, according to early results, performs well in terms of accuracy and speed with limited data. The computational method combines an image domain technique and statistical reconstruction by using an initial filtered backprojection reconstruction to create a binary edge mask, which is then used in a weighted ℓ2-regularized reconstruction. Both theoretical and empirical results are offered to support the algorithm. While a simple forward model is used in this paper and physical edges serve as the sparse feature, the proposed method is flexible and can accommodate any forward model and sparsifying transform.
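A minimal sketch of the mask construction, assuming the edge set is taken as the top decile of the gradient magnitude of the initial FBP image (the threshold choice and array names are illustrative):

```python
# Hedged sketch: edge mask from an initial FBP image, then penalty weights
# for the weighted l2 regularizer (zero at edges so true discontinuities
# are not penalized; the solver itself is omitted).
import numpy as np

def edge_mask(fbp_image, frac=0.9):
    """True where the gradient magnitude is in the top (1 - frac) quantile."""
    gy, gx = np.gradient(fbp_image)
    mag = np.hypot(gx, gy)
    return mag > np.quantile(mag, frac)

fbp = np.random.rand(128, 128)          # stand-in initial FBP reconstruction
mask = edge_mask(fbp)
penalty_weights = np.where(mask, 0.0, 1.0)
```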
Real-time GPU implementation of a weighted filtered back-projection algorithm for stationary gantry CT reconstruction
William Thompson, Edward Morton, Alexander Katsevich, et al.
We present details of a real-time implementation of a new algorithm designed to reduce streak artifacts in switched-source stationary gantry CT reconstruction. The algorithm is of the filtered back-projection type, and uses a voxel-specific weighting function to account for the non-uniform distribution of illumination angles caused by such a scanning geometry. The main challenge in developing a real-time implementation is the storage and memory bandwidth requirements imposed by the weighting function. This has been addressed by storing weights at a low precision and reduced resolution, and using interpolation to recover weights at the full resolution. Results demonstrate real-time performance of the algorithm at a realistic problem size, running on a low-cost consumer grade laptop.
Toward quantitative short-scan cone beam CT using shift-invariant filtered-backprojection with equal weighting and image domain shading correction
Linxi Shi, Lei Zhu, Adam Wang
Quantitative short-scan cone beam CT (CBCT) is impeded by streaking and shading artifacts. Streaking artifacts can be caused by the approximate handling of data redundancy in short-scan FDK with Parker's weighting, while shading artifacts are caused by scatter and beam hardening effects. In this work, we improve the image quality of short-scan CBCT by removing the streaking artifacts using a previously proposed algorithm within a framework of filtered backprojection with shift-invariant filtering and equal weighting. An efficient image-domain shading correction using sparse samples is subsequently applied to further improve image uniformity. Improved image quality is shown for this approach, both in visual appearance and in quantitative measurements, on three clinical head scans.
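For context, the short-scan baseline referred to above uses Parker's weighting; the textbook Parker weight for a scan over pi + 2*gamma_m is reproduced below for reference (this is the standard formula, not the authors' shift-invariant replacement).

```python
# Standard Parker weighting: beta = view angle, gamma = ray fan angle,
# gamma_m = half fan angle. Smoothly handles the doubly and singly
# measured regions of a short scan.
import numpy as np

def parker_weight(beta, gamma, gamma_m):
    if beta < 0.0:
        return 0.0
    if beta <= 2.0 * (gamma_m - gamma):
        return np.sin(np.pi / 4.0 * beta / (gamma_m - gamma)) ** 2
    if beta <= np.pi - 2.0 * gamma:
        return 1.0
    if beta <= np.pi + 2.0 * gamma_m:
        return np.sin(np.pi / 4.0 * (np.pi + 2.0 * gamma_m - beta)
                      / (gamma_m + gamma)) ** 2
    return 0.0
```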
Double-helix trajectory for image guided radiation therapy: geometry and image reconstruction
Zhicong Yu, Chuanyong Bai, Daniel Gagnon
Cone-beam CT (CBCT) is a prevalent tool for image-guided radiation therapy (IGRT). It can be used for patient positioning and dose calculation, which are needed at the early and later stages of each fraction of treatment, respectively. The image-quality requirement for patient positioning is less demanding than that for dose calculation. This work introduces a two-pass data acquisition approach for CBCT imaging in IGRT, with the first pass being a left-handed helix and the second pass a right-handed helix. The first scan alone produces images for patient positioning, whereas the two scans together produce quality-improved images for dose calculation. We refer to this two-pass data acquisition geometry as the double-helix trajectory. We propose a dedicated image reconstruction algorithm for the double-helix trajectory and demonstrate the algorithm via computer simulations.
Combination of CT motion simulation and deep convolutional neural networks with transfer learning to recover Agatston scores
Thomas Wesley Holmes, Kevin Ma, Amir Pourmorteza
Motion of the coronary arteries during the cardiac cycle can distort the reconstructed CT image and negatively affect the evaluation of calcified plaques. These movements manifest as motion artifacts. Such artifacts and their corresponding stationary calcifications were used to train a deep convolutional neural network (DCNN). We used reported ranges of coronary artery motion to create a moving computer phantom of calcified plaques, created a computer model of a CT scanner, and generated CT projections and reconstructions of stationary and moving plaques. CT images with artifacts and stationary images were used as the inputs and targets of the DCNN, respectively. To control the progression of training, transfer learning was implemented to gradually introduce increasingly complicated images. Regression plots for a representative data set show a slope of 1.85 (r2=0.72) before network recovery versus 1.08 (r2=0.90) after the DCNN. DCNNs demonstrate a promising approach to the complicated problem of CT motion correction in computer simulations; further evaluation with actual motion artifacts is needed.
A sinogram inpainting method based on generative adversarial network for limited-angle computed tomography
Ziheng Li, Wenkun Zhang, Linyuan Wang, et al.
Limited-angle computed tomography (CT) image reconstruction is a challenging problem in the CT field. With the development of deep learning, generative adversarial networks (GANs) perform well in image restoration by approximating the distribution of the training data. In this paper, we propose an effective GAN-based inpainting method to restore the missing sinogram data in limited-angle scanning. To estimate the missing data, we design the generator and discriminator of a patch-GAN and train the network to learn the data distribution of the sinogram. We obtain the reconstructed image from the restored sinogram by filtered backprojection and by the simultaneous algebraic reconstruction technique with total variation. Experimental results show that serious artifacts caused by missing projection data can be reduced by the proposed method, which is promising for solving the reconstruction problem of a 60° limited scanning angle.
Bone induced artifacts elimination using two-step convolutional neural network
Bin Su, Yanyan Liu, Yifeng Jiang, et al.
Bone-induced artifacts caused by the spectral absorption of the skull are intrinsic to head CT images. These artifacts blur the images and degrade the diagnostic power of CT. Several algorithms have been proposed to address them, but most are complex and take a long time to eliminate the artifacts. In the past decade, deep learning (DL) has demonstrated excellent results in image processing. In this work, we present a two-step convolutional neural network (CNN) approach that reduces the artifacts. The first step uses a U-shaped network (U-Net) to learn and correct the low-frequency artifacts. The second step uses a residual network (ResNet) to extract the high-frequency artifacts. Our proposed method is capable of eliminating the bone-induced artifacts at relatively low time cost. Promising results have been obtained in an experiment with a large number of CT head images.
A deep learning approach for dual-energy CT imaging using a single-energy CT data
Wei Zhao, Tianling Lv, Peng Gao, et al.
In a standard computed tomography (CT) image, pixels having the same Hounsfield Units (HU) can correspond to different materials, and it is therefore challenging to differentiate and quantify materials. Dual-energy CT (DECT) is desirable for differentiating multiple materials, but DECT scanners are not as widely available as single-energy CT (SECT) scanners. Here we develop a deep learning approach to perform DECT imaging using standard SECT data. The end point of the deep learning approach is a model capable of providing the high-energy CT image for a given input low-energy CT image. We retrospectively studied 22 patients who received a contrast-enhanced abdomen DECT scan. The differences between the predicted and original high-energy CT images are 3.47 HU, 2.95 HU, 2.38 HU, and 2.40 HU for spine, aorta, liver, and stomach, respectively. The differences between virtual non-contrast (VNC) images obtained from the original DECT and from the deep learning DECT are 4.10 HU, 3.75 HU, 2.33 HU, and 2.92 HU for spine, aorta, liver, and stomach, respectively. The aorta iodine quantification difference between iodine maps obtained from the original DECT and the deep learning DECT images is 0.9%. This study demonstrates that highly accurate DECT imaging from single low-energy data is achievable using a deep learning approach. The proposed method can significantly simplify DECT system design, reducing the scanning dose and imaging cost.
Learned digital subtraction angiography (Deep DSA): method and application to lower extremities
Elias Eulig, Joscha Maier, Michael Knaup, et al.
Digital Subtraction Angiography (DSA) aims at selectively displaying vessels by subtracting an unenhanced mask image from a contrast-enhanced fluoroscopic image. This strategy requires the data to be static, i.e. to be acquired without patient or C-arm motion. Thus, conventional DSA cannot be applied to dynamic acquisition protocols such as bolus injection chases, which are particularly useful for the diagnosis of peripheral arterial disease (PAD). Preliminary studies have shown that convolutional neural networks (CNNs) are capable of overcoming this drawback by predicting DSA-like images directly from their corresponding fluoroscopic x-ray images, without the need to acquire a mask image. Here, we demonstrate the potential of this approach for fluoroscopic acquisitions of the lower extremities. We apply the network to twelve different patient exams, of which nine are without C-arm motion and the remaining three are bolus chase studies with C-arm motion. For cases where a conventional DSA is feasible, we find only very small deviations, and the predictions for the bolus chase studies give a visual impression similar to conventional DSA. The results indicate that Deep DSA has the potential to improve the diagnosis of PAD by generating DSA-equivalent images from bolus chase studies of the lower extremities.
Low-dose cerebral CT perfusion restoration via non-local convolution neural network: initial study
Sui Li, Dong Zeng, Zhaoying Bian, et al.
Computed tomography perfusion (CTP) imaging can be used to detect ischemic stroke via high-resolution, quantitative hemodynamic maps. However, due to its repeated scanning protocol, CTP imaging involves a substantial radiation dose, which might increase potential cancer risks. Therefore, reducing the radiation dose in CTP has raised significant research interest. In this work, we present a non-local convolution neural network (NL-Net) to yield high-quality CTP images and high-precision hemodynamic maps in low-dose cases. Specifically, unlike traditional networks in CT imaging, the NL-Net takes non-local information from adjacent frames as one of its inputs. The low-dose CTP images, combined with the non-local information, are fed into the pre-trained network to produce the desired high-quality CTP images. Clinical patient data are used to demonstrate the performance of the NL-Net, and the results indicate that it obtains better CTP images and more accurate hemodynamic maps than the competing approaches.
Direct image reconstruction from raw measurement data using an encoding transform refinement-and-scaling neural network
William Whiteley, Jens Gregor
Direct reconstruction of raw measurement data into a final image using a neural network is currently an uncommon approach to the use of deep learning in medical imaging. One reason may be the relatively recent adoption of deep learning; another may be the computational requirements associated with performing the domain transform using fully connected perceptron layers. We propose an AUTOMAP-inspired multi-segment Encoding Transform Refinement-and-Scaling (ETRS) neural network that allows reconstruction of full-size 512x512 images, compared to the 128x128 image size of AUTOMAP.
A hybrid ring artifact reduction algorithm based on CNN in CT images
In flat-panel based cone beam computed tomography (CBCT), ring artifacts always exist and degrade the quality of reconstructed images. In this work, we propose a convolutional neural network (CNN) based ring artifact reduction algorithm in CT images, which fuses the information from the original and corrected images to eliminate the artifacts. The proposed method consists of two steps. First, we establish a database consisting of three types of images for training, artifact-free, ring artifact and pre-corrected images. Second, the original and pre-corrected images are input to the trained CNN to generate an image with less artifacts. To further reduce the artifacts, by using image mutual correlation, pixels in the pre-corrected image and the CNN output image, which are less sensitive to artifacts, are combined to generate a hybrid corrected image. Both simulated and real data experiments were performed to verify the proposed method. Experimental results show that the proposed method can effectively suppress the ring artifacts without introducing processing distortion to the image structure.
GCC-based extrapolation of truncated CBCT data with dimensionality-reduced extrapolation models
Daniel Punzet, Robert Frysch, Tim Pfeiffer, et al.
A typical incomplete data problem arising in cone-beam computed tomography (CBCT) occurs when an object is either too large to be projected onto the detector or is deliberately only projected in parts. This problem is called truncation. Tomographic images reconstructed from truncated projection data can be severely impaired by image artifacts depending on the degree of truncation. A typical strategy to counter this is to extend the projection data by some smooth extrapolation. In order to accurately approximate the shape of the scanned object outside of the volume of interest (VOI), we previously presented a method which fits an extrapolation model to the truncated data by minimizing an error function based on the Grangeat consistency condition (GCC). In this work we propose a method of reducing the complexity of the extrapolation by making use of the 0th image moments of the truncated projection data.
Non-uniformity correction for photon-counting detectors using double GANs
Wei Fang, Liang Li
The development of energy-resolving photon-counting detectors provides a new approach for obtaining spectral information in computed tomography. However, non-uniformity between different photon-counting detector pixels can cause stripe artifacts in the projection domain and concentric ring artifacts in the image domain. Here we propose a non-uniformity correction method based on two generative adversarial networks (GANs). The first GAN is a conditional GAN responsible for ring artifact estimation in the image domain; it is trained on the 2016 AAPM Grand Challenge dataset with artificially introduced ring artifacts. The second GAN is an ordinary GAN responsible for automatically modeling experimental ring artifacts; it is trained on real ring artifacts removed from experimental images by the first GAN, and aims to provide ample, realistic training labels for re-training the first GAN. Experimental results show that the GANs can accurately extract the characteristics of ring artifacts and remove them from the original images to yield clean images. Moreover, the GAN-generated realistic training labels further improve the performance of the ring artifact estimation network on experimental datasets.
Synthesize monochromatic images in spectral CT by dual-domain deep learning
Chuqing Feng, Zhiqiang Chen, Kejun Kang, et al.
Spectral computed tomography (CT) with photon counting detectors (PCDs) can sort collected photons into different energy bins. It is well acknowledged that PCD-based spectral CT has great potential for lowering radiation dose and improving material discrimination. One critical processing step in spectral CT is energy spectrum modeling, or spectral information decomposition. In this work, we propose a dual-domain deep learning (DDDL) method to calibrate a spectral CT system with a neural network. Without an explicit energy spectrum or detector response model, we train a neural network to implicitly define the non-linear relationships in spectral CT. Virtual monochromatic attenuation maps are synthesized directly from polychromatic projections. Simulation and real experimental results verified the feasibility and accuracy of the proposed method.
Green’s one-step-late algorithm does not work for SPECT with attenuation correction
Larry Zeng
Green’s one-step-late (OSL) algorithm is a popular image reconstruction algorithm for emission tomography, in spite of its convergence issues. One drawback of Green's algorithm is that it exhibits non-stationary regularization when its projector and backprojector model the attenuation effects in single photon emission computed tomography (SPECT). This paper suggests a remedy that improves Green's OSL algorithm so that stationary regularization is obtained.
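For reference, the OSL update has the well-known form below (with activity estimate $\lambda$, system matrix elements $a_{ij}$, measured counts $y_i$, regularization weight $\beta$, and prior energy $U$). When the projector and backprojector model attenuation, the sensitivity sum $\sum_i a_{ij}$ in the denominator varies strongly across the image, which is one way to see the non-stationary regularization discussed above:

$$
\lambda_j^{(n+1)} \;=\; \frac{\lambda_j^{(n)}}{\displaystyle\sum_i a_{ij} \;+\; \beta\,\frac{\partial U}{\partial \lambda_j}\bigg|_{\lambda=\lambda^{(n)}}}\;\sum_i a_{ij}\,\frac{y_i}{\sum_k a_{ik}\,\lambda_k^{(n)}}
$$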
Super-iterative image reconstruction in PET
Most current positron emission tomography (PET) scanners use pixelated detector crystals, and the crystal pitch limits the sampling and the image resolution. In this paper we present a maximum-likelihood based method to go beyond the existing discrete sampling of PET scanners. After an initial standard image reconstruction, the projection of the reconstructed image is used to redistribute the counts of each original LOR among several sub-LORs. The new dataset with increased sampling is reconstructed again, obtaining improved image resolution without increasing the noise. The procedure can be repeated several times for further improvement, each reconstruction being a super-iteration. We validated the method with data acquired with the preclinical Super Argus PET/CT scanner. We used the NEMA NU4-2008 protocol for this scanner to quantitatively measure the image quality improvement, which showed a recovery coefficient (RC) increase of 14% for the smallest rod. Results from in-vivo acquisitions of a rat cardiac study injected with FDG also confirm the improvement in image quality. The proposed method can be considered a generalization of standard reconstruction algorithms that achieves better images at the expense of increased reconstruction time.
Reconstruction performance for long axial field-of-view PET scanners with large axial gaps
Margaret E. Daube-Witherspoon, Varsha Viswanath, Suleman Surti, et al.
The increased axial coverage of long axial field-of-view (AFOV) PET scanners leads to dramatically higher sensitivity and the potential to significantly reduce scan time and/or radiation dose. Axial gaps allow for longer coverage for a given detector area but have the disadvantages of lower sensitivity and additional complexity for reconstruction. The PennPET Explorer scanner currently has an AFOV of 64 cm with two 7.6-cm axial gaps. We used the system in its current configuration with data gaps and simulations without gaps (70-cm AFOV) to study the impact of large axial gaps on the choices and performance of 3D reconstruction algorithms for long AFOV scanners. The results of this work will inform the extension of the PennPET Explorer beyond 64 cm, guide the development and choice of reconstruction algorithms and parameters for quantitative imaging in long AFOV scanners with gaps, and predict reconstruction performance of longer AFOV systems with and without large axial gaps.
Versatile regularisation toolkit for iterative image reconstruction with proximal splitting algorithms
Daniil Kazantsev, Edoardo Pasca, Mark Basham, et al.
Ill-posed image recovery requires regularisation to ensure stability. The presented open-source regularisation toolkit consists of state-of-the-art variational algorithms which can be embedded in a plug-and-play fashion into the general framework of proximal splitting methods. The packaged regularisers aim to satisfy various prior expectations of the investigated objects, e.g., their structural characteristics, smooth or non-smooth surface morphology. The flexibility of the toolkit helps with the design of more advanced model-based iterative reconstruction methods for different imaging modalities while operating with simpler building blocks. The toolkit is written for CPU and GPU architectures and wrapped for Python/MATLAB. We demonstrate the functionality of the toolkit in application to Positron Emission Tomography (PET) and X-ray synchrotron computed tomography (CT).
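The plug-and-play pattern the toolkit targets can be sketched generically; the loop below is a plain proximal-gradient iteration (shown without acceleration) in which `prox_reg` is whatever regulariser is plugged in, and the toy operators and soft-threshold prox are stand-ins, not the toolkit's API.

```python
# Hedged sketch of a plug-and-play proximal splitting loop.
import numpy as np

def prox_gradient(A, b, prox_reg, step, n_iter=50):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # gradient of 0.5*||Ax - b||^2
        x = prox_reg(x - step * grad, step)  # plug-in proximal step
    return x

# Example plug-in: soft thresholding, the prox of lam*||x||_1.
soft = lambda v, t, lam=0.1: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
A, b = np.random.rand(40, 30), np.random.rand(40)
x_hat = prox_gradient(A, b, soft, step=1.0 / np.linalg.norm(A, 2) ** 2)
```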
Multi-streaming and multi-GPU optimization for a matched pair of Projector and Backprojector
Nicolas Georgin, Camille Chapdelaine, Nicolas Gac, et al.
Iterative reconstruction methods are used in X-ray computed tomography in order to improve the quality of reconstruction compared to filtered backprojection methods. However, these methods are computationally expensive due to repeated projection and backprojection operations. Among the possible pairs of projector and backprojector, the Separable Footprint (SF) pair has the advantage of being matched, which ensures the convergence of the reconstruction algorithm. Nevertheless, this pair implies more computation than the unmatched pairs commonly used to reduce computation time. To speed up this pair, the projector and the backprojector can be parallelized on GPU. Following our previous work, in this paper we propose a new implementation which benefits from the factorized calculations of the SF pair to increase the amount of data handled by each thread. We also describe the adaptation of this implementation for multi-streaming computation. The method is tested on large volumes of 1024³ and 2048³ voxels.
Bulk motion detection and correction using list-mode data for cardiac PET imaging
Tao Sun, Yoann Petibon, Paul Han, et al.
Purpose: Image quality of cardiac PET is degraded by cardiac, respiratory, and bulk motion. The purpose of this work is to use PET list-mode data to detect and correct for bulk motion, which is unpredictable and must therefore be tracked at all times. Methods: We propose a data-driven approach that can detect and compensate for bulk motion in cardiac PET imaging. Events in a motion-contaminated scan are binned into static (without intra-frame motion) and moving (with intra-frame motion) frames based on the variance of the center positions of the lines-of-response calculated in each 1-second time window. Each moving frame is further divided into subframes, within which no motion is assumed. Data in each static and sub-moving frame are then back-projected to image space. The resulting images are used to estimate the motion transformation from all static and sub-moving frames to a selected static reference frame. Finally, the data in all frames are jointly reconstructed by incorporating the motion estimates in the system matrix. We have applied our method to three human cardiac PET studies. Results: Visual assessment indicated greatly improved image quality of the motion-corrected image over the non-motion-corrected image. Motion correction also yielded higher myocardium-to-blood-pool concentration ratios than no motion correction. Conclusion: The proposed bulk motion correction method improves the image quality of cardiac PET and can potentially be applied to other PET imaging applications such as brain PET.
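A hedged numpy sketch of the detection step described above, with synthetic stand-ins for the list-mode fields and an illustrative threshold:

```python
# Hedged sketch: per-window variance of LOR center positions flags motion.
import numpy as np

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0.0, 300.0, 200000))       # event times (s)
centers = rng.normal(0.0, 10.0, (200000, 3))       # LOR midpoints (mm)

edges = np.arange(0.0, 301.0, 1.0)                 # 1-second windows
idx = np.digitize(t, edges) - 1
var_per_window = np.array(
    [centers[idx == w].var(axis=0).sum() for w in range(len(edges) - 1)])

baseline = np.median(var_per_window)
moving = var_per_window > 1.05 * baseline          # threshold is an assumption
```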
Poster Session II
Truncation artifacts caused by the patient table in polyenergetic statistical reconstruction on real C-arm CT data
Richard N.K. Bismark, Oliver Beuing M.D., Georg Rose
In this work, we applied the polyenergetic statistical reconstruction (PSR) technique of I. A. Elbakri and J. A. Fessler in order to reduce beam hardening artifacts in C-arm CT data. Surprisingly, the corrections were strongly disturbed by truncation artifacts caused by the patient table; such truncation artifacts are typically invisible with other reconstruction methods. Our findings suggest that this is due to the mathematical structure of the update step in the reconstruction algorithm. We propose two solutions, without changing the actual PSR algorithm, that help reduce the table-induced truncation artifacts in PSR, and demonstrate their viability on clinical data sets.
K-edge imaging visualization of multi-material decomposition in CT using virtual mono-energetic images
Kevin C. Ma, Thomas W. Holmes, Amir Pourmorteza
Virtual mono-energetic images (VMI) are derived from dual- and multi-energy CT acquisitions to visualize attenuation tomograms of an object at a specific x-ray energy. Conventional VMI calculation does not account for the k-edges of heavier contrast materials, such as gadolinium and bismuth. To achieve results closer to real-world values, we have developed a VMI calculation and visualization tool that includes k-edge imaging. The algorithm is tested on multi-material decomposition of iodine and gadolinium in a virtual phantom. Attenuation values are calculated for varying concentrations of iodine and gadolinium in the simulated phantom. A visualization toolkit performs the VMI calculation and displays the output images. Attenuation changes with incident x-ray energy can be tested and observed, and a characteristic jump in attenuation is observed in the VMI at 50 keV (the k-edge of gadolinium) in vials containing Gd.
Scatter correction using pair-wise fan beam consistency conditions
Shiras Abdurahman, Robert Frysch, Georg Rose
Due to the wide cone angle, artifacts caused by scattered radiation are inevitable in flat-detector CT reconstructed images. Cupping and streak artifacts are the main manifestations of scatter artifacts, and they degrade low-contrast resolution and Hounsfield unit accuracy. Scatter artifacts can be mitigated by subtracting the two-dimensional distribution of scattered radiation from the measured projections. Convolution-based scatter modeling can be used for approximate scatter estimation with a high degree of computational efficiency. In this paper, we propose an algorithm that optimizes the scatter kernel parameters by enforcing pair-wise fan beam consistency conditions on cone beam projections. The proposed method does not require prior Monte-Carlo simulation, additional reconstructions, or calibration experiments. Our results on simulated datasets show a reduction of artifacts after the minimization of scatter-induced data inconsistency.
Enhanced spatial resolution in cone beam X-ray luminescence computed tomography using primal-dual Newton conjugate gradient method
Cone beam X-ray luminescence computed tomography (CB-XLCT) is a novel dual-modality imaging technique which opens new possibilities for performing molecular imaging with X-rays. However, the spatial resolution of CB-XLCT is low due to the ill-posedness of the inverse problem. Considering the sparse distribution of the nanophosphors in the imaged object, in this paper we propose a compressive-sensing based reconstruction algorithm using the preconditioned primal-dual Newton conjugate gradient (pdNCG) method. Imaging experiments were performed on a physical phantom with a custom-made CB-XLCT system. The reconstruction results demonstrate that two adjacent targets with an edge-to-edge distance of 1 mm can be effectively resolved.
Linear interpolation based structure preserved metal artifact reduction in x-ray computed tomography
Huisu Yoon, Kyoung-Yong Lee
In X-ray CT imaging, metal in the imaging FOV deteriorates the diagnostic quality of the reconstructed image. Rays penetrating dense metal implants are highly corrupted, which introduces large inconsistencies into the projection data and breaks the basic assumptions of the image reconstruction principle. For several decades, there have been various attempts to address this problem. As the computing power of processors increased, more complex algorithms with improved performance, such as iterative reconstruction, were introduced; recently, machine learning based techniques were also introduced to the community. The purpose of this paper is to introduce a computationally efficient metal artifact reduction (MAR) method that reduces severe metal artifacts while preserving fine internal structures. Thanks to its low cost, the algorithm can serve as a partial module of other MAR methods, or can be integrated into mobile CT scanners with low computing power. The proposed algorithm adopts an idea based on linear interpolation; it reduces severe artifacts such as dark shading without distorting neighboring structures. The proposed algorithm was integrated with a sinogram-correction type MAR for better image quality. Results of our physical phantom experiment show that the proposed algorithm reduces metal artifacts effectively at low computational cost.
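The core interpolation step can be sketched in a few lines; the function below is a generic linear-interpolation inpainting of the metal trace, not the authors' full structure-preserving method.

```python
# Hedged sketch: per-view linear interpolation across the metal trace.
import numpy as np

def inpaint_metal_trace(sinogram, metal_mask):
    """sinogram, metal_mask: (n_views, n_bins); mask True inside the trace."""
    out = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):
        bad = metal_mask[v]
        if bad.any() and not bad.all():
            # Replace masked bins by interpolating between their neighbors.
            out[v, bad] = np.interp(bins[bad], bins[~bad], sinogram[v, ~bad])
    return out
```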
Curvature constraint based image reconstruction for limited-angle computed tomography
Xiao Xue, Shusen Zhao, Yunsong Zhao, et al.
Compared with traditional CT with a full angular scan, limited-angle CT has advantages in scanning plate-shaped objects and reducing imaging dose. But image reconstruction from limited-angle CT data is challenging, because the acquired data are incomplete. In this work, we propose an imaging model for limited-angle CT that extends our previous work, in which edge information is used to recover the blurred image edges and the distorted gray values of non-edge points. The new model introduces an extra curvature term in the objective function to constrain the length of the edges of the object, and thus eliminates the jagged artifacts possible in images reconstructed with the earlier model. Numerical experiments with real data verify the effectiveness of the proposed imaging model and the corresponding reconstruction algorithm.
Fast ordered subsets Chambolle-Pock algorithm for CT reconstruction
The Chambolle-Pock (CP) algorithm has been successfully applied to optimization problems involving various data fidelity and regularization terms. However, when applied to CT image reconstruction, its efficiency is still far from satisfactory. Another problem is that unmatched forward and backward operators are commonly used in CT reconstruction, in which case the CP algorithm might fail to converge. In this paper, based on an operator-splitting perspective on the simultaneous algebraic reconstruction technique (SART), the CP algorithm is generalized to incorporate the ordered-subsets technique for fast convergence. The energy functional associated with the optimization problem is split into multiple terms, and the CP algorithm is employed to minimize each of them in an iterative manner. Numerical experiments show that the proposed algorithm converges more than ten times faster than the classical CP algorithm.
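For reference, the classical CP iteration for $\min_x F(Kx) + G(x)$, which the ordered-subsets scheme generalizes by splitting the data term across subsets, reads ($\theta = 1$ is typical and $\sigma\tau\|K\|^2 < 1$ is required):

$$
y^{n+1} = \operatorname{prox}_{\sigma F^*}\!\big(y^n + \sigma K \bar{x}^n\big),\qquad
x^{n+1} = \operatorname{prox}_{\tau G}\!\big(x^n - \tau K^{T} y^{n+1}\big),\qquad
\bar{x}^{n+1} = x^{n+1} + \theta\,\big(x^{n+1} - x^n\big).
$$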
Attenuation correction for x-ray fluorescence computed tomography (XFCT) utilizing transmission CT image
This work proposes an attenuation correction method for x-ray fluorescence computed tomography (XFCT). The phantom is irradiated by a polychromatic cone-beam source produced by a conventional x-ray tube. X-ray fluorescence (XRF) photons are stimulated by the incident beam and are then collected by a photon counting detector placed on one side of the beamline. A flat-panel detector is placed along the beamline for detection of attenuation information. For quantitative reconstruction of XFCT images, the attenuation of the incident photons, as well as of the XRF photons in the phantom, is estimated from the transmission CT images. Simulation results show that the proposed attenuation correction method significantly improves the accuracy of XFCT image reconstruction, enabling quantitative identification of fluorescent materials in the objects.
Multi-energy computed tomography reconstruction using an average image induced low-rank tensor decomposition with spatial-spectral total variation regularization
Lisha Yao, Dong Zeng, Sui Li, et al.
With an advanced photon counting detector, multi-energy computed tomography (MECT) can classify photons according to preset thresholds and then acquire CT measurements in multiple energy bins. However, the number of photons in one energy bin is limited compared with that in the conventional polychromatic spectrum, so the MECT images can suffer from noise-induced artifacts. To address this issue, in this work we present a MECT reconstruction scheme which incorporates a low-rank tensor decomposition with spatial-spectral total variation (LRTD_SSTV) regularization. Additionally, prior information from the whole spectrum, i.e., the average of the MECT images, is introduced into the LRTD_SSTV regularization to further improve reconstruction performance. This reconstruction scheme is termed "LRTD_SSTVavi". Experimental results with a digital phantom demonstrate that the presented method produces better MECT images and more accurate basis images than the RPCA, TDL, and LRTD_SSTV methods.
Statistical iterative material image reconstruction with patch based enhanced 3DTV regularization for photon counting CT
Danyang Li, Sui Li, Dong Zeng, et al.
Photon counting computed tomography (PCCT) can simultaneously acquire measurements at multiple energies and is able to differentiate materials. However, the material decomposition step typically leads to signal-to-noise ratio degradation and noise amplification, owing to the limited photons detected in each energy bin in PCCT imaging. In this work, to address this issue, we present a statistical iterative material image reconstruction method that estimates the materials accurately. Specifically, a patch-based enhanced 3D total variation (PE3DTV) regularization is introduced into the statistical iterative model. PE3DTV extracts non-local similarities among the desired material images, stacks the similar patches into a 3D tensor, and computes the sparsity on the subspace of the 3D tensor based on gradient maps, encoding the correlation across non-local structures among material images. Numerical experiments show that the presented method leads to reduced statistical bias and improved material image quality compared with the conventional TV-based method.
Reducing high-density object artifacts with iterative image reconstruction in digital tomosynthesis
In digital tomosynthesis, high-density object artifacts such as ripples and undershoots can appear in the reconstructed image owing to the limited-angle problem and may hinder an accurate diagnosis. In this study, we propose an iterative image reconstruction method that reduces such artifacts by means of a voting strategy with a data fidelity term that involves derivative data. We confirm that the voting strategy helps reduce high-density object artifacts in the algebraic iterative reconstruction framework for tomosynthesis and, more importantly, show that its contribution improves greatly when the derivative data term is jointly used in the cost function. For evaluation, the CIRS breast phantom and a forearm phantom with metal implants were scanned using a prototype digital breast tomosynthesis system and a chest digital tomosynthesis system, respectively.
Artifacts reduction method in 4DCBCT based on a weighted demons registration framework
Shaohua Zhi, Bangliang Jiang, Marc Kachelrieß, et al.
Motion blurring artifacts in CBCT can be alleviated by producing a sequence of phase-dependent images with the 4DCBCT technique. However, this introduces streaking artifacts because the projections of each phase are undersampled. One possible solution is to use deformable registration algorithms to estimate the deformation vector fields (DVFs) between the phase-dependent images. Among these, the optical-flow-based Demons registration method is a major technique owing to its simplicity and efficiency. However, current Demons algorithms still suffer from relatively low registration precision because they use only the gradient information of the images to calculate the DVFs in the different directions. To improve the registration precision, we take the interaction between the DVFs calculated in the Demons process into account and propose a weighted Demons registration method. In this method, a joint distribution of the gradient magnitude and Laplacian of Gaussian (GM-LoG) signal, which represents the edge features of magnitude and orientation, is introduced. This joint distribution guides the calculation of the DVF so as to preserve more of the detailed features and the topological structure of the image during registration. Both simulation and real-data experiments were carried out to verify the performance of our method. Specifically, image quality is improved with respect to distinct features, especially in regions of interest of moving tissues. Quantitative evaluations in terms of root mean square error (RMSE) and correlation coefficient (CC) show that our method improves on the existing single-Demons and double-Demons methods.
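For orientation, the Demons force that such methods build on updates the DVF at each voxel from the intensity difference and the gradient of the fixed image; a weighted variant can further scale this force with an edge-feature weight. The sketch below shows one such update in numpy, with an illustrative weight built from the gradient magnitude and a LoG response (a stand-in for, not a reproduction of, the paper's GM-LoG joint distribution):

    import numpy as np
    from scipy.ndimage import gaussian_filter, gaussian_laplace

    def weighted_demons_step(f, m, sigma_fluid=2.0, eps=1e-8):
        # One simplified weighted Demons update pushing moving image m
        # toward fixed image f; returns a DVF increment (uy, ux).
        diff = m - f
        gy, gx = np.gradient(f)
        denom = gy**2 + gx**2 + diff**2 + eps
        uy, ux = diff * gy / denom, diff * gx / denom
        # Illustrative edge weight from gradient magnitude and LoG response
        w = np.hypot(gy, gx) + np.abs(gaussian_laplace(f, sigma=1.0))
        w = w / (w.max() + eps)
        # Fluid-like regularization: smooth the weighted increment
        return gaussian_filter(w * uy, sigma_fluid), gaussian_filter(w * ux, sigma_fluid)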
A field of view based metal artifact reduction method with the presence of data truncation
Seungwon Choi, Seunghyuk Moon, Jongduk Baek
In X-ray CT imaging, metal objects produce significant beam hardening and streak artifacts in the reconstructed images. To reduce these artifacts, several sinogram-inpainting-based metal artifact reduction (MAR) methods have been proposed, in which projection data within the metal trace of the sinogram are treated as missing and estimated by interpolation. However, these methods generally assume that no data truncation occurs and that all metal objects reside inside the field of view (FOV). In small-FOV imaging such as dental CT, these assumptions are violated, so traditional inpainting-based MAR is not effective. In this work, we propose a new MAR method that effectively reduces metal artifacts when metal objects reside outside the FOV in small-FOV imaging. The proposed method synthesizes projection data of the small-FOV image by forward projection and treats them as the originally measured sinogram, so that the effect of metal objects outside the FOV is minimized during the inpainting procedure. The performance of the proposed method is compared with traditional linear MAR and NMAR. The results show that the proposed method reduces the residual artifacts that remain in the traditional linear MAR and NMAR images.
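The inpainting step common to such methods replaces sinogram samples inside the metal trace with values interpolated from the neighboring, uncorrupted samples; in the proposed scheme this operates on the re-synthesized small-FOV sinogram rather than on the raw measurement. A minimal numpy sketch of per-view linear inpainting (illustrative only):

    import numpy as np

    def inpaint_metal_trace(sino, trace):
        # sino : 2D sinogram (views x detector channels)
        # trace: boolean mask of the same shape, True inside the metal trace
        out = sino.copy()
        cols = np.arange(sino.shape[1])
        for v in range(sino.shape[0]):
            bad = trace[v]
            if bad.any() and not bad.all():
                # Linear interpolation across the gap, per projection view
                out[v, bad] = np.interp(cols[bad], cols[~bad], sino[v, ~bad])
        return out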
Inverse-geometry CT with linearly distributed source and detector: stationary configuration and direct filtered-backprojection reconstruction
Tao Zhang, Yuxiang Xing, Hewei Gao, et al.
Inverse-geometry computed tomography (IGCT) has potential in security inspection and medical applications. In this work, we explore a new concept of IGCT in a stationary configuration with a linearly distributed source and detector (L-IGCT). To obtain an exact analytical reconstruction for L-IGCT, we derive a direct filtered-backprojection (FBP) type algorithm. We validate our method by simulating a Shepp-Logan head phantom, for which CT images are exactly reconstructed from two L-IGCT scans whose detector arrays are perpendicular to each other so as to provide sufficient projection data.
Efficient nullspace-constrained modifications of incompletely sampled CT images
Robert Frysch, Sebastian Bannasch, Vojtech Kulvait, et al.
A recurring challenge in many contexts is the reconstruction of incompletely sampled CT images, e.g. due to few projections, a limited angular range, or truncated projections. From an algebraic point of view, an underdetermined system must be solved. Its solution has many degrees of freedom, which are determined by the nullspace of the system. We propose a method to apply generic modifications to a CT image that are restricted to this nullspace. We construct a nullspace basis using ART or FBP algorithms and propose an image update with low computing effort using this basis. We used simulation experiments to provide a proof of concept for angularly undersampled projections, applying various nullspace-constrained modifications to unconstrained ART reconstructions. The method provides the flexibility to incorporate prior knowledge after the reconstruction without violating data consistency and enables the use of unconstrained ART, which is much faster than regularized ART. Our proposed method appears particularly promising for fast, low-resolution imaging when certain prior knowledge about the object is available.
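To make the mechanism concrete: any candidate modification delta can be restricted to the nullspace of the system matrix A by projecting it onto an orthonormal nullspace basis, so the modified image reproduces exactly the same projection data. A small dense-matrix demonstration (the paper constructs the basis with ART/FBP machinery; the explicit SVD-based basis below is purely for illustration):

    import numpy as np
    from scipy.linalg import null_space

    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 256))    # toy underdetermined system
    x = rng.standard_normal(256)          # some unconstrained reconstruction
    delta = rng.standard_normal(256)      # desired modification (prior knowledge)

    N = null_space(A)                     # orthonormal basis of null(A)
    delta_ns = N @ (N.T @ delta)          # projection of delta onto null(A)
    x_mod = x + delta_ns

    # Data consistency is preserved (up to round-off):
    assert np.allclose(A @ x_mod, A @ x)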
Dynamic angle selection for few-view X-ray inspection of CAD based objects
Alice Presenti, Jan Sijbers, Jan De Beenhouwer
In conventional X-ray CT inspection of objects manufactured from a computer-aided design (CAD) model, a 3D CT reconstruction of the object is compared with the reference CAD model. This is a cost-inefficient and tedious procedure, unsuitable for inline inspection. In this work, we propose an inspection scheme based on a limited set of radiographs that are acquired dynamically during the scanning procedure. An efficient framework is described that determines the optimal view angles from a given CAD model and automatically estimates the object pose in 3D with a fast, iterative algorithm that dynamically steers the acquisition geometry toward the optimal set of projections. We demonstrate the principle of our method on simulated data.
Non-uniformity correction for MARS photon-counting detectors
Matthew Getzin, Mengzhou Li, David S. Rundle, et al.
X-ray photon-counting detectors (PCDs) are becoming increasingly popular, with applications in medical imaging, material science, and other areas. In this paper, we propose a non-uniformity correction method for photon-counting detectors based on first- and second-moment correction. Using three measured datasets, we demonstrate the method's efficacy in reducing the spatial variance of pixel counts. The results demonstrate that both open-beam and projection data can be corrected to nearly perfect Poisson counting behavior in both time and space when the photon flux is within the detector's linear response range.
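A first- and second-moment correction of this kind can be viewed as a per-pixel affine rescaling that maps each pixel's empirical mean and variance, estimated from repeated open-beam frames, onto a common target behavior such as ideal Poisson statistics (variance equal to mean). A minimal numpy sketch under those assumptions (not the authors' exact estimator):

    import numpy as np

    def moment_correct(frames):
        # frames: stack of open-beam acquisitions, shape (n_frames, ny, nx)
        mean_p = frames.mean(axis=0)            # per-pixel first moment
        std_p = frames.std(axis=0) + 1e-12      # per-pixel second moment
        target_mean = mean_p.mean()             # common count level
        target_std = np.sqrt(target_mean)       # Poisson target: var = mean
        return (frames - mean_p) / std_p * target_std + target_mean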
Evaluation of image quality of a deep learning image reconstruction algorithm
Meghan Yue, Jie Tang, Brian E. Nett, et al.
The iterative reconstruction methods ASiR and ASiR-V have been adopted by hundreds of sites as their standard of care for a variety of protocols and applications. While the reduction in noise has been significant, some readers prefer the classic image appearance. To maintain the classic image appearance of FBP at the same dose levels used for the standard of care with ASiR-V, we introduce Deep Learning Image Reconstruction (DLIR), a technique using artificial neural networks. This paper demonstrates that DLIR can maintain or improve upon the performance of the conventional iterative reconstruction algorithm (ASiR-V) in terms of low-contrast detectability, noise, and spatial resolution.
A novel transfer learning framework for low-dose CT
Over the past few years, deep neural networks have made significant progress in denoising low-dose CT images. A trained denoising network, however, may not generalize well across dose levels, since the noise distribution is dose dependent. In practice, a trained network must be re-trained before it can be applied to a new dose level, which limits the generalizability of deep neural networks for clinical applications. This article introduces a deep learning approach that does not require such re-training, relying instead on a transfer learning strategy. More precisely, the transfer learning framework utilizes a progressive denoising model in which an elementary neural network serves as a basic denoising unit. The basic units are cascaded so that the output of one unit is the input to the next, and the denoised image is a linear combination of the outputs of the individual units. To demonstrate this transfer learning approach, a basic CNN unit is trained using the Mayo low-dose CT dataset. Then, only the linear combination parameters of the successive denoising units are trained using a different image dataset, the MGH low-dose CT dataset, which contains CT images acquired at four different dose levels. Compared to a commercial iterative reconstruction approach, the transfer learning framework produced substantially better denoising performance.
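Structurally, such a progressive model applies a frozen denoising unit repeatedly and re-fits only the scalar mixing weights of the per-stage outputs for a new dose level. A minimal PyTorch sketch of this wiring (the unit architecture and training details are placeholders, not the paper's):

    import torch
    import torch.nn as nn

    class ProgressiveDenoiser(nn.Module):
        def __init__(self, unit: nn.Module, n_stages: int = 4):
            super().__init__()
            self.unit = unit                    # pre-trained basic unit
            for p in self.unit.parameters():    # frozen across dose levels
                p.requires_grad = False
            # Only these mixing weights are re-fit at a new dose level
            self.alpha = nn.Parameter(torch.full((n_stages,), 1.0 / n_stages))
            self.n_stages = n_stages

        def forward(self, x):
            outs, y = [], x
            for _ in range(self.n_stages):
                y = self.unit(y)                # cascade: output feeds next stage
                outs.append(y)
            # Denoised image: linear combination of the stage outputs
            return sum(a * o for a, o in zip(self.alpha, outs))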
Quadratic autoencoder for low-dose CT denoising
Recently, deep learning has transformed many fields, including medical imaging. Inspired by the diversity of biological neurons, our group proposed quadratic neurons, in which the inner product of the current artificial neuron is replaced with a quadratic operation on the inputs, thereby enhancing the capability of the individual neuron. Along this direction, we are motivated to evaluate the power of quadratic neurons in representative network architectures, toward "quadratic neuron based deep learning". Our prior theoretical studies have already shown important merits of quadratic neurons and networks. In this paper, we use quadratic neurons to construct an encoder-decoder structure, referred to as the quadratic autoencoder, and apply it to low-dose CT denoising. We then perform experiments on the Mayo low-dose CT dataset to demonstrate that the quadratic autoencoder yields better denoising performance.
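As a concrete illustration, a quadratic convolutional layer along these lines replaces the single inner product of a conventional layer with a quadratic form built from parallel convolutions, e.g. two multiplied linear branches plus a branch acting on the squared input. A minimal PyTorch sketch of such a layer (a hedged reading of the quadratic-neuron idea, not the paper's exact implementation):

    import torch.nn as nn

    class QuadraticConv2d(nn.Module):
        # Quadratic convolution: (W_r*x + b_r) * (W_g*x + b_g) + W_b*(x * x)
        def __init__(self, in_ch, out_ch, k=3, padding=1):
            super().__init__()
            self.conv_r = nn.Conv2d(in_ch, out_ch, k, padding=padding)
            self.conv_g = nn.Conv2d(in_ch, out_ch, k, padding=padding)
            self.conv_b = nn.Conv2d(in_ch, out_ch, k, padding=padding)

        def forward(self, x):
            return self.conv_r(x) * self.conv_g(x) + self.conv_b(x * x)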
Reconstructing interior transmission tomographic images with an offset-detector using a deep-neural-network
Interior tomography, which acquires truncated data for a specific interior region of interest (ROI), is an attractive option for low-dose imaging. However, image reconstruction from such measurements does not yield an accurate solution because of data insufficiency. A host of approaches has been developed to obtain approximately useful solutions, including various weighting methods, iterative reconstruction methods, and methods incorporating prior knowledge. In this study, we use a deep neural network, an approach that has shown its potential in various fields including medical imaging, to reconstruct interior tomographic images. We assume an offset-detector geometry, which is widely used in cone-beam CT (CBCT) imaging for its extended field of view (FOV). We trained a network to synthesize "ramp-filtered" data within the detector's active area so that the corresponding ROI reconstruction is free of truncation artifacts in the filtered-backprojection (FBP) reconstruction framework. We compared the results with post- and pre-convolution weighting methods and show that the neural network approach outperforms them.
Information retrieval in x-ray imaging with grating interferometry using convolution neural network
Chengpeng Wu, Yuxiang Xing, Hewei Gao, et al.
X-ray imaging with grating interferometry (GI) can obtain additional phase and dark-field contrasts simultaneously with the traditional absorption contrast. Owing to the higher sensitivity of phase contrast and the subpixel spatial resolution probed by dark-field contrast, GI has been established as a promising technique for imaging low-density materials. The algorithm that retrieves the three contrasts plays a key role in applications of the technique. Existing algorithms fall into two major types: the cosine-model analysis (CMA) method and the small-angle x-ray scattering (SAXS) method. However, the CMA method rests on an approximate cosine-model assumption, and the SAXS method requires a relatively complicated and time-consuming iterative deconvolution. To overcome these limitations, we introduce the convolutional neural network (CNN) technique for the first time. With the collected detector data as input and the information retrieved by the SAXS method as labels, we design two CNN architectures. We train each network with 2160 exposure images of 6 breast specimens and test on another 720 images of 2 breast specimens. Using the structural similarity (SSIM) index as the quantitative standard, the results indicate that the images retrieved by the much faster CNN algorithms are consistent with the SAXS method (best SSIM values of 0.9852, 0.9760, and 0.9006 for the absorption, phase, and dark-field contrasts, respectively).
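For context, the cosine-model analysis referenced above fits the phase-stepping intensity curve at each detector pixel with a one-harmonic model, from which the three contrasts follow by comparing sample and reference scans. In standard notation (a textbook form, not taken from the paper):

    I_k \approx a_0 \left[ 1 + V \cos\!\left( \frac{2\pi k}{N} + \phi \right) \right], \qquad k = 0, \dots, N-1,

with the absorption contrast given by a_0^s / a_0^r, the dark-field contrast by V^s / V^r, and the differential phase by \phi^s - \phi^r, where the superscripts s and r denote the sample and reference (flat-field) scans.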
A spatial information incorporation method for irregular sampling CT based on deep learning
Zaifeng Shi, Zhongqi Wang, Huilong Li, et al.
Low-dose CT is a popular research area focused on reducing radiation damage. Inspired by the aperture coding method of optical imaging, an azimuth coding projection method, which belongs to the category of incomplete projection, is proposed to shorten the exposure time and reduce the number of projection paths. Under this coding scheme, the region of interest (ROI) is inevitably sampled densely, the information in the ROI projection data is modulated by the coding, and the azimuth coding pattern for the ROI reflects its spatial continuity. In human CT image sequences, the spatial correlation between a slice and its adjacent slices is strong, and deep learning (DL) excels at medical image feature extraction. A convolutional neural network (CNN) is therefore used to extract the modulated ROI projection information and, exploiting this strong spatial correlation, to incorporate the spatial information of adjacent slices; the resulting feature maps are then nonlinearly mapped to feature maps containing fewer artifacts. After training and testing the CNN, at least one azimuth coding method is adapted to each ROI, and the reconstructed CT images are restored well.
Projection super-resolution based on convolutional neural network for computed tomography
Chao Tang, Wenkun Zhang, Ziheng Li, et al.
Improving the resolution of computed tomography (CT) images benefits subsequent medical diagnosis, but it is usually limited by the scanning hardware and its great expense. Convolutional neural network (CNN)-based methods have achieved promising results in super-resolution. However, existing methods mainly focus on super-resolution of the reconstructed image and have not fully explored super-resolution in the projection domain. In this paper, we study the characteristics of projection data and propose a CNN-based super-resolution method that establishes the mapping between low- and high-resolution projections. The network label is the high-resolution projection, and the input is the corresponding interpolated data after downsampling. The FDK algorithm is used for three-dimensional image reconstruction, and one slice of the reconstructed volume is taken as an example to evaluate the performance of the proposed method. Qualitative and quantitative results show that the proposed method has the potential to improve projection resolution and yields reconstructed images of higher quality.
Medical (CT) image generation with style
We propose the use of a conditional generative adversarial network (cGAN) to generate anatomically accurate, full-sized CT images. Our approach is motivated by the recently introduced concept of style transfer and proposes to mix the style and content of two separate CT images to generate a new image. We argue that by using style and content losses in a style-transfer-based architecture along with a cGAN, we can increase the size of clinically accurate, annotated datasets severalfold. Our framework can generate full-sized images with novel anatomy at high spatial resolution for all organs, and it requires only limited annotated input data from a few patients. The expanded datasets our framework generates can then be utilized within the many deep learning architectures designed for various processing tasks in medical imaging.
Awake preclinical brain PET imaging based on point sources
The presence of motion during the relatively long PET acquisitions is a very common problem, especially with awake animals, infants, and patients with neurological disorders. External motion can be detected by optically tracking markers placed on the skin of the patient, but this needs additional hardware and a somewhat complex integration with the PET data. Detecting motion directly from the acquired PET data would overcome these limitations. In this work, we propose the use of the centroid of the lines of response (LORs) to identify long motion-free frames (more than 2.5 seconds). Within these frames we identify, in real time, the locations of 18F markers placed on the head of the rat, while the radiotracer itself is also labeled with 18F. We evaluated the performance of the proposed method in a preclinical PET/CT scanner with an awake rat injected with 600 μCi and four 18F sources attached to its head. After rigid motion compensation, we reconstruct an image that uses 70% of the events of the acquisition, with resolution comparable to that of the motion-free frames.
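The motion-detection step can be pictured as tracking the centroid of the LOR midpoints over short time bins and declaring a segment motion-free while the centroid stays within a small tolerance. A minimal numpy sketch of that logic (the bin length, threshold, and function name are illustrative assumptions, not the authors' parameters):

    import numpy as np

    def motion_free_segments(t, lor_mid, bin_s=0.5, tol_mm=0.5, min_len_s=2.5):
        # t: sorted event times (s); lor_mid: (n_events, 3) LOR midpoints (mm)
        n_bins = int((t[-1] - t[0]) / bin_s)
        idx = np.searchsorted(t, t[0] + np.arange(n_bins + 1) * bin_s)
        # Per-bin centroid of the LOR midpoints (assumes every bin has events)
        cents = np.array([lor_mid[a:b].mean(axis=0)
                          for a, b in zip(idx[:-1], idx[1:])])
        still = np.linalg.norm(np.diff(cents, axis=0), axis=1) < tol_mm
        segs, start = [], None
        for i, s in enumerate(np.append(still, False)):
            if s and start is None:
                start = i
            elif not s and start is not None:
                if (i - start + 1) * bin_s >= min_len_s:
                    segs.append((t[0] + start * bin_s, t[0] + (i + 1) * bin_s))
                start = None
        return segs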
EM-ML algorithm based on continuous-to-continuous model for PET
Robert Cierniak, Piotr Dobosz, Andrzej Grzybowski
This abstract briefly presents an iterative approach to the reconstruction problem in positron emission tomography (PET). The proposed concept is based on a continuous-to-continuous data model, and the reconstruction problem is formulated as a shift-invariant system, taking into consideration the statistical properties of the signals obtained by the PET scanner. Computer simulations show that the reconstruction algorithm described here significantly outperforms the EM-ML method based on a discrete-to-discrete data model in terms of the quality of the obtained images.
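For reference, the discrete-to-discrete EM-ML baseline against which the method is compared uses the classical multiplicative update (standard form; the continuous-to-continuous formulation replaces this discrete system model):

    \lambda_j^{(n+1)} = \frac{\lambda_j^{(n)}}{\sum_i a_{ij}} \sum_i a_{ij} \frac{y_i}{\sum_k a_{ik} \lambda_k^{(n)}},

where y_i are the measured coincidence counts, a_{ij} is the system matrix, and \lambda_j is the activity in voxel j.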
Parametric image estimation using Residual simplified reference tissue model
Kyungsang Kim, Young Don Son, Jong-Hoon Kim, et al.
The simplified reference tissue model (SRTM) can provide a robust estimation of binding potential (BP) without a measured arterial blood input function. Although voxel-wise estimation of BP (a so-called parametric image) is much more valuable than region-of-interest (ROI) based estimation, it is challenging to compute owing to the limited signal-to-noise ratio (SNR) of dynamic PET data. To achieve reliable parametric imaging, the temporal images are commonly low-pass filtered prior to kinetic parameter estimation, which sacrifices resolution significantly. In this project, we propose an innovative method, the residual simplified reference tissue model (R-SRTM), to calculate parametric images at high resolution. In a phantom simulation, we demonstrate that the proposed method outperforms the conventional SRTM method.
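For context, the SRTM operational equation underlying both the conventional model and the proposed residual variant expresses the target-tissue time-activity curve C_T(t) in terms of the reference-region curve C_R(t) and three parameters (standard form from the kinetic-modeling literature):

    C_T(t) = R_1 C_R(t) + \left( k_2 - \frac{R_1 k_2}{1 + \mathrm{BP}} \right) C_R(t) \otimes e^{-\frac{k_2}{1 + \mathrm{BP}} t},

where R_1 is the relative delivery ratio, k_2 the efflux rate constant, BP the binding potential, and \otimes denotes temporal convolution.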
Virtual clinical trials using 3D PET imaging
Paul E. Kinahan, Darrin Byrd, Kristen Wangerin, et al.
Positron emission tomography (PET) imaging has emerged as a standard component of cancer diagnosis and treatment, and it sees increasing use in clinical trials of new therapies for cancer and other diseases. The use of PET imaging to assess response to therapy by measuring change in radiotracer uptake is motivated by its potential for quantitative accuracy and high sensitivity. However, its effectiveness depends on a number of factors, including both the bias and the variance of the pre- and post-therapy reconstructed images. Despite all the attention paid to image reconstruction algorithms, little attention has been paid to the impact of the choice of algorithm or its parameters on task performance, even for FBP or OSEM. We have developed a method, called a 'virtual clinical trial', to evaluate the ability of PET imaging to measure response to cancer therapy in a clinical trial setting. Here our goal is to determine the impact of the fully-3D PET reconstruction algorithm and its parameters on clinical trial power. Methods: We performed a virtual clinical trial by generating 90 independent and identically distributed PET imaging study realizations for each of 22 original dynamic 18F-FDG breast cancer patient studies pre- and post-therapy. Each noise realization accounted for known sources of uncertainty in the imaging process, specifically biological variability and the quantum noise determined by the PET scanner sensitivity and/or imaging time, as well as the bias-variance trade-offs introduced by the reconstruction algorithm. Results: For high quantum noise levels, due to lower PET scanner sensitivity or shorter scan times, quantum noise has a measurable effect on the signal-to-noise ratio (SNR) and study power. For studies with moderate to low levels of quantum noise, however, biological variability and other sources of variance determine SNR and study power; in other words, the choice of fully-3D PET reconstruction algorithm and parameters has minimal impact on task performance. Conclusions: For many clinical trials, the variance contributed by the 3D PET reconstruction method and its parameters has minimal to no impact. Variance from other factors, and bias introduced by changes in the 3D PET reconstruction between scans, can dramatically impact the utility of clinical trials that rely on quantitative accuracy.
Fiber assignment by continuous tracking for parametric fiber reinforced polymer reconstruction
Tim Elberfeld, Jan De Beenhouwer, Jan Sijbers
In this work, we propose an extension of the recently presented Parametric Reconstruction (PARE) algorithm [1] toward the direct reconstruction of straight and curved fibers in glass fiber-reinforced polymer (GFRP) samples. The fibers are traced with Fiber Assignment by Continuous Tracking, introducing a piece-wise linear model. We show how the algorithm can estimate fiber parameters from the X-ray projection data and give an outlook on its application within our existing fiber estimation framework.
elsa - an elegant framework for tomographic reconstruction
Software for tomographic reconstruction has been around for decades, so why yet another software framework? Because we needed a flexible, operator- and optimization-based framework in C++ for our own target applications, we developed our own some years ago. As our framework has by now been applied to many tomographic problems, ranging from optical tomography, lightfield tomography, and SPECT to various X-ray based imaging modalities (absorption contrast, differential phase contrast, and anisotropic dark-field contrast), we decided to open source a modernized version of it. The framework, elsa, is written in platform-independent modern C++17 using the CMake build system, with high unit-test coverage and continuous integration to ascertain reliability and correctness, as well as a Python interface for easy and rapid prototyping. Our intent in open sourcing the framework and presenting it here is threefold: first, easier reproducibility of our own research; second, use in teaching; and last but not least, the hope that some of you will also find it useful for your own tasks.
Spectral CT reconstruction algorithm based on adaptive tight frame wavelet and total variation
Huihua Kong, Lei Lei, Ping Chen
With the fast development of photon counting detection techniques, spectral computed tomography (CT) has attracted considerable attention. Considering that narrow energy bins carry high noise, which degrades the imaging quality of spectral CT, a new algorithm based on tight frame wavelets and total variation (TV) is proposed. The algorithm not only preserves the edges in the reconstructed image by minimizing TV, but also preserves sharp features as well as smoothness through the tight frame wavelet term. In addition, an anisotropic diffusion operator based on the Perona-Malik (PM) diffusion model is incorporated to adaptively adjust the degree of smoothing of the reconstructed image. The Split-Bregman algorithm is used to solve the objective function. Experiments show that the proposed algorithm further improves the quality of the reconstructed images and preserves their edge and detail features for spectral CT.
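Put compactly, an objective of the kind described combines a data-fidelity term with TV and tight-frame sparsity penalties, for example (a generic hedged form; the PM-adaptive weighting enters through the regularization weights chosen in the paper):

    \min_u \; \tfrac{1}{2} \lVert A u - b \rVert_2^2 + \lambda_1 \mathrm{TV}(u) + \lambda_2 \lVert W u \rVert_1,

where A is the system matrix, b the measured projections of one energy bin, and W a tight frame wavelet transform; the Split-Bregman scheme then alternates between a quadratic subproblem in u and shrinkage steps on the auxiliary TV and wavelet variables.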
Study on spectral CT material decomposition via deep learning
Xiaochuan Wu, Peng He, Zourong Long, et al.
Spectral computed tomography (CT) can resolve the energy levels of incident photons and is thereby able to distinguish different material compositions. Deep learning has recently attracted widespread attention in CT imaging applications. In this paper, a material decomposition method for spectral CT based on improved Fully Convolutional DenseNets (FC-DenseNets) is proposed. Spectral data were acquired with a photon-counting detector, and reconstructed spectral CT images were used to construct the training dataset. Experimental results show that the proposed method can effectively identify bone and different tissues at high noise levels. This work could establish guidelines for multi-material decomposition approaches with spectral CT.