Proceedings Volume 9412

Medical Imaging 2015: Physics of Medical Imaging


Volume Details

Date Published: 30 April 2015
Contents: 18 Sessions, 187 Papers, 0 Presentations
Conference: SPIE Medical Imaging 2015
Volume Number: 9412

Table of Contents

  • Front Matter: Volume 9412
  • Physics of Contrast Enhancement
  • Image Reconstruction
  • Detector Technology
  • Phase Contrast Imaging
  • WORKSHOP: Uncertainties in the Medical Imaging Chain
  • Algorithmic Developments
  • Computed Tomography I
  • Photon Counting Imaging
  • Keynote and Novel Imaging Technologies
  • Measurements, Phantoms, Simulations
  • Breast Imaging
  • Radiation Dose and Dosimetry
  • Performance Evaluation
  • X-Ray Imaging
  • Computed Tomography II
  • Tomosynthesis
  • Poster Session
Front Matter: Volume 9412
Front Matter: Volume 9412
This PDF file contains the front matter associated with SPIE Proceedings Volume 9412, including the Title Page, Copyright information, Table of Contents, Invited Panel Discussion, and Conference Committee listing.
Physics of Contrast Enhancement
Complementary contrast media for metal artifact reduction in dual-energy CT
Jack W. Lambert, Peter M. Edic, Paul Fitzgerald, et al.
Image artifacts generated by metal implants have been a problem associated with CT since its introduction. Recent techniques to mitigate this problem have included the utilization of certain dual-energy CT (DECT) features. DECT can produce virtual monochromatic spectral (VMS) images, simulating how the data would appear if scanned at a single x-ray energy (keV). High-keV VMS images can greatly reduce the severity of metal artifacts. A problem with these high-keV images is that the contrast enhancement provided by all commercially available contrast media is severely reduced. It is therefore impossible, with these agents, to generate VMS images with simultaneously high contrast and minimized metal artifact severity. Novel contrast agents based on higher atomic number elements can maintain contrast enhancement at the higher energy levels where artifacts are reduced. This study evaluated three such candidate elements: bismuth, tantalum, and tungsten, as well as two conventional contrast elements: iodine and barium. A water-based phantom with vials containing these five elements in solution, as well as different artifact-producing metal structures, was scanned with a DECT scanner capable of rapid operating-voltage switching. In the VMS datasets, substantial reductions in contrast were observed for iodine and barium, which suffered contrast reductions of 97% and 91%, respectively, at 140 versus 40 keV. Under the same conditions, the novel candidate agents demonstrated contrast enhancement reductions of only 20%, 29%, and 32% for tungsten, tantalum, and bismuth, respectively. At 140 versus 40 keV, metal artifact severity was reduced by 57-85%, depending on the phantom configuration.
Preliminary study of copper oxide nanoparticles acoustic and magnetic properties for medical imaging
Or Perlman, Iris S. Weitz, Haim Azhari
The implementation of multimodal imaging in medicine is highly beneficial, as different physical properties may provide complementary information, augmented detection ability, and diagnosis verification. Nanoparticles have recently been used as contrast agents for various imaging modalities. Their significant advantages over conventional large-scale contrast agents are the ability to detect disease at earlier stages, reduced susceptibility to obstacles on the path to the target region, and possible conjugation with therapeutics. Copper ions play an essential role in human health; they are used as a cofactor for multiple key enzymes involved in various fundamental biochemical processes. Extremely small copper oxide nanoparticles (CuO-NPs) are readily soluble in water with high colloidal stability, yielding high bioavailability. The goal of this study was to examine the magnetic and acoustic characteristics of CuO-NPs in order to evaluate their potential to serve as a contrast agent for both MRI and ultrasound. CuO-NPs 7 nm in diameter were synthesized by a hot-solution method. The particles were scanned using a 9.4 T MRI scanner and demonstrated a concentration-dependent shortening of the T1 relaxation time. In addition, it was shown that CuO-NPs can be detected using ultrasonic B-scan imaging. Finally, speed-of-sound-based ultrasonic computed tomography was applied and showed that CuO-NPs can be clearly imaged. In conclusion, the preliminary results indicate that CuO-NPs may be imaged by both MRI and ultrasound. The results motivate additional in-vivo studies, in which the clinical utility of fused images derived from both modalities for improved diagnosis will be studied.
Determination of contrast media administration to achieve a targeted contrast enhancement in CT
Pooyan Sahbaee, Yuan Li, Paul Segars, et al.
Contrast enhancement is a key component of CT imaging and offers opportunities for optimization. The design and optimization of new techniques, however, requires coordination with the scan parameters and, further, a methodology to relate contrast enhancement to the injection function. In this study, we used such a methodology to develop an analytical inverse method that predicts the injection function required to achieve a desired contrast enhancement in a given organ by incorporating a physiologically based compartmental model. The method was evaluated across 32 different target contrast enhancement functions for the aorta, kidney, stomach, small intestine, and liver. The results showed that the analytical inverse method is accurate, with deviations of about 10% between the predicted and desired organ enhancement curves. However, the method is incapable of predicting the injection function based on liver enhancement. The findings of this study can be useful in optimizing the contrast medium injection function as well as the scan timing to provide more consistency in the way contrast-enhanced CT examinations are performed. To our knowledge, this work is one of the first attempts to predict the contrast material injection function for a desired organ enhancement curve.
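Editorial illustration: the short Python sketch below inverts a toy linear enhancement model, assuming the organ enhancement curve is the convolution of the injection rate with a hypothetical impulse response and recovering the injection function by Tikhonov-regularized least squares. The impulse response, timeline, target curve, and regularization weight are invented placeholders, not the authors' compartmental model.

# Toy "inverse" contrast-injection calculation (not the authors' method).
# Assumes enhancement e(t) is a discrete convolution of the injection rate u(t)
# with a hypothetical impulse response h(t); u is recovered by regularized least squares.
import numpy as np

dt = 1.0                                     # seconds per sample
t = np.arange(0, 60, dt)                     # 60 s timeline
h = (t / 15.0) * np.exp(-t / 15.0)           # hypothetical organ impulse response

# lower-triangular convolution matrix so that e = H @ u
H = np.array([[h[i - j] * dt if i >= j else 0.0 for j in range(t.size)]
              for i in range(t.size)])

e_target = np.interp(t, [0, 10, 20, 40, 59], [0, 0, 150, 150, 50])  # desired HU curve

lam = 1e-2                                   # Tikhonov regularization weight (assumed)
u = np.linalg.solve(H.T @ H + lam * np.eye(t.size), H.T @ e_target)
u = np.clip(u, 0.0, None)                    # injection rate cannot be negative

print("peak injection rate (arbitrary units): %.1f" % u.max())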
Image Reconstruction
Application of a non-convex smooth hard threshold regularizer to sparse-view CT image reconstruction
In this work, we apply non-convex, sparsity-exploiting regularization techniques to image reconstruction in computed tomography (CT). We modify the well-known total variation (TV) penalty to use a non-convex smooth hard threshold (SHT) penalty in place of the typical ℓ1 norm. The SHT penalty differs from the ℓp (p < 1) norms in that it is bounded above and has a bounded gradient as its argument approaches the zero vector. We propose a re-weighting scheme utilizing the Chambolle-Pock (CP) algorithm in an attempt to solve a data-error-constrained optimization problem with the SHT penalty and call the resulting algorithm SHTCP. We then demonstrate the algorithm on sparse-view reconstruction of a simulated breast phantom with noiseless and noisy data and compare the converged images to those generated by a CP algorithm solving the analogous data-error-constrained problem with the TV penalty. We demonstrate that SHTCP allows for more accurate reconstruction in the case of sparse-view noisy data and, in the case of noiseless data, allows for accurate reconstruction from fewer views than its TV counterpart.
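For intuition only, the sketch below contrasts the ℓ1 penalty with a generic smooth, bounded, non-convex penalty applied to gradient magnitudes. The exact SHT functional is defined in the paper; the saturating form used here is merely an assumed surrogate chosen to show the qualitative difference (bounded value and bounded gradient near zero).

# Illustrative comparison of the l1 penalty with a bounded, non-convex penalty.
# The exact SHT functional is defined in the paper; bounded_penalty below is only
# a generic smooth, saturating surrogate used to show the qualitative difference.
import numpy as np

def l1(x):
    return np.abs(x)

def bounded_penalty(x, delta=0.05):
    # saturates at 1 for |x| >> delta; gradient stays bounded as x -> 0
    return x**2 / (x**2 + delta**2)

grad = np.linspace(-0.5, 0.5, 11)            # e.g. image gradient magnitudes
print("l1     :", np.round(l1(grad), 3))
print("bounded:", np.round(bounded_penalty(grad), 3))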
Cone-beam CT of traumatic brain injury using statistical reconstruction with a post-artifact-correction noise model
Traumatic brain injury (TBI) is a major cause of death and disability. The current front-line imaging modality for TBI detection is CT, which reliably detects intracranial hemorrhage (fresh blood contrast 30-50 HU, size down to 1 mm) in non-contrast-enhanced exams. Compared to CT, flat-panel detector (FPD) cone-beam CT (CBCT) systems offer lower cost, greater portability, and a smaller footprint suitable for point-of-care deployment. We are developing FPD-CBCT to facilitate TBI detection at the point of care, such as in emergent, ambulance, sports, and military applications. However, current FPD-CBCT systems generally face challenges in low-contrast, soft-tissue imaging. Model-based reconstruction can improve image quality in soft-tissue imaging compared to conventional filtered back-projection (FBP) by leveraging a high-fidelity forward model and sophisticated regularization. In FPD-CBCT TBI imaging, the measurement noise characteristics undergo substantial change following artifact correction, resulting in non-negligible noise amplification. In this work, we extend penalized weighted least-squares (PWLS) image reconstruction to include the two dominant artifact corrections (scatter and beam hardening) in FPD-CBCT TBI imaging by correctly modeling the variance change following each correction. Experiments were performed on a CBCT test-bench using an anthropomorphic phantom emulating intra-parenchymal hemorrhage in acute TBI, and the proposed method demonstrated an improvement in blood-brain contrast-to-noise ratio (CNR = 14.2) compared to FBP (CNR = 9.6) and PWLS using conventional weights (CNR = 11.6) at fixed spatial resolution (1 mm edge-spread width at the target contrast). The results support the hypothesis that FPD-CBCT can fulfill the image quality requirements for reliable TBI detection, using high-fidelity artifact correction and statistical reconstruction with accurate post-artifact-correction noise models.
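As a toy illustration of a post-artifact-correction noise model (and not the authors' full model, which also covers beam hardening), the Python fragment below propagates Poisson variance through a simple scatter subtraction and log transform to obtain modified penalized weighted least-squares weights. Counts, scatter estimates, and the unattenuated fluence are arbitrary placeholder numbers.

# For a Poisson measurement y with scatter estimate s, the corrected line integral
# l = -log((y - s) / y0) has approximate variance var(y) / (y - s)^2 by error
# propagation, so the PWLS weight becomes (y - s)^2 / y instead of y.
import numpy as np

y0 = 1.0e5                                   # unattenuated counts (assumed)
y = np.array([2.0e3, 5.0e3, 2.0e4])          # measured counts
s = np.array([4.0e2, 6.0e2, 1.0e3])          # estimated scatter

line_integral = -np.log((y - s) / y0)
w_conventional = y                           # ignores the correction
w_post_correction = (y - s) ** 2 / y         # models noise after scatter subtraction

print("corrected line integrals:", np.round(line_integral, 3))
print("relative down-weighting :", np.round(w_post_correction / w_conventional, 3))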
Fat-constrained 18F-FDG PET reconstruction using Dixon MR imaging and the origin ensemble algorithm
Christian Wülker, Susanne Heinzer, Peter Börnert, et al.
Combined PET/MR imaging allows the high-resolution anatomical information delivered by MRI to be incorporated into the PET reconstruction algorithm, improving PET accuracy beyond standard corrections. We used the working hypothesis that glucose uptake in adipose tissue is low. Thus, our aim was to shift 18F-FDG PET signal into image regions with a low fat content. Dixon MR imaging can be used to generate fat-only images via the water/fat chemical shift difference. The Origin Ensemble (OE) algorithm, a novel Markov chain Monte Carlo method, allows PET data to be reconstructed without forward- and back-projection operations. By adequate modification of the Markov chain transition kernel, it is possible to include anatomical a priori knowledge in the OE algorithm. In this work, we used the OE algorithm to reconstruct PET data of a modified IEC/NEMA body phantom simulating body water/fat composition. Reconstruction was performed 1) natively, 2) informed with the Dixon MR fat image to down-weight 18F-FDG signal in fatty tissue compartments in favor of adjacent regions, and 3) informed with the fat image to up-weight 18F-FDG signal in fatty tissue compartments, for control purposes. Image intensity profiles confirmed the visibly improved contrast and reduced partial volume effect at water/fat interfaces. We observed a 17±2% increase in the SNR of hot lesions surrounded by fat, while image quality was almost completely retained in fat-free image regions. An additional in vivo experiment proved the applicability of the presented technique in practice and again verified the beneficial impact of fat-constrained OE reconstruction on PET image quality.
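The snippet below is a deliberately simplified, hypothetical illustration of how an anatomical prior could bias an origin-ensemble-style move: candidate origin voxels along one line of response are drawn with probabilities proportional to the current event counts times a weight derived from the Dixon fat fraction. The actual transition-kernel modification is specified in the paper; all numbers here are invented.

# Toy illustration of biasing an origin-ensemble-style move with an anatomical prior.
import numpy as np

rng = np.random.default_rng(0)
current_counts = np.array([5.0, 8.0, 3.0, 6.0])     # events currently in each candidate voxel
fat_fraction = np.array([0.05, 0.90, 0.10, 0.80])   # from the Dixon fat image (made up)

weight = current_counts * (1.0 - fat_fraction)      # down-weight fatty voxels
prob = weight / weight.sum()

new_origin = rng.choice(len(prob), p=prob)          # draw a new origin for one event
print("probabilities:", np.round(prob, 3), "-> chosen voxel", new_origin)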
Feasibility of CT-based 3D anatomic mapping with a scanning-beam digital x-ray (SBDX) system
This study investigates the feasibility of obtaining CT-derived 3D surfaces from data provided by the scanning-beam digital x-ray (SBDX) system. Simulated SBDX short-scan acquisitions of a Shepp-Logan and a thorax phantom containing a high contrast spherical volume were generated. 3D reconstructions were performed using a penalized weighted least squares method with total variation regularization (PWLS-TV), as well as a more efficient variant employing gridding of projection data to parallel rays (gPWLS-TV). Voxel noise, edge blurring, and surface accuracy were compared to gridded filtered back projection (gFBP). PWLS reconstruction of a noise-free reduced-size Shepp-Logan phantom had 1.4% rRMSE. In noisy gPWLS-TV reconstructions of a reduced-size thorax phantom, 99% of points on the segmented sphere perimeter were within 0.33, 0.47, and 0.70 mm of the ground truth, respectively, for fluences comparable to imaging through 18.0, 27.2, and 34.6 cm acrylic. Surface accuracies of gFBP and gPWLS-TV were similar at high fluences, while gPWLS-TV offered improvement at the lowest fluence. The gPWLS-TV voxel noise was reduced by 60% relative to gFBP, on average. High-contrast linespread functions measured 1.25 mm and 0.96 mm (FWHM) for gPWLS-TV and gFBP. In a simulation of gated and truncated projection data from a full-sized thorax, gPWLS-TV reconstruction yielded segmented surface points which were within 1.41 mm of ground truth. Results support the feasibility of 3D surface segmentation with SBDX. Further investigation of artifacts caused by data truncation and patient motion is warranted.
Clinical image benefits after model-based reconstruction for low dose dedicated breast tomosynthesis
Model-based iterative reconstruction (MBIR) is implemented to process full clinical data sets of dedicated breast tomosynthesis (DBT) under a low-dose condition and achieves less spreading of anatomical structure between slices. MBIR is a statistics-based reconstruction that can control the trade-off between data fitting and image regularization. In this study, regularization is formulated with anisotropic prior weighting that independently controls the image regularization between in-plane and out-of-plane voxel neighbors. Studies at complete and partial convergence show that the appropriate formulation of the data-fit and regularization terms, along with anisotropic prior weighting, leads to a solution with improved localization of objects within a narrower range of slices. This result is compared with solutions from the simultaneous iterative reconstruction technique (SIRT), one of the state-of-the-art reconstruction methods for DBT. MBIR yields higher contrast-to-noise ratio for medium and large microcalcifications and diagnostic structures in volumetric breast images and supports the opportunity for dose reduction in 3D breast imaging.
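To make the notion of anisotropic prior weighting concrete, the sketch below evaluates a quadratic roughness penalty in which between-slice (depth) neighbor differences receive a different weight than in-plane differences. The weight values and volume are arbitrary placeholders; the paper's actual penalty and weighting scheme may differ.

# Minimal sketch of anisotropic neighborhood weighting in a quadratic roughness penalty.
import numpy as np

def anisotropic_penalty(vol, w_inplane=1.0, w_depth=0.2):
    dz = np.diff(vol, axis=0)   # between-slice differences (depth direction)
    dy = np.diff(vol, axis=1)   # in-plane differences
    dx = np.diff(vol, axis=2)
    return (w_depth * np.sum(dz**2)
            + w_inplane * (np.sum(dy**2) + np.sum(dx**2)))

vol = np.random.default_rng(1).normal(size=(16, 32, 32))   # placeholder volume
print("penalty value:", round(anisotropic_penalty(vol), 2))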
Rank-sparsity constrained, spectro-temporal reconstruction for retrospectively gated, dynamic CT
D. P. Clark, C. L. Lee, D. G. Kirsch, et al.
Relative to prospective projection gating, retrospective projection gating for dynamic CT applications allows fast imaging times, minimizing the potential for physiological and anatomic variability. Preclinically, fast imaging is attractive due to the rapid clearance of low-molecular-weight contrast agents and the rapid heart rate of rodents. Clinically, retrospective gating is relevant for intraoperative C-arm CT. More generally, retrospective sampling provides an opportunity for significant reduction in x-ray dose within the framework of compressive sensing theory and sparsity-constrained iterative reconstruction. Even so, CT reconstruction from projections with random temporal sampling is a very poorly conditioned inverse problem, requiring high-fidelity regularization to minimize variability in the reconstructed results. Here, we introduce a novel data acquisition and regularization strategy for spectro-temporal (5D) CT reconstruction from retrospectively gated projections. We show that, by taking advantage of the rank-sparse structure and the separability of the temporal and spectral reconstruction sub-problems, solving each sub-problem independently effectively guarantees that both can be solved together. In this paper, we show 4D simulation results (2D + 2 energies + time) using the proposed technique and compare them with two competing techniques: spatio-temporal total variation minimization and prior image constrained compressed sensing. We also show in vivo, 5D (3D + 2 energies + time) myocardial injury data acquired in a mouse, reconstructing 20 data sets (10 phases, 2 energies) and performing material decomposition from data acquired over a single rotation (360°, dose: ~60 mGy).
Detector Technology
Low-dose performance of wafer-scale CMOS-based X-ray detectors
Willem H. Maes, Inge M. Peters, Chiel Smit, et al.
Compared to published amorphous-silicon (TFT) based X-ray detectors, crystalline-silicon CMOS-based active-pixel detectors exploit the low noise, high speed, on-chip integration, and feature set offered by CMOS technology. This presentation focuses on the specific advantage of high image quality at very low dose levels. The measurement of very-low-dose performance parameters such as the detective quantum efficiency (DQE) and noise equivalent dose (NED) is a challenge in itself. Second-order effects like defect-pixel behavior, temporal and quantization noise, dose measurement accuracy, and limitations of the x-ray source settings influence measurements under very low dose conditions. An analytical model that predicts the low-dose behavior of a detector from parameters extracted at shot-noise-limited dose levels is presented. Such models can also provide input for a simulation environment for optimizing the performance of future detectors. In this paper, models for predicting the NED and the DQE at very low dose are compared to measurements on different CMOS detectors. Their validity for different sensor and optical stack combinations, as well as for different x-ray beam conditions, was assessed.
Apodized-aperture pixel design to increase high-frequency DQE and reduce noise aliasing in x-ray detectors
The detective quantum efficiency (DQE) of an x-ray detector, expressed as a function of spatial frequency, describes the ability to produce high-quality images relative to an ideal detector. While the DQE normally decreases substantially with increasing frequency, we describe an approach that can be used to improve the DQE response by increasing the DQE at high spatial frequencies. The approach makes use of an apodized-aperture pixel (AAP) design that requires the use of a high-resolution x-ray converter, such as selenium, coupled to a sensor array with very small physical sensor elements, such as CMOS sensors. While sensors with elements of 10 - 25 μm are too small for most practical applications in medical radiography, we describe how larger image pixels of a practical size can be synthesized to provide a better DQE than simple binning or using physical pixels of the same size. A theoretical cascaded-systems analysis shows the DQE at the image sampling cut-off frequency can be improved by up to a factor of 2.5. The AAP approach was validated experimentally using a CMOS/CsI-based detector having 0.05-mm sensor elements. Using AAP images with 0.2-mm pixels, the high-frequency DQE value was increased from 0.2 to 0.4 compared to simple 4x4 binning. It is concluded that ultra-high-resolution sensors can be used to optimize the high-frequency performance of x-ray detectors and make substantial improvements in image quality for visualization of small structures and fine image detail in comparison to current imaging systems.
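The 1D Python sketch below conveys the flavor of synthesizing larger pixels with an apodized kernel instead of simple binning: it compares the frequency response of a 4-sample boxcar (plain binning) with that of an assumed Hamming-windowed sinc kernel at and above the synthesized pixel's cutoff frequency, where a lower response beyond the cutoff means less noise aliasing. The kernel is an illustrative choice, not the apodization specified in the paper.

# 1D comparison of boxcar binning vs an apodized synthesis kernel (illustrative only).
import numpy as np

n_sub = 4                                    # e.g. 0.05 mm elements -> 0.2 mm pixels
boxcar = np.ones(n_sub) / n_sub              # simple 4x binning aperture

k = np.arange(-8, 9)
apodized = np.sinc(k / n_sub) * np.hamming(k.size)   # assumed windowed-sinc kernel
apodized /= apodized.sum()

def response_at(kernel, f):                  # |sum_n kernel[n] * exp(-i 2 pi f n)|
    n = np.arange(kernel.size)
    return abs(np.sum(kernel * np.exp(-2j * np.pi * f * n)))

f_nyq = 0.5 / n_sub                          # cutoff of the synthesized pixel grid
for f in (f_nyq, 1.5 * f_nyq):
    print("f = %.4f cyc/sample  boxcar %.3f  apodized %.3f"
          % (f, response_at(boxcar, f), response_at(apodized, f)))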
Low dose digital X-ray imaging with avalanche amorphous selenium
James R. Scheuermann, Amir H. Goldan, Olivier Tousignant, et al.
Active matrix flat panel imagers (AMFPI) based on arrays of thin-film transistors (TFT) have become the dominant technology for digital x-ray imaging. In low-dose applications, the performance of both direct and indirect conversion detectors is limited by the electronic noise associated with the TFT array. New direct and indirect detector concepts have been proposed using avalanche amorphous selenium (a-Se), referred to as high-gain avalanche rushing photoconductor (HARP). The indirect detector utilizes a planar layer of HARP to detect light from an x-ray scintillator and amplify the photogenerated charge. The direct detector utilizes separate interaction (non-avalanche) and amplification (avalanche) regions within the a-Se to achieve depth-independent signal gain. Both detectors require the development of large-area, solid-state HARP. We have previously reported the first avalanche gain in a-Se achieved with deposition techniques scalable to large-area detectors. The goal of the present work is to demonstrate the feasibility of large-area HARP fabrication in an a-Se deposition facility established for commercial large-area AMFPI. We also examine the effect of alternative pixel electrode materials on avalanche gain. The results show that avalanche gain > 50 is achievable in the HARP layers produced in large-area coaters, which is sufficient to achieve x-ray quantum-noise-limited performance down to a single x-ray photon per pixel. Both chromium (Cr) and indium tin oxide (ITO) have been successfully tested as pixel electrodes.
Multi-energy imagers for a radiotherapy treatment environment
Larry E. Antonuk, Langechuan Liu, Albert K. Liang, et al.
Over the last ~15 years, the central goal in external beam radiotherapy of maximizing dose to the tumor while minimizing dose to surrounding normal tissues has been greatly facilitated by the development and clinical implementation of many innovations. These include megavoltage active matrix flat-panel imagers (MV AMFPIs) designed to image the treatment beam, and separate kilovoltage (kV) AMFPIs and x-ray sources designed to provide high-contrast projection and cone-beam CT images in the treatment room. While these systems provide clinically valuable information, a variety of advantages would accrue through introduction of the capability to produce clinically useful, high quality imaging information at multiple energies (e.g., kV and MV) from a single detector along the treatment beam direction. One possible approach for achieving this goal involves substitution of the x-ray converters used in conventional MV AMFPIs with thick, segmented crystalline scintillators designed for dual-energy operation, coupled with the addition of x-ray imaging beams that contain a significant diagnostic component. A second approach involves introduction of a large area, monolithic array of photon counting pixels with multiple energy thresholds and event counters, which could provide multi-spectral views of the treatment beam with improved contrast. In this paper, the motivations behind, and the merits of each approach are described. In addition, prospects for such dual-energy imagers and photon counting array designs are discussed in the context of the radiotherapy environment.
Investigation of the screen optics of thick CsI(Tl) detectors
Adrian Howansky, Boyu Peng, Katsuhiko Suzuki, et al.
Flat-panel imagers (FPIs) are becoming the dominant detector technology for digital x-ray imaging. In indirect FPIs, the scintillator that provides the highest image quality is thallium-doped cesium iodide, CsI(Tl), with a columnar structure. The maximum CsI thickness used in existing FPIs is ~600 microns, due to concerns about loss of spatial resolution and light output with further increases in thickness. The goal of the present work is to investigate the screen optics of CsI with thicknesses much larger than those used in existing FPIs, so that the knowledge can be used to improve imaging performance in dose-sensitive and higher-energy applications such as cone-beam CT (CBCT). Columnar CsI(Tl) scintillators up to 1 mm in thickness with different screen-optical designs were investigated experimentally. Pulse height spectra (PHS) were measured to determine the Swank factor at x-ray energies between 25 and 75 keV and to derive the depth-dependent light escape efficiency, i.e., gain. Detector presampling MTF, NPS, and DQE were measured using a high-resolution CMOS optical sensor. Optical Monte Carlo simulation was performed to estimate optical parameters for each screen design and derive depth-dependent gain and MTF, from which the overall MTF and DQE were calculated and compared with measured results. The depth-dependent imaging performance parameters were then used in a cascaded linear-system model (CLSM) to investigate detector performance under screen-side and sensor-side irradiation conditions. The methodology developed for understanding the optics of thick CsI(Tl) will lead to detector optimization for CBCT.
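Since the abstract leans on the Swank factor derived from pulse-height spectra, the short sketch below computes it from the standard moment definition I = M1^2 / (M0 * M2), applied to a synthetic Gaussian spectrum; the spectrum itself is invented for illustration.

# Swank factor from a pulse-height spectrum using I = M1^2 / (M0 * M2).
import numpy as np

light_output = np.arange(1, 201)                          # PHS abscissa (arbitrary units)
counts = np.exp(-0.5 * ((light_output - 120) / 25)**2)    # synthetic PHS

M0 = np.sum(counts)
M1 = np.sum(counts * light_output)
M2 = np.sum(counts * light_output**2)

swank = M1**2 / (M0 * M2)
print("Swank factor:", round(swank, 4))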
Phase Contrast Imaging
New signal extraction method in x-ray differential phase contrast imaging with a tilted collinear analyzer grating
In a grating-interferometer-based x-ray differential phase contrast (DPC) imaging system, an analyzer grating (i.e., a G2 grating) is typically used to help record the refraction information obtained with the phase-stepping technique. Such a method requires sequential movement of the G2 grating as well as multiple x-ray exposures to perform phase stepping, and thus conventional DPC imaging is very time-consuming. It also suffers from mechanical instability due to the movement of the G2 grating. To accelerate data acquisition and achieve single-shot x-ray DPC imaging with a collinear-type G2 grating, a new signal extraction method has been investigated in this study. With this alternative approach, a non-zero angle of rotation between the diffraction pattern (generated by the G1 grating) and the collinear G2 grating is used during the entire data acquisition. Due to this deliberate grating misalignment, a visible moiré pattern with a certain period is detected. Initial experiments have demonstrated that this new signal extraction method is able to provide three different types of signal: absorption, differential phase, and dark field. Although the spatial resolution of the differential phase and dark field images is blurred by several pixels due to the interpolation operation used, the absorption image maintains the same spatial resolution as in conventional x-ray imaging. This novel signal analysis method enables single-shot DPC imaging and can greatly reduce the data acquisition time, thus facilitating the implementation of DPC imaging in the medical field.
Phase-contrast imaging using radiation sources based on laser-plasma wakefield accelerators: state of the art and future development
D. Reboredo, S. Cipiccia, P. A. Grant, et al.
Both the laser-plasma wakefield accelerator (LWFA) and X-ray phase-contrast imaging (XPCi) are promising technologies that are attracting the attention of the scientific community. Conventional X-ray absorption imaging cannot be used as a means of imaging biological material because of low contrast. XPCi overcomes this limitation by exploiting the variation of the refractive index of materials. The contrast obtained is higher than for conventional absorption imaging and requires a lower dose. The LWFA is a new concept of acceleration in which electrons are accelerated to very high energy (~150 MeV) over very short distances (mm scale) by surfing plasma waves excited by the passage of an ultra-intense laser pulse (~10¹⁸ W cm⁻²) through plasma. Electrons in the LWFA can undergo transverse oscillation and emit synchrotron-like (betatron) radiation in a narrow cone around the propagation axis. The properties of the betatron radiation produced by LWFA, such as source size and spectrum, make it an excellent candidate for XPCi. In this work we present the characterization of betatron radiation produced by the LWFA in the ALPHA-X laboratory (University of Strathclyde). We show how phase-contrast images can be obtained using the betatron radiation in a free-space propagation configuration, and we discuss the potential and limitations of LWFA-driven XPCi.
Laboratory implementation of edge illumination X-ray phase-contrast imaging with energy-resolved detectors
P. C. Diemoz, M. Endrizzi, F. A. Vittoria, et al.
Edge illumination (EI) X-ray phase-contrast imaging (XPCI) has potential for applications in different fields of research, including materials science, non-destructive industrial testing, small-animal imaging, and medical imaging. One of its main advantages is compatibility with laboratory equipment, in particular with conventional non-microfocal sources, which makes its exploitation in ordinary research laboratories possible. In this work, we demonstrate that the signal in laboratory implementations of EI can be correctly described using simplified geometrical optics. Besides enabling the derivation of simple expressions for the sensitivity and spatial resolution of a given EI setup, this model also highlights EI's achromaticity. With the aim of improving image quality, and to take advantage of the fact that all energies in the spectrum contribute to the image contrast, we carried out EI acquisitions using a photon-counting, energy-resolved detector. The results obtained demonstrate that this approach has great potential for future laboratory implementations of EI.
Small animal lung imaging with an in-line X-ray phase contrast benchtop system
A. B. Garson III, S. Gunsten, H. Guan, et al.
We present the results from a benchtop X-ray phase-contrast (XPC) method for lung imaging that represents a paradigm shift in the way small animal lung imaging is performed. In our method, information regarding airway microstructure that is encoded within speckle texture of a single XPC radiograph is decoded to spatially resolve changes in lung properties such as microstructure sizes, air volumes, and compliance, to name a few. Such functional information cannot be derived from conventional lung radiography or any other 2D imaging modality. By computing these images at different time points within a breathing cycle, dynamic functional imaging can be potentially achieved without the need for tomography.
Redefining the lower statistical limit in x-ray phase-contrast imaging
M. Marschner, L. Birnbacher, M. Willner, et al.
Phase-contrast x-ray computed tomography (PCCT) is currently being investigated and developed as a potentially very interesting extension of conventional CT, because it promises high soft-tissue contrast for weakly absorbing samples. For data acquisition, several images at different grating positions are combined to obtain a phase-contrast projection. For short exposure times, which are necessary for lower radiation dose, the photon counts at a single stepping position are very low. In this case, the currently used phase retrieval does not provide reliable results for some pixels. This uncertainty results in statistical phase wrapping, which leads to a higher standard deviation in the phase-contrast projections than theoretically expected. For even lower statistics, the phase retrieval breaks down completely and the phase information is lost. New measurement procedures rely on a linear approximation of the sinusoidal phase-stepping curve around the zero crossings. In this case, only two images are acquired to obtain the phase-contrast projection. The approximation is only valid for small phase values; however, nearly all pixels typically lie within this regime due to the differential nature of the signal. We examine the statistical properties of a linear approximation method and illustrate by simulation and experiment that the lower statistical limit can be redefined using this method. This means that the phase signal can be retrieved even with very low photon counts, and statistical phase wrapping can be avoided. This is an important step towards enhanced image quality in PCCT with very low photon counts.
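A simplified numerical illustration of a two-image linearization is given below. It assumes the stepping curve I(x) = a(1 + V sin(phi + 2*pi*x/p)) is sampled at its two opposite-slope zero crossings, so that phi ≈ (I1 - I2) / (V (I1 + I2)) for small phi, with the visibility V taken as known from a reference scan. This is an assumed simplification for intuition, not the statistical analysis developed in the paper.

# Two-image linearization of the phase-stepping curve around the zero crossings.
import numpy as np

rng = np.random.default_rng(2)
a, V, phi_true = 200.0, 0.3, 0.05            # mean counts, visibility, true phase (made up)

I1 = rng.poisson(a * (1 + V * np.sin(phi_true)), size=10000)   # zero crossing, rising slope
I2 = rng.poisson(a * (1 - V * np.sin(phi_true)), size=10000)   # zero crossing, falling slope

phi_est = (I1 - I2) / (V * (I1 + I2))        # small-phase linear estimate
print("mean estimate:", round(phi_est.mean(), 4), " std:", round(phi_est.std(), 4))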
WORKSHOP: Uncertainties in the Medical Imaging Chain
Quantifying and reducing uncertainties in cancer therapy
Harrison H. Barrett, David S. Alberts, James M. Woolfenden, et al.
There are two basic sources of uncertainty in cancer chemotherapy: how much of the therapeutic agent reaches the cancer cells, and how effective it is in reducing or controlling the tumor when it gets there. There is also a concern about adverse effects of the therapy drug. Similarly in external-beam radiation therapy or radionuclide therapy, there are two sources of uncertainty: delivery and efficacy of the radiation absorbed dose, and again there is a concern about radiation damage to normal tissues. The therapy operating characteristic (TOC) curve, developed in the context of radiation therapy, is a plot of the probability of tumor control vs. the probability of normal-tissue complications as the overall radiation dose level is varied, e.g. by varying the beam current in external-beam radiotherapy or the total injected activity in radionuclide therapy. The TOC can be applied to chemotherapy with the administered drug dosage as the variable. The area under a TOC curve (AUTOC) can be used as a figure of merit for therapeutic efficacy, analogous to the area under an ROC curve (AUROC), which is a figure of merit for diagnostic efficacy. In radiation therapy AUTOC can be computed for a single patient by using image data along with radiobiological models for tumor response and adverse side effects. In this paper we discuss the potential of using mathematical models of drug delivery and tumor response with imaging data to estimate AUTOC for chemotherapy, again for a single patient. This approach provides a basis for truly personalized therapy and for rigorously assessing and optimizing the therapy regimen for the particular patient. A key role is played by Emission Computed Tomography (PET or SPECT) of radiolabeled chemotherapy drugs.
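For readers unfamiliar with the figure of merit, the fragment below computes the area under a TOC curve by trapezoidal integration, in direct analogy with AUROC. The curve points (probability of tumor control versus probability of normal-tissue complication, swept by the overall dose level) are hypothetical.

# Area under a therapy operating characteristic (TOC) curve by trapezoidal integration.
import numpy as np

p_complication = np.array([0.00, 0.02, 0.05, 0.10, 0.25, 0.50, 1.00])   # x-axis
p_tumor_control = np.array([0.00, 0.30, 0.55, 0.75, 0.90, 0.97, 1.00])  # y-axis

autoc = np.trapz(p_tumor_control, p_complication)
print("AUTOC:", round(autoc, 3))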
Algorithmic Developments
Optimization and image quality assessment of the alpha-image reconstruction algorithm: iterative reconstruction with well-defined image quality metrics
Sergej Lebedev, Stefan Sawall, Stefan Kuchenbecker, et al.
The reconstruction of CT images with low noise and the highest spatial resolution is a challenging task. Usually, a trade-off between at least these two demands has to be found, or several reconstructions with mutually exclusive properties, i.e., either low noise or high spatial resolution, have to be performed. Iterative reconstruction methods might be suitable tools to overcome these limitations and provide images of the highest diagnostic quality with formerly mutually exclusive image properties. While image quality metrics like the modulation transfer function (MTF) or the point spread function (PSF) are well defined for standard reconstructions, e.g., filtered backprojection, iterative algorithms lack these metrics. To overcome this issue, alternative methodologies such as model observers have recently been proposed to allow quantification of a usually task-dependent image quality metric [1]. As an alternative, we recently proposed an iterative reconstruction method, the alpha-image reconstruction (AIR), that provides well-defined image quality metrics on a per-voxel basis [2]. In particular, the AIR algorithm seeks to find weighting images, the alpha-images, that are used to blend between basis images with mutually exclusive image properties. The result is an image of the highest diagnostic quality that provides high spatial resolution and a low noise level. As the estimation of the alpha-images is computationally demanding, we herein aim to optimize this process and highlight the favorable properties of AIR using patient measurements.
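The core blending step described above is simple to state in code; the sketch below mixes a low-noise basis image with a high-resolution basis image using a per-voxel alpha image. How the alpha image itself is estimated is the subject of the paper; here it is just a given array of placeholder values.

# Per-voxel blending of two basis images with an alpha (weighting) image.
import numpy as np

rng = np.random.default_rng(3)
img_low_noise = rng.normal(100.0, 1.0, size=(4, 4))    # smooth basis image (placeholder)
img_high_res = rng.normal(100.0, 10.0, size=(4, 4))    # sharp but noisy basis image (placeholder)
alpha = rng.uniform(0.0, 1.0, size=(4, 4))             # per-voxel weighting image (placeholder)

blended = alpha * img_high_res + (1.0 - alpha) * img_low_noise
print(np.round(blended, 1))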
An example-based brain MRI simulation framework
Qing He, Snehashis Roy, Amod Jog, et al.
The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms, such as image segmentation, due to the lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition, these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an “atlas” consisting of an MR image and anatomical models derived from its hard segmentation. The relationships between the MR image intensities and the anatomical models are learned using a patch-based regression that implicitly models the physics of MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on a statistical model of the training data. Results show that the example-based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more closely than simulations produced by a physics-based model.
Motion estimation and compensation for coronary artery and myocardium in cardiac CT
Qiulin Tang, James Matthews, Marco Razeto, et al.
Motion blurring is still a challenge for cardiac CT imaging. A new motion estimation (ME) and motion compensation (MC) method is developed for cardiac CT. The proposed method estimates the motion of the entire heart and then applies motion compensation. Therefore, it reduces motion artifacts not only in the coronary artery region, as most other methods do, but also in the myocardium. In the motion-compensated reconstruction, we use the Fourier transform method proposed by Pack et al. to obtain a series of partial images, which are then warped and summed to obtain the final motion-compensated images. The robustness and performance of the proposed method were verified with data from 10 patients, and improvements in the sharpness of both the coronary arteries and the myocardium were obtained.
Estimating ROI activity concentration with photon-processing and photon-counting SPECT imaging systems
Recently, a new class of imaging systems, referred to as photon-processing (PP) systems, is being developed that uses real-time maximum-likelihood (ML) methods to estimate multiple attributes per detected photon and store these attributes in a list format. PP systems could have a number of potential advantages compared to systems that bin photons based on attributes such as energy, projection angle, and position, referred to as photon-counting (PC) systems. For example, PP systems do not suffer from binning-related information loss and provide the potential to extract information from attributes such as the energy deposited by the detected photon. To quantify the effect of this advantage on task performance, objective evaluation studies are required. We performed such a study in the context of quantitative 2-dimensional single-photon emission computed tomography (SPECT) imaging with the end task of estimating the mean activity concentration within a region of interest (ROI). We first theoretically outline the effect of null space on estimating the mean activity concentration and argue that, due to this effect, PP systems could have better estimation performance than PC systems with noise-free data. To evaluate the performance of PP and PC systems with noisy data, we developed a singular value decomposition (SVD)-based analytic method to estimate the activity concentration from PP systems. Using simulations, we studied the accuracy and precision of this technique in estimating the activity concentration. We used this framework to objectively compare PP and PC systems on the activity concentration estimation task. We investigated the effects of varying the size of the ROI and varying the number of bins for the attribute corresponding to the angular orientation of the detector in a continuously rotating SPECT system. The results indicate that in several cases PP systems offer improved estimation performance compared to PC systems.
Monte Carlo simulation of inverse geometry x-ray fluoroscopy using a modified MC-GPU framework
Scanning-Beam Digital X-ray (SBDX) is a technology for low-dose fluoroscopy that employs inverse geometry x-ray beam scanning. To assist with rapid modeling of inverse geometry x-ray systems, we have developed a Monte Carlo (MC) simulation tool based on the MC-GPU framework. MC-GPU version 1.3 was modified to implement a 2D array of focal spot positions on a plane, with individually adjustable x-ray outputs, each producing a narrow x-ray beam directed toward a stationary photon-counting detector array. Geometric accuracy and blurring behavior in tomosynthesis reconstructions were evaluated from simulated images of a 3D arrangement of spheres. The artifact spread function from simulation agreed with experiment to within 1.6% (rRMSD). Detected x-ray scatter fraction was simulated for two SBDX detector geometries and compared to experiments. For the current SBDX prototype (10.6 cm wide by 5.3 cm tall detector), x-ray scatter fraction measured 2.8-6.4% (18.6-31.5 cm acrylic, 100 kV), versus 2.2-5.0% in MC simulation. Experimental trends in scatter versus detector size and phantom thickness were observed in simulation. For dose evaluation, an anthropomorphic phantom was imaged using regular and regional adaptive exposure (RAE) scanning. The reduction in kerma-area-product resulting from RAE scanning was 45% in radiochromic film measurements, versus 46% in simulation. The integral kerma calculated from TLD measurement points within the phantom was 57% lower when using RAE, versus 61% lower in simulation. This MC tool may be used to estimate tomographic blur, detected scatter, and dose distributions when developing inverse geometry x-ray systems.
Computed Tomography I
Image-based material decomposition with a general volume constraint for photon-counting CT
Photon-counting CT (PCCT) potentially offers both improved dose efficiency and material decomposition capabilities relative to CT systems using energy-integrating detectors. With respect to material decomposition, both projection-based and image-based methods have been proposed, most of which require accurate a priori information regarding the shape of the x-ray spectra and the response of the detectors. Additionally, projection-based methods require access to projection data. These data can be difficult to obtain, since spectra, detector response, and projection data formats are proprietary information. Further, some published image-based, three-material decomposition methods require a volume conservation assumption, which is often violated in solutions. We have developed an image-based material decomposition method that can overcome these limitations. We introduced a general volume-constraint condition that does not require the volume to be conserved in a mixture. An empirical calibration can be performed with various concentrations of basis materials. The material decomposition method was applied to images acquired from a prototype whole-body PCCT scanner. The results showed good agreement between the estimated and known mass concentration values. Factors affecting the performance of material decomposition, such as energy threshold configuration and the volume conservation constraint, were also investigated. Changes in the accuracy of the mass concentration estimates were demonstrated for four different energy configurations and when volume conservation was assumed.
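As a purely schematic companion, the sketch below performs a per-voxel image-domain decomposition in which the energy-bin values are modeled as a calibrated linear combination of basis-material fractions, with an extra volume-constraint row that is only weakly weighted so the volume need not be conserved. All calibration numbers and bin values are invented; the paper's calibration and constraint formulation may differ.

# Per-voxel image-domain decomposition with a relaxed volume-constraint row.
import numpy as np

# rows: three energy-bin measurements plus one (relaxed) volume-constraint row
A = np.array([[45.0, 30.0, 18.0],     # CT number per unit basis fraction, bin 1
              [30.0, 24.0, 25.0],     # CT number per unit basis fraction, bin 2
              [22.0, 19.0, 28.0],     # CT number per unit basis fraction, bin 3
              [1.0,  1.0,  1.0]])     # sum of volume fractions
b = np.array([60.0, 48.0, 41.0, 1.0]) # bin values for one voxel + target volume sum
w = np.diag([1.0, 1.0, 1.0, 0.1])     # small weight: volume need not be conserved

x, *_ = np.linalg.lstsq(w @ A, w @ b, rcond=None)
print("estimated basis fractions:", np.round(x, 3))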
Fluence field modulated CT on a clinical TomoTherapy radiation therapy machine
Timothy P. Szczykutowicz, James Hermus
Purpose: The multi-leaf collimator (MLC) assembly present on TomoTherapy (Accuray, Madison, WI) radiation therapy (RT) and megavoltage CT machines is well suited to perform fluence field modulated CT (FFMCT). In addition, there is demand in the RT environment for FFMCT imaging techniques, specifically volume-of-interest (VOI) imaging. Methods: A clinical TomoTherapy machine was programmed to deliver 30% imaging dose outside predefined VOIs. Four different-sized VOIs were placed at varying distances from isocenter. Projections intersecting the VOI received "full dose" while those not intersecting the VOI received 30% of the dose (i.e., the incident fluence for non-VOI projections was 30% of the incident fluence for projections intersecting the VOI). Additional scans without fluence field modulation were acquired at "full" and 30% dose. The noise (pixel standard deviation) was measured inside the VOI region and compared between the three scans. Results: The VOI-FFMCT technique produced image noise 1.09, 1.05, 1.05, and 1.21 times higher than the "full dose" scan within the VOI region for VOI sizes of 10 cm, 13 cm, 10 cm, and 6 cm, respectively. Conclusions: Noise levels can be almost unchanged within clinically relevant VOI sizes for RT applications while the integral imaging dose to the patient is decreased, and/or the image quality in RT can be dramatically increased with no change in dose relative to non-FFMCT RT imaging. The ability to shift dose away from regions unimportant for clinical evaluation in order to improve image quality or reduce imaging dose has been demonstrated. This paper demonstrates for the first time that FFMCT can be performed using the MLC on a clinical TomoTherapy machine.
Dual-energy imaging of bone marrow edema on a dedicated multi-source cone-beam CT system for the extremities
W. Zbijewski, A. Sisniega, J. W. Stayman, et al.
Purpose: Arthritis and bone trauma are often accompanied by bone marrow edema (BME). BME is challenging to detect in CT due to the overlying trabecular structure but can be visualized using dual-energy (DE) techniques to discriminate water and fat. We investigate the feasibility of DE imaging of BME on a dedicated flat-panel detector (FPD) extremities cone-beam CT (CBCT) system with a unique x-ray tube with three longitudinally mounted sources. Methods: Simulations involved a digital BME knee phantom imaged with a 60 kVp low-energy (LE) beam and a 105 kVp high-energy (HE) beam (+0.25 mm Ag filter). Experiments were also performed on a test-bench with a Varian 4030CB FPD using the same beam energies as the simulation study. A three-source configuration was implemented with x-ray sources distributed along the longitudinal axis and a DE CBCT acquisition in which the superior and inferior sources operate at HE (each collecting half of the projection angles) and the central source operates at LE. Three-source DE CBCT was compared to a double-scan, single-source orbit. Experiments were performed with a wrist phantom containing a 50 mg/ml densitometry insert submerged in alcohol (simulating fat) with drilled trabeculae down to ~1 mm to emulate the trabecular matrix. Reconstruction-based three-material decomposition of fat, soft tissue, and bone was performed. Results: For a low-dose scan (36 mAs in the HE and LE data), DE CBCT achieved a combined accuracy of ~0.80 for a pattern of spherical BME lesions ranging from 2.5 to 10 mm in diameter in the knee phantom. The accuracy increased to ~0.90 for a 360 mAs scan. Excellent DE discrimination of the base materials was achieved in the experiments. Approximately 80% of the alcohol (fat) voxels in the trabecular phantom were properly identified for both single- and three-source acquisitions, indicating the ability to distinguish edematous tissue (water-equivalent plastic in the body of the densitometry insert) from the fat inside the trabecular matrix (emulating normal trabecular bone with a significant fraction of yellow marrow). Conclusion: Detection of BME and quantification of water and fat content were achieved in extremities DE CBCT with a longitudinal configuration of sources providing DE imaging in a single gantry rotation. The findings support the development of DE imaging capability for CBCT of the extremities in areas conventionally in the domain of MRI.
Initial results from a prototype whole-body photon-counting computed tomography system
X-ray computed tomography (CT) with energy-discriminating capabilities presents exciting opportunities for increased dose efficiency and improved material decomposition analyses. However, due to constraints imposed by the inability of photon-counting detectors (PCD) to respond accurately at high photon flux, to date there has been no clinical application of PCD-CT. Recently, our lab installed a research prototype system consisting of two x-ray sources and two corresponding detectors, one using an energy-integrating detector (EID) and the other using a PCD. In this work, we report the first third-party evaluation of this prototype CT system using both phantoms and a cadaver head. The phantom studies demonstrated several promising characteristics of the PCD sub-system, including improved longitudinal spatial resolution and reduced beam-hardening artifacts, relative to the EID sub-system. More importantly, we found that the PCD sub-system offers excellent pulse pileup control at tube currents up to 550 mA at 140 kV, which corresponds to approximately 2.5×10¹¹ photons per cm² per second. In an anthropomorphic phantom and a cadaver head, the PCD sub-system provided image quality comparable to the EID sub-system at the same dose level. Our results demonstrate the potential of the prototype system to produce clinically acceptable images in vivo.
Fluid dynamic bowtie attenuators
Timothy P. Szczykutowicz, James Hermus
Fluence field modulated CT allows for improvements in image quality and dose reduction. To date, only 1-D modulators have been proposed; extension to 2-D modulation is difficult with solid-metal attenuation-based modulators. This work proposes using liquids and gas to attenuate the x-ray beam; cells of these attenuators can be arrayed, allowing for 2-D fluence modulation. The thickness of liquid and the pressure for a given path length of gas were determined that provide the same attenuation as 30 cm of soft tissue at 80, 100, 120, and 140 kV. Gaseous xenon and liquid iodine, zinc chloride, and cerium chloride were studied. Additionally, we performed proof-of-concept experiments in which (1) a single cell of liquid was connected to a reservoir which allowed the liquid thickness to be modulated and (2) a 96-cell array was constructed in which the liquid thickness in each cell was adjusted manually. The required liquid thickness varied as a function of kV and chemical composition, with zinc chloride allowing the smallest thickness: 1.8, 2.25, 3, and 3.6 cm compensated for 30 cm of soft tissue at 80, 100, 120, and 140 kV, respectively. The 96-cell iodine attenuator allowed for a reduction in both the dynamic range presented to the detector and the scatter-to-primary ratio. Successful modulation of a single cell was performed at 0, 90, and 130 degrees using a simple piston/actuator. The required liquid thicknesses and xenon gas pressures appear logistically implementable within the constraints of CBCT and diagnostic CT systems.
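The single-energy arithmetic behind sizing a cell is simple: choose the liquid thickness so that mu_liquid * t_liquid matches mu_tissue * 30 cm. The two attenuation coefficients in the sketch below are placeholders (the paper works with full kV spectra), so the printed thickness is only illustrative.

# Match the attenuation of a liquid cell to 30 cm of soft tissue at one energy.
mu_tissue = 0.20   # 1/cm, effective linear attenuation of soft tissue (assumed)
mu_liquid = 2.70   # 1/cm, effective linear attenuation of the attenuating liquid (assumed)

t_liquid = mu_tissue * 30.0 / mu_liquid
print("equivalent liquid thickness: %.2f cm" % t_liquid)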
A new CT system architecture for high temporal resolution with applications to improved geometric dose efficiency and sparse sampling
A new scalable CT system architecture is introduced with the potential to achieve much higher temporal resolution than is possible with current CT designs while maintaining the flux per rotation near today’s levels. Higher effective rotation speeds can be achieved leveraging today’s x-ray tube designs and capabilities. The new CT architecture comprises the following elements: (1) decoupling of the source rotation from the detector rotation through the provision of two independent, coaxial and coplanar rotating gantries (drums); (2) observation of a source at a range of azimuthal angles with respect to a given detector cell; (3) utilization of a multiplicity of x-ray sources; (4) use of a wide-angle, iso-centered detector mounted on the independent detector drum; (5) a detector drum that presents a wide angular aperture allowing x-rays from the various sources to pass through, with the active detector cells occupying about 240 degrees in one configuration and the wide aperture the complementary 120 degrees; (6) anti-scatter grids with absorbing lamellas oriented substantially parallel to the main gantry plane; (7) optional sparse-view acquisition in “bunches,” a unique sparse sampling pattern potentially enabling further data acquisition speed-up for specific applications. Temporal resolution gains are achieved when multiple sources are simultaneously in view of the extended detector. Accurate data acquisition then relies on multiplexing in space, time, or spectra. Thus the use of an energy-discriminating detector, such as a photon-counting detector, and of tube pulsing will be advantageous. Volume-based scatter correction methods are potentially applicable when space multiplexing is used.
Photon Counting Imaging
Spectral CT of the extremities with a silicon strip photon counting detector
A. Sisniega, W. Zbijewski, J. W. Stayman, et al.
Purpose: Photon-counting x-ray detectors (PCXDs) are an important emerging technology for spectral imaging and material differentiation with numerous potential applications in diagnostic imaging. We report the development of a Si-strip PCXD system, originally developed for mammography, for potential application to spectral CT of musculoskeletal extremities, including the challenges associated with sparse sampling, spectral calibration, and optimization for higher-energy x-ray beams. Methods: A bench-top CT system was developed incorporating a Si-strip PCXD, a fixed-anode x-ray source, and rotational and translational motions to execute complex acquisition trajectories. Trajectories involving rotation and translation combined with iterative reconstruction were investigated, including single and multiple axial scans and longitudinal helical scans. The system was calibrated to provide accurate spectral separation in dual-energy, three-material decomposition of soft tissue, bone, and iodine. Image quality and decomposition accuracy were assessed in experiments using a phantom with pairs of bone and iodine inserts (3, 5, 15, and 20 mm) and an anthropomorphic wrist. Results: The designed trajectories improved the sampling distribution from 56% minimum sampling of voxels to 75%. Use of iterative reconstruction (viz., penalized likelihood with edge-preserving regularization) in combination with such trajectories resulted in a very low level of artifacts in images of the wrist. For large bone or iodine inserts (>5 mm diameter), the error in the estimated material concentration was <16% for bone (50 mg/mL) and <8% for iodine (5 mg/mL) with strong regularization. For smaller inserts, errors of 20-40% were observed, motivating improved methods for spectral calibration and optimization of the edge-preserving regularizer. Conclusion: Use of PCXDs for three-material decomposition in joint imaging proved feasible through a combination of rotation-translation acquisition trajectories and iterative reconstruction with optimized regularization.
Pulse detection logic for multibin photon counting detectors: beyond the simple comparator
Energy-discriminating, photon-counting (EDPC) detectors have been proposed for CT systems for their spectral imaging capabilities, improved dose efficiency, and higher spatial resolution. However, these advantages disappear at high flux because of the damaging effects of pulse pileup. From an information-theoretic standpoint, spectral information is lost. The information loss is particularly high when we assume that the EDPC detector extracts information using a bank of comparators, as current EDPC detectors do. We analyze the use of alternative pulse detection logic which could preserve information in the presence of pileup. For example, the peak-only detector counts only a single event at the peak energy of multiple pulses which are piled up. We describe and evaluate five of these alternatives in simulation by numerically estimating the Cramér-Rao lower bound of the variance. At high flux, the alternative mechanisms outperform comparators. In spectral imaging tasks, the variance reduction can be as high as an order of magnitude.
Evaluation of spectral CT data acquisition methods via non-stochastic variance maps
Adrian A. Sanchez, Emil Y. Sidky, Taly Gilat Schmidt, et al.
Recently, photon counting detectors capable of extracting spectral information have received much attention in CT, with promise of using spectral information to construct material basis images, correct beam-hardening artifacts, or provide improved imaging of K-edge contrast agents [1, 2]. In this work, we focus on the goal of constructing images of basis material maps, and investigate the feasibility of analytically computing pixel variance maps for these images, so that alternative data acquisition and reconstruction methods can be compared and evaluated with respect to their noise properties. Our approach is based on linearization of the basis material decomposition and reconstruction operations, and we therefore demonstrate the method using the ubiquitous filtered back-projection algorithm, which is linear. We then performed a preliminary investigation of the method by comparing basis material variance maps for two data acquisition methods that were previously found to have different noise properties [3]: two-sided bin measurements acquired from separate, independent data realizations and two-sided bin measurements acquired from a single data realization.
Low rank approximation (LRA) based noise reduction in spectral-resolved x-ray imaging using photon counting detector
Spectral imaging with photon counting detectors has recently attracted considerable interest in x-ray and CT imaging due to its potential to enable ultra-low radiation dose x-ray imaging. However, when the radiation exposure level is low, quantum noise may be prohibitively high and hinder applications. Therefore, it is desirable to develop new methods to reduce quantum noise in the data acquired from photon counting detectors. In this paper, we propose a new denoising algorithm to reduce quantum noise in data acquired using an ideal photon counting detector. The proposed method exploits the intrinsic low dimensionality of the acquired spectral data to decompose the acquired data into a series of orthonormal spectral bases. The first few spectral bases contain object information while the remaining bases contain primarily quantum noise. The separation of image content and noise in these orthonormal bases provides a means to reject noise without losing image content. Numerical simulations were conducted to validate and evaluate the proposed noise reduction algorithm. The results demonstrated that the proposed method can effectively reduce quantum noise while maintaining both spatial and spectral fidelity.
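To make the idea above concrete, the following sketch (not the authors' code) denoises simulated spectral projection data by keeping only the first few orthonormal spectral bases obtained from a singular value decomposition; the bin spectra, the chosen rank, and the noise model are illustrative assumptions.

```python
# Illustrative sketch: low-rank denoising of photon-counting projection data
# by truncating an orthonormal basis obtained from the SVD. Variable names,
# spectra, and the choice of rank are hypothetical.
import numpy as np

def lra_denoise(counts, rank=3):
    """counts: (n_pixels, n_bins) spectral projection data."""
    U, s, Vt = np.linalg.svd(counts, full_matrices=False)
    # Keep the first `rank` spectral basis vectors (object information),
    # discard the rest (predominantly quantum noise).
    s_trunc = np.zeros_like(s)
    s_trunc[:rank] = s[:rank]
    return (U * s_trunc) @ Vt

# Toy example: two-material object sampled in 8 energy bins with Poisson noise.
rng = np.random.default_rng(0)
spectra = np.array([np.linspace(200, 50, 8), np.linspace(120, 90, 8)])  # bin means
weights = rng.random((5000, 2))                  # per-pixel material weights
ideal = weights @ spectra                        # noise-free spectral data
noisy = rng.poisson(ideal).astype(float)
denoised = lra_denoise(noisy, rank=2)
print(np.mean((noisy - ideal) ** 2), np.mean((denoised - ideal) ** 2))
```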
Multivariate Gaussian model based Cramér-Rao lower bound evaluation of the in-depth PCXD
Purpose: In-Depth photon counting detectors (PCXDs) use an edge-on configuration and have multi-layer segmentation. The benefit of this configuration for additional spectral information depends on the energy response. In addition, inter-layer cross-talk introduces correlation into the signals collected from the individual layers, which makes the independent Poisson model no longer valid for estimating the Cramér-Rao lower bound (CRLB) of the material decomposition variance. We propose to use a multivariate Gaussian model as a substitute to address the data correlation. Methods: A 120 kVp incident spectrum was simulated and transmitted through 25 cm of water and 1 cm of calcium. Five-layer In-Depth and one-layer Edge-On PCXDs with full energy resolution were simulated using Monte Carlo methods. We selected Si, GaAs and CdTe as detector materials. The detectors were defined to have 1 mm wide pixels and thicknesses of 70 mm (Si), 10.5 mm (GaAs) and 3 mm (CdTe). Geant4 was used to obtain energy response functions (ERFs) capturing secondary events, together with the Gaussian parameter estimates. We evaluated the CRLBs of the In-Depth and Edge-On detectors for each material and compared the resulting variance bounds. Results: For uncorrelated data, the CRLB can assume Poisson statistics. As the data become more correlated, the Poisson CRLB fails to capture the cross-talk effect, but a Gaussian model can, and it is accurate if the number of photons is not small. The CRLB analysis shows that the effects of the ERF and the noise correlation are significant. If cross-talk can be corrected, the depth information proves to be beneficial and can reduce the variance lower bound by 3% to 10% depending on the detector material. Conclusions: The multivariate Gaussian model was validated as a good substitute for the Poisson model in PCXD CRLB estimation. It can avoid the errors that would otherwise be caused by correlated measurements.
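The contrast between the two bounds can be illustrated with a small numerical sketch. The Fisher-information expressions below are standard (independent Poisson versus a multivariate Gaussian mean-parameter term), but the bin counts, sensitivities, and cross-talk correlation are invented for illustration and are not the paper's simulated detector data.

```python
# Hedged sketch of the CRLB comparison described above. The expected bin
# counts, their sensitivities to the two basis-material thicknesses, and the
# cross-talk covariance are made-up illustrative numbers.
import numpy as np

def crlb_poisson(J, mean_counts):
    """Independent-Poisson CRLB: F_ij = sum_k (dmu_k/dθ_i)(dmu_k/dθ_j)/mu_k."""
    F = J.T @ np.diag(1.0 / mean_counts) @ J
    return np.linalg.inv(F)

def crlb_gaussian(J, cov):
    """Multivariate-Gaussian CRLB (mean-parameter term only): F = J^T Σ^-1 J."""
    F = J.T @ np.linalg.inv(cov) @ J
    return np.linalg.inv(F)

# Hypothetical 4-bin detector, parameters θ = (water, calcium) path lengths.
mean_counts = np.array([800.0, 600.0, 400.0, 200.0])
J = np.array([[-90., -30.],      # d(mean)/d(water), d(mean)/d(calcium) per bin
              [-70., -45.],
              [-50., -60.],
              [-25., -40.]])
# Covariance: Poisson variance on the diagonal plus cross-talk correlation.
rho = 0.3
D = np.sqrt(mean_counts)
cov = np.diag(mean_counts) + rho * (np.outer(D, D) - np.diag(mean_counts))

print("Poisson CRLB (ignores correlation):\n", crlb_poisson(J, mean_counts))
print("Gaussian CRLB (captures cross-talk):\n", crlb_gaussian(J, cov))
```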
Energy calibration of photon counting detectors using x-ray tube potential as a reference for material decomposition applications
Mini Das, Bigyan Kandel, Chan Soo Park, et al.
Photon counting spectral detectors (PCSDs) with smaller pixels and efficient sensors are desirable in applications like material decomposition and phase contrast x-ray imaging, where discrimination of small signals and fine structure is required. Charge sharing in PCSDs increases with decreasing pixel size and increasing sensor thickness, such that the energy calibration or utility of spectral information can become a major hurdle. The utility of combining high-Z sensors and small pixel sizes in PCSDs is limited without efficient threshold calibration and charge sharing mitigation. Here we explore the use of the x-ray tube kVp as a reference to achieve efficient and fast calibration of PCSDs. This calibration method does not require rearranging the imaging setup and is not impacted by charge sharing. Our preliminary results indicate that this method can be useful even in scenarios where metal fluorescence and radioactive source based calibration techniques may be practically impossible. Our results are validated against x-ray fluorescence based calibration for a silicon detector with moderate charge sharing. Calibration of a particularly challenging case, a Medipix2 detector (55 μm pixel size) with a 1 mm thick CdTe sensor, and of a Medipix3 detector with a CdTe sensor is also demonstrated. A cross validation with K-edge identification of Gd is also presented.
Modelling the channel-wise count response of a photon-counting spectral CT detector to a broad x-ray spectrum
Xuejin Liu, Han Chen, Hans Bornefalk, et al.
Variations among detector channels in CT readily lead to ring artefacts in the reconstructed images. For material decomposition in the projection domain, the variations can result in intolerable biases in the material line integral estimates. A typical way to overcome these effects is to apply calibration methods that try to unify the spectral responses of the different detector channels to the ideal response of a detector model. However, the calibration procedure can be rather complex and require excessive calibration measurements for a multitude of combinations of x-ray spectral shapes, tissue combinations and thicknesses. In this paper, we propose a channel-wise model for a multibin photon-counting detector for spectral CT. Predictions of the channel-wise model match the measured performance of the channels well, and the model can thus be used to eliminate ring artefacts in CT images and achieve projection-based material decomposition. In an experimental validation, image data show significant improvement with respect to ring artefacts compared to images calibrated with flat-fielding data. Projection-based material decomposition gives basis material images showing good separation among the individual materials and good quantification of iodine and gadolinium contrast agents. The work indicates that the channel-wise model can be used for quantitative CT with this detector.
Keynote and Novel Imaging Technologies
Quantitative imaging as cancer biomarker
The ability to assay tumor biologic features and the impact of drugs on tumor biology is fundamental to drug development. Advances in our ability to measure genomics, gene expression, protein expression, and cellular biology have led to a host of new targets for anticancer drug therapy. In translating new drugs into clinical trials and clinical practice, these same assays serve to identify patients most likely to benefit from specific anticancer treatments. As cancer therapy becomes more individualized and targeted, there is an increasing need to characterize tumors and identify therapeutic targets to select therapy most likely to be successful in treating the individual patient’s cancer. Thus far assays to identify cancer therapeutic targets or anticancer drug pharmacodynamics have been based upon in vitro assay of tissue or blood samples. Advances in molecular imaging, particularly PET, have led to the ability to perform quantitative non-invasive molecular assays. Imaging has traditionally relied on structural and anatomic features to detect cancer and determine its extent. More recently, imaging has expanded to include the ability to image regional biochemistry and molecular biology, often termed molecular imaging. Molecular imaging can be considered an in vivo assay technique, capable of measuring regional tumor biology without perturbing it. This makes molecular imaging a unique tool for cancer drug development, complementary to traditional assay methods, and a potentially powerful method for guiding targeted therapy in clinical trials and clinical practice. The ability to quantify, in absolute measures, regional in vivo biologic parameters strongly supports the use of molecular imaging as a tool to guide therapy. This review summarizes current and future applications of quantitative molecular imaging as a biomarker for cancer therapy, including the use of imaging to (1) identify patients whose tumors express a specific therapeutic target; (2) determine whether the drug reaches the target; (3) identify an early response to treatment; and (4) predict the impact of therapy on long-term outcomes such as survival. The manuscript reviews basic concepts important in the application of molecular imaging to cancer drug therapy, in general, and will discuss specific examples of studies in humans, and highlight future directions, including ongoing multi-center clinical trials using molecular imaging as a cancer biomarker.
Respiratory motion compensation for simultaneous PET/MR based on a 3D-2D registration of strongly undersampled radial MR data: a simulation study
Christopher M. Rank, Thorsten Heußer, Barbara Flach, et al.
We propose a new method for PET/MR respiratory motion compensation, which is based on a 3D-2D registration of strongly undersampled MR data and a) runs in parallel with the PET acquisition, b) can be interlaced with clinical MR sequences, and c) requires less than one minute of the total MR acquisition time per bed position. In our simulation study, we applied a 3D encoded radial stack-of-stars sampling scheme with 160 radial spokes per slice and an acquisition time of 38 s. Gated 4D MR images were reconstructed using a 4D iterative reconstruction algorithm. Based on these images, motion vector fields were estimated using our newly-developed 3D-2D registration framework. A 4D PET volume of a patient with eight hot lesions in the lungs and upper abdomen was simulated and MoCo 4D PET images were reconstructed based on the motion vector fields derived from MR. For evaluation, average SUVmean values of the artificial lesions were determined for a 3D, a gated 4D, a MoCo 4D and a reference (with ten-fold measurement time) gated 4D reconstruction. Compared to the reference, 3D reconstructions yielded an underestimation of SUVmean values due to motion blurring. In contrast, gated 4D reconstructions showed the highest variation of SUVmean due to low statistics. MoCo 4D reconstructions were only slightly affected by these two sources of uncertainty resulting in a significant visual and quantitative improvement in terms of SUVmean values. Whereas temporal resolution was comparable to the gated 4D images, signal-to-noise ratio and contrast-to-noise ratio were close to the 3D reconstructions.
Theoretical and experimental comparison of image signal and noise for dual-energy subtraction angiography and conventional x-ray angiography
Cardiovascular disease is currently the leading cause of mortality worldwide. Digital subtraction angiography (DSA) is widely used to enhance the visibility of small vessels and of vasculature obscured by overlying bone and lung fields by subtracting a mask image from a contrast image. However, motion between the mask and contrast images can introduce artifacts that render a study non-diagnostic, which makes DSA particularly unsuccessful for cardiac imaging. A method called dual-energy, or energy subtraction angiography (ESA), was proposed in the past as an alternative for vascular imaging; however, it was not pursued because early experimental results suggested that image quality was poor and inferior to DSA. Image quality for angiography comes down to iodine signal and noise. In this paper we investigate the fundamental iodine signal and noise of ESA and compare it to DSA. Method: We developed polyenergetic and monoenergetic theoretical models of iodine signal and noise for both ESA and DSA. We validated our polyenergetic model by experiment, in which ESA and DSA images of a vascular phantom were acquired using an x-ray system with a flat-panel CsI Xmaru1215CF-MP (Rayence Co., Ltd., Republic of Korea) detector. For ESA, low and high tube voltages of 50 kV and 120 kV (2.5 mm Cu) were applied, respectively; for DSA, the tube voltage was 80 kV. Iodine signal-to-noise ratio (SNR) per entrance exposure was calculated for each iodine concentration for both ESA and DSA. Results: Our measured iodine SNR agreed well with theoretical calculations. Iodine SNR for ESA was higher than for DSA at low iodine mass loadings and decreased as iodine mass loading increased. Conclusions: We have developed a model of iodine SNR for both DSA and ESA. Our model was validated with experiment and showed excellent agreement. We have shown that there is potential for obtaining iodine-specific images using ESA that are similar to DSA.
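As a rough illustration of how an energy-subtraction iodine signal and its noise can be modeled, the sketch below uses a weighted log subtraction with Poisson noise propagation; the attenuation coefficients, photon counts, and iodine thickness are hypothetical placeholders rather than values from the paper.

```python
# Minimal sketch of a dual-energy (log-subtraction) signal and noise model.
# Attenuation values and photon counts are illustrative placeholders.
import numpy as np

def esa_signal_and_noise(N_low, N_high, mu_tissue_low, mu_tissue_high,
                         mu_iodine_low, mu_iodine_high, iodine_thickness):
    """Weighted log subtraction S = ln(I_high) - w*ln(I_low), with w chosen to
    cancel the background tissue. Returns iodine signal and its std. dev."""
    w = mu_tissue_high / mu_tissue_low          # tissue-cancelling weight
    # Iodine signal remaining after subtraction (small-signal approximation).
    signal = (w * mu_iodine_low - mu_iodine_high) * iodine_thickness
    # Poisson noise in a log image: var(ln N) ~ 1/N; weights add in quadrature.
    noise = np.sqrt(1.0 / N_high + w**2 / N_low)
    return signal, noise

# Hypothetical numbers: counts per pixel and linear attenuation (1/cm).
sig, sd = esa_signal_and_noise(N_low=2.0e4, N_high=5.0e4,
                               mu_tissue_low=0.25, mu_tissue_high=0.18,
                               mu_iodine_low=8.0, mu_iodine_high=3.0,
                               iodine_thickness=0.05)
print("iodine SNR =", sig / sd)
```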
Measurements, Phantoms, Simulations
A quantitative metrology for performance characterization of breast tomosynthesis systems based on an anthropomorphic phantom
Lynda Ikejimba, Yicheng Chen, Nadia Oberhofer, et al.
Purpose: Common methods for assessing the image quality of digital breast tomosynthesis (DBT) devices currently utilize simplified or otherwise unrealistic phantoms, which use inserts in a uniform background and gauge performance based on a subjective evaluation of insert visibility. This study proposes a different methodology to assess system performance using a three-dimensional, clinically-informed anthropomorphic breast phantom. Methods: System performance is assessed by imaging the phantom and computationally characterizing the resultant images in terms of several new metrics. These include a contrast index (reflective of the local difference between adipose and glandular material), a contrast-to-noise ratio index (reflective of contrast against local background noise), and a nonuniformity index (reflective of contributions of noise and artifacts within uniform adipose regions). The indices were measured at ROI sizes of 10 mm and 37 mm. The method was evaluated at a fixed dose of 1.5 mGy AGD. Results: The results indicated notable differences between systems. At 10 mm, vendor A had the highest contrast index, followed by B and C in that order. The performance ranking was identical at the largest ROI size. The nonuniformity index similarly exhibited system dependencies correlated with the visual appearance of clutter from out-of-plane artifacts: vendor A had the greatest nonuniformity index at all ROI sizes, B the second greatest, and C the least. Conclusions: The findings illustrate that the anthropomorphic phantom can be used as a quality control tool with results that are targeted to be more reflective of the clinical performance of breast tomosynthesis systems from multiple manufacturers.
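A minimal sketch of how such ROI-based indices might be computed is shown below, assuming adipose and glandular ROIs have already been extracted from a reconstructed slice; the exact definitions and normalizations used by the authors may differ.

```python
# Illustrative computation of contrast, CNR, and nonuniformity indices from
# pre-segmented ROIs. The ROI sizes and definitions are assumptions.
import numpy as np

def contrast_index(glandular_roi, adipose_roi):
    """Local difference between glandular and adipose mean signal."""
    return np.mean(glandular_roi) - np.mean(adipose_roi)

def cnr_index(glandular_roi, adipose_roi):
    """Contrast against the noise of the local adipose background."""
    return contrast_index(glandular_roi, adipose_roi) / np.std(adipose_roi)

def nonuniformity_index(adipose_roi):
    """Relative fluctuation (noise + artifacts) within a uniform adipose ROI."""
    return np.std(adipose_roi) / np.mean(adipose_roi)

# Toy ROIs standing in for 10 mm patches of a reconstructed DBT slice.
rng = np.random.default_rng(1)
adipose = rng.normal(100.0, 5.0, size=(32, 32))
glandular = rng.normal(115.0, 5.0, size=(32, 32))
print(contrast_index(glandular, adipose),
      cnr_index(glandular, adipose),
      nonuniformity_index(adipose))
```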
Volumetric limiting spatial resolution analysis of four dimensional digital subtraction angiography (4D-DSA)
Static C-arm CT 3D FDK baseline reconstructions (3D-DSA) are unable to provide temporal information to radiologists. 4D-DSA provides a time series of 3D volumes by implementing a constrained-image (thresholded 3D-DSA) reconstruction that utilizes the temporal dynamics in the 2D projections. The volumetric limiting spatial resolution (VLSR) of 4D-DSA was quantified and compared to that of a 3D-DSA reconstruction using the same 3D-DSA parameters. The effects of two 4D-DSA parameters were investigated over significant ranges: the 2D blurring kernel size applied to the projections and the threshold applied to the 3D-DSA when generating the constraining image, for both a scanned phantom (SPH) and an electronic phantom (EPH). The SPH consisted of a 76 micron tungsten wire encased in a 47 mm O.D. plastic, radially concentric, thin-walled support structure. An 8-second/248-frame/198° scan protocol was used to acquire the raw projection data. VLSR was determined from averaged MTF curves generated from each 2D transverse slice of each of the 248 temporal 3D frames. The 4D results for the SPH and EPH were compared to the 3D-DSA. Analysis of the 3D-DSA resulted in a VLSR of 2.28 and 1.69 lp/mm for the EPH and SPH, respectively. Kernel sizes of either 10x10 or 20x20 pixels with a threshold of 10% of the 3D-DSA as the constraining image provided the 4D-DSA VLSR nearest to the 3D-DSA. The 4D-DSA algorithms yielded 2.21 and 1.67 lp/mm, a percent error of 3.1% and 1.2% for the EPH and SPH, respectively, compared to the 3D-DSA. This research indicates that 4D-DSA is capable of retaining the resolution of the 3D-DSA.
New family of generalized metrics for comparative imaging system evaluation
M. Russ, V. Singh, B. Loughran, et al.
A family of imaging task-specific metrics designated Relative Object Detectability (ROD) metrics was developed to enable objective, quantitative comparisons of different x-ray systems. Previously, ROD was defined as the integral over spatial frequencies of the Fourier transform of the object function, weighted by the DQE of one detector, divided by the comparable integral for another detector. When the effects of scatter and focal spot unsharpness are included, the generalized metric, GDQE, is substituted for the DQE, resulting in the G-ROD metric. The G-ROD was calculated for two different detectors with two focal spot sizes using various-sized simulated objects to quantify the improved performance of new high-resolution CMOS detector systems. When a measured image is used as the object, a Generalized Measured Relative Object Detectability (GM-ROD) value can be generated. A neurovascular stent (Wingspan) was imaged with the high-resolution Micro-Angiographic Fluoroscope (MAF) and a standard flat panel detector (FPD) for comparison using the GM-ROD calculation. As the lower integration bound was increased from 0 toward the detector Nyquist frequency, increasingly superior performance of the MAF was demonstrated. Another new metric, the R-ROD, enables comparison of detectors to a reference detector of given imaging ability; R-RODs for the MAF, a new CMOS detector, and an FPD are presented. The ROD family of metrics can provide quantitative, more understandable comparisons of different systems in which the detector, focal spot, scatter, object, techniques or dose are varied, and can be used to optimize system selection for given imaging tasks.
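The ROD-style ratio described above can be illustrated numerically as follows; the Gaussian object spectrum, the DQE curves, and the use of the squared Fourier magnitude of the object are assumptions made for this sketch.

```python
# Hedged numerical sketch of an ROD-style ratio: a frequency integral of the
# object's Fourier content weighted by one detector's DQE, divided by the
# same integral for a second detector. All curves are illustrative.
import numpy as np

def rod(object_ft_sq, dqe_a, dqe_b, freqs, f_low=0.0):
    """Relative Object Detectability of detector A vs. detector B."""
    mask = freqs >= f_low                      # lower integration bound
    df = freqs[1] - freqs[0]                   # assumes uniform frequency sampling
    num = np.sum(object_ft_sq[mask] * dqe_a[mask]) * df
    den = np.sum(object_ft_sq[mask] * dqe_b[mask]) * df
    return num / den

f = np.linspace(0.0, 10.0, 501)                # spatial frequency (cycles/mm)
obj = np.exp(-(f / 2.0) ** 2)                  # Fourier content of a small object
dqe_cmos = 0.75 * np.exp(-f / 8.0)             # hypothetical high-res detector
dqe_fpd = 0.70 * np.exp(-f / 3.0)              # hypothetical flat-panel detector

for f_low in (0.0, 2.0, 4.0):
    print(f_low, rod(obj, dqe_cmos, dqe_fpd, f, f_low))
```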
Approximate path seeking for statistical iterative reconstruction
Meng Wu, Qiao Yang, Andreas Maier, et al.
Statistical iterative reconstruction (IR) techniques have demonstrated many advantages in x-ray CT reconstruction. The statistical iterative reconstruction approach is often modeled as an optimization problem comprising a data fitting function and a penalty function. The tuning parameter values that regulate the strength of the penalty function are critical for achieving good reconstruction results. However, tuning parameter values that are appropriate for the scan protocol and imaging task are often difficult to choose. In this work, we propose a path seeking algorithm that is capable of generating a series of IR images with different strengths of the penalty function. The path seeking algorithm uses the ratio of the gradients of the data fitting function and the penalty function to select pixels for small, fixed-size updates. We describe the path seeking algorithm for penalized weighted least squares (PWLS) with a Huber penalty function in both the direction of increasing and of decreasing tuning parameter value. Simulations using the XCAT phantom show that the proposed method produces path images that are very similar to the IR images computed via direct optimization. The root-mean-squared error of one path image generated by the proposed method relative to the full iterative reconstruction is about 6 HU for the entire image and 10 HU for a small region. Different path seeking directions, increment sizes and updating percentages of the path seeking algorithm are compared in simulations. The proposed method may reduce the dependence on the selection of good tuning parameter values by instead generating multiple IR images, without significantly increasing the computational load.
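The following is a rough, hedged sketch of a single path-seeking step in the direction of stronger regularization; the selection score, the fixed-size update rule, the simplified Huber gradient, and all parameter values are approximations made for illustration and are not the authors' exact algorithm.

```python
# Rough sketch of one path-seeking step toward a stronger penalty. The
# selection score and update rule below are simplified assumptions.
import numpy as np

def huber_grad(x, delta=1.0):
    """Approximate gradient of a pixelwise Huber penalty on horizontal differences."""
    d = np.diff(x, axis=1, append=x[:, -1:])
    g = np.where(np.abs(d) <= delta, d, delta * np.sign(d))
    # Back-difference to distribute the gradient to both pixels of each pair.
    return g - np.roll(g, 1, axis=1)

def path_step(x, A, y, W, step=1.0, frac=0.05):
    """Move a small fraction of pixels by a fixed step toward more smoothing."""
    gD = A.T @ (W * (A @ x.ravel() - y))         # gradient of the WLS data term
    gR = huber_grad(x).ravel()                   # gradient of the Huber penalty
    score = np.abs(gR) / (np.abs(gD) + 1e-12)    # pixels where smoothing is "cheap"
    idx = np.argsort(score)[-int(frac * x.size):]
    x_new = x.ravel().copy()
    x_new[idx] -= step * np.sign(gR[idx])        # small fixed-size update
    return x_new.reshape(x.shape)

# Toy 2D problem: 16x16 image, random system matrix standing in for the CT model.
rng = np.random.default_rng(2)
x = rng.normal(0, 20, size=(16, 16))
A = rng.normal(size=(300, 256))
y = A @ x.ravel() + rng.normal(0, 5, size=300)
W = np.ones(300)
x = path_step(x, A, y, W, step=1.0, frac=0.05)
print(x.shape)
```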
Enhancing 4D PC-MRI in an aortic phantom considering numerical simulations
Jonas Kratzke, Nicolai Schoch, Christian Weis, et al.
To date, cardiovascular surgery enables the treatment of a wide range of aortic pathologies. One of the current challenges in this field is the detection of high-risk patients for adverse aortic events, who should be treated electively. Reliable diagnostic parameters that indicate the urgency of treatment have to be determined. Functional imaging by means of 4D phase-contrast magnetic resonance imaging (PC-MRI) enables the time-resolved measurement of blood flow velocity in 3D. Applied to aortic phantoms, three-dimensional blood flow properties and their relation to adverse dynamics can be investigated in vitro. Emerging "in silico" methods of numerical simulation can supplement these measurements by computing additional information on crucial parameters. We propose a framework that complements 4D PC-MRI imaging by means of numerical simulation based on the Finite Element Method (FEM). The framework is developed on the basis of a prototypic aortic phantom and validated by 4D PC-MRI measurements of the phantom. Based on physical principles of biomechanics, the derived simulation depicts aortic blood flow properties and characteristics. The framework might help identify factors that induce aortic pathologies such as aortic dilatation or aortic dissection, and alarm thresholds for parameters such as the wall shear stress distribution can be evaluated. The combined techniques of 4D PC-MRI and numerical simulation can be used as complementary tools for risk stratification of aortic pathology.
Experimental implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples
A fast and accurate scatter imaging technique to differentiate cancerous and healthy breast tissue is introduced in this work. Such a technique would have wide-ranging clinical applications from intra-operative margin assessment to breast cancer screening. Coherent Scatter Computed Tomography (CSCT) has been shown to differentiate cancerous from healthy tissue, but the need to raster scan a pencil beam at a series of angles and slices in order to reconstruct 3D images makes it prohibitively time consuming. In this work we apply the coded aperture coherent scatter spectral imaging technique to reconstruct 3D images of breast tissue samples from experimental data taken without the rotation usually required in CSCT. We present our experimental implementation of coded aperture scatter imaging, the reconstructed images of the breast tissue samples and segmentations of the 3D images in order to identify the cancerous and healthy tissue inside of the samples. We find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside of them. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside of ex vivo samples within a time on the order of a minute.
Breast Imaging
Monte Carlo evaluation of the relationship between absorbed dose and contrast-to-noise ratio in coherent scatter breast CT
B. Ghammraoui, L. M. Popescu, A. Badal
The objective of this work was to evaluate the advantages and shortcomings associated with Coherent Scatter Computed Tomography (CSCT) systems for breast imaging and to study possible alternative configurations. The relationship between dose in a breast phantom and a simple surrogate of image quality in pencil-beam and fan-beam CSCT geometries was evaluated via Monte Carlo simulation, and an improved pencil-beam setup was proposed for faster CSCT data acquisition. CSCT projection datasets of a simple breast phantom were simulated using a new version of the MC-GPU code that includes an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The breast phantom was composed of an 8 cm diameter cylinder of 50/50 glandular/adipose material and nine rods of different diameters of cancerous, adipose and glandular tissues. The system performance was assessed in terms of the contrast-to-noise ratio (CNR) in multiple regions of interest within the reconstructed images, for a range of exposure levels. The enhanced pencil-beam setup consisted of multiplexed pencil beams and specific post-processing of the projection data to calculate the scatter intensity coming from each beam separately. At a reconstructed spatial resolution of 1×1×1 mm³ and from 1 to 10 mGy of received breast dose, the fan-beam geometry showed higher statistical noise and lower CNR than the pencil-beam geometry. Conventional CT acquisition had the highest CNR per dose. However, the CNR figure of merit does not yet combine all the information available at the different scattering angles in CSCT, which has potential for increased discrimination of materials with similar attenuation properties. Preliminary evaluation of the multiplexed pencil-beam geometry showed that the scattering profiles simulated with the new approach are similar to those of the single pencil-beam geometry. Conclusion: It has been shown that the GPU-accelerated MC-GPU code is a practical tool to simulate complete CSCT scans with different acquisition geometries and exposure levels. The simulations showed better performance in terms of received dose and CNR with the pencil-beam geometry in comparison to the fan-beam geometry. Finally, we demonstrated that the proposed multiplexed-beam geometry might be useful for faster acquisition of CSCT while providing image quality comparable to the pencil-beam geometry.
Monte Carlo simulation of breast tomosynthesis: visibility of microcalcifications at different acquisition schemes
Hannie Petersson, Magnus Dustler, Anders Tingberg, et al.
Microcalcifications are one feature of interest in mammography and breast tomosynthesis (BT). To achieve optimal conditions for detection of microcalcifications in BT imaging, different acquisition geometries should be evaluated. The purpose of this work was to investigate the influence of acquisition schemes with different angular ranges, projection distributions and dose distributions on the visibility of microcalcifications in reconstructed BT volumes.

Microcalcifications were inserted randomly in a high resolution software phantom and a simulation procedure was used to model a MAMMOMAT Inspiration BT system. The simulation procedure was based on analytical ray tracing to produce primary images, Monte Carlo to simulate scatter contributions and flatfield image acquisitions to model system characteristics. Image volumes were reconstructed using the novel method super-resolution reconstruction with statistical artifact reduction (SRSAR). For comparison purposes, the volume of the standard acquisition scheme (50° angular range and uniform projection and dose distribution) was also reconstructed using standard filtered backprojection (FBP).

To compare the visibility and depth resolution of the microcalcifications, signal difference to noise ratio (SDNR) and artifact spread function width (ASFW) were calculated. The acquisition schemes with very high central dose yielded significantly lower SDNR than the schemes with more uniform dose distributions. The ASFW was found to decrease (meaning an increase in depth resolution) with wider angular range. In conclusion, none of the evaluated acquisition schemes were found to yield higher SDNR or depth resolution for the simulated microcalcifications than the standard acquisition scheme.
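For reference, a minimal sketch of the two figures of merit is given below; the per-slice artifact spread function (ASF) definition and the FWHM-based width are common conventions assumed here and may differ in detail from the authors' implementation.

```python
# Hedged sketch of SDNR in the in-focus slice and the width of the artifact
# spread function (ASF) across slices. Definitions are assumed conventions.
import numpy as np

def sdnr(signal_roi, background_roi):
    return (np.mean(signal_roi) - np.mean(background_roi)) / np.std(background_roi)

def asf_width(slice_contrasts, slice_spacing_mm):
    """Full width at half maximum of the artifact spread function."""
    asf = np.asarray(slice_contrasts) / np.max(slice_contrasts)
    above = np.where(asf >= 0.5)[0]
    return (above[-1] - above[0] + 1) * slice_spacing_mm

# Toy example: contrast of a microcalcification measured in 21 reconstructed slices.
z = np.arange(-10, 11)
contrast = np.exp(-(z / 3.0) ** 2)             # sharper falloff => better depth resolution
print("ASFW =", asf_width(contrast, slice_spacing_mm=1.0), "mm")
```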
Asymmetric scatter kernels for software-based scatter correction of gridless mammography
Scattered radiation remains one of the primary challenges for digital mammography, resulting in decreased image contrast and visualization of key features. While anti-scatter grids are commonly used to reduce scattered radiation in digital mammography, they are an incomplete solution that can add radiation dose, cost, and complexity. Instead, a software-based scatter correction method utilizing asymmetric scatter kernels is developed and evaluated in this work, which improves upon conventional symmetric kernels by adapting to local variations in object thickness and attenuation that result from the heterogeneous nature of breast tissue. This fast adaptive scatter kernel superposition (fASKS) method was applied to mammography by generating scatter kernels specific to the object size, x-ray energy, and system geometry of the projection data. The method was first validated with Monte Carlo simulation of a statistically-defined digital breast phantom, which was followed by initial validation on phantom studies conducted on a clinical mammography system. Results from the Monte Carlo simulation demonstrate excellent agreement between the estimated and true scatter signal, resulting in accurate scatter correction and recovery of 87% of the image contrast originally lost to scatter. Additionally, the asymmetric kernel provided more accurate scatter correction than the conventional symmetric kernel, especially at the edge of the breast. Results from the phantom studies on a clinical system further validate the ability of the asymmetric kernel correction method to accurately subtract the scatter signal and improve image quality. In conclusion, software-based scatter correction for mammography is a promising alternative to hardware-based approaches such as anti-scatter grids.
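A highly simplified sketch of scatter-kernel superposition is shown below: a smooth scatter field is estimated by convolving the projection with a single symmetric Gaussian kernel and then subtracted. The fASKS method described above adapts the kernel shape locally, which this sketch deliberately does not attempt; the kernel width and scatter-to-primary ratio are hypothetical.

```python
# Simplified scatter-kernel superposition: estimate a smooth scatter field by
# convolution and subtract it. A single symmetric kernel is used for brevity.
import numpy as np

def gaussian_kernel(shape, sigma_px):
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    k = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma_px ** 2))
    return k / k.sum()

def scatter_correct(projection, scatter_to_primary=0.4, sigma_px=25):
    """Estimate and subtract a smooth scatter field from a projection image."""
    kernel = gaussian_kernel(projection.shape, sigma_px)
    # Circular convolution via FFT, with the kernel re-centered at the origin.
    scatter = np.real(np.fft.ifft2(np.fft.fft2(projection) *
                                   np.fft.fft2(np.fft.ifftshift(kernel))))
    return projection - scatter_to_primary * scatter

# Toy projection: bright open field with a darker "breast" region.
img = np.full((256, 256), 1000.0)
img[64:192, 64:192] = 300.0
corrected = scatter_correct(img)
print(img.mean(), corrected.mean())
```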
Anatomical background noise power spectrum in differential phase contrast breast images
In x-ray breast imaging, the anatomical noise background of the breast has a significant impact on the detection of lesions and other features of interest. This anatomical noise is typically characterized by a parameter, β, which describes a power law dependence of anatomical noise on spatial frequency (the shape of the anatomical noise power spectrum). Large values of β have been shown to reduce human detection performance, and in conventional mammography typical values of β are around 3.2. Recently, x-ray differential phase contrast (DPC) and the associated dark field imaging methods have received considerable attention as possible supplements to absorption imaging for breast cancer diagnosis. However, the impact of these additional contrast mechanisms on lesion detection is not yet well understood. In order to better understand the utility of these new methods, we measured the β indices for absorption, DPC, and dark field images in 15 cadaver breast specimens using a benchtop DPC imaging system. We found that the measured β value for absorption was consistent with the literature for mammographic acquisitions (β = 3.61±0.49), but that both DPC and dark field images had much lower values of β (β = 2.54±0.75 for DPC and β = 1.44±0.49 for dark field). In addition, visual inspection showed greatly reduced anatomical background in both DPC and dark field images. These promising results suggest that DPC and dark field imaging may help provide improved lesion detection in breast imaging, particularly for those patients with dense breasts, in whom anatomical noise is a major limiting factor in identifying malignancies.
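The exponent β can be estimated as sketched below: fit a power law NPS(f) ∝ 1/f^β by linear regression in log-log space over a chosen frequency range. The synthetic noise power spectrum and the frequency limits are illustrative assumptions.

```python
# Sketch of estimating the anatomical-noise exponent β from a radially
# averaged noise power spectrum. The synthetic NPS below is illustrative.
import numpy as np

def fit_beta(freqs, nps, f_min=0.1, f_max=1.0):
    """Slope of log(NPS) vs. log(f); β is the negative of that slope."""
    m = (freqs >= f_min) & (freqs <= f_max)
    slope, _ = np.polyfit(np.log(freqs[m]), np.log(nps[m]), 1)
    return -slope

# Synthetic radially averaged NPS with β = 3.2 plus measurement noise.
f = np.linspace(0.05, 2.0, 200)                 # cycles/mm
rng = np.random.default_rng(3)
nps = f ** -3.2 * np.exp(rng.normal(0, 0.1, f.size))
print("estimated beta =", fit_beta(f, nps))
```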
Three dimensional dose distribution comparison of simple and complex acquisition trajectories in dedicated breast CT using radiochromic film
Jainil P. Shah, Steve D. Mann, Randolph L. McKinley, et al.
A novel breast CT system capable of traversing non-traditional 3D trajectories was developed to address cone beam sampling insufficiency for pendant breast imaging. The purpose of this study was to characterize differences in the three-dimensional x-ray dose distribution in a target volume due to the acquisition trajectory. Three cylindrical phantoms of different diameters and an anthropomorphic breast phantom were scanned in a pendant geometry with two orbits: an azimuthal orbit with no polar tilt, and a saddle orbit with ±15° contiguous polar tilts. The phantoms were initially filled with water and then with a 75:25% water:methanol mixture to simulate breast tissues of different density. Fully-3D CT scans were performed using a tungsten anode x-ray source. Ionization-chamber-calibrated radiochromic film was used to determine the average dose delivered to the central sagittal slice of a volume, as well as to visualize the 2D dose distribution across the slice. Results indicated that the mean glandular dose for normal imaging exposures, measured at the central slice across the different diameters, ranged from 3.93 to 5.28 mGy, with the lowest average dose measured on the largest-diameter cylinder. In all cases, the dose delivered by the saddle orbit was consistently 1-3% lower than that of the no-tilt scans. These results corroborate previous cylinder Monte Carlo studies, which showed a 1% reduction in saddle dose. The average dose measured in the breast phantom filled with the 75:25 mixture was slightly higher for the saddle orbit. Non-traditional 3D breast CT scans have slightly better dose performance for equal image noise compared with simple, undersampled circular orbits.
Radiation Dose and Dosimetry
Fluid-filled dynamic bowtie filter: a feasibility study
By varying its thickness to compensate for the different path length through the patient as a function of fan angle, a pre-patient bowtie filter modulates flux distribution to reduce patient dose, scatter, and detector dynamic range, and to improve image quality. A dynamic bowtie filter is superior to its traditional, static counterpart in its ability to adjust its thickness along different fan and view angles to suit a specific patient and task. Among the proposed dynamic bowtie designs, the piecewise-linear and the digital beam attenuators offer more flexibility than conventional filters, but rely on analog positioning of a limited number of wedges. In this work, we introduce a new approach with digital control, called the fluid-filled dynamic bowtie filter. It is a two-dimensional array of small binary elements (channels filled or unfilled with attenuating liquid) in which the cumulative thickness along the x-ray path contributes to the bowtie’s total attenuation. Using simulated data from a pelvic scan, the performance is compared with the piecewise-linear attenuator. The fluid-filled design better matches the desired target attenuation profile and delivers a 4.2x reduction in dynamic range. The variance of the reconstruction (or noise map) can also be more homogeneous. In minimizing peak variance, the fluid-filled attenuator shows a 3% improvement. From the initial simulation results, the proposed design has more control over the flux distribution as a function of both fan and view angles.
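A toy model of the binary, digitally controlled attenuation concept is sketched below; the channel dimensions, liquid attenuation coefficient, and target transmission profile are hypothetical and serve only to illustrate how filled-channel counts map to transmitted flux.

```python
# Toy model of a binary, fluid-filled attenuator: each fan-angle column is a
# stack of channels that are filled (1) or empty (0), and transmission
# follows Beer's law with the cumulative filled thickness. All values are
# hypothetical.
import numpy as np

def bowtie_transmission(fill_pattern, channel_thickness_cm=0.2, mu_liquid=0.8):
    """fill_pattern: (n_channels, n_fan_angles) binary array."""
    thickness = fill_pattern.sum(axis=0) * channel_thickness_cm
    return np.exp(-mu_liquid * thickness)

def pattern_for_target(target_transmission, n_channels=20,
                       channel_thickness_cm=0.2, mu_liquid=0.8):
    """Digitally choose how many channels to fill per fan angle."""
    t_needed = -np.log(target_transmission) / mu_liquid
    n_fill = np.clip(np.round(t_needed / channel_thickness_cm), 0, n_channels)
    pattern = np.zeros((n_channels, target_transmission.size), dtype=int)
    for j, n in enumerate(n_fill.astype(int)):
        pattern[:n, j] = 1
    return pattern

# Target: pass more flux at the center of the fan, less at the edges.
fan = np.linspace(-1, 1, 64)
target = 0.2 + 0.8 * np.exp(-(fan / 0.5) ** 2)
pattern = pattern_for_target(target)
print(np.max(np.abs(bowtie_transmission(pattern) - target)))
```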
Imaging task-based optimal kV and mA selection for CT radiation dose reduction: from filtered backprojection (FBP) to statistical model based iterative reconstruction (MBIR)
Ke Li, Daniel Gomez-Cardona, Meghan G. Lubner, et al.
Optimal selection of tube potential (kV) and tube current (mA) is essential in maximizing the diagnostic potential of a given CT technology while minimizing radiation dose. The use of a lower tube potential may improve image contrast, but may also require a significantly higher tube current to compensate for the rapid decrease of tube output at lower tube potentials. Therefore, the selection of kV and mA should take such constraints, as well as the specific diagnostic imaging task, into consideration. For conventional quasi-linear CT systems employing the linear filtered back-projection (FBP) image reconstruction algorithm, the optimization of kV-mA combinations is relatively straightforward, as neither spatial resolution nor noise texture has significant dependence on the kV and mA settings. In these cases, zero-frequency analysis such as the contrast-to-noise ratio (CNR) or the dose-normalized CNR (CNRD) can be used for optimal kV-mA selection. The recently introduced statistical model-based iterative reconstruction (MBIR) method, however, has introduced new challenges to optimal kV and mA selection, as both spatial resolution and noise texture become closely correlated with kV and mA. In this work, a task-based approach based on modern signal detection theory and the corresponding frequency-dependent analysis is proposed to perform the kV and mA optimization for both FBP and MBIR. By performing exhaustive measurements of a task-based detectability index through the technically accessible kV-mA parameter space, iso-detectability contours were generated and overlaid on iso-dose contours, from which the kV-mA pair that minimizes dose while still achieving the desired detectability level can be identified.
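One common frequency-dependent figure of merit consistent with this description is a prewhitening detectability index, sketched below on a 2D frequency grid; the task function, MTF, and NPS models are illustrative stand-ins rather than measurements from any scanner or reconstruction algorithm.

```python
# Hedged sketch of a task-based figure of merit: the prewhitening
# detectability index d'^2 = ∫ |W_task(f)|^2 MTF(f)^2 / NPS(f) df evaluated
# on a 2D frequency grid. All model curves are illustrative.
import numpy as np

def detectability(task_ft, mtf, nps, df):
    """Prewhitening detectability index from 2D frequency-domain arrays."""
    d2 = np.sum((np.abs(task_ft) ** 2) * mtf ** 2 / nps) * df * df
    return np.sqrt(d2)

# 2D frequency grid (cycles/mm).
f1 = np.fft.fftshift(np.fft.fftfreq(128, d=0.5))        # 0.5 mm pixels
fx, fy = np.meshgrid(f1, f1)
fr = np.hypot(fx, fy)
df = f1[1] - f1[0]

task = np.exp(-(fr / 0.3) ** 2)          # low-frequency (large lesion) task
mtf = np.exp(-fr / 0.8)                  # hypothetical system MTF
nps = 1e-3 / (1.0 + (fr / 0.5) ** 2)     # hypothetical noise power spectrum

print("d' =", detectability(task, mtf, nps, df))
```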
First results from a prototype dynamic attenuator system
Scott S. Hsieh, Mark V. Peng, Christopher A. May, et al.
The dynamic, piecewise-linear attenuator has been proposed as a concept that can shape the radiation flux incident on the patient. By reducing the signal to photon-rich measurements and increasing the signal to photon-starved measurements, the piecewise-linear attenuator has been shown in simulation to improve dynamic range, scatter, and variance and dose metrics. The piecewise-linear nature of the proposed attenuator has been hypothesized to mitigate artifacts at transitions by eliminating jump discontinuities in attenuator thickness at these points. We report the results of a prototype implementation of this concept. The attenuator was constructed using rapid prototyping technologies and was affixed to a tabletop x-ray system. Images of several sections of an anthropomorphic pediatric phantom were produced and compared to those of the same system with uniform illumination. The thickness of the illuminated slab was limited by beam collimation, and an analytic water beam-hardening correction was used for both systems. Initial results are encouraging and show improved image quality, reduced dose and low artifact levels.
Ultra low radiation dose digital subtraction angiography (DSA) imaging using low rank constraint
In this work we developed a novel denoising algorithm for DSA image series. This algorithm takes advantage of the low rank nature of the DSA image sequences to enable a dramatic reduction in radiation and/or contrast doses in DSA imaging. Both spatial and temporal regularizers were introduced in the optimization algorithm to further reduce noise. To validate the method, in vivo animal studies were conducted with a Siemens Artis Zee biplane system using different radiation dose levels and contrast concentrations. Both conventionally processed DSA images and the DSA images generated using the novel denoising method were compared using absolute noise standard deviation and the contrast to noise ratio (CNR). With the application of the novel denoising algorithm for DSA, image quality can be maintained with a radiation dose reduction by a factor of 20 and/or a factor of 2 reduction in contrast dose. Image processing is completed on a GPU within a second for a 10s DSA data acquisition.
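A minimal sketch of the low-rank idea is given below: each DSA frame is stacked as a column of a space-by-time (Casorati) matrix and singular value soft-thresholding, the proximal step of a nuclear-norm penalty, is applied. The threshold value and the omission of the spatial and temporal regularizers are simplifying assumptions for illustration.

```python
# Minimal low-rank denoising of a DSA time series via singular value
# soft-thresholding of the space-by-time (Casorati) matrix. The threshold
# and the simplified model are illustrative assumptions.
import numpy as np

def svt_denoise(frames, threshold):
    """frames: (n_frames, ny, nx) DSA sequence; returns the denoised sequence."""
    n_frames, ny, nx = frames.shape
    casorati = frames.reshape(n_frames, ny * nx).T        # (pixels, frames)
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    s = np.maximum(s - threshold, 0.0)                    # soft-threshold
    return (U * s @ Vt).T.reshape(n_frames, ny, nx)

# Toy sequence: a static vessel pattern whose contrast ramps up over 20 frames.
rng = np.random.default_rng(4)
vessel = (rng.random((64, 64)) > 0.97).astype(float)
ramp = np.linspace(0.0, 1.0, 20)
clean = ramp[:, None, None] * vessel[None]
noisy = clean + rng.normal(0, 0.2, clean.shape)
denoised = svt_denoise(noisy, threshold=8.0)
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```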
Performance Evaluation
Evaluation of a video-based head motion tracking system for dedicated brain PET
S. Anishchenko, D. Beylin, P. Stepanov, et al.
Unintentional head motion during positron emission tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs are used to capture video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time. The present work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99±0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system is able to perform tracking with close to millimeter accuracy and can help preserve the resolution of brain PET images in the presence of movement.
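The pose-estimation step can be illustrated with a standard Kabsch/Procrustes fit, as sketched below on synthetic point sets; the tracking system's actual estimator may differ.

```python
# Hedged sketch of 6-degree-of-freedom pose estimation: recover the rigid
# rotation and translation between tracked 3D point sets with the
# Kabsch/Procrustes method. The point sets are synthetic.
import numpy as np

def rigid_pose(ref_pts, cur_pts):
    """Least-squares rigid transform mapping ref_pts to cur_pts (Nx3 arrays)."""
    ref_c, cur_c = ref_pts.mean(0), cur_pts.mean(0)
    H = (ref_pts - ref_c).T @ (cur_pts - cur_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cur_c - R @ ref_c
    return R, t

# Synthetic test: rotate reference points by 5 degrees about z and shift 2 mm.
rng = np.random.default_rng(5)
ref = rng.normal(0, 50, size=(12, 3))                      # facial points (mm)
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
cur = ref @ R_true.T + np.array([2.0, 0.0, 0.0]) + rng.normal(0, 0.2, ref.shape)
R_est, t_est = rigid_pose(ref, cur)
print(np.allclose(R_est, R_true, atol=0.02), t_est)
```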
Computation of synthetic mammograms with an edge-weighting algorithm
Hanno Homann, Frank Bergner, Klaus Erhard
The promising increase in cancer detection rates [1, 2] makes digital breast tomosynthesis (DBT) an interesting alternative to full-field digital mammography (FFDM) in breast cancer screening. However, this benefit comes at the cost of an increased average glandular dose in a combined DBT plus FFDM acquisition protocol. Synthetic mammograms, which are computed from the reconstructed tomosynthesis volume data, have been demonstrated to be an alternative to a regular FFDM exposure in a DBT plus synthetic 2D reading mode [3]. Besides weighted averaging and modified maximum intensity projection (MIP) methods [4, 5], the integration of CAD techniques for computing a weighting function in the forward projection step of the synthetic mammogram generation has recently been proposed [6, 7]. In this work, a novel and computationally efficient method is presented based on an edge-retaining algorithm, which directly computes the weighting function with an edge-detection filter.
Lesion insertion in projection domain for computed tomography image quality assessment
To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way to achieve this objective is to create hybrid images that combine patient images with simulated lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Liver lesion models were forward projected according to the geometry of a commercial CT scanner to acquire lesion projections. The lesion projections were then inserted into patient projections (decoded from commercial CT raw data with the assistance of the vendor) and reconstructed to acquire hybrid images. To validate the accuracy of the forward projection geometry, simulated images reconstructed from the forward projections of a digital ACR phantom were compared to physically acquired ACR phantom images. To validate the hybrid images, lesion models were inserted into patient images and visually assessed. Results showed that the simulated phantom images and the physically acquired phantom images were very similar in terms of HU accuracy and high-contrast resolution. The lesions in the hybrid images had a realistic appearance and merged naturally into the liver background. In addition, the inserted lesions demonstrated a reconstruction-parameter-dependent appearance. Compared to the conventional image-domain approach, our method enables more realistic hybrid images for image quality assessment.
Examining wide-arc digital breast tomosynthesis: optimization using a visual-search model observer
Mathematical model observers are expected to assist in the preclinical optimization of image acquisition and reconstruction parameters. A clinically realistic and robust model observer platform could help in multiparameter optimizations without requiring frequent human-observer validations. We are developing search-capable visual-search (VS) model observers with this potential. In this work, we present initial results on the optimization of DBT scan angle and the number of projection views for low-contrast mass detection. Comparison with human-observer results shows very good agreement. These results point towards the benefits of using relatively wider arcs and a low number of projections per arc degree for improved mass detection. These results are particularly interesting considering that FDA-approved DBT systems such as the Hologic Selenia Dimensions use a narrow (15-degree) acquisition arc and one projection per arc degree.
Performance comparison of breast imaging modalities using a 4AFC human observer study
Premkumar Elangovan, Alaleh Rashidnasab, Alistair Mackenzie, et al.
This work compares the visibility of spheres and simulated masses in 2D-mammography and tomosynthesis systems using human observer studies. Performing comparison studies between breast imaging systems poses a number of practical challenges within a clinical environment. We therefore adopted a simulation approach, which included synthetic breast blocks, a validated lesion simulation model and a set of validated image modelling tools, as a viable alternative to clinical trials. A series of 4-alternative forced choice (4AFC) human observer experiments was conducted for signal detection tasks using masses and spheres as targets. Five physicists participated in the study, viewing images with a 5 mm target at a range of contrast levels and 60 trials per experimental condition. The results showed that tomosynthesis has a lower threshold contrast than 2D-mammography for masses and spheres, and that detection studies using spheres may produce overly optimistic threshold contrast values.
X-Ray Imaging
X-ray attenuation of adipose breast tissue: in-vitro and in-vivo measurements using spectral imaging
The development of new x-ray imaging techniques often requires prior knowledge of tissue attenuation, but the sources of such information are sparse. We have measured the attenuation of adipose breast tissue using spectral imaging, in vitro and in vivo. For the in-vitro measurement, fixed samples of adipose breast tissue were imaged on a spectral mammography system, and the energy-dependent x-ray attenuation was measured in terms of equivalent thicknesses of aluminum and poly-methyl methacrylate (PMMA). For the in-vivo measurement, a similar procedure was applied to a number of spectral screening mammograms. The results of the two measurements agreed well and were consistent with published attenuation data and with measurements on tissue-equivalent material.
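A simple sketch of how a measured dual-bin attenuation can be expressed as equivalent aluminum and PMMA thicknesses is shown below, using effective per-bin linear attenuation coefficients so that the log-attenuations form a 2x2 linear system; the coefficient values are rough illustrative numbers, not calibrated data.

```python
# Illustrative equivalent-thickness decomposition: with effective linear
# attenuation coefficients per energy bin, the log-attenuations form a 2x2
# linear system in (t_Al, t_PMMA). Coefficients below are rough assumptions.
import numpy as np

def equivalent_thicknesses(log_atten, mu_al, mu_pmma):
    """Solve [mu_al, mu_pmma] @ [t_al, t_pmma]^T = log_atten for each bin."""
    A = np.column_stack([mu_al, mu_pmma])      # (2 bins) x (2 materials)
    return np.linalg.solve(A, log_atten)       # [t_al, t_pmma] in cm

mu_al = np.array([1.20, 0.70])                 # 1/cm, low and high bin (assumed)
mu_pmma = np.array([0.22, 0.19])               # 1/cm (assumed)
# Simulated adipose-like sample: 4 cm thick with assumed mu of 0.21 and 0.18.
log_atten = np.array([0.21 * 4.0, 0.18 * 4.0])
t_al, t_pmma = equivalent_thicknesses(log_atten, mu_al, mu_pmma)
print("equivalent Al (cm):", t_al, "equivalent PMMA (cm):", t_pmma)
```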
Detector, collimator and real-time reconstructor for a new scanning-beam digital x-ray (SBDX) prototype
Michael A. Speidel, Michael T. Tomkowiak, Amish N. Raval, et al.
Scanning-beam digital x-ray (SBDX) is an inverse geometry fluoroscopy system for low dose cardiac imaging. The use of a narrow, scanned x-ray beam in SBDX reduces detected x-ray scatter and improves dose efficiency; however, the tight beam collimation also limits the maximum achievable x-ray fluence. To increase the fluence available for imaging, we have constructed a new SBDX prototype with a wider x-ray beam, a larger-area detector, and a new real-time image reconstructor. Imaging is performed with a scanning source that generates 40,328 narrow overlapping projections from 71 x 71 focal spot positions for every 1/15 s scan period. A high-speed, 2-mm thick CdTe photon counting detector was constructed with 320x160 elements and a 10.6 cm x 5.3 cm area (full readout every 1.28 s), providing an 86% increase in area over the previous SBDX prototype. A matching multihole collimator was fabricated from layers of tungsten, brass, and lead, and a multi-GPU reconstructor was assembled to reconstruct the stream of captured detector images into full field-of-view images in real time. Thirty-two tomosynthetic planes spaced by 5 mm, plus a multiplane composite image, are produced for each scan frame. Noise equivalent quanta on the new SBDX prototype measured 63%-71% higher than on the previous prototype. The x-ray scatter fraction was 3.9-7.8% when imaging 23.3-32.6 cm acrylic phantoms, versus 2.3-4.2% with the previous prototype. Coronary angiographic imaging at 15 frames/s was successfully performed on the new SBDX prototype, with live display of either a multiplane composite or a single-plane image.
Digital breast tomosynthesis with minimal breast compression
David A. Scaduto, Min Yang, Jennifer Ripton-Snyder, et al.
Breast compression is utilized in mammography to improve image quality and reduce radiation dose. Lesion conspicuity is improved by reducing scatter effects on contrast and by reducing the superposition of tissue structures. However, patient discomfort due to breast compression has been cited as a potential cause of noncompliance with recommended screening practices. Further, compression may also occlude blood flow in the breast, complicating imaging with intravenous contrast agents and preventing accurate quantification of contrast enhancement and kinetics. Previous studies have investigated reducing breast compression in planar mammography and digital breast tomosynthesis (DBT), though this typically comes at the expense of degradation in image quality or increase in mean glandular dose (MGD). We propose to optimize the image acquisition technique for reduced compression in DBT without compromising image quality or increasing MGD. A zero-frequency signal-difference-to-noise ratio model is employed to investigate the relationship between tube potential, SDNR and MGD. Phantom and patient images are acquired on a prototype DBT system using the optimized imaging parameters and are assessed for image quality and lesion conspicuity. A preliminary assessment of patient motion during DBT with minimal compression is presented.
Computed Tomography II
Task-driven imaging in cone-beam computed tomography
G. J. Gang, J. W. Stayman, S. Ouadah, et al.
Purpose: The conventional workflow in interventional imaging often ignores a wealth of prior information about the patient anatomy and the imaging task. This work introduces a task-driven imaging framework that utilizes such information to prospectively design acquisition and reconstruction techniques for cone-beam CT (CBCT) in a manner that maximizes task-based performance in subsequent imaging procedures. Methods: The framework is employed in jointly optimizing tube current modulation, orbital tilt, and reconstruction parameters in filtered back-projection reconstruction for interventional imaging. Theoretical predictors of noise and resolution relate acquisition and reconstruction parameters to task-based detectability. Given a patient-specific prior image and a specification of the imaging task, an optimization algorithm prospectively identifies the combination of imaging parameters that maximizes task-based detectability. Initial investigations were performed for a variety of imaging tasks in an elliptical phantom and an anthropomorphic head phantom. Results: Optimization of tube current modulation and view-dependent reconstruction kernel was shown to have the greatest benefit for a directional task (e.g., identification of device or tissue orientation). The task-driven approach yielded techniques in which the dose and sharp kernels were concentrated in the views contributing the most to the signal power associated with the imaging task. For example, detectability in a line-pair detection task was improved by at least threefold compared to conventional approaches. For radially symmetric tasks, the task-driven strategy yielded results similar to a minimum-variance strategy in the absence of kernel modulation. Optimization of the orbital tilt successfully avoided highly attenuating structures that can confound the imaging task by introducing noise correlations masquerading at the spatial frequencies of interest. Conclusions: This work demonstrated the potential of a task-driven imaging framework to improve image quality and reduce dose beyond that achievable with conventional imaging approaches.
The rotate-plus-shift C-arm trajectory: complete CT data with limited angular rotation
In the last decade, C-arm-based cone-beam CT has become a widely used modality for intraoperative imaging. Typically, a C-arm scan is performed using a circle-like trajectory around a region of interest. An angular range of at least 180° plus fan angle must therefore be covered to ensure a completely sampled data set. This fact imposes constraints on the geometry and technical specifications of a C-arm system, for example a larger C radius or a smaller C opening. These technical modifications are usually not beneficial in terms of handling and usability of the C-arm during classical 2D applications like fluoroscopy. The method proposed in this paper relaxes the requirement of a 180° plus fan-angle rotation to acquire a complete data set. The proposed C-arm trajectory requires motorization of the orbital axis of the C and, ideally, of two orthogonal axes in the C plane. The trajectory consists of three parts: a rotation of the C around a defined iso-center and two translational movements parallel to the detector plane at the beginning and at the end of the rotation. Combining these three parts into one trajectory enables the acquisition of a completely sampled dataset using only 180° minus fan angle of rotation. To evaluate the method we show animal and cadaver scans acquired with a mobile C-arm prototype. We expect that the transition of this method into clinical routine will lead to a much broader use of intraoperative 3D imaging in a wide range of clinical applications.
Simultaneous imaging of multiple contrast agents using full-spectrum micro-CT
D. P. Clark, M. Touch, W. Barber, et al.
One of the major challenges for in vivo micro-computed tomography (micro-CT) imaging is poor soft tissue contrast. To increase contrast, exogenous contrast agents can be used as imaging probes. Combining these probes with a photon counting x-ray detector (PCXD) allows energy-sensitive CT and probe material decomposition from a series of images associated with different x-ray energies. We have implemented full-spectrum micro-CT using a PCXD and 2 keV energy sampling. We then decomposed multiple k-edge contrast materials present in an object (iodine, barium, and gadolinium) from water. Since the energy bins were quite narrow, the projection data were very noisy. This noise and further spectral distortions amplify errors in post-reconstruction material decompositions. Here, we propose and demonstrate a novel post-reconstruction denoising scheme which jointly enforces local and global gradient sparsity constraints, improving the contrast-to-noise ratio in full-spectrum micro-CT data and the resultant material decompositions. We performed experiments using both calibration phantoms and ex vivo mouse data. Denoising increased the material contrast-to-noise ratio by an average of 13 times relative to filtered backprojection reconstructions. The relative decomposition error after denoising was 21%. To further improve material decomposition accuracy, we also developed a model of the spectral distortions caused by PCXD imaging using known spectra from the radioactive isotopes Cd-109 and Ba-133. In future work, we plan to combine this model with the proposed denoising algorithm, enabling material decomposition with higher sensitivity and accuracy.
Spectral deblurring: an algorithm for high-resolution, hybrid spectral CT
D. P. Clark, C. T. Badea
We are developing a hybrid, dual-source micro-CT system based on the combined use of an energy integrating (EID) x-ray detector and a photon counting x-ray detector (PCXD). Due to their superior spectral resolving power, PCXDs have the potential to reduce radiation dose and to enable functional and molecular imaging with CT. In most current PCXDs, however, spatial resolution and field of view are limited by hardware development and charge sharing effects. To address these problems, we propose spectral deblurring—a relatively simple algorithm for increasing the spatial resolution of hybrid, spectral CT data. At the heart of the algorithm is the assumption that the underlying CT data is piecewise constant, enabling robust recovery in the presence of noise and spatial blur by enforcing gradient sparsity. After describing the proposed algorithm, we summarize simulation experiments which assess the trade-offs between spatial resolution, contrast, and material decomposition accuracy given realistic levels of noise. When the spatial resolution between imaging chains has a ratio of 5:1, spectral deblurring results in a 52% increase in the material decomposition accuracy of iodine, gadolinium, barium, and water vs. linear interpolation. For a ratio of 10:1, a realistic representation of our hybrid imaging system, a 52% improvement was also seen. Overall, we conclude that the performance breaks down around high frequency and low contrast structures. Following the simulation experiments, we apply the algorithm to ex vivo data acquired in a mouse injected with an iodinated contrast agent and surrounded by vials of iodine, gadolinium, barium, and water.
Performance comparison between static and dynamic cardiac CT on perfusion quantitation and patient classification tasks
Michael Bindschadler, Dimple Modgil, Kelley R. Branch, et al.
Cardiac CT acquisitions for perfusion assessment can be performed in a dynamic or static mode. In this simulation study, we evaluate the relative classification and quantification performance of these modes for assessing myocardial blood flow (MBF). In the dynamic method, a series of low dose cardiac CT acquisitions yields data on contrast bolus dynamics over time; these data are fit with a model to give a quantitative MBF estimate. In the static method, a single CT acquisition is obtained, and the relative CT numbers in the myocardium are used to infer perfusion states. The static method does not directly yield a quantitative estimate of MBF, but these estimates can be roughly approximated by introducing assumed linear relationships between CT number and MBF, consistent with the ways such images are typically visually interpreted. Data obtained by either method may be used for a variety of clinical tasks, including 1) stratifying patients into differing categories of ischemia and 2) using the quantitative MBF estimate directly to evaluate ischemic disease severity. Through simulations, we evaluate the performance on each of these tasks. The dynamic method has very low bias in MBF estimates, making it particularly suitable for quantitative estimation. At matched radiation dose levels, ROC analysis demonstrated that the static method, with its high bias but generally lower variance, has superior performance in stratifying patients, especially for larger patients.
Tomosynthesis
Methods to mitigate data truncation artifacts in multi-contrast tomosynthesis image reconstructions
Differential phase contrast imaging is a promising new imaging modality that utilizes the refraction rather than the absorption of x-rays to image an object. A Talbot-Lau interferometer may be used to permit differential phase contrast imaging with a conventional medical x-ray source and detector. However, the gratings currently fabricated for these interferometers are often relatively small. As a result, data truncation artifacts are often observed in tomographic acquisition and reconstruction. For truncated data in x-ray absorption imaging, methods have been introduced to mitigate the truncation artifacts; however, the same strategies may not be appropriate for differential phase contrast or dark-field tomographic imaging. In this work, several new methods to mitigate data truncation artifacts in a multi-contrast imaging system are proposed and evaluated for tomosynthesis data acquisitions. The proposed methods were validated using experimental data acquired from a bovine udder as well as several cadaver breast specimens using a benchtop system at our facility.
Feasibility study of the diagnosis and monitoring of cystic fibrosis in pediatric patients using stationary digital chest tomosynthesis
Marci Potuzko, Jing Shan, Caleb Pearce, et al.
Digital chest tomosynthesis (DCT) is a 3D imaging modality which has been shown to approach the diagnostic capability of CT, but uses only one-tenth the radiation dose of CT. One limitation of current commercial DCT is the mechanical motion of the x-ray source which prolongs image acquisition time and introduces motion blurring in images. By using a carbon nanotube (CNT) x-ray source array, we have developed a stationary digital chest tomosynthesis (s-DCT) system which can acquire tomosynthesis images without mechanical motion, thus enhancing the image quality. The low dose and high quality 3D image makes the s-DCT system a viable imaging tool for monitoring cystic fibrosis (CF) patients. The low dose is especially important in pediatric patients who are both more radiosensitive and have a longer lifespan for radiation symptoms to develop. The purpose of this research is to evaluate the feasibility of using s-DCT as a faster, lower dose means for diagnosis and monitoring of CF in pediatric patients. We have created an imaging phantom by injecting a gelatinous mucus substitute into porcine lungs and imaging the lungs from within an anthropomorphic hollow chest phantom in order to mimic the human conditions of a CF patient in the laboratory setting. We have found that our s-DCT images show evidence of mucus plugging in the lungs and provide a clear picture of the airways in the lung, allowing for the possibility of using s-DCT to supplement or replace CT as the imaging modality for CF patients.
Segmentation methods for breast vasculature in dual-energy contrast-enhanced digital breast tomosynthesis
Kristen C. Lau, Hyo Min Lee, Tanushriya Singh, et al.
Dual-energy contrast-enhanced digital breast tomosynthesis (DE CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania has an ongoing DE CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). DE images are obtained by a weighted logarithmic subtraction of the high-energy (HE) and low-energy (LE) image pairs. Temporal subtraction of the post-contrast DE images from the pre-contrast DE image is performed to analyze iodine uptake. Our previous work investigated image registration methods to correct for patient motion, enhancing the evaluation of vascular kinetics. In this project we investigate a segmentation algorithm that identifies blood vessels in the breast from our temporal DE subtraction images. Anisotropic diffusion filtering, Gabor filtering, and morphological filtering are used to enhance vessel features. Vessel labeling methods are then used to successfully distinguish vessel features from background. Statistical and clinical evaluations of segmentation accuracy in DE CE-DBT images are ongoing.
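A minimal Python sketch of the weighted logarithmic subtraction and temporal subtraction steps described above is given below; the array shapes, toy detector signals, and the tissue-cancellation weight w are illustrative assumptions rather than values from the clinical study.

```python
# Minimal sketch of dual-energy weighted logarithmic subtraction, followed by
# temporal subtraction of a pre-contrast DE image. Array names and the tissue-
# cancellation weight `w` are illustrative assumptions, not the study's values.
import numpy as np

def dual_energy_image(high_energy, low_energy, w):
    """DE image as a weighted log subtraction of the HE/LE pair."""
    return np.log(high_energy) - w * np.log(low_energy)

def iodine_uptake(de_post, de_pre):
    """Temporal subtraction highlighting iodine uptake."""
    return de_post - de_pre

# Toy detector signals (must be positive for the logarithm)
rng = np.random.default_rng(1)
shape = (256, 256)
le_pre, he_pre = rng.uniform(900, 1100, shape), rng.uniform(450, 550, shape)
le_post, he_post = le_pre.copy(), he_pre.copy()
he_post[100:140, 100:140] *= 0.92      # simulated iodine enhancement

w = 0.55                               # hypothetical cancellation weight
de_pre = dual_energy_image(he_pre, le_pre, w)
de_post = dual_energy_image(he_post, le_post, w)
uptake = iodine_uptake(de_post, de_pre)
print("mean uptake in lesion ROI:", uptake[100:140, 100:140].mean())
```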
Initial clinical evaluation of stationary digital breast tomosynthesis
Jabari Calliste, Andrew W. Tucker, Emily Gidcumb, et al.
Full-field digital mammography (FFDM) has been the gold standard for mammography. It detects the presence, distribution, and morphology of microcalcifications (MCs), helping predict malignancy. Digital breast tomosynthesis (DBT) has overcome some limitations of FFDM, such as reduced sensitivity, specificity, and positive predictive value caused by superimposition of tissue, especially in dense breasts. Current DBT systems move an x-ray tube in either continuous (CM) or step-and-shoot motion (SSM). These systems are less effective than FFDM in MC detection due to lower spatial resolution. Motion of the x-ray source and system mechanical instability cause image blur. The image quality is further affected by patient motion due to the relatively long scan time. We developed a stationary DBT (s-DBT) system using a carbon nanotube (CNT) x-ray source array. The CNT array is electronically controlled, rapidly acquiring projection images over a large angular span with zero tube motion. No source motion, coupled with a large angular span, results in improved in-plane and depth resolution. Using physical phantoms and human specimens, this system demonstrated higher spatial resolution than CM DBT. The objective of this study is to compare the diagnostic clinical performance of s-DBT to that of FFDM. Under UNC's IRB regulations, 100 patients with breast lesions are being recruited and imaged with both modalities. A reader study will compare the diagnostic accuracy of the modalities. We have successfully imaged the first 30 patients. Initial results indicate that s-DBT alone produces comparable MC sharpness and increased lesion conspicuity compared to FFDM.
The impact of breast structure on lesion detection in breast tomosynthesis
Nooshin Kiarashi, Loren W. Nolte, Joseph Y. Lo, et al.
Virtual clinical trials (VCT) can be carefully designed to inform, orient, or potentially replace clinical trials. The focus of this study was to demonstrate the capability of the sophisticated tools that can be used in the design, implementation, and performance analysis of VCTs, through characterization of the effect of background tissue density and heterogeneity on the detection of irregular masses in digital breast tomosynthesis. Twenty breast phantoms from the extended cardiac-torso (XCAT) family, generated based on dedicated breast computed tomography of human subjects, were used to extract a total of 2173 volumes of interest (VOI) from simulated tomosynthesis images. Five different lesions, modeled after human subject tomosynthesis images, were embedded in the breasts, for a total of 6×2173 VOIs with and without lesions. Effects of background tissue density and heterogeneity on the detection of the lesions were studied by implementing a doubly composite hypothesis signal detection theory paradigm with location known exactly, lesion known exactly, and background known statistically. The results indicated that the detection performance, as measured by the area under the receiver operating characteristic (ROC) curve, deteriorated as density increased, yielding findings consistent with clinical studies. The detection performance varied substantially across the twenty breasts. Furthermore, the log-likelihood ratio under H0 and H1 seemed to be affected by background tissue density and heterogeneity differently. Considering background tissue variability can change the outcomes of a VCT and is hence of crucial importance. The XCAT breast phantoms can address this concern by offering realistic modeling of background tissue variability based on a wide range of human subjects.
Circular tomosynthesis for neuro perfusion imaging on an interventional C-arm
Bernhard E. Claus, David A. Langan, Omar Al Assad, et al.
There is a clinical need to improve cerebral perfusion assessment during the treatment of ischemic stroke in the interventional suite. The clinician is able to determine whether the arterial blockage was successfully opened but is unable to sufficiently assess blood flow through the parenchyma. C-arm spin acquisitions can image the cerebral blood volume (CBV) but are challenged to capture the temporal dynamics of the iodinated contrast bolus, which is required to derive, e.g., cerebral blood flow (CBF) and mean transit time (MTT). Here we propose to utilize a circular tomosynthesis acquisition on the C-arm to achieve the necessary temporal sampling of the volume at the cost of incomplete data. We address the incomplete data problem by using tools from compressed sensing and incorporate temporal interpolation to improve our temporal resolution. A CT neuro perfusion data set is utilized for generating a dynamic (4D) volumetric model from which simulated tomo projections are generated. The 4D model is also used as a ground truth reference for performance evaluation. The performance that may be achieved with the tomo acquisition and 4D reconstruction (under simulation conditions, i.e., without considering data fidelity limitations due to imaging physics and imaging chain) is evaluated. In the considered scenario, good agreement between the ground truth and the tomo reconstruction in the parenchyma was achieved.
Poster Session
NSECT sinogram sampling optimization by normalized mutual information
Rodrigo S. Viana, Miguel A Galarreta-Valverde, Choukri Mekkaoui, et al.
Neutron Stimulated Emission Computed Tomography (NSECT) is an emerging noninvasive imaging technique that measures the distribution of isotopes in biological tissue using fast-neutron inelastic scattering reactions. As a high-energy neutron beam illuminates the sample, the excited nuclei emit gamma rays whose energies are unique to the emitting nuclei. Tomographic images of each element in the spectrum can then be reconstructed to represent the spatial distribution of elements within the sample using a first-generation tomographic scan. NSECT's high radiation dose deposition, however, requires a sampling strategy that can yield maximum image quality under a reasonable radiation dose. In this work, we introduce an NSECT sinogram sampling technique based on the Normalized Mutual Information (NMI) of the reconstructed images. By applying the Radon transform to the ground-truth image obtained from a carbon-based synthetic phantom, different NSECT sinogram configurations were simulated and compared using the NMI as a similarity measure. The proposed methodology was also applied to NSECT images acquired using MCNP5 Monte Carlo simulations of the same phantom to validate our strategy. Results show that NMI can be used to robustly predict the quality of the reconstructed NSECT images, leading to an optimal NSECT acquisition and a minimal absorbed dose to the patient.
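The normalized mutual information used as the similarity measure can be computed from a joint histogram, as in the following Python sketch; the bin count and the synthetic images are assumptions for illustration only.

```python
# Minimal sketch of normalized mutual information (NMI) between a reconstructed
# NSECT-like image and the ground-truth image, computed from a joint histogram.
# Bin count and image contents are illustrative assumptions.
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # NMI = (H(A) + H(B)) / H(A, B); higher means more similar
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(2)
truth = rng.random((128, 128))
recon = truth + rng.normal(scale=0.1, size=truth.shape)   # noisy reconstruction
print(f"NMI(truth, recon): {normalized_mutual_information(truth, recon):.3f}")
print(f"NMI(truth, truth): {normalized_mutual_information(truth, truth):.3f}")
```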
Feasibility study of dose reduction in digital breast tomosynthesis using non-local denoising algorithms
The main purpose of this work is to study the ability of denoising algorithms to reduce the radiation dose in Digital Breast Tomosynthesis (DBT) examinations. Clinical use of DBT is normally performed in "combo-mode", in which, in addition to the DBT projections, a 2D mammogram is taken with the standard radiation dose. As a result, patients have been exposed to radiation doses higher than those used in digital mammography. Thus, efforts to reduce the radiation dose in DBT examinations are of great interest. However, a decrease in dose leads to an increased quantum noise level and a related decrease in image quality. This work aims to address this problem through the use of denoising techniques, which could allow for dose reduction while keeping the image quality acceptable. We studied two state-of-the-art denoising techniques for filtering the quantum noise due to the reduced dose in DBT projections: Non-local Means (NLM) and Block-matching 3D (BM3D). We acquired DBT projections at different dose levels of an anthropomorphic physical breast phantom with inserted simulated microcalcifications. Then, we found the optimal filtering parameters at which the denoising algorithms are capable of recovering the image quality of DBT projections acquired with the standard radiation dose. Results using objective image quality assessment metrics showed that the BM3D algorithm achieved better noise adjustment (mean difference in peak signal-to-noise ratio < 0.1 dB) and less blurring (mean difference in image sharpness ~ 6%) than NLM for the projections acquired with lower radiation doses.
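For readers unfamiliar with the evaluation pipeline, a hedged Python sketch of one step is shown below: denoising a low-dose projection with non-local means (via scikit-image) and comparing the peak signal-to-noise ratio against a reference image. The synthetic images and filter parameters are assumptions, not the phantom data or the optimized parameters from this study.

```python
# Minimal sketch: denoise a low-dose DBT-like projection with non-local means
# and compare PSNR against a "standard-dose" reference. Filter parameters are
# illustrative assumptions, not the optimized values found in the study.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma
from skimage.metrics import peak_signal_noise_ratio

rng = np.random.default_rng(3)
reference = np.clip(rng.normal(0.5, 0.05, (256, 256)), 0, 1)   # stand-in "full dose"
low_dose = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)

sigma = np.mean(estimate_sigma(low_dose))
denoised = denoise_nl_means(low_dose, h=1.15 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)

print(f"PSNR low dose : {peak_signal_noise_ratio(reference, low_dose, data_range=1.0):.2f} dB")
print(f"PSNR denoised : {peak_signal_noise_ratio(reference, denoised, data_range=1.0):.2f} dB")
```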
Virtual clinical trials using inserted pathology in clinical images: investigation of assumptions for local glandularity and noise
Alaleh Rashidnasab, Premkumar Elangovan, Alistair Mackenzie, et al.
Virtual clinical trials have been proposed as a viable alternative to clinical trials for testing and comparing the performance of breast imaging systems. One of the main simulation methodologies used in virtual trials employs clinical images of patients in which simulated models of cancer are inserted using a physics-based template multiplication technique. The purpose of this work is to investigate two assumptions commonly made in this simulation approach: first, given the absence of useful depth information in a clinical situation, an average measure of the local breast glandularity is commonly used as an estimate of the breast composition at the insertion site; second, it is also assumed that any change in the relative noise in the image at the insertion site after insertion of a mass is negligible. To test the validity of these assumptions, spheres representing idealised masses and anthropomorphic computational breast phantoms with perfect prior knowledge of local tissue composition and distribution were used. Results from several region of interest (ROI) insertions demonstrated that contrast obtained with the template multiplication insertion method does not vary with insertion depth, in contrast to the true depth-wise variation in contrast obtained from voxel replacement in a heterogeneous phantom. It was also found that the amount of noise is underestimated by 8%-29% when spherical masses are inserted using the template multiplication method compared to voxel replacement for the test conditions. This resulted in up to 12% variation in contrast-to-noise ratio (CNR) values between the template multiplication and voxel replacement methods.
Region of interest processing for iterative reconstruction in x-ray computed tomography
Felix K. Kopp, Radin A. Nasirudin, Kai Mei, et al.
Recent advancements in graphics card technology have increased the performance of parallel computing and contributed to the introduction of iterative reconstruction methods for x-ray computed tomography in clinical CT scanners. Iterative maximum likelihood (ML) based reconstruction methods are known to reduce image noise and to improve the diagnostic quality of low-dose CT. However, iterative reconstruction of a region of interest (ROI), especially ML based, is challenging. Yet for some clinical procedures, such as cardiac CT, only a ROI is needed for diagnosis. A high-resolution reconstruction of the full field of view (FOV) consumes unnecessary computational effort and results in reconstruction times slower than clinically acceptable. In this work, we present an extension and evaluation of an existing ROI processing algorithm. In particular, improvements for the equalization between regions inside and outside of a ROI are proposed. The evaluation was performed on data collected from a clinical CT scanner. The performance of the different algorithms is assessed qualitatively and quantitatively. Our solution to the ROI problem provides an increase in signal-to-noise ratio and leads to visually less noise in the final reconstruction. The reconstruction speed of our technique was observed to be comparable with previously proposed techniques. The development of ROI processing algorithms in combination with iterative reconstruction will provide higher diagnostic quality in the near future.
Improving low-dose cardiac CT images using 3D sparse representation based processing
Luyao Shi, Yang Chen, Limin Luo
Cardiac computed tomography (CCT) has been widely used in the diagnosis of coronary artery disease due to its continuously improving temporal and spatial resolution. When helical CT with a low-pitch scanning mode is used, the effective radiation dose can be significant compared to other radiological exams. Many methods have been developed to reduce radiation dose in coronary CT exams, including high-pitch scans using dual-source CT scanners and step-and-shoot scanning modes for both single-source and dual-source CT scanners. Additionally, software methods have been proposed to reduce noise in the reconstructed CT images and thus offer the opportunity to reduce radiation dose while maintaining the desired diagnostic performance for a given imaging task. In this paper, we propose that low-dose scans should be considered in order to avoid the harm of accumulating unnecessary x-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. Accordingly, a 3D dictionary-representation-based image processing method is proposed to reduce CT image noise. Information on both spatial and temporal structure continuity is utilized in the sparse representation to improve the performance of the image processing method. Clinical cases were used to validate the proposed method.
Complete optical stack modeling for CMOS-based medical x-ray detectors
Alexander S. Zyazin, Inge M. Peters
We have developed a simulation tool for modeling the performance of CMOS-based medical x-ray detectors, based on the Monte Carlo toolkit GEANT4. Following the Fujita-Lubberts-Swank approach recently reported by Star-Lack et al., we calculate modulation transfer function MTF(f), noise power spectrum NPS(f) and detective quantum efficiency DQE(f) curves. The complete optical stack is modeled, including scintillator, fiber optic plate (FOP), optical adhesive and CMOS image sensor. For critical parts of the stack, detailed models have been developed, taking into account their respective microstructure. This includes two different scintillator types: Gd2O2S:Tb (GOS) and CsI:Tl. The granular structure of the former is modeled using anisotropic Mie scattering. The columnar structure of the latter is introduced into calculations directly, using the parameterization capabilities of GEANT4. The underlying homogeneous CsI layer is also incorporated into the model as well as the optional reflective layer on top of the scintillator screen or the protective polymer top coat. The FOP is modeled as an array of hexagonal bundles of fibers. The simulated CMOS stack consists of layers of Si3N4 and SiO2 on top of a silicon pixel array. The model is validated against measurements of various test detector structures, using different x-ray spectra (RQA5 and RQA-M2), showing good match between calculated and measured MTF(f) and DQE(f) curves.
Incorporating corrections for the head-holder and compensation filter when calculating skin dose during fluoroscopically guided interventions
The skin dose tracking system (DTS) that we developed provides a color-coded illustration of the cumulative skin dose distribution on a 3D graphic of the patient during fluoroscopic procedures for immediate feedback to the interventionist. To improve the accuracy of the dose calculation, we have now incorporated two additional important corrections: (1) for the holder used to immobilize the head in neuro-interventions, and (2) for the built-in compensation filters used for beam equalization. Both devices have been modeled in the DTS software so that beam intensity corrections can be made. The head-holder is modeled as two concentric hemi-cylindrical surfaces such that the path length between those surfaces can be determined for rays to individual points on the skin surface. The head-holder on the imaging system we used was measured to attenuate the primary x-rays by 10 to 20% at normal incidence, and up to 40% at non-normal incidence. In addition, three compensation filters of different shape are built into the collimator apparatus and were measured to have attenuation factors ranging from 58% to 99%, depending on kVp and beam filtration. These filters can translate and rotate in the beam, and their motion is tracked by the DTS using the digital signal from the imaging system. When it is determined that a ray to a given point on the skin passes through a compensation filter, the appropriate attenuation correction is applied. These corrections have been successfully incorporated in the DTS software to provide a more accurate determination of skin dose.
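A minimal Python sketch of the per-point correction logic is shown below; the transmission factors and distances are placeholders, not the measured values used in the DTS.

```python
# Minimal sketch of per-point skin-dose correction: inverse-square scaling plus
# attenuation corrections for rays that pass through the head-holder and/or a
# compensation filter. Transmission factors and distances are illustrative
# assumptions, not the calibrated values used by the DTS.
import numpy as np

def corrected_dose(dose_ref, d_ref, d_point, through_holder, through_filter,
                   holder_transmission=0.85, filter_transmission=0.6):
    """Scale a reference entrance dose to a specific skin point."""
    dose = dose_ref * (d_ref / d_point) ** 2          # inverse-square correction
    if through_holder:
        dose *= holder_transmission                   # head-holder attenuation
    if through_filter:
        dose *= filter_transmission                   # compensation filter
    return dose

# Example: reference dose 2.0 mGy measured at 60 cm from the focal spot
print(corrected_dose(2.0, d_ref=60.0, d_point=65.0,
                     through_holder=True, through_filter=False))
```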
An attempt to estimate out-of-plane lung nodule elongation in tomosynthesis images
Artur Chodorowski, Jonathan Arvidsson, Christina Söderman, et al.
In chest tomosynthesis (TS) the most commonly used reconstruction methods are based on Filtered Back Projection (FBP) algorithms. Due to the limited angular range of x-ray projections, FBP reconstructed data is typically associated with a low spatial resolution in the out-of-plane dimension. Lung nodule measures that depend on depth information such as 3D shape and volume are therefore difficult to estimate. In this paper the relation between features from FBP reconstructed lung nodules and the true out-of-plane nodule elongation is investigated and a method for estimating the out-of-plane nodule elongation is proposed. In order to study these relations a number of steps that include simulation of spheroidal-shaped nodules, insertion into synthetic data volumes, construction of TS-projections and FBP-reconstruction were performed. In addition, the same procedure was used to simulate nodules and insert them into clinical chest TS projection data. The reconstructed nodule data was then investigated with respect to in-plane diameter, out-of-plane elongation, and attenuation coefficient. It was found that the voxel value in each nodule increased linearly with nodule elongation, for nodules with a constant attenuation coefficient. Similarly, the voxel value increased linearly with in-plane diameter. These observations indicate the possibility to predict the nodule elongation from the reconstructed voxel intensity values. Such a method would represent a quantitative approach to chest tomosynthesis that may be useful in future work on volume and growth rate estimation of lung nodules.
A wire scanning based method for geometric calibration of high resolution CT system
Ruijie Jiang, Guang Li, Ning Gu, et al.
This paper addresses the geometric calibration of high-resolution CT (computed tomography) systems. Geometric calibration refers to the estimation of a set of parameters that describe the geometry of the CT system. These parameters are so important that even small errors in them seriously degrade the reconstructed images, so more accurate geometric parameters are needed in higher-resolution CT systems. Conventional calibration methods, however, are not accurate enough for current high-resolution CT systems, whose resolution can reach sub-micrometer or even tens of nanometers. In this paper, we propose a new calibration method with higher accuracy, based on optimization theory. The key advantage of this method is a new cost function that relates the geometrical parameters to the binary reconstructed image of a thin wire: when the geometrical parameters are accurate, the cost function reaches its maximum value. In the experiment, we scanned a thin wire as the calibration data and a thin bamboo stick as the validation data to verify the correctness of the proposed method. Compared with the image reconstructed using geometric parameters obtained by the conventional calibration method, the image reconstructed with the parameters calculated by our method shows fewer geometric artifacts, which verifies that our method yields more accurate geometric calibration parameters. Although we calculated only one geometric parameter in this paper, the geometric artifacts were still eliminated significantly, and the method can easily be generalized to the calibration of all geometrical parameters in fan-beam or cone-beam CT systems.
Reduction of iodinated contrast medium in CT: feasibility study
Radin A. Nasirudin, Kai Mei, Felix K. Kopp, et al.
In CT, the magnitude of enhancement is proportional to the amount of contrast medium (CM) injected. However, high doses of iodinated CM pose health risks, ranging from mild side effects to serious complications such as contrast-induced nephropathy (CIN). This work presents a method that enables the reduction of CM dosage without affecting the diagnostic image quality. The proposed technique takes advantage of the additional spectral information provided by photon-counting CT systems. In the first step, we apply a material decomposition technique to the projection data to discriminate iodine from other materials. Then, we estimate the noise of the decomposed image by calculating the Cramér-Rao lower bound of the parameter estimator. Next, we iteratively reconstruct the iodine-only image, using the decomposed image and the noise estimate as input to a maximum-likelihood iterative reconstruction algorithm. Finally, we combine the iodine-only image with the original image to enhance the contrast of low iodine concentrations. The resulting reconstructions show notably improved contrast in the final images. Quantitatively, the combined image has a significantly improved CNR, while the measured concentrations are closer to the actual iodine concentrations. These preliminary results show the possibility of reducing the clinical dosage of iodine without affecting the diagnostic image quality.
Physics-based modeling of computed tomography systems
We present a theoretical framework describing projections obtained from computed tomography systems that considers the physics of each component of the system. The projection model mainly consists of the attenuation of x-ray photons through objects, including x-ray scatter, and the detection of attenuated/scattered x-ray photons at pixel detector arrays. X-ray photons are attenuated according to the Beer-Lambert law and scattered according to the Klein-Nishina formula. The cascaded signal-transfer model for the detector includes x-ray photon detection and light photon conversion/spreading in scintillators, light photon detection in photodiodes, and the addition of electronic noise quanta. Image noise is modeled by redistributing the pixel signals on a pixel-by-pixel basis at each image formation stage using the appropriate distribution functions. Instead of iterating the ray tracing over each energy bin in the x-ray spectrum, we first perform the ray tracing for an object considering only the thickness of each component. Then, we assign energy-dependent linear attenuation coefficients to each component in the projected images. This approach reduces the computation time by a factor of the number of energy bins in the x-ray spectrum divided by the number of components in the object compared with the conventional ray-tracing method. All the methods developed in this study are validated against measurements or Monte Carlo simulations.
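The following Python sketch illustrates the described speed-up for a single ray, assuming a toy spectrum and toy attenuation and thickness values: material thicknesses are ray-traced once, and the Beer-Lambert attenuation is then evaluated across all energy bins.

```python
# Minimal sketch of the projection model's speed-up: ray-trace material
# thicknesses once, then apply energy-dependent attenuation per the
# Beer-Lambert law across the spectrum. Spectrum and mu values are toy numbers.
import numpy as np

# Toy spectrum: photon fluence per energy bin (arbitrary units)
energies_keV = np.array([40.0, 60.0, 80.0, 100.0])
fluence = np.array([0.2, 0.4, 0.3, 0.1])

# Energy-dependent linear attenuation coefficients per material (1/cm), assumed
mu = {
    "soft_tissue": np.array([0.27, 0.21, 0.18, 0.17]),
    "bone":        np.array([1.28, 0.57, 0.43, 0.37]),
}

# Ray-traced intersection lengths per material for one detector pixel (cm),
# computed once and reused for every energy bin
thickness = {"soft_tissue": 18.0, "bone": 2.5}

# Beer-Lambert: I = sum_E S(E) * exp(-sum_m mu_m(E) * t_m)
line_integral = sum(mu[m] * thickness[m] for m in thickness)
detected = np.sum(fluence * np.exp(-line_integral))
flood = np.sum(fluence)
print(f"transmission for this ray: {detected / flood:.4f}")
print(f"log projection value     : {-np.log(detected / flood):.3f}")
```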
A novel CT-FFR method for the coronary artery based on 4D-CT image analysis and structural and fluid analysis
K. Hirohata, A. Kano, A. Goryu, et al.
Non-invasive fractional flow reserve derived from CT coronary angiography (CT-FFR) has to date typically been performed using the principles of fluid analysis, in which a lumped-parameter coronary vascular bed model is assigned to represent the impedance of the downstream coronary vascular networks absent from the computational domain for each coronary outlet. This approach may have a number of limitations: it may not account for the impact of myocardial contraction and relaxation during the cardiac cycle, patient-specific boundary conditions for coronary artery outlets, or vessel stiffness. We have developed a novel approach based on 4D-CT image tracking (registration) and structural and fluid analysis to address these issues. In our approach, we analyzed the deformation variation and the volume variation of vessels, primarily from 70% to 100% of the cardiac phase, to better define boundary conditions and vessel stiffness. We used a statistical estimation method based on a hierarchical Bayes model to integrate 4D-CT measurements and structural and fluid analysis data. Under these analysis conditions, we performed structural and fluid analysis to determine pressure, flow rate and CT-FFR. The consistency of this method has been verified by comparing 4D-CT-FFR analysis results derived from five clinical 4D-CT datasets with invasive FFR measurements. Additionally, phantom experiments with flexible tubes with/without stenosis using pulsating pumps, flow sensors and pressure sensors were performed. Our results show that the proposed 4D-CT-FFR analysis method has the potential to accurately estimate the effect of coronary artery stenosis on blood flow.
NVIDIA OptiX ray-tracing engine as a new tool for modelling medical imaging systems
Jakub Pietrzak, Krzysztof Kacperski, Marek Cieślar
The most accurate technique to model the path of x- and gamma radiation through a numerically defined object is Monte Carlo simulation, which follows single photons according to their interaction probabilities. A simplified and much faster approach, which just integrates total interaction probabilities along selected paths, is known as ray tracing. Both techniques are used in medical imaging for simulating real imaging systems and as projectors required in iterative tomographic reconstruction algorithms. These approaches are well suited to massively parallel implementation, e.g., on Graphics Processing Units (GPUs), which can greatly accelerate the computation at a relatively low cost. In this paper we describe the application of the NVIDIA OptiX ray-tracing engine, popular in professional graphics and rendering applications, as a new powerful tool for x- and gamma ray tracing in medical imaging. It allows the implementation of a variety of physical interactions of rays with pixel-, mesh- or NURBS-based objects, and the recording of any required quantities, such as path integrals, interaction sites, deposited energies, and others. Using the OptiX engine we have implemented a code for rapid Monte Carlo simulations of Single Photon Emission Computed Tomography (SPECT) imaging, as well as a ray-tracing projector, which can be used in reconstruction algorithms. The engine generates efficient, scalable and optimized GPU code, ready to run on multi-GPU heterogeneous systems. We have compared the results of our simulations with the GATE package. With the OptiX engine, the computation time of a Monte Carlo simulation can be reduced from days to minutes.
Feasibility of ray- and pixel-driven projector/back-projector in linear motion tomosynthesis
Sunghoon Choi, Seungwan Lee, Young-Jin Lee, et al.
System modeling that includes the geometric motion of the source, phantom, and detector for reconstructing tomographic images is well known in the medical imaging field. Especially in a digital X-ray tomosynthesis (DTS) system, which scans an object over a limited angular range rather than a full 360°, accurate system modeling is required to reconstruct high-quality cross-sectional images. In this study, we analytically modeled a ray-driven forward projector and a pixel-driven back-projector. We first acquired forward-projected images of a computerized Shepp-Logan phantom over a ±20° angular range using the ray-driven projector. We then reconstructed the analytically scanned phantom using the pixel-driven back-projector based on conventional filtered back-projection (FBP) for tomosynthesis. We evaluated root-mean-square errors (RMSEs) and horizontal profiles of normalized pixel values in the reconstructed axial cross-sectional images. The results indicated that the pixel-driven back-projector combined with the ray-driven projector showed low RMSEs of 0.25, 0.49, 0.80, 1.46, and 0.94 across five different regions of interest (ROIs). Horizontal profiles of normalized pixel values for the reference phantom and the reconstructed object were similar, which demonstrated that the ray-driven projector and pixel-driven back-projector can be utilized in linear-motion DTS.
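To illustrate the pixel-driven back-projection principle for a limited (±20°) angular range, a simplified Python sketch is given below; it uses scikit-image's parallel-beam radon transform as a stand-in forward projector and is not the authors' linear-motion DTS geometry or their ray-driven projector.

```python
# Minimal sketch of a pixel-driven back-projector for a limited (+/-20 degree)
# parallel-beam geometry, paired here with scikit-image's radon() as a stand-in
# forward projector. This illustrates the pixel-driven principle only; it is
# not the authors' DTS geometry or their ray-driven projector.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon

phantom = shepp_logan_phantom()                    # 400 x 400 test image
angles = np.linspace(-20.0, 20.0, 21)              # limited angular range
sinogram = radon(phantom, theta=angles)            # (n_detector, n_angles)

n = phantom.shape[0]
coords = np.arange(n) - (n - 1) / 2.0
x, y = np.meshgrid(coords, coords, indexing="xy")
recon = np.zeros_like(phantom)

for i, theta in enumerate(np.deg2rad(angles)):
    # Detector coordinate of each image pixel for this view (pixel-driven)
    t = x * np.cos(theta) + y * np.sin(theta) + (sinogram.shape[0] - 1) / 2.0
    recon += np.interp(t, np.arange(sinogram.shape[0]), sinogram[:, i])

recon /= len(angles)                               # simple back projection (no filter)
rmse = np.sqrt(np.mean((recon / recon.max() - phantom) ** 2))
print(f"RMSE of normalized unfiltered backprojection: {rmse:.3f}")
```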
A rapid parallelization of cone-beam projection and back-projection operator based on texture fetching interpolation
Projection and back-projection are the most computationally demanding parts of Computed Tomography (CT) reconstruction. Parallelization strategies using GPU computing techniques have been introduced. In this paper we present a new parallelization scheme for both projection and back-projection. The proposed method is based on NVIDIA's CUDA technology. Instead of building a complex model, we aimed at optimizing the existing algorithm and making it suitable for CUDA implementation so as to gain computation speed. Besides making use of texture fetching, which provides faster interpolation, we fixed the number of samples in the projection computation to ensure the synchronization of blocks and threads, thus preventing the latency caused by inconsistent computational complexity. Experimental results demonstrate the computational efficiency and image quality of the proposed method.
Using digital subtraction in computer simulated images as a tool to aid the visual detection of masked lesions in dense breasts
Homero Schiabel, Luciana T. Guimarães, Maria A. Z. Sousa
This work proposes a simulation model involving subtraction of digital mammography images obtained at different x-ray beam energies to aid the detection of malignant breast lesions. The behavior of the absorption coefficients of three main structures of clinical interest (adipose tissue, fibroglandular tissue, and typical carcinoma) as a function of the beam energy from a Mo x-ray tube was the basis for developing a computer simulation of the acquired images. The simulation considered a typical compressed breast 4.5 cm thick; variations of the carcinoma and glandular tissue thicknesses (0.4 to 2.0 cm and 4.1 to 2.5 cm, respectively) were evaluated as a function of the mean photon energy (14 to 25 keV, the typical mammography energy range). Results have shown that: (a) if the carcinoma thickness is over 0.4 cm, its detection may be feasible even when masked by fibrous tissue, with exposures in the range of 19 to 25 keV; (b) for a masked carcinoma with thickness in the range of 0.4-2.0 cm, the proposed procedure can enhance it in the image resulting from the digital subtraction of images obtained at 14 and at 22 keV. These results indicate that this simulation procedure can be a useful tool for aiding the identification of malignant lesions that might otherwise be missed in the typical exam, particularly in dense breasts.
Optimized magnetic resonance diffusion protocol for ex-vivo whole human brain imaging with a clinical scanner
Benoit Scherrer, Onur Afacan, Aymeric Stamm, et al.
Diffusion-weighted magnetic resonance imaging (DW-MRI) provides a novel insight into the brain to facilitate our understanding of brain connectivity and microstructure. While in-vivo DW-MRI enables imaging of living patients and longitudinal studies of brain changes, post-mortem ex-vivo DW-MRI has numerous advantages. Ex-vivo imaging benefits from greater resolution and sensitivity due to the lack of imaging time constraints, the use of tighter-fitting coils, and the lack of movement artifacts. This allows characterization of normal and abnormal tissues with unprecedented resolution and sensitivity, facilitating our ability to investigate anatomical structures that are inaccessible in-vivo. This also offers the opportunity to develop, today, novel imaging biomarkers that will, with tomorrow's MR technology, enable improved in-vivo assessment of the risk of disease in an individual. Post-mortem studies, however, generally rely on fixation of the specimen to inhibit tissue decay, which starts as soon as tissue is deprived of its blood supply. Unfortunately, fixation of tissues substantially alters tissue diffusivity profiles. In addition, ex-vivo DW-MRI requires particular care when packaging the specimen because the presence of microscopic air bubbles gives rise to geometric and intensity image distortion. In this work, we considered the specific requirements of post-mortem imaging and designed an optimized protocol for ex-vivo whole-brain DW-MRI using a human clinical 3T scanner. Human clinical 3T scanners are available to a large number of researchers and, unlike most animal scanners, have a bore diameter large enough to image a whole human brain. Our optimized protocol will facilitate widespread ex-vivo investigations of large specimens.
Convolution-based estimation of organ dose in tube current modulated CT
Xiaoyu Tian, W. Paul Segars, R. L. Dixon, et al.
Among the various metrics that quantify radiation dose in computed tomography (CT), organ dose is one of the most representative quantities reflecting patient-specific radiation burden [1]. Accurate estimation of organ dose requires one to effectively model the patient anatomy and the irradiation field. As illustrated in previous studies, the patient anatomy factor can be modeled using a library of computational phantoms with representative body habitus [2]. However, modeling the irradiation field can be practically challenging, especially for CT exams performed with tube current modulation (TCM). The central challenge is to effectively quantify the scatter irradiation field created by the dynamic change of tube current. In this study, we present a convolution-based technique to effectively quantify the primary and scatter irradiation field for TCM examinations. The organ dose for a given clinical patient can then be rapidly determined using the convolution-based method, a patient-matching technique, and a library of computational phantoms. 58 adult patients were included in this study (age range: 18-70 y.o., weight range: 60-180 kg). One computational phantom was created based on the clinical images of each patient. Each patient was optimally matched against one of the remaining 57 computational phantoms using a leave-one-out strategy. For each computational phantom, the organ dose coefficients (CTDIvol-normalized organ dose) under fixed tube current were simulated using a validated Monte Carlo simulation program. These organ dose coefficients were multiplied by a scaling factor, (CTDIvol)organ,convolution, that quantifies the regional irradiation field. The convolution-based organ dose was compared with the organ dose simulated from the Monte Carlo program with TCM profiles explicitly modeled on the original phantom created from the patient images. The estimation error was within 10% across all organs and modulation profiles for abdominopelvic examinations. This strategy enables prospective and retrospective patient-specific dose estimation without the need for Monte Carlo simulation.
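A minimal Python sketch of the convolution idea is given below, with an assumed TCM profile, scatter kernel, and organ dose coefficient; it only illustrates how a regional, CTDIvol-normalized scaling factor might be formed and applied.

```python
# Minimal sketch of the convolution idea: the regional irradiation field along z
# is approximated by convolving the local CTDIvol (from the TCM mA profile) with
# a scatter kernel, averaged over the organ extent, and multiplied by a
# pre-computed, CTDIvol-normalized organ dose coefficient. Kernel shape,
# profiles, and the coefficient value are illustrative assumptions.
import numpy as np

z = np.arange(0, 60.0, 0.5)                         # cm along the scan axis
mA_profile = 200 + 100 * np.sin(2 * np.pi * z / 30) # hypothetical TCM profile
ctdi_per_mA = 0.05                                  # mGy per mA (assumed)
ctdivol_z = mA_profile * ctdi_per_mA                # local CTDIvol along z

# Hypothetical scatter kernel (normalized), spreading dose over a few cm in z
kernel_z = np.exp(-np.abs(np.arange(-10, 10.5, 0.5)) / 3.0)
kernel_z /= kernel_z.sum()

regional_field = np.convolve(ctdivol_z, kernel_z, mode="same")

organ_extent = (z > 20) & (z < 35)                  # organ location along z (assumed)
ctdivol_organ_conv = regional_field[organ_extent].mean()

h_organ = 1.4   # CTDIvol-normalized organ dose coefficient from Monte Carlo (assumed)
print(f"estimated organ dose: {h_organ * ctdivol_organ_conv:.1f} mGy")
```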
Personalized low dose CT via variable kVp
Hui Wang, Yannan Jin, Yangyang Yao, et al.
Computed Tomography (CT) is a powerful radiographic imaging technology, but the health risk due to x-ray radiation exposure has drawn wide concern. In this study, we propose to use kVp modulation to reduce the radiation dose and achieve personalized low-dose CT. Two sets of simulations were performed to demonstrate the effectiveness of kVp modulation and the corresponding calibration. The first simulation used the helical body phantom (HBP), an elliptical water cylinder with high-density bone inserts. The second simulation used the NCAT phantom to emulate the practical use of the kVp modulation approach with a region of interest (ROI) selected in the cardiac region. The kVp modulation profile can be optimized view by view based on knowledge of the patient attenuation. A second-order correction is applied to eliminate beam hardening artifacts. To simplify the calibration process, we first generate the calibration vectors for a few representative spectra and then obtain the other calibration vectors by interpolation. The simulation results demonstrate that the beam hardening artifacts in images acquired with kVp modulation can be eliminated with proper beam hardening correction. The results also show that the simplification of the calibration did not impair image quality: calibration with the simplified and the complete vectors both eliminated the artifacts effectively, and the results are comparable. In summary, this study demonstrates the feasibility of kVp modulation and gives a practical way to calibrate for the higher-order beam hardening artifacts.
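The second-order beam-hardening correction can be thought of as a polynomial mapping from measured polychromatic projections to ideal monochromatic line integrals, as in the hedged Python sketch below; the two-bin spectrum and coefficients are toy assumptions, not the calibration vectors used in the study.

```python
# Minimal sketch of a second-order (polynomial) beam-hardening correction:
# fit measured polychromatic projections against ideal water-equivalent line
# integrals and apply the fitted polynomial. The simulated "measurement" below
# is a toy two-bin model, not the simulation data used in the study.
import numpy as np

# Toy two-bin spectrum of water attenuation to emulate beam hardening
mu = np.array([0.25, 0.18])            # 1/cm at a low/high energy bin (assumed)
weights = np.array([0.6, 0.4])

thickness = np.linspace(0.0, 40.0, 81)                      # cm of water
p_ideal = 0.20 * thickness                                  # monochromatic target
p_meas = -np.log(np.sum(weights * np.exp(-np.outer(thickness, mu)), axis=1))

# Second-order correction: p_corrected = c1 * p + c2 * p^2 (no offset)
A = np.column_stack([p_meas, p_meas ** 2])
c1, c2 = np.linalg.lstsq(A, p_ideal, rcond=None)[0]

p_corr = c1 * p_meas + c2 * p_meas ** 2
print(f"max residual before correction: {np.max(np.abs(p_meas - p_ideal)):.3f}")
print(f"max residual after  correction: {np.max(np.abs(p_corr - p_ideal)):.3f}")
```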
Dosimetry for spectral molecular imaging of small animals with MARS-CT
Noémie Ganet, Nigel Anderson, Stephen Bell, et al.
The Medipix All Resolution Scanner (MARS) spectral CT is intended for small-animal, pre-clinical imaging and uses an x-ray detector (Medipix) operating in single-photon-counting mode. The MARS system provides spectrometric information to facilitate differentiation of tissue types and bio-markers. For longitudinal studies of disease models, it is desirable to characterise the system's dosimetry. This dosimetry study was performed using three phantoms, each consisting of a 30 mm diameter homogeneous PMMA cylinder simulating a mouse. The imaging parameters used for this study are derived from those used for gold nanoparticle identification in mouse kidneys. Dosimetry measurements were obtained with thermoluminescent lithium fluoride (LiF:CuMgP) detectors, calibrated in terms of air kerma and placed at different depths and orientations in the phantoms. Central-axis TLD air kerma rates of 17.2 (± 0.71) mGy/min and 18.2 (± 0.75) mGy/min were obtained for different phantoms and TLD orientations. Validation measurements were acquired with a pencil ionization chamber, giving an air-kerma rate of 20.3 (± 1) mGy/min and an estimated total air kerma of 81.2 (± 4) mGy for a 720-projection acquisition. It is anticipated that scanner design improvements will significantly decrease future dose requirements. The procedures developed in this work will be used for further dosimetry calculations when optimizing image acquisition for the MARS system as it undergoes development towards human clinical applications.
Patient specific tube current modulation for CT dose reduction
Yannan Jin, Zhye Yin, Yangyang Yao, et al.
Radiation exposure during CT imaging has drawn growing concern from academia, industry and the general public. Sinusoidal tube current modulation is available in most commercial products and used routinely in clinical practice. To further exploit the potential of tube current modulation, Sperl et al. proposed a Computer-Assisted Scan Protocol and Reconstruction (CASPAR) scheme [6] that modulates the tube current based on the clinical application and patient-specific information. The purpose of this study is to accelerate the CASPAR scheme to make it more practical for clinical use and to investigate its dose benefit for different clinical applications. The Monte Carlo simulation in the original CASPAR scheme was replaced by dose reconstruction to accelerate the optimization process. To demonstrate the dose benefit, we used the CATSIM package to generate the projection data and performed standard FDK reconstruction. The NCAT phantom at the thorax position was used in the simulation. We chose three clinical cases (routine chest scan, coronary CT angiography with and without breast avoidance) and compared the dose levels of different mA modulation schemes (patient-specific, sinusoidal and constant mA) with matched image quality. The simulation study of the three clinical cases demonstrated that patient-specific mA modulation can significantly reduce the radiation dose compared to sinusoidal modulation. The dose benefit depends on the clinical application and object shape. With matched image quality, for the chest scan the patient-specific mA profile reduced the dose by about 15% compared to sinusoidal mA modulation; for the organ-avoidance scan the dose reduction to the breast was over 50% compared to the constant-mA baseline.
A real-time skin dose tracking system for biplane neuro-interventional procedures
A biplane dose-tracking system (Biplane-DTS) that provides a real-time display of the skin-dose distribution on a 3D patient graphic during neuro-interventional fluoroscopic procedures was developed. Biplane-DTS calculates patient skin dose using geometry and exposure information for the two gantries of the imaging system acquired from the digital system bus. The dose is calculated for individual points on the patient graphic surface for each exposure pulse, and the cumulative dose for both x-ray tubes is displayed as color maps on a split screen showing frontal and lateral projections of a 3D humanoid graphic. Overall peak skin dose (PSD), FOV-PSD and current dose rates for the two gantries are also displayed. Biplane-DTS uses calibration files of mR/mAs for the frontal and lateral tubes measured with and without the table in the beam at the entrance surface of a 20 cm thick PMMA phantom placed 15 cm tube-side of the isocenter. For neuro-imaging, conversion factors are applied as a function of entrance field area to scale the calculated dose to that measured with a Phantom Laboratory head phantom, which contains a human skull, to account for differences in backscatter between PMMA and the human head. The software incorporates an inverse-square correction for each point on the skin and corrects for angulation of the beam through the table. Dose calculated by Biplane-DTS and values measured by a 6-cc ionization chamber placed on the head phantom at multiple points agree within a range of -3% to +7%, with a standard deviation for all points of less than 3%.
A Monte Carlo study on the effect of the orbital bone to the radiation dose delivered to the eye lens
Andreas Stratis, Guozhi Zhang, Reinhilde Jacobs, et al.
The aim of this work was to investigate the influence of backscatter radiation from the orbital bone and the intraorbital fat on the eye lens dose in the dental CBCT energy range. To this end, we conducted three different yet interrelated studies. First, a preliminary simulation study examined the impact of a bony layer situated underneath a soft tissue layer on the amount of backscatter radiation. We compared the percentage depth dose (PDD) curves in soft tissue with and without the bone layer and estimated the depth in tissue at which the decrease in backscatter caused by the presence of the bone is noticeable. In a supplementary study, an eye voxel phantom was designed with the DOSXYZnrc code. Simulations were performed exposing the phantom at different x-ray energies sequentially in air, in fat tissue and in realistic anatomy, with the incident beam perpendicular to the phantom. Finally, a virtual head phantom was implemented in a validated hybrid Monte Carlo (MC) framework to simulate a large field-of-view protocol of a real CBCT scanner and examine the influence of scattered dose on the eye lens during the whole rotation of the paired tube-detector system. The results indicated an increase in the dose to the lens due to the fatty tissue in the surrounding anatomy. There is a noticeable dose reduction close to the bone-tissue interface, which weakens with increasing distance from the interface, such that the impact of the orbital bone on the eye lens dose becomes small.
Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT
John S. Muryn, Ashraf G. Morgan, W. Paul Segars, et al.
In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., the half-value layer of the x-ray energy spectrum, the effective beam width, and the anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practice. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose resulting from input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, and their effects on organ dose per CTDIvol were analyzed. Our study showed that when errors in half-value layer were within ± 0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had a negligible effect (< 2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer by 4 cm than the true value, dose errors of up to 160% were found. The results answer the important question of how accurately each input parameter needs to be determined in order to obtain accurate organ dose results.
A numerical investigation for the optimal positions and weighting coefficients of point dose measurements in the weighted CTDI
Jang-Hwan Choi, Dragos Constantin, Rebecca Fahrig
The mean dose over the central phantom plane (i.e., z = 0, dose maximum image) is useful in that it allows us to compare radiation dose levels across different CT scanners and acquisition protocols. The mean dose from a conventional CT scan with table translation is typically estimated by weighted CTDI (CTDIW). However, conventional CTDIW has inconsistent performance, depending on its weighting coefficients ("1/2 and 1/2" or "1/3 and 2/3") and acquisition protocols.

We used a Monte Carlo (MC) model based on Geant4 (GEometry ANd Tracking) to generate dose profiles in the central plane of the CTDI phantom. MC simulations were carried out for three different sizes of z-collimator and different tube voltages (80, 100, or 120 kVp), a tube current of 80 mA, and an exposure time of 25 ms.

We derived optimal weighting coefficients by taking the integral of the radial dose profiles. The first-order linear equation and the quadratic equation were used to fit the dose profiles along the radial direction perpendicular to the central plane, and the fitted profiles were revolved about the Z-axis to compute the mean dose (i.e., total volume under the fitted profiles/the central plane area). The integral computed using the linear equation resulted in the same equation as conventional CTDIW, and the integral computed using the quadratic equation resulted in a new CTDIW (CTDIMW) that incorporates different weightings ("2/3 and 1/3") and the middle dose point instead of the central dose point.

Compared to the results of MC simulations, our new CTDIMW showed less error than the previous CTDIW methods by successfully incorporating the curvature of the dose profiles regardless of acquisition protocols. Our new CTDIMW will also be applicable to the AAPM-ICRU phantom, which has a middle dose point.
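The weighting coefficients quoted above follow directly from integrating the fitted radial profile over the central plane; the short SymPy sketch below reproduces both the conventional 1/3-2/3 weighting (linear fit) and the proposed 2/3-1/3 weighting with the middle dose point (quadratic fit).

```python
# Minimal sketch (with SymPy) of how the weighting coefficients fall out of
# integrating a fitted radial dose profile D(r) over the central plane:
#   mean dose = (2/R^2) * Integral_0^R D(r) r dr.
# A linear fit through the center and edge points reproduces the conventional
# 1/3 center + 2/3 edge weighting; a quadratic fit through the center, middle
# (r = R/2) and edge points yields 2/3 middle + 1/3 edge.
import sympy as sp

r, R, Dc, Dm, De = sp.symbols("r R D_c D_m D_e", positive=True)

def plane_mean(profile):
    return sp.simplify(2 / R**2 * sp.integrate(profile * r, (r, 0, R)))

# Linear fit through D(0) = Dc and D(R) = De
linear = Dc + (De - Dc) * r / R
print(plane_mean(linear))            # -> D_c/3 + 2*D_e/3  (conventional CTDIw)

# Quadratic fit through D(0) = Dc, D(R/2) = Dm, D(R) = De
a, b, c = sp.symbols("a b c")
sol = sp.solve([sp.Eq(a, Dc),
                sp.Eq(a + b * R / 2 + c * R**2 / 4, Dm),
                sp.Eq(a + b * R + c * R**2, De)], [a, b, c])
quadratic = sol[a] + sol[b] * r + sol[c] * r**2
print(plane_mean(quadratic))         # -> 2*D_m/3 + D_e/3  (the proposed CTDI_MW)
```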
3D dosimetry estimation for selective internal radiation therapy (SIRT) using SPECT/CT images: a phantom study
Selective Internal Radiation Therapy (SIRT) is a common way to treat liver cancer that cannot be treated surgically. SIRT involves administration of yttrium-90 (90Y) microspheres via the hepatic artery after a diagnostic procedure using technetium-99m (99mTc) macroaggregated albumin (MAA) to detect extrahepatic shunting to the lung or the gastrointestinal tract. Accurate quantification of the radionuclide administered to patients and of the radiation dose absorbed by different organs is of importance in SIRT. Accurate dosimetry for SIRT allows optimization of dose delivery to the target tumor and may allow assessment of the efficacy of the treatment. In this study, we propose a method that can efficiently estimate the radiation absorbed dose from 90Y bremsstrahlung SPECT/CT images of the liver and the surrounding organs. Bremsstrahlung radiation from 90Y was simulated using the Compton window of 99mTc (78 keV at 57%). 99mTc images acquired at the photopeak energy window were used as a standard to examine the accuracy of the dosimetry predicted from the simulated bremsstrahlung images. A Liqui-Phil abdominal phantom with liver, stomach and two tumor inserts was imaged using a Philips SPECT/CT scanner. The dose point kernel convolution method was used to find the radiation absorbed dose at the voxel level for a three-dimensional dose distribution. This method allows a complete estimate of the distribution of radiation absorbed dose in tumors, liver, stomach and other surrounding organs at the voxel level. The method provides a quantitative predictive tool for SIRT treatment outcome and administered dose response for patients who undergo the treatment.
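A minimal Python sketch of dose-point-kernel convolution at the voxel level is shown below; the activity map and the Gaussian kernel are placeholders, not a published 90Y kernel or the phantom data.

```python
# Minimal sketch of voxel-level dose estimation by dose-point-kernel (DPK)
# convolution: a quantified activity map (e.g., from bremsstrahlung SPECT/CT)
# is convolved with a radially symmetric kernel giving absorbed dose per decay.
# The Gaussian kernel below is a placeholder, not a published 90Y DPK.
import numpy as np
from scipy.signal import fftconvolve

voxel_mm = 4.0
activity = np.zeros((48, 48, 48))                 # MBq per voxel (toy map)
activity[20:28, 20:28, 20:28] = 5.0               # "tumor" uptake

# Placeholder kernel: absorbed dose per unit activity at distance r (assumed shape)
half = 8
zz, yy, xx = np.mgrid[-half:half + 1, -half:half + 1, -half:half + 1]
r_mm = np.sqrt(xx**2 + yy**2 + zz**2) * voxel_mm
kernel = np.exp(-(r_mm / 5.0) ** 2)
kernel /= kernel.sum()                            # normalize for illustration

dose = fftconvolve(activity, kernel, mode="same") # 3D absorbed-dose map
print(f"peak voxel dose (arb. units): {dose.max():.3f}")
print(f"dose in a background voxel  : {dose[5, 5, 5]:.6f}")
```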
A comparison of mammographic systems for different breast thicknesses using model observer detectability
Nelis Van Peteghem, Elena Salvagnini, Hilde Bosmans, et al.
This work investigated image quality as a function of PMMA thickness on a variety of mammography systems. Image quality was quantified by calculating detectability (d') using a non-prewhitening with eye filter (NPWE) model observer from routinely acquired quality control (QC) data of twelve digital radiography (DR) systems. The sample of systems included two mammography devices equipped with the Siemens PRIME upgrade and one system with the Claymount SmartBucky detector. The d' data were calculated for 0.1 and 0.25 mm diameter gold discs using images of homogeneous PMMA (thicknesses from 2 to 7 cm), all from the routinely performed AEC test. The GE Essential systems had the highest d' values for low thicknesses and the lowest d' values for high thicknesses. The Hologic Selenia Dimension systems had the most constant detectability curve, ensuring high d' values at high thicknesses. This was achieved by increasing the mean glandular dose (MGD) at higher thicknesses compared to the other systems. The detectability results of the Siemens PRIME and the Claymount systems were comparable to those of the standard FFDM systems. Mean glandular dose at 5, 6 and 7 cm PMMA and the threshold gold thickness at 5 cm PMMA were also evaluated. The Claymount system had a high (but acceptable) threshold gold thickness (T) compared to the other systems. This was probably caused by the low dose at which this DR detector operates. Results of the NPWE d' and CDMAM analyses showed the same trends.
Influence of DBT reconstruction algorithm on power law spectrum coefficient
Laurence Vancamberg, Ann-Katherine Carton, Ilyes Hadj Abderrahmane, et al.
In breast X-ray images, texture has been characterized by a noise power spectrum (NPS) with an inverse power-law shape described by its slope β in the log-log domain. It has been suggested that the magnitude of the power-law spectrum coefficient β is related to mass lesion detection performance. We assessed β in reconstructed digital breast tomosynthesis (DBT) images to evaluate its sensitivity to different typical reconstruction algorithms, including simple back projection (SBP), filtered back projection (FBP) and a simultaneous iterative reconstruction algorithm (SIRT, 30 iterations). Results were further compared to the β coefficient estimated from 2D central DBT projections. The calculations were performed on 31 unilateral clinical DBT data sets and on simulated DBT images from 31 anthropomorphic software breast phantoms. Our results show that β strongly depends on the reconstruction algorithm; the highest β values were found for SBP, followed by FBP, while the lowest β values were found for SIRT. In contrast to previous studies, we found that β is not always lower in reconstructed DBT slices than in 2D projections; this depends on the reconstruction algorithm. All β values estimated in DBT slices reconstructed with SBP were larger than the β values from 2D central projections. Our study also shows that the reconstruction algorithm affects the symmetry of the breast texture NPS; the NPS of clinical cases reconstructed with SBP exhibit the highest symmetry, while the NPS of cases reconstructed with SIRT exhibit the highest asymmetry.
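Since β is defined as the slope of the NPS in the log-log domain, its estimation amounts to a straight-line fit over a chosen frequency band. The sketch below illustrates this on a synthetic radially averaged NPS; the frequency grid, band limits and noise level are hypothetical, not values from the study.

    import numpy as np

    # Hypothetical radially averaged NPS samples on a frequency grid (cycles/mm);
    # in practice these would come from breast ROIs or reconstructed DBT slices.
    rng = np.random.default_rng(0)
    freq = np.linspace(0.1, 5.0, 50)
    nps = 1e3 * freq**-3.0 * np.exp(0.05 * rng.standard_normal(freq.size))

    # beta is the negative slope of log10(NPS) vs log10(f) over a chosen band.
    band = (freq >= 0.2) & (freq <= 2.0)
    slope, intercept = np.polyfit(np.log10(freq[band]), np.log10(nps[band]), 1)
    beta = -slope
    print("estimated beta:", beta)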
Intrinsic noise power spectrum for the electronic noise in radiography image detectors
In order to design low-dose imaging systems, the radiography detector should have good noise performance, especially at low incident exposures. The signal-to-noise ratio (SNR) performance at low incident exposures is influenced by the electronic noise of the readout circuits as well as by the quantum noise of the x-rays. Hence, analyzing the electronic noise of a detector is important in developing good detectors. However, the SNR associated with electronic noise alone is zero and therefore provides no useful information. Observing the standard deviation of an image acquired without exposure may also confound the analysis because of the inconsistent electronic gains of the readout circuits. Hence, an appropriate evaluation scheme for the electronic noise is required. A blind electronic noise evaluation approach, which uses a set of images acquired at various incident exposures, is considered in this paper. We calculate the electronic gain and then derive an intrinsic noise power spectrum (NPS), which is independent of the electronic gain. Furthermore, we can obtain the intrinsic NPS for the electronic noise alone. The proposed evaluation schemes are experimentally tested on digital x-ray images obtained from various development prototypes of direct and indirect detectors. It is shown that the proposed schemes can efficiently evaluate the electronic noise performance.
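The sketch below shows a generic 2D NPS estimate from gain-corrected zero-exposure (dark) ROIs, which is the kind of quantity the intrinsic electronic-noise NPS expresses in gain-independent units. The gain map, noise level and pixel pitch are placeholders, and this is not the authors' blind estimation scheme itself.

    import numpy as np

    def nps_2d(rois, pixel_pitch):
        """Average 2D NPS of mean-subtracted ROIs (pixel_pitch in mm)."""
        rois = np.asarray(rois, dtype=float)
        n, ny, nx = rois.shape
        rois = rois - rois.mean(axis=(1, 2), keepdims=True)
        ft = np.fft.fft2(rois)                       # FFT over the last two axes
        return (np.abs(ft)**2).mean(axis=0) * pixel_pitch**2 / (ny * nx)

    # Placeholder: 100 dark (zero-exposure) ROIs divided by a per-pixel gain map
    # so the electronic-noise NPS is expressed in gain-independent units.
    rng = np.random.default_rng(0)
    gain_map = 1.0 + 0.02 * rng.random((128, 128))
    dark_rois = rng.normal(0.0, 3.0, size=(100, 128, 128)) / gain_map
    nps_elec = nps_2d(dark_rois, pixel_pitch=0.1)
    print("mean electronic NPS level:", nps_elec.mean())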
Noise performance studies of model-based iterative reconstruction (MBIR) as a function of kV, mA and exposure level: Impact on radiation dose reduction and image quality
Daniel Gomez-Cardona, Ke Li, Meghan G. Lubner, et al.
The significance of understanding the noise properties of clinical CT systems is twofold. First, as diagnostic performance (particularly for the detection of low-contrast lesions) is strongly limited by noise, a thorough study of the dependence of image noise on scanning and reconstruction parameters would enable the desired image quality to be achieved with the least amount of radiation dose. Second, a clear understanding of the noise properties of CT systems would allow limitations in existing CT systems to be identified and improved. The recent introduction of the model-based iterative reconstruction (MBIR) method has introduced strong nonlinearity into clinical CT systems and violated the classical relationships between CT noise properties and CT system parameters; it is therefore necessary to perform a comprehensive study of the noise properties of MBIR. The purpose of this study was to systematically characterize the dependence of the noise magnitude and noise texture of MBIR on x-ray tube potential (kV), tube current (mA), and radiation dose level. It was found that the noise variance σ² of MBIR has a relaxed dependence on kV and mA, which can be described by the power-law relationships σ² ∝ kV^(−1) and σ² ∝ mA^(−0.4), respectively. The shape of the noise power spectrum (NPS) demonstrated a strong dependence on kV and mA, but it remained constant as long as the radiation dose level was the same. These semi-empirical relationships can potentially be used to guide the optimal selection of kV and mA when prescribing CT scans for maximal dose reduction.
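To make the reported power laws concrete, the following one-liner predicts the relative change in MBIR noise variance when the technique factors change; the reference and example protocol values are hypothetical, chosen only for illustration.

    # Relative MBIR noise variance predicted by the reported power laws
    # sigma^2 ∝ kV^-1 and sigma^2 ∝ mA^-0.4 (example numbers are hypothetical).
    def relative_variance(kv, ma, kv_ref=120.0, ma_ref=200.0):
        return (kv / kv_ref)**-1.0 * (ma / ma_ref)**-0.4

    # e.g. dropping from 120 kV / 200 mA to 100 kV / 100 mA:
    print(relative_variance(100.0, 100.0))   # ≈ 1.2 * 1.32 ≈ 1.58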
Directional MTF measurement using sphere phantoms for a digital breast tomosynthesis system
Digital breast tomosynthesis (DBT) has been widely used as a diagnostic imaging modality for breast cancer because of its potential for structure noise reduction, better detectability, and reduced breast compression. Since the 3D modulation transfer function (MTF) is one of the quantitative metrics used to assess the spatial resolution of medical imaging systems, measuring the 3D MTF of a DBT system is very important for evaluating its resolution performance. To do so, Samei et al. used sphere phantoms and applied Thornton’s method to the DBT system. However, due to the limitation of Thornton’s method, the low-frequency drop caused by the limited data acquisition angle and the reconstruction filters was not measured correctly. To overcome this limitation, we propose a Richardson-Lucy (RL) deconvolution based estimation method to measure the directional MTF. We reconstructed point and sphere objects using the FDK algorithm within a 40° data acquisition angle. The ideal 3D MTF is obtained by taking the Fourier transform of the reconstructed point object, and three directions (i.e., fx-direction, fy-direction, and fxy-direction) of the ideal 3D MTF are used as a reference. To estimate the directional MTF, the plane integrals of the reconstructed and ideal sphere objects were calculated and used to estimate the directional PSF with the RL deconvolution technique. Finally, the directional MTF was calculated by taking the Fourier transform of the estimated PSF. Compared to the previous method, the proposed method showed good agreement with the ideal directional MTF, especially in the low-frequency regions.
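A minimal 1D sketch of the RL deconvolution step is given below: given an "observed" profile modeled as the convolution of a known "ideal" profile with an unknown PSF, the iterations recover the PSF, whose Fourier magnitude then gives a directional MTF. The profiles, PSF width and normalization step are placeholders for illustration, not the paper's plane-integral pipeline.

    import numpy as np

    def rl_estimate_psf(observed, known, n_iter=100):
        """Richardson-Lucy iterations estimating the unknown factor of a 1D
        convolution observed ≈ known * psf."""
        eps = 1e-12
        psf = np.full_like(observed, 1.0 / observed.size)
        known_flip = known[::-1]
        for _ in range(n_iter):
            blur = np.convolve(psf, known, mode="same") + eps
            psf *= np.convolve(observed / blur, known_flip, mode="same")
            psf /= psf.sum() + eps       # keep the estimate normalized
        return psf

    # Placeholder profiles: an "ideal" plane-integral profile and a blurred one.
    x = np.linspace(-1.0, 1.0, 201)
    ideal = np.clip(1.0 - x**2, 0.0, None)
    true_psf = np.exp(-0.5 * (x / 0.05)**2)
    true_psf /= true_psf.sum()
    observed = np.convolve(ideal, true_psf, mode="same")

    psf_est = rl_estimate_psf(observed, ideal)
    mtf = np.abs(np.fft.rfft(psf_est))
    mtf /= mtf[0]                        # directional MTF estimate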
Comparison of methods for quantitative evaluation of endoscopic distortion
Quanzeng Wang, Kurt Castro, Viraj N. Desai, et al.
Endoscopy is a well-established paradigm in medical imaging, and emerging endoscopic technologies such as high resolution, capsule and disposable endoscopes promise significant improvements in effectiveness, as well as patient safety and acceptance of endoscopy. However, the field lacks practical standardized test methods to evaluate key optical performance characteristics (OPCs), in particular the geometric distortion caused by fisheye lens effects in clinical endoscopic systems. As a result, it has been difficult to evaluate an endoscope’s image quality or assess its changes over time. The goal of this work was to identify optimal techniques for objective, quantitative characterization of distortion that are effective and not burdensome. Specifically, distortion measurements from a commercially available distortion evaluation/correction software package were compared with a custom algorithm based on a local magnification (ML) approach. Measurements were performed using a clinical gastroscope to image square grid targets. Recorded images were analyzed with the ML approach and the commercial software, and the results were used to obtain corrected images. Corrected images based on the ML approach and the software were compared. The study showed that the ML method could assess distortion patterns more accurately than the commercial software. Overall, the development of standardized test methods for characterizing distortion and other OPCs will facilitate development, clinical translation, manufacturing quality, and assurance of performance during clinical use of endoscopic technologies.
An experimental study of the accuracy in measurement of modulation transfer function using an edge method
Dong-Hoon Lee, Ye-seul Kim, Hye-Suk Park, et al.
Image evaluation is necessary in digital radiography (DR), which is widely used in medical imaging. Among the parameters of image evaluation, the modulation transfer function (MTF) is an important factor and is needed to obtain the detective quantum efficiency (DQE), which represents the overall signal-to-noise performance of the detector. However, accurate measurement of the MTF is still not easy because of geometric effects, electronic noise, quantum noise, and truncation error. Therefore, in order to improve the accuracy of the MTF, four experimental approaches were tested in this study: changing the tube current, applying a smoothing method to the edge spread function (ESF), adjusting the line spread function (LSF) range, and changing the tube angle. Our results showed that fluctuation in the MTF was reduced by a high tube current and by smoothing; however, the tube current should not drive the detector into saturation, and smoothing the ESF distorts both the ESF and the MTF. In addition, decreasing the LSF range reduced both the fluctuation and the number of sampling points in the MTF, and a large tube angle degraded the MTF. Based on these results, an excessively low tube current and ESF smoothing should be avoided, an LSF range that balances fluctuation reduction against the number of MTF sampling points should be chosen, and a precise tube angle is essential. Under these conditions, an accurate MTF can be acquired.
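The basic chain behind the edge method (ESF, then LSF by differentiation, then MTF by Fourier magnitude) can be sketched in a few lines. The ESF below is a synthetic placeholder, the pixel pitch is assumed, and the optional window simply limits truncation error from noisy tails; this is not the authors' full slanted-edge oversampling procedure.

    import numpy as np

    def mtf_from_esf(esf, pixel_pitch):
        """MTF from a 1D edge spread function (simplified edge method)."""
        lsf = np.gradient(esf)                    # differentiate ESF to LSF
        lsf = lsf * np.hanning(lsf.size)          # optional window against truncation error
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]                             # normalize to unity at zero frequency
        freq = np.fft.rfftfreq(lsf.size, d=pixel_pitch)   # cycles/mm
        return freq, mtf

    # Placeholder ESF: a smooth edge sampled at an assumed 0.1 mm pitch.
    x = np.arange(-10.0, 10.0, 0.1)
    esf = 0.5 * (1 + np.tanh(x / 0.2))
    freq, mtf = mtf_from_esf(esf, pixel_pitch=0.1)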
Physical performance testing of digital breast tomosynthesis
Takao Kuwabara, Kenji Yoshikawa
Digital breast tomosynthesis has become accepted in clinical use, and it is important to evaluate a system physically to ensure that it is working at full performance. Non-linear reconstruction processing has been proposed to improve the interpretation of clinical images by enhancing the minute contrasts of breast tissue while suppressing metal artifacts. Because existing measurement methods assume a linear system, physical evaluation applied to images reconstructed with non-linear processing may yield unnatural values. We investigated the influence of different reconstruction methods on physical evaluations and suggest using images reconstructed by unfiltered back projection to assess device performance directly.
Iterative CT reconstruction with small pixel size: distance-driven forward projector versus Joseph's
K. Hahn, U. Rassner, H. C. Davidson, et al.
Over the last few years, iterative reconstruction methods have become an important research topic in x-ray CT imaging. This effort is motivated by increasing evidence that such methods may enable significant savings in terms of dose imparted to the patient. Conceptually, iterative reconstruction methods involve two important ingredients: the statistical model, which includes the forward projector, and a priori information in the image domain, which is expressed using a regularizer. Most often, the image pixel size is chosen to be equal (or close) to the detector pixel size (at the field-of-view center). However, there are applications for which a smaller pixel size is desired. In this investigation, we focus on reconstruction with a pixel size that is half the detector pixel size. Using such a small pixel size implies a large increase in computational effort when using the distance-driven method for forward projection, which models the detector size. On the other hand, the more efficient method of Joseph creates imbalances in the reconstruction of each pixel, in the sense that there are large differences in the way each projection contributes to the pixels. The purpose of this work is to evaluate the impact of these imbalances on image quality in comparison with the distance-driven method. The evaluation involves computational effort, bias and noise metrics, and LROC analysis using human observers. The results show that Joseph's method largely remains attractive.
Application of the fractal Perlin noise algorithm for the generation of simulated breast tissue
Magnus Dustler, Predrag Bakic, Hannie Petersson, et al.
Software breast phantoms are increasingly being used in the preclinical validation of breast image acquisition systems and image analysis methods. Phantom realism has proven sufficient for numerous specific validation tasks. A remaining challenge is the generation of suitably realistic small-scale breast structures that could further improve the quality of phantom images. Power-law noise follows the noise power characteristics of breast tissue but may not sufficiently represent certain (e.g., non-Gaussian) properties seen in clinical breast images. The purpose of this work was to investigate the utility of fractal Perlin noise for generating more realistic breast tissue, through investigation of its power spectrum and visual characteristics. Perlin noise is an algorithm that creates smoothly varying random structures at an arbitrary frequency. Through the use of a technique known as fractal noise or fractional Brownian motion (fBm), octaves of noise with different frequencies are combined to generate coherent noise with a broad frequency range. fBm is controlled by two parameters, lacunarity and persistence, related to the frequency and amplitude of successive octaves, respectively. Average noise power spectra were calculated and beta parameters estimated in sample volumes of fractal Perlin noise with different combinations of lacunarity and persistence. Certain combinations of parameters resulted in noise volumes with beta values between 2 and 3, corresponding to reported measurements in real breast tissue. Different combinations of parameters resulted in different visual appearances. In conclusion, Perlin noise offers a flexible tool for generating breast tissue with realistic properties.
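The role of lacunarity and persistence in fBm can be illustrated with a short 2D sketch: each successive octave has its spatial frequency multiplied by the lacunarity and its amplitude multiplied by the persistence. A smoothed-random "value noise" octave is used here as a stand-in for a true Perlin gradient-noise octave, and all parameter values and grid sizes are illustrative only.

    import numpy as np
    from scipy.ndimage import zoom

    def value_noise_octave(shape, grid_cells, rng):
        """Smooth random octave: a coarse random lattice upsampled with cubic
        interpolation (a stand-in for a Perlin gradient-noise octave)."""
        coarse = rng.standard_normal((grid_cells, grid_cells))
        factors = (shape[0] / grid_cells, shape[1] / grid_cells)
        return zoom(coarse, factors, order=3)[:shape[0], :shape[1]]

    def fbm(shape=(256, 256), octaves=5, lacunarity=2.0, persistence=0.5, seed=0):
        rng = np.random.default_rng(seed)
        out = np.zeros(shape)
        freq, amp = 4.0, 1.0
        for _ in range(octaves):
            out += amp * value_noise_octave(shape, int(freq), rng)
            freq *= lacunarity      # each octave has higher spatial frequency...
            amp *= persistence      # ...and lower amplitude
        return out

    texture = fbm(lacunarity=2.0, persistence=0.6)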
Investigating materials for breast nodules simulation by using segmentation and similarity analysis of digital images
Paula N. Siqueira, Karem D. Marcomini, Maria A. Z. Sousa, et al.
The task of identifying the malignancy of nodular lesions on mammograms becomes quite complex due to overlapping structures or granular fibrous tissue, which can cause confusion in classifying mass shape and lead to unnecessary biopsies. Efforts to develop methods for automatic mass detection in CADe (Computer Aided Detection) schemes have been made with the aim of assisting radiologists and serving as a second opinion. The validation of these methods may be accomplished, for instance, by using databases of clinical images or images acquired with breast phantoms. With this aim, several materials were tested in order to produce radiographic phantom images that approximate typical mammograms of actual breast nodules. Different nodule patterns were therefore physically produced and used in a previously developed breast phantom. Their characteristics were tested using the digital images obtained from phantom exposures on a LORAD M-IV mammography unit. Two analyses were performed. In the first, regions of interest containing the simulated nodules were segmented both by an automated segmentation technique and by an experienced radiologist, who delineated the contour of each nodule by means of a graphic display digitizer; the results were compared using evaluation metrics. The second used the Structural Similarity (SSIM) quality measure to generate quantitative data on the texture produced by each material. Although all the tested materials proved to be suitable for the study, the PVC film yielded the best results.
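The SSIM comparison step amounts to computing a single index between a candidate texture patch and a reference patch. The sketch below uses scikit-image's standard SSIM implementation; the patches themselves are random placeholders rather than the study's digitized phantom images.

    import numpy as np
    from skimage.metrics import structural_similarity

    # Placeholder ROIs: a phantom-material texture patch and a reference patch.
    rng = np.random.default_rng(1)
    reference = rng.normal(100.0, 10.0, (128, 128))
    candidate = reference + rng.normal(0.0, 5.0, (128, 128))

    ssim_value = structural_similarity(
        reference, candidate,
        data_range=candidate.max() - candidate.min())
    print("SSIM:", ssim_value)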
Acoustic characterization of polyvinyl chloride and self-healing silicone as phantom materials
Phantoms are physical constructs used in procedure planning, training, medical imaging research, and machine calibration. Depending on the application, the material a phantom is made of is very important. For ultrasound imaging, phantom materials need to have acoustic properties, specifically speed of sound and attenuation, similar to those of a specified tissue. Phantoms used for needle insertion require a material with a tensile strength similar to tissue and, if possible, the ability to self-heal, which increases the phantom's overall lifespan. Soft polyvinyl chloride (PVC) and silicone were tested as possible needle-insertion phantom materials. Acoustic characteristics were determined using a time-of-flight technique, in which a pulse was passed through a sample contained in a water bath. The speed of sound and attenuation were determined both manually and through spectral analysis. Soft PVC was determined to have a speed of sound of approximately 1395 m/s and an attenuation of 0.441 dB/cm (at 1 MHz). For the silicone mixture, the speed of sound values ranged from 964.7 m/s to 1250.0 m/s, with an attenuation of 0.547 dB/cm (at 1 MHz).
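A through-transmission (substitution) measurement of this kind reduces to simple arithmetic: the extra arrival delay with the sample in place gives the speed of sound, and the amplitude ratio gives the attenuation. The numbers below are hypothetical, the water speed of sound is an assumed value, and interface losses are neglected; this is only a sketch of the relations, not the authors' analysis.

    import numpy as np

    c_water = 1482.0          # m/s at the assumed bath temperature
    thickness = 0.02          # sample thickness, m (hypothetical)
    dt = 4.0e-7               # extra arrival delay with sample in place, s (hypothetical)

    # Substitution relation: L / c_sample = L / c_water + dt
    c_sample = thickness / (thickness / c_water + dt)

    # Attenuation from the received-pulse amplitude ratio (dB/cm), neglecting
    # transmission losses at the interfaces.
    a_ref, a_sample = 1.0, 0.9
    alpha = 20.0 * np.log10(a_ref / a_sample) / (thickness * 100.0)
    print(c_sample, "m/s,", alpha, "dB/cm")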
SPECT reconstruction using DCT-induced tight framelet regularization
Jiahan Zhang, Si Li, Yuesheng Xu, et al.
Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in emission computed tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution can be used effectively as the regularization term for penalized-likelihood (PL) reconstruction, where the regularizer enforces image smoothness. In this study, the ℓ1-norm of a 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and a warm random lumpy background. Images reconstructed using the proposed method exhibited better noise suppression and improved lesion conspicuity compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF). The mean square error (MSE) was also smaller than for EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm DCT framelet regularizer shows promise for SPECT image reconstruction with the PAPA method.
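The core ingredient of such a regularizer is the ℓ1 norm of transform coefficients, whose proximal operator is soft-thresholding. The sketch below applies that proximal step in an orthogonal 2D DCT basis as a stand-alone denoising operation; it is only a simplified stand-in for the non-decimated framelet transform and the full PAPA iteration, with a placeholder noisy slice and an arbitrary threshold.

    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_l1_prox(image, threshold):
        """Proximal (soft-thresholding) step for an l1 penalty on orthogonal 2D
        DCT coefficients: transform, shrink, inverse transform."""
        coeffs = dctn(image, norm="ortho")
        shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)
        return idctn(shrunk, norm="ortho")

    # Placeholder noisy transaxial slice (Poisson-like noise).
    rng = np.random.default_rng(0)
    noisy_slice = rng.poisson(50.0, (128, 128)).astype(float)
    denoised_slice = dct_l1_prox(noisy_slice, threshold=5.0)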
Robust iterative image reconstruction for breast CT by use of projection differentiation
David N. Kraemer, Erin G. Roth, Emil Y. Sidky, et al.
Image reconstruction algorithms for breast CT must deal with truncated projections and high noise levels. Recently, we have been investigating a design of iterative image reconstruction algorithms that employ a differentiation filter on the projection data and estimated projections. The extra processing step can potentially reduce the impact of artifacts due to projection truncation, in addition to enhancing edges in the reconstructed volumes. The edge enhancement can improve visibility of various tissue structures. Previously, this idea was incorporated in an approximate solver of the associated optimization problem. In the present work, we present reconstructed volumes from clinical breast CT data that result from accurate solution of this optimization problem. Furthermore, we employ singular value decomposition (SVD) to help determine filter parameters and to interpret the properties of the reconstructed volumes.
Adapted fan-beam volume reconstruction for stationary digital breast tomosynthesis
Gongting Wu, Christine Inscoe, Jabari Calliste, et al.
Digital breast tomosynthesis (DBT) provides 3D images that remove tissue overlap and enable better cancer detection. Stationary DBT (s-DBT) uses a fixed X-ray source array to eliminate the image blur associated with x-ray tube motion, providing better image quality as well as faster scanning speed. For limited-angle tomography, it is known that iterative reconstructions generally produce better images with fewer artifacts; however, classical iterative tomosynthesis reconstruction methods are considerably slower than filtered back-projection (FBP) reconstruction. The linear x-ray source array used in s-DBT enables a computationally more efficient volume reconstruction using adapted fan-beam slice sampling, which transforms the 3-D cone-beam reconstruction into a series of 2-D fan-beam slice reconstructions. In this paper, we report the first results of the adapted fan-beam volume reconstruction (AFVR) for the s-DBT system currently undergoing clinical trial at UNC, using a simultaneous algebraic reconstruction technique (SART). An analytic breast phantom is used to quantitatively analyze the performance of the AFVR. The image quality of a CIRS biopsy phantom reconstructed using the AFVR method is compared to that obtained using an FBP algorithm from a commercial package. Our results show a significant reduction in memory usage and an order-of-magnitude increase in reconstruction speed using AFVR compared with classical 3-D cone-beam reconstruction. We also observed that images reconstructed by AFVR with SART had better sharpness and contrast than those reconstructed with FBP. Preliminary results on patient images demonstrate the improved detectability of the s-DBT system over mammography. By utilizing parallel computing with graphics processing units (GPUs), the AFVR method is expected to make iterative reconstruction practical for clinical applications.
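For reference, a single SART update applied within each 2-D fan-beam slice can be written in its standard textbook form, with system matrix elements a_{ij}, measured projections b_i and relaxation factor λ; this is the generic formula, not necessarily the authors' exact implementation:

    x_j^{(k+1)} = x_j^{(k)} + \lambda\,\frac{\sum_{i}\dfrac{a_{ij}}{\sum_{l} a_{il}}\left(b_i - \sum_{l} a_{il}\,x_l^{(k)}\right)}{\sum_{i} a_{ij}}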
Adaptive nonlocal means-based regularization for statistical image reconstruction of low-dose X-ray CT
To reduce radiation dose in X-ray computed tomography (CT) imaging, one common strategy is to lower the milliampere-second (mAs) setting during projection data acquisition. However, this strategy inevitably increases the projection data noise, and the resulting image from the filtered back-projection (FBP) method may suffer from excessive noise and streak artifacts. Edge-preserving nonlocal means (NLM) filtering can help to reduce the noise-induced artifacts in the FBP reconstructed image, but it sometimes cannot completely eliminate them, especially under very low-dose circumstances when the image is severely degraded. To deal with this situation, we proposed a statistical image reconstruction scheme using an NLM-based regularization, which can suppress the noise and streak artifacts more effectively. However, we noticed that using a uniform filtering parameter in the NLM-based regularization was rarely optimal for the entire image. Therefore, in this study, we further developed a novel approach for designing adaptive filtering parameters by considering local characteristics of the image, and the resulting regularization is referred to as adaptive NLM-based regularization. Experimental results with physical phantom and clinical patient data validated the superiority of the proposed adaptive NLM-regularized statistical image reconstruction method for low-dose X-ray CT, in terms of noise/streak artifact suppression and edge/detail/contrast/texture preservation.
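To illustrate the underlying NLM filtering operation and the idea of adapting its strength to local image characteristics, the sketch below blends two NLM results according to a crude local-variance map. All parameter values are hypothetical, and this image-domain blend is only an illustration, not the authors' adaptive regularization inside the statistical reconstruction.

    import numpy as np
    from scipy.ndimage import uniform_filter
    from skimage.restoration import denoise_nl_means

    # Placeholder low-dose FBP slice: flat background plus one "structure".
    rng = np.random.default_rng(0)
    img = 1.0 + rng.normal(0.0, 0.05, (128, 128))
    img[40:90, 40:90] += 0.2

    # Two NLM strengths (hypothetical h values).
    strong = denoise_nl_means(img, h=0.08, patch_size=5, patch_distance=6)
    gentle = denoise_nl_means(img, h=0.03, patch_size=5, patch_distance=6)

    # Crude local-characteristic map: keep gentler filtering where detail is high.
    local_var = uniform_filter(img**2, size=7) - uniform_filter(img, size=7)**2
    w = np.clip(local_var / (local_var.mean() + 1e-12), 0.0, 1.0)
    adaptive = w * gentle + (1.0 - w) * strong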
Performance evaluation of a novel high performance pinhole array detector module using NEMA NU-4 image quality phantom for four head SPECT Imaging
Tasneem Rahman, Murat Tahtali, Mark R. Pickering
Radiolabeled tracer distribution imaging of gamma rays using pinhole collimation is considered promising for small-animal imaging. The recent availability of various radiolabeled tracers has advanced diagnostic studies and is simultaneously creating demand for high-resolution imaging devices. This paper presents analyses of the optimized parameters of a high-performance pinhole array detector module using two phantoms with different characteristics. Monte Carlo simulations using the Geant4 application for tomographic emission (GATE) were executed to assess the performance of a four-head SPECT system incorporating pinhole array collimators. The system is based on a pixelated array of NaI(Tl) crystals coupled to an array of position-sensitive photomultiplier tubes (PSPMTs). The detector module was simulated with a 48 mm by 48 mm active area and different pinhole apertures on a tungsten plate. The performance of this system was evaluated using a uniform cylindrical water phantom and the NEMA NU-4 image quality (IQ) phantom filled with 99mTc-labeled radiotracers. SPECT images were reconstructed, in which the activity distribution is expected to be well visualized. This system offers a combination of excellent intrinsic spatial resolution, good sensitivity and signal-to-noise ratio, and high detection efficiency over an energy range of 20-160 keV. Increasing the number of heads in a stationary system configuration offers increased sensitivity at a spatial resolution similar to that obtained with the current four-head SPECT system design.
A mathematical approach to image reconstruction on dual-energy computed tomography
Sungwhan Kim, Chi Young Ahn, Sung-Ho Kang, et al.
In this paper, we provide a mathematical approach to reconstructing the Compton scattering and photoelectric coefficients using a dual-energy CT system. The proposed imaging method is based on the mean value theorem to handle the non-linear integration arising from the polychromatic-energy CT scan system. We show a numerical simulation result to validate the proposed algorithm.
Statistical model based iterative reconstruction in myocardial CT perfusion: exploitation of the low dimensionality of the spatial-temporal image matrix
Time-resolved CT imaging methods play an increasingly important role in clinical practice, particularly in the diagnosis and treatment of vascular diseases. In a time-resolved CT imaging protocol, it is often necessary to irradiate the patient for an extended period of time. As a result, the cumulative radiation dose in these CT applications is often higher than that of static CT imaging protocols. Therefore, it is important to develop new means of reducing radiation dose for time-resolved CT imaging. In this paper, we present a novel statistical model based iterative reconstruction method that enables the reconstruction of low-noise time-resolved CT images at low radiation exposure levels. Unlike other well-known statistical reconstruction methods, this new method primarily exploits the intrinsic low dimensionality of time-resolved CT images to regularize the reconstruction. Numerical simulations were used to validate the proposed method.
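The low-dimensionality idea can be illustrated by stacking the dynamic frames into a space-time (Casorati) matrix and truncating its singular value decomposition; most of the signal energy is then captured by a few components. The dynamic series below is a synthetic rank-2 placeholder, and this sketch is not the paper's regularized reconstruction itself.

    import numpy as np

    # Placeholder dynamic series: 20 frames of 64x64 perfusion-like images.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 20)
    base = rng.normal(0.0, 1.0, (64 * 64, 1))
    enhancement = rng.normal(0.0, 1.0, (64 * 64, 1))
    frames = base + enhancement * np.sin(np.pi * t)[None, :]    # rank-2 dynamics
    frames += 0.1 * rng.normal(0.0, 1.0, frames.shape)          # noise

    # Casorati (space x time) matrix and its rank-r approximation.
    U, s, Vt = np.linalg.svd(frames, full_matrices=False)
    r = 2
    low_rank = (U[:, :r] * s[:r]) @ Vt[:r, :]
    print("retained energy:", (s[:r]**2).sum() / (s**2).sum())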
Statistical iterative reconstruction for multi-contrast x-ray micro-tomography
S. Allner, A. Velroyen, A. Fehringer, et al.
Scanning times have always been an important issue in x-ray micro-tomography. To reach high-quality reconstructions, the exposure time for each projection can be very long due to small detector pixel sizes and the limited flux of x-ray sources. In addition, the required number of projections limits how far the exposure can be reduced. This applies particularly to grating-based phase-contrast computed tomography (PCCT), as several images per projection have to be acquired in order to obtain absorption, phase and dark-field information. In this work we qualitatively compare statistical iterative reconstruction (SIR) and filtered back-projection (FBP) reconstruction from undersampled projection data based on a formalin-fixed mouse sample measured in a grating-based phase-contrast small-animal scanner. The results of our assessment illustrate that, compared with the FBP algorithm, SIR not only offers significantly higher image quality but also enables high-resolution imaging from severely undersampled data. Therefore, the application of advanced iterative reconstruction methods in micro-tomography offers major advantages over state-of-the-art FBP reconstruction while providing the opportunity to shorten scan durations by reducing the exposure time per projection and the number of angular views.
Multi-dimensional tensor-based adaptive filter (TBAF) for low dose x-ray CT
Michael Knaup, Sergej Lebedev, Stefan Sawall, et al.
Edge-preserving adaptive filtering within CT image reconstruction is a powerful method to reduce image noise and hence to reduce patient dose. However, highly sophisticated adaptive filters typically comprise many parameters which must be adjusted carefully in order to obtain optimal filter performance and to avoid artifacts caused by the filter. In this work we applied an anisotropic tensor-based adaptive image filter (TBAF) to CT image reconstruction, both as an image-based post-processing step and as a regularization step within an iterative reconstruction. The TBAF is a generalization of the filter of Ref. 1. Provided that the image noise (i.e., the variance) of the original image is known for each voxel, we adjust all filter parameters automatically. Hence, the TBAF can be applied to any individual CT dataset without user interaction. This is a crucial feature for a possible application in clinical routine. The TBAF is compared to a well-established adaptive bilateral filter using the same noise adjustment. Although the differences between the two filters are subtle, edges and local structures emerge more clearly in the TBAF-filtered images, while anatomical details are less affected than by the bilateral filter.
Impact of covariance modeling in dual-energy spectral CT image reconstruction
Yan Liu, Zhou Yu, Yu Zou
Dual-energy computed tomography (DECT) is a recent advancement in CT technology that can potentially reduce artifacts and provide accurate quantitative information for diagnosis. Recently, statistical iterative reconstruction (SIR) methods were introduced to DECT for radiation dose reduction. The statistical noise model of the measurement data plays an important role in SIR and affects image quality. In contrast to conventional CT projection data, in which the noise is independent from ray to ray, the basis-material sinogram data in spectral CT are strongly correlated. In order to analyze the image quality improvement obtained by applying a correlated noise model, we compare the effects of two different noise models (a correlated noise model and an independent model that ignores correlations) by analyzing the bias-variance trade-off. The results indicate that, at the same bias level, the correlated noise model yields up to 20.02% noise reduction compared with the independent noise model. In addition, the impact on different numerical algorithms is also evaluated. The results show that using the non-diagonal covariance matrix in SIR is challenging; some numerical algorithms, such as a direct application of separable paraboloidal surrogates (SPS), cannot converge to the correct results.
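The distinction between the two noise models can be made explicit in the generic penalized weighted least-squares cost below (a standard form, not the authors' exact notation), where \hat{\mathbf{y}}_i is the two-component basis-material measurement for ray i and \boldsymbol{\Sigma}_i its 2x2 covariance; the independent model simply replaces \boldsymbol{\Sigma}_i by its diagonal:

    \Phi(\mathbf{x}) = \sum_{i} \left( \hat{\mathbf{y}}_i - [\mathbf{A}\mathbf{x}]_i \right)^{T} \boldsymbol{\Sigma}_i^{-1} \left( \hat{\mathbf{y}}_i - [\mathbf{A}\mathbf{x}]_i \right) + \beta\, R(\mathbf{x})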
Direct composite fillings: an optical coherence tomography and microCT investigation
Meda L. Negrutiu, Cosmin Sinescu D.D.S., Mugurel V. Borlea, et al.
The treatment of carious lesions requires removal of the affected dental tissue, creating cavities that are then filled with dedicated materials. Several methods are known for assessing the quality of direct dental restorations, but most of them are invasive. Optical tomographic techniques are of particular importance in the medical imaging field because they can provide non-invasive diagnostic images. Using an en-face version of OCT, we have recently demonstrated real-time, thorough evaluation of the quality of dental fillings. The major aim of this study was to analyze the optical performance of adhesives modified with zirconia particles in different concentrations, in order to improve the contrast of OCT imaging of the interface between the tooth structure, adhesive and composite resin. The OCT investigations were validated by micro-CT using synchrotron radiation. Swept Source OCT is a valuable investigation tool for the clinical evaluation of class II direct composite restorations. The unmodified adhesive layer shows poor contrast in regular OCT investigations. Adding zirconia particles to the adhesive layer provides stronger scattering, which allows better characterization and quantification of direct restorations.
A clinical evaluation of total variation-Stokes image reconstruction strategy for low-dose CT imaging of the chest
Yan Liu, Hao Zhang, William Moore M.D., et al.
One hundred “normal-dose” computed tomography (CT) data sets of the chest (1,160 projection views, 120 kVp, 100 mAs) were acquired from patients scheduled for lung biopsy at Stony Brook University Hospital under informed consent approved by our Institutional Review Board. To mimic a low-dose (sparse-view) CT imaging scenario, sparse projection views were evenly extracted from the total 1,160 projections of each patient, and the total radiation dose was reduced in proportion to the number of views selected. A standard filtered backprojection (FBP) algorithm was applied to the full 1,160 projections to produce reference images for comparison. In the low-dose scenario, both the FBP and the total variation-Stokes (TVS) algorithms were applied to reconstruct the corresponding low-dose images. The reconstructed images were evaluated by an experienced thoracic radiologist against the reference images. Both the low-dose reconstructions and the reference images were displayed on a 4-megapixel monitor in soft tissue and lung windows. The images were graded on a five-point scale from 0 to 4 (0, nondiagnostic; 1, severe artifacts with low confidence; 2, moderate artifacts or moderate diagnostic confidence; 3, mild artifacts or high confidence; 4, well depicted without artifacts). Quantitative evaluation measures, such as the standard deviations of different tissue types and the universal quality index, were also studied and reported. The evaluation concluded that TVS can reduce the number of views from 1,160 to 580 with scores only slightly lower than the reference, corresponding to a dose reduction of close to 50%.
CBCT reconstruction via a penalty combining total variation and its higher-degree term
Nanbo Sun, Tao Sun, Jing Wang, et al.
The penalized weighted least-squares (PWLS) iterative algorithm with a total variation penalty (PWLS-TV) has shown potential to improve cone-beam CT (CBCT) image quality, particularly in suppressing noise and preserving edges. However, it sometimes suffers from the well-known staircase effect, which produces piecewise constant areas in images. In order to remove the staircase effect, there is increasing interest in replacing TV with higher-order derivative operators such as the Hessian. Unfortunately, the Hessian tends to blur edges in the reconstruction results. In this study, we propose a new penalty, the TV-H penalty, which combines the TV penalty and the Hessian penalty for CBCT reconstruction. The TV-H penalty retains some of the most favorable properties of the TV penalty, such as noise suppression and edge preservation, and is better at preserving structures with gradual intensity transitions. The PWLS criterion with a majorization-minimization (MM) approach was used to minimize the objective function. Two simulated digital phantoms were used to compare the performance of the TV, Hessian and TV-H penalties. Our experiments indicated that the TV-H penalty outperformed both the TV penalty and the Hessian penalty.
Limited angle C-arm tomosynthesis reconstruction algorithms
In this paper, C-arm tomosynthesis with a digital detector was investigated as a novel three-dimensional (3D) imaging technique. Digital tomosynthesis is an imaging technique that provides 3D information about an object by reconstructing slices through the object from a series of angular projection views. C-arm tomosynthesis provides two-dimensional (2D) X-ray projection images with rotation (∓20° angular range) of both the X-ray source and the detector. Four representative reconstruction algorithms were investigated: point-by-point back projection (BP), filtered back projection (FBP), simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM). A dataset of 25 projection views of a 3D spherical object located at the center of the C-arm imaging space was simulated from 25 angular locations over a total view angle of 40 degrees. For each reconstruction algorithm, a 3D mesh plot and a 2D line profile of normalized pixel intensities in the in-focus reconstruction plane crossing the center of the object were studied. The results demonstrated the capability to generate 3D information from limited-angle C-arm tomosynthesis. Since C-arm tomosynthesis is relatively compact, portable and can avoid moving patients, it has been investigated for clinical applications ranging from tumor surgery to interventional radiology, and it is therefore important to evaluate it for such applications.
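Of the four algorithms, MLEM is the only multiplicative update; its standard textbook form, with system matrix elements a_{ij} and measured projections b_i (not necessarily the exact implementation used here), is:

    x_j^{(k+1)} = \frac{x_j^{(k)}}{\sum_{i} a_{ij}} \sum_{i} a_{ij}\,\frac{b_i}{\sum_{l} a_{il}\,x_l^{(k)}}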
Hessian Schatten-norm regularization for CBCT image reconstruction using fast iterative shrinkage-thresholding algorithm
Statistical iterative reconstruction in cone-beam computed tomography (CBCT) uses prior knowledge to form different kinds of regularization terms. Total variation (TV) regularization has shown state-of-the-art performance in suppressing noise and preserving edges; however, it produces the well-known staircase effect. In this paper, a method involving second-order differential operators was employed to avoid the staircase effect. The ability to avoid the staircase effect stems from the fact that higher-order derivatives do not over-sharpen regions of smooth intensity transition. A fast iterative shrinkage-thresholding algorithm was used for the corresponding optimization problem. The proposed Hessian Schatten-norm regularization keeps many of the favorable properties of TV, such as translation and scale invariance, while avoiding the staircase effect that appears in TV-based reconstructions. The experiments demonstrated the advantage of the proposed algorithm over the TV method, especially in suppressing the staircase effect.
Absorption imaging performance in a future Talbot-Lau interferometer based breast imaging system
A grating-based x-ray multi-contrast imaging system integrates a source grating G0, a diffraction grating G1, and an analyzer grating G2 into a conventional x-ray imaging system to generate images with three contrast mechanisms: absorption contrast, differential phase contrast, and dark field contrast. To facilitate the potential translation of this multi-contrast imaging system into a clinical setting, our group has developed several single-shot data acquisition methods to eliminate the need for the time-consuming phase stepping procedure. These methods have enabled us to acquire multi-contrast images with the same data acquisition time currently used for absorption imaging. One of the proposed methods is the use of a staggered G2 grating. In this work, we propose to incorporate this staggered G2 grating into a state-of-the-art breast tomosynthesis imaging system to generate tomosynthesis images with three contrast mechanisms. The introduction of the staggered G2 grating will reject scatter and thus improve image contrast at the detector plane, but it will also absorb some of the x-ray photons reaching the detector, thus increasing noise and reducing the contrast-to-noise ratio (CNR). Therefore, a key technical question is whether the CNR and dose efficiency can be maintained for absorption imaging after the introduction of this staggered G2 grating. In this paper, both the CNR and scatter-to-primary ratio (SPR) of absorption imaging were investigated with Monte Carlo simulations for a variety of staggered G2 grating designs.
Comparison of CT scatter rejection effectiveness using antiscatter grids and energy-discriminating detectors
Erica M. Cherry, Rebecca Fahrig
A potential application for energy-discriminating detectors (EDDs) is scatter rejection in CT. If paired with a monoenergetic source, EDDs can identify scattered photons by their reduced energy relative to primary photons. However, it is unknown how the scatter rejection of an EDD compares with that of an antiscatter grid. In this study, the scatter rejection efficiency of energy-integrating detectors (EIDs) with antiscatter grids was compared with that of EDDs. Monte Carlo simulations were performed to generate projection images of head- and body-sized cylindrical water phantoms in a typical clinical CT scanner geometry and in a non-traditional geometry in which antiscatter grids would be impossible to install. Eight different detectors were used: four EIDs with 1D antiscatter grids of different heights (between 1 mm and 20 mm) and four EDDs with different energy bin sizes (between 0.1 keV and 10 keV). Different source energy spectra were also investigated. The scatter-to-primary ratio (SPR) was calculated for each setup. The results showed that most antiscatter grid setups outperformed EDD setups. In the traditional CT geometry, the EDD with a 0.1 keV energy bin size produced a slightly better SPR than a 5-mm-tall antiscatter grid, but the more realistic 1 keV energy bin EDD outperformed only a 1-mm-tall grid. However, the EDDs significantly reduced the SPR in the non-traditional geometry in which it was impossible to install antiscatter grids. The results suggest that EDDs are unlikely to outperform antiscatter grids in scatter rejection but could be useful when antiscatter grids are impossible to install.
Evaluation of the effective focal spot size of x-ray tubes by utilizing the edge response analysis
Evaluation of the effective focal spot size of an X-ray tube has traditionally been made using a slit or pinhole camera, but these methods are not widely used in daily practice because they require specialized tools. The author proposes a simplified method in which only a metal edge and a digital detector are used, together with a process for removing the detector blur that is inherently associated with the use of such a detector. The evaluation was made through optical transfer function (OTF) measurements using edge response analysis. Throughout the study, the use of the OTF instead of the MTF (modulation transfer function) was essential in order to stay within the linear-systems-theory framework, at the cost of handling complex-valued functions. The evaluation steps were as follows: (1) The inherent OTF of the detector (OTFdet) was measured by acquiring an image of the edge in close contact with the detector. (2) A second OTF (OTFmulti) was measured with the edge placed away from the detector so as to implement a twofold geometric magnification of the edge; OTFmulti is the product of OTFdet and the focal spot OTF (OTFfocus). (3) OTFfocus was obtained by calculating OTFmulti / OTFdet, thus removing the detector blur completely. (4) The LSF of the focal spot was obtained through the inverse Fourier transform of OTFfocus. The resultant LSFfocus is guaranteed to be a real function because the original LSFdet and LSFmulti are both real functions. Preliminary results matched well with those obtained with a pinhole camera.
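The deblurring-by-division step (OTFfocus = OTFmulti / OTFdet, followed by an inverse Fourier transform) can be sketched numerically as below. The two edge responses are synthetic placeholders, and a small epsilon is added in the denominator purely for numerical stability; this is an illustration of the principle, not the author's measurement code.

    import numpy as np

    def otf_from_esf(esf):
        """Complex OTF of a 1D edge response: differentiate to the LSF, then FFT."""
        lsf = np.gradient(esf)
        return np.fft.rfft(lsf) / np.sum(lsf)     # normalized so OTF(0) = 1

    # Placeholder edge responses (contact geometry and 2x magnification geometry).
    x = np.arange(-20.0, 20.0, 0.05)
    esf_det = 0.5 * (1 + np.tanh(x / 0.10))       # detector-only blur
    esf_multi = 0.5 * (1 + np.tanh(x / 0.25))     # detector + focal-spot blur

    otf_det = otf_from_esf(esf_det)
    otf_multi = otf_from_esf(esf_multi)

    # Remove the detector blur by complex division (epsilon for stability).
    eps = 1e-6
    otf_focus = otf_multi / (otf_det + eps)
    lsf_focus = np.fft.irfft(otf_focus)           # focal-spot LSF (real-valued)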
Model based predictive design of post patient collimation for whole body CT scanners
Prakhar Prakash, John Boudry
Scatter is a significant source of image artifacts in cone-beam CT (CBCT), and considerable effort has been devoted to measuring the magnitude and influence of scatter. Scatter management includes both rejection and correction approaches, with anti-scatter grids (ASGs) commonly employed as a scatter rejection strategy. This work employs a Monte Carlo model driven by Geant4 (Refs. 1, 2) to investigate the impact of different ASG designs on scatter rejection performance across a range of scanner coverages along the patient axis. Scatter rejection is quantified in terms of the scatter-to-primary ratio (SPR). One-dimensional (1D) ASGs (grid septa running parallel to the patient axis) are compared across a range of septa heights, septa widths and septa materials. Results indicate that, for a given septa width and patient coverage, SPR decreases with septa height but shows diminishing returns at larger heights. For shorter septa heights, higher-Z materials (e.g., tungsten) exhibit superior scatter rejection to relatively lower-Z materials (e.g., molybdenum); for taller septa heights, the material difference is not as significant. SPR has a relatively weak dependence on septa width, with thicker septa giving lower SPR values at a given scanner coverage. The results are intended to serve as a guide for designing post-patient collimation for whole-body CT scanners. Since taller grids made of high-Z materials pose a significant manufacturing cost, it is necessary to evaluate optimal ASG designs that minimize material and machining costs while meeting scatter rejection specifications at a given patient coverage.
Measurements and simulations of scatter imaging as a simultaneous adjunct for screening mammography
Katie Kern, Laila Hassan, Lubna Peerzada, et al.
X-ray coherent scatter depends on the molecular structure of the scattering material and hence allows differentiation between tissue types with potentially much higher contrast than conventional absorption-based radiography. Coherent-scatter computed tomography has been used to produce images based on the x-ray scattering properties of the tissue. However, the geometry for CT imaging requires a thin fan beam and multiple projections and is incompatible with screening mammography. In this work we demonstrate progress in developing a system that uses a wide slot beam and a simple anti-scatter grid, which is adequate to differentiate between scatter peaks and thereby remove the fat background from the coherent-scatter image. Adequate intensity in the coherent-scatter image can be achieved at the dose commonly used for screening mammography to detect carcinoma surrogates as small as 2 mm in diameter. This technique would provide an inexpensive, low-dose, simultaneous adjunct to conventional screening mammography, giving a localized map of tissue type that could be overlaid on the conventional transmission mammogram. Comparisons between phantom measurements and Monte Carlo simulations show good agreement, which allowed for detailed examination of the visibility of carcinoma under realistic conditions.
Prospective gated chest tomosynthesis using CNT X-ray source array
Jing Shan, Laurel Burk, Gongting Wu, et al.
Chest tomosynthesis is a low-dose 3-D imaging modality that has been shown to have sensitivity comparable to CT in detecting lung nodules and other lung pathologies. We have recently demonstrated the feasibility of stationary chest tomosynthesis (s-DCT) using a distributed CNT X-ray source array. The technology allows acquisition of tomographic projections without moving the X-ray source. The electronically controlled CNT x-ray source also enables physiologically gated imaging, which minimizes image blur due to the patient’s respiratory motion. In this paper, we investigate the feasibility of prospectively gated chest tomosynthesis using a bench-top s-DCT system with a CNT source array, a high-speed flat panel detector, and realistic patient respiratory signals captured using a pressure sensor. Tomosynthesis images of inflated pig lungs placed inside an anthropomorphic chest phantom were acquired at different respiration rates, with and without gating, for image quality comparison. Metal beads of 2 mm diameter were placed on the pig lung for a quantitative measure of image quality. Without gating, the beads were blurred to 3.75 mm during a 3 s tomosynthesis acquisition. When gated to the ends of the inhalation and exhalation phases, the detected bead size was reduced to 2.25 mm, much closer to the actual bead size. With gating, the observed airway edges are sharper and more structural details are visible in the lung. Our results demonstrate the feasibility of prospective gating in s-DCT, which substantially reduces the image blur associated with lung motion.
Anti-scatter grid artifact elimination for high resolution x-ray imaging CMOS detectors
Higher resolution in dynamic radiological imaging such as angiography is increasingly being demanded by clinicians; however, when standard anti-scatter grids are used with such new high-resolution detectors, grid-line artifacts become more apparent, resulting in increased structured noise that may negate the contrast improvement provided by the scatter-reducing grid. Although grid lines may in theory be eliminated by dividing the patient image taken with the grid by a flat-field image taken with the grid prior to the clinical image, severe grid-line artifacts may remain unless the remaining additive scatter contribution is subtracted in real time from the dynamic clinical image sequence before the division by the reference image. To investigate grid-line elimination, a stationary Smit Röntgen X-ray grid (line density: 70 lines/cm, grid ratio 13:1) was used with both a 75-micron-pixel CMOS detector and a standard 194-micron-pixel flat panel detector (FPD) to image an artery block insert placed in a modified uniform frontal head phantom over an approximately 20 x 20 cm FOV. Contrast and contrast-to-noise ratio (CNR) were measured with and without scatter subtraction prior to grid-line correction. The fixed-pattern noise caused by the grid was substantially higher for the CMOS detector than for the FPD and caused a severe reduction in CNR. However, when the scatter-subtraction correction was used, the removal of the fixed-pattern noise (grid artifacts) became evident, resulting in images with improved CNR.
A combination of spatial and recursive temporal filtering for noise reduction when using region of interest (ROI) fluoroscopy for patient dose reduction in image guided vascular interventions with significant anatomical motion
Because x-ray-based image-guided vascular interventions are minimally invasive, they are currently the preferred method of treating disorders such as stroke, arterial stenosis, and aneurysms; however, the x-ray exposure to the patient during long image-guided interventional procedures can cause harmful effects such as cancer in the long term and even tissue damage in the short term. ROI fluoroscopy reduces patient dose by differentially attenuating the incident x-rays outside the region of interest. To reduce the noise in the dose-reduced regions, recursive temporal filtering was previously demonstrated successfully for neurovascular interventions. However, in cardiac interventions, anatomical motion is significant and excessive recursive filtering can cause blur. In this work, the effects of three noise-reduction schemes, recursive temporal filtering, spatial mean filtering, and a combination of spatial and recursive temporal filtering, were investigated in a simulated ROI dose-reduced cardiac intervention.

First, a model to simulate the aortic arch and its movement was built. A coronary stent was used to simulate a bioprosthetic valve used in TAVR procedures and was deployed under dose-reduced ROI fluoroscopy during the simulated heart motion. The images were then retrospectively processed for noise reduction in the periphery using recursive temporal filtering, spatial filtering, and a combination of both.

Quantitative metrics for all three noise-reduction schemes were calculated and are presented as results. From these it can be concluded that, with significant anatomical motion, a combined spatial and recursive temporal filtering scheme is best suited for reducing the excess quantum noise in the periphery. This new noise-reduction technique, in combination with ROI fluoroscopy, has the potential for substantial patient-dose savings in cardiac interventions.
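The three schemes compared above can be summarized with a short sketch: a first-order recursive temporal filter, a spatial mean filter, and their combination with a milder temporal weight to limit motion blur. The filter weights, kernel size and the frame sequence are illustrative assumptions, not the authors' exact implementation.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def temporal_recursive(frames, alpha=0.3):
        """First-order recursive temporal filter: y_n = a*x_n + (1-a)*y_{n-1}."""
        out = np.empty_like(frames, dtype=float)
        out[0] = frames[0]
        for n in range(1, len(frames)):
            out[n] = alpha * frames[n] + (1.0 - alpha) * out[n - 1]
        return out

    def spatial_mean(frames, size=3):
        return np.stack([uniform_filter(f.astype(float), size) for f in frames])

    def combined(frames, alpha=0.5, size=3):
        """Milder recursion plus a small spatial kernel, to limit motion blur."""
        return temporal_recursive(spatial_mean(frames, size), alpha)

    # Placeholder ROI-fluoroscopy sequence: 30 noisy frames of the periphery.
    rng = np.random.default_rng(0)
    frames = rng.poisson(20, size=(30, 128, 128)).astype(float)
    periphery_filtered = combined(frames)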
Directional information of the simultaneously active x-ray sources and fast CT reconstruction
Sajib Saha, Murat Tahtali, Andrew Lambert, et al.
This paper focuses on minimizing the time requirement for CT capture through an innovative simultaneous X-ray capture method. The concept was presented in previous publications with synthetically sampled data from a synthetic phantom. This paper puts emphasis on real-data reconstruction, for which a physical 3D phantom consisting of simple geometric shapes was used in the experiment. For a successful reconstruction of the physical phantom, precise calibration of the setup was ensured in this work. Targeting better reconstruction from a minimal number of iterations, the sparsity-prior CT reconstruction algorithm proposed by Saha et al. [11] was adapted to work in conjunction with the simultaneous X-ray capture modality. Along with critical evaluation of the experimental findings, this paper focuses on the optimal parameter settings needed to achieve a given reconstruction resolution.
A study on quality improvement of x-ray imaging of the respiratory-system based on a new image processing technique
Jun Torii, Yuichi Nagai, Tatsuya Horita, et al.
Recently, the double-contrast technique in gastrointestinal examinations and the transbronchial lung biopsy in examinations of the respiratory system [1-3] have made remarkable progress. In the transbronchial lung biopsy in particular, better quality of x-ray fluoroscopic images is requested because this examination is performed under the guidance of x-ray fluoroscopy. At the same time, various image processing methods [4] for x-ray fluoroscopic images have been developed as x-ray systems with flat panel detectors [5-7] have become widely used. A new noise reduction process, Adaptive Noise Reduction [ANR], was announced at SPIE last year [8]. ANR is an image processing technique capable of extracting and reducing noise components regardless of moving objects in fluoroscopy images. However, for further enhancement of the noise reduction effect in clinical use, it was combined with a recursive filter, which filters along the time axis. As a result, the recursive filter generated image lag when there were moving objects in the fluoroscopic images, and this lag sometimes became a hindrance to smooth bronchoscopy; this is because recursive filters reduce noise by adding multiple fluoroscopy images. Therefore, we have developed a new image processing technique, Motion Tracking Noise Reduction [MTNR], to decrease image lag as well as noise. This technique detects global motion in the images with high accuracy, determines the pixels that track the motion, and applies a motion-tracking temporal filter. With this, image lag is removed remarkably well while effective noise reduction is realized. In this report, we explain the effect of MTNR by comparing the performance of MTNR-processed images [MTNR] and ANR + recursive filter-processed images [ANR + Recursive filter].
Anatomy-based transmission factors for technique optimization in portable chest x-ray
Christopher L. Liptak, Deborah Tovey, William P. Segars, et al.
Portable x-ray examinations often account for a large percentage of all radiographic examinations. Currently, portable examinations do not employ automatic exposure control (AEC). To aid in the design of a size-specific technique chart, acrylic slabs of various thicknesses are often used to estimate x-ray transmission for patients of various body thicknesses. This approach, while simple, does not account for patient anatomy, tissue heterogeneity, or the attenuation properties of the human body. To better account for these factors, in this work we determined x-ray transmission factors using anatomically realistic computational patient models. A Monte Carlo program was developed to model a portable x-ray system, with detailed modeling of the x-ray spectrum, detector positioning, collimation, and source-to-detector distance. Simulations were performed using 18 computational patient models from the extended cardiac-torso (XCAT) family (9 males, 9 females; age range: 2-58 years; weight range: 12-117 kg). The ratio of air kerma at the detector with and without a patient model was calculated as the transmission factor. Our study showed that the transmission factor decreased exponentially with increasing patient thickness. For the range of patient thicknesses examined (12-28 cm), the transmission factor ranged from approximately 21% to 1.9% when the air kerma used in the calculation was averaged over the entire imaging field of view, and from approximately 21% to 3.6% when the air kerma represented the average signals from two discrete AEC cells behind the lung fields. These exponential relationships may be used to optimize imaging techniques for patients of various body thicknesses and to aid in the design of clinical technique charts.
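An exponential transmission relationship of this kind is fully determined by two (thickness, transmission) points; the sketch below fits such a curve through the field-of-view-averaged endpoints quoted above (21% at 12 cm, 1.9% at 28 cm). The two-point fit is only for illustration; the study fit its relationships to the full set of simulated patient models.

    import numpy as np

    # Two (thickness, transmission) points for the FOV-averaged case quoted above.
    t1, f1 = 12.0, 0.21      # cm, fraction
    t2, f2 = 28.0, 0.019

    # Fit T(t) = T0 * exp(-mu_eff * t) through the two points.
    mu_eff = np.log(f1 / f2) / (t2 - t1)
    T0 = f1 * np.exp(mu_eff * t1)

    def transmission(thickness_cm):
        return T0 * np.exp(-mu_eff * thickness_cm)

    print(mu_eff)                 # effective attenuation, ~0.15 per cm
    print(transmission(20.0))     # interpolated transmission at 20 cm thickness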
Low dose scatter correction for digital chest tomosynthesis
Christina R. Inscoe, Gongting Wu, Jing Shan, et al.
Digital chest tomosynthesis (DCT) provides superior image quality and depth information for thoracic imaging at relatively low dose, though the presence of strong photon scatter degrades the image quality. In most chest radiography, anti-scatter grids are used. However, the grid also blocks a large fraction of the primary beam photons, requiring a significantly higher imaging dose for the patient. Previously, we proposed an efficient low dose scatter correction technique using a primary beam sampling apparatus. We implemented the technique in stationary digital breast tomosynthesis and found the method to be efficient in correcting patient-specific scatter with only a 3% increase in dose. In this paper we report a feasibility study applying the same technique to chest tomosynthesis. The investigation was performed using phantom and cadaver subjects. The method involves an initial tomosynthesis scan of the object. A lead plate with an array of holes, or primary sampling apparatus (PSA), was placed above the object. A second tomosynthesis scan was performed to measure the primary (scatter-free) transmission. The PSA data were used with the full-field projections to compute the scatter, which was then interpolated to full-field scatter maps unique to each projection angle. Full-field projection images were scatter corrected prior to reconstruction. Projections and reconstruction slices were evaluated, and the correction method was found to be effective at improving image quality and practical for clinical implementation.
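The sampling-and-interpolation step described above can be sketched as follows. This is a hedged illustration only: hole locations and the primary and total intensities are assumed to be already extracted from the two scans, and all names are placeholders rather than the authors' implementation.

```python
# Minimal sketch of sparse scatter sampling and interpolation to a full-field map.
import numpy as np
from scipy.interpolate import griddata

def estimate_scatter_map(hole_xy, total_at_holes, primary_at_holes, shape):
    """Interpolate sparse scatter samples (total - primary) to a full-field map.

    hole_xy: (N, 2) array of (x, y) hole centres in detector pixel coordinates.
    """
    scatter_samples = total_at_holes - primary_at_holes
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    scatter_map = griddata(hole_xy, scatter_samples, (xx, yy), method='cubic')
    # Fill extrapolation gaps near the detector edges with nearest-neighbour values
    nearest = griddata(hole_xy, scatter_samples, (xx, yy), method='nearest')
    return np.where(np.isnan(scatter_map), nearest, scatter_map)

def correct_projection(full_field_projection, scatter_map):
    """Subtract the estimated scatter before reconstruction, clipping negatives."""
    return np.clip(full_field_projection - scatter_map, 0, None)
```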
Signal uniformity of mammography systems and its impact on test results from contrast detail phantoms
M. Kaar, F. Semturs, J. Hummel, et al.
Technical quality assurance (TQA) procedures for mammography systems usually include tests with a contrast-detail phantom. These phantoms contain multiple objects of varying dimensions arranged on a flat body. Exposures of the phantom are then evaluated by an observer, either human or software.

One well-known issue of this method is that dose distribution is not uniform across the image area of any mammography system, mainly due to the heel effect. The purpose of this work is to investigate to what extent image quality differs across the detector plane.

We analyze a total of 320 homogeneous mammography exposures from 32 radiology institutes. Systems of different models and manufacturers, both computed radiography (CR) and direct radiography (DR) are included. All images were taken from field installations operated within the nationwide Austrian mammography screening program, which includes mandatory continuous TQA.

We calculate signal-to-noise ratios (SNR) for 15 regions of interest arranged to cover the area of the phantom. We define the 'signal range' of an image and compare this value between detector technologies.

We found that the deviations of SNR were greater in the anterior-posterior direction than in the lateral direction. SNR ranges are significantly higher for CR systems than for DR systems.
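The ROI analysis described above can be illustrated with a short sketch. The 3x5 ROI grid and the particular 'signal range' definition used here are assumptions for illustration, not necessarily those of the authors.

```python
# Illustrative sketch: per-ROI SNR on a homogeneous exposure and a simple 'signal range'.
import numpy as np

def roi_snr_map(image, n_rows=3, n_cols=5):
    """Split the image into a grid of ROIs and return mean/std SNR per ROI."""
    h, w = image.shape
    snr = np.zeros((n_rows, n_cols))
    for i in range(n_rows):
        for j in range(n_cols):
            roi = image[i * h // n_rows:(i + 1) * h // n_rows,
                        j * w // n_cols:(j + 1) * w // n_cols]
            snr[i, j] = roi.mean() / roi.std()
    return snr

def signal_range(image, n_rows=3, n_cols=5):
    """One possible 'signal range': spread of ROI means relative to the global mean."""
    h, w = image.shape
    means = [image[i * h // n_rows:(i + 1) * h // n_rows,
                   j * w // n_cols:(j + 1) * w // n_cols].mean()
             for i in range(n_rows) for j in range(n_cols)]
    return (max(means) - min(means)) / np.mean(means)
```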
Signal and noise analysis of flat-panel sandwich detectors for single-shot dual-energy x-ray imaging
We have developed a novel sandwich-style single-shot (single-kV) detector by stacking two indirect-conversion flat-panel detectors for preclinical mouse imaging. In the sandwich detector structure, extra noise due to direct x-ray absorption in the photodiode arrays is inevitable. We develop a simple cascaded linear-systems model to describe signal and noise propagation in the flat-panel sandwich detector, taking direct x-ray interactions into account. The noise-power spectrum (NPS) and detective quantum efficiency (DQE) obtained from the front and rear detectors are analyzed using the cascaded-systems model. The NPS induced by absorption of direct x-ray photons, which reach the photodiode layers unattenuated, is white in the spatial-frequency domain, like additive readout noise; it is therefore harmful to the DQE at higher spatial frequencies, where the number of secondary quanta decreases. The model developed in this study will be useful for determining optimal imaging techniques with sandwich detectors and for optimizing their design.
Exposure dose reduction for the high energy spectrum in the photon counting mammography: simulation study based on Japanese breast glandularity and thickness
Naoko Niwa, Misaki Yamazaki, Yoshie Kodera, et al.
Recently, digital mammography with a photon counting silicon detector has been developed. With the aim of reducing the exposure dose, we have proposed a new mammography system that uses a cadmium telluride series photon counting detector. In addition, we propose to use a high-energy X-ray spectrum with a tungsten anode. The purpose of this study was to assess the effectiveness of the high-energy X-ray spectrum in terms of image quality using a Monte Carlo simulation. The proposed photon counting system with the high-energy spectrum is compared to a conventional flat panel detector system with a Mo/Rh spectrum. The contrast-to-noise ratio (CNR) is calculated from simulated images of breast phantoms. The breast model phantoms differed in glandularity and thickness, which were determined from Japanese clinical mammograms. We found that the CNR values were higher in the proposed system than in the conventional system. The number of photons incident on the detector was larger in the proposed system, so the noise was lower than in the conventional system. Therefore, the high-energy spectrum yielded the same CNR as the conventional spectrum while allowing a considerable dose reduction to the breast.
Construction of realistic liver phantoms from patient images using 3D printer and its application in CT image quality assessment
Shuai Leng, Lifeng Yu, Thomas Vrieze, et al.
The purpose of this study is to use 3D printing techniques to construct a realistic liver phantom with heterogeneous background and anatomic structures from patient CT images, and to use the phantom to assess image quality with filtered back-projection and iterative reconstruction algorithms. Patient CT images were segmented into liver tissue, contrast-enhanced vessels, and liver lesions using commercial software, based on which stereolithography (STL) files were created and sent to a commercial 3D printer. A 3D liver phantom was printed after assigning different printing materials to each object to simulate appropriate attenuation of each segmented object. As high-opacity materials are not available for the printer, we printed hollow vessels and filled them with iodine solutions of adjusted concentration to represent enhancement levels in contrast-enhanced liver scans. The printed phantom was then placed in a 35×26 cm oblong-shaped water phantom and scanned repeatedly at 4 dose levels. Images were reconstructed using standard filtered back-projection and an iterative reconstruction algorithm with 3 different strength settings. A heterogeneous liver background was observed in the CT images, and the difference in CT numbers between lesions and background was representative of low-contrast lesions in liver CT studies. CT numbers in vessels filled with iodine solutions represented the enhancement of liver arteries and veins. Images were run through a channelized Hotelling model observer with Gabor channels, and ROC analysis was performed. The AUC values showed a performance improvement with the iterative reconstruction algorithm, and the amount of improvement increased with strength setting.
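For readers unfamiliar with the observer model mentioned above, the following is a hedged sketch of a channelized Hotelling observer with Gabor channels. Channel parameters, covariance handling, and function names are simplified assumptions, not the authors' implementation; AUC would then be estimated from the two returned score sets.

```python
# Illustrative channelized Hotelling observer (CHO) sketch with Gabor channels.
import numpy as np

def gabor_channel(shape, freq, theta, sigma):
    """One 2D Gabor channel (cosine phase) centred in the ROI."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    y = y - (h - 1) / 2.0
    x = x - (w - 1) / 2.0
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def cho_test_statistics(signal_rois, noise_rois, channels):
    """Project ROIs onto channels, build the Hotelling template, return score sets."""
    U = np.stack([c.ravel() for c in channels], axis=1)        # (Npix, Nch)
    vs = np.stack([r.ravel() @ U for r in signal_rois])        # (Ns, Nch)
    vn = np.stack([r.ravel() @ U for r in noise_rois])         # (Nn, Nch)
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(S, vs.mean(axis=0) - vn.mean(axis=0))  # Hotelling template
    return vs @ w, vn @ w   # AUC can be estimated from these two score sets
```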
A comparison of material decomposition techniques for dual-energy CT colonography
Radin A. Nasirudin, Rie Tachibana, Janne J. Näppi, et al.
In recent years, dual-energy computed tomography (DECT) has been widely used in the clinical routine due to the improved diagnostic capability provided by the additional spectral information. One promising application for DECT is CT colonography (CTC) in combination with computer-aided diagnosis (CAD) for the detection of lesions and polyps. While CAD has demonstrated in the past that it is able to detect small polyps, its performance is highly dependent on the quality of the input data. The presence of artifacts such as beam hardening and noise in ultra-low-dose CTC may severely degrade the detection performance for small polyps. In this work, we investigate and compare virtual monochromatic images, generated by image-based decomposition and projection-based decomposition, with respect to CAD performance. In the image-based method, reconstructed images are first decomposed into water and iodine before the virtual monochromatic images are calculated. In contrast, in the projection-based method, the projection data are first decomposed before calculation of the virtual monochromatic projections and reconstruction. Both material decomposition methods are evaluated with regard to the accuracy of iodine detection. Further, the performance of the virtual monochromatic images is assessed qualitatively and quantitatively. Preliminary results show that the projection-based method not only detects iodine more accurately, but also delivers virtual monochromatic images with reduced beam hardening artifacts in comparison with the image-based method. With regard to CAD performance, the projection-based method yields an improved polyp detection performance in comparison with that of the image-based method.
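The image-based route described above can be sketched as a per-pixel two-material inversion followed by virtual monochromatic synthesis. The sketch below assumes calibrated basis-material attenuation coefficients are available; all names and the 2x2 formulation are illustrative, not the authors' code.

```python
# Hedged sketch: image-based water/iodine decomposition and virtual monochromatic synthesis.
import numpy as np

def image_based_decomposition(mu_low, mu_high, M):
    """Solve [mu_low, mu_high]^T = M @ [c_water, c_iodine]^T for every pixel.

    M is a 2x2 matrix of basis-material attenuation coefficients at the two
    effective energies (assumed known from calibration).
    """
    Minv = np.linalg.inv(M)
    stacked = np.stack([mu_low.ravel(), mu_high.ravel()])   # shape (2, Npix)
    coeffs = Minv @ stacked                                 # shape (2, Npix)
    c_water = coeffs[0].reshape(mu_low.shape)
    c_iodine = coeffs[1].reshape(mu_low.shape)
    return c_water, c_iodine

def virtual_monochromatic(c_water, c_iodine, mu_water_E, mu_iodine_E):
    """Combine basis images with monochromatic attenuation coefficients at energy E."""
    return c_water * mu_water_E + c_iodine * mu_iodine_E
```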
Conditional-likelihood approach to material decomposition in spectral absorption-based or phase-contrast CT
Material decomposition in absorption-based X-ray CT imaging suffers certain inefficiencies when differentiating among soft tissue materials. To address this problem, decomposition techniques turn to spectral CT, which has gained popularity over the last few years. Although proven to be more effective, such techniques are primarily limited to the identification of contrast agents and soft and bone-like materials. In this work, we introduce a novel conditional-likelihood material-decomposition method capable of identifying any type of material in objects scanned by spectral CT. The method takes advantage of the statistical independence of spectral data to assign likelihood values to each of the materials on a pixel-by-pixel basis. It results in likelihood images for each material, which can be further processed, by setting certain conditions or thresholds, to yield a final material-diagnostic image. The method can also utilize phase-contrast CT (PCI) data, where measured absorption and phase-shift information can be treated as statistically independent datasets. The following cases were simulated: (i) single-scan PCI CT, (ii) spectral PCI CT, (iii) absorption-based spectral CT, and (iv) single-scan PCI CT with an added tumor mass. All cases were analyzed using a digital breast phantom, although any other objects or materials could be used instead. As a result, all materials were identified, as expected, according to their assignment in the digital phantom. Materials with similar attenuation or phase-shift values (e.g., glandular tissue, skin, and tumor masses) were differentiated especially successfully by the likelihood approach.
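A minimal sketch of the pixel-wise likelihood idea is given below, assuming statistically independent channels (spectral bins, or absorption and phase) with Gaussian noise. The material means and noise levels stand in for calibrated values and are not taken from the paper.

```python
# Hedged sketch: per-pixel material likelihoods under independent Gaussian channels.
import numpy as np

def material_likelihoods(measurements, material_means, sigmas):
    """Return per-pixel likelihood images, one per candidate material.

    measurements:   array (n_channels, H, W) of independent channel images
    material_means: array (n_materials, n_channels) of expected channel values
    sigmas:         array (n_channels,) of channel noise standard deviations
    """
    likelihoods = []
    for m in range(material_means.shape[0]):
        # Product of independent Gaussian likelihoods across channels (log-domain sum)
        log_l = np.zeros(measurements.shape[1:])
        for c in range(measurements.shape[0]):
            resid = measurements[c] - material_means[m, c]
            log_l += -0.5 * (resid / sigmas[c]) ** 2 - np.log(sigmas[c] * np.sqrt(2 * np.pi))
        likelihoods.append(np.exp(log_l))
    return np.stack(likelihoods)

# A diagnostic label image could then be obtained by thresholding or arg-max:
# labels = material_likelihoods(meas, means, sigmas).argmax(axis=0)
```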
Model based iterative reconstruction IMR gives possibility to evaluate thinner slice thicknesses than conventional iterative reconstruction iDose4: a phantom study
Marie-Louise Aurumskjöld, Kristina Ydström, Anders Tingberg, et al.
Computed tomography (CT) is one of the most important modalities in a radiology department. It produces images with high diagnostic confidence but in some cases contributes a high radiation dose to the patient. The radiation dose can be reduced by the use of advanced image reconstruction algorithms. This study was performed on a Philips Brilliance iCT with the iterative reconstruction iDose4 and the model-based iterative reconstruction IMR. The purpose was to investigate the effect on image quality of thin-slice images reconstructed with IMR, compared to standard slice thickness reconstructed with iDose4. Objective measurements of noise and contrast-to-noise ratio were performed using an image quality phantom, an anthropomorphic phantom, and clinical cases. Subjective evaluations of low-contrast resolution were performed by observers using an image quality phantom. IMR provides strong noise reduction and enhanced low-contrast resolution, and thereby enables selection of thinner slices. Objective evaluation of image noise shows that thin slices reconstructed with IMR have lower noise than thicker slice images reconstructed with iDose4. With IMR, the slice thickness is of less importance for the noise, and with thinner slices the partial volume artefacts become less pronounced. In conclusion, we have shown that IMR enables a reduction of the slice thickness while maintaining or even reducing the noise level compared to iDose4 reconstruction at standard slice thickness. This will subsequently result in an improvement of image quality for images reconstructed with IMR.
Evaluation of imaging characteristics in CTDI phantom size on contrast imaging
Pil-Hyun Jeon, Won-Hyung Lee, Seong-Su Jeon, et al.
Recently, there have been several physics and clinical studies on the use of lower tube potentials in CT imaging, with the purpose of improving image quality or further reducing radiation dose. The purpose of this study was to evaluate variations in image noise and contrast with different tube potentials in CTDI phantoms for contrast imaging. We performed an experimental study using a series of different-sized polymethyl methacrylate (PMMA) phantoms to demonstrate a potential strategy for dose reduction and for distinguishing plaque components by imaging their energy responses with CT. We investigated the relationship between cylindrical PMMA phantoms of different sizes (diameters of 12, 16, 20, 24, and 32 cm) and contrast at various tube voltages (80, 100, 120, and 140 kVp) using a 16-detector-row CT scanner. Contrast was represented by the CT numbers of different materials: water, calcium chloride, and iodine. Phantom inserts also allowed quantitative measurement of image noise, contrast, contrast-to-noise ratio (CNR), and figure of merit (FOM). When evaluating the FOM, lower kVp was found to provide better CNR. By relating achievable CNR to the volume CT dose index (CTDIvol), the study demonstrates dose-efficient and practically feasible protocols for different patient sizes and diagnostic tasks. The use of spectra optimized to the specific application could provide further improvements in distinguishing iodine, calcium, and plaque components for each patient size.
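The metrics referred to above can be summarized in a short sketch. The FOM definition CNR²/CTDIvol is a common convention assumed here for illustration; the abstract does not specify the exact formula.

```python
# Sketch of the image-quality metrics: CNR between ROIs and a dose-normalised FOM.
import numpy as np

def cnr(roi_object, roi_background):
    """Contrast-to-noise ratio between an object ROI and a background ROI."""
    return abs(roi_object.mean() - roi_background.mean()) / roi_background.std()

def figure_of_merit(cnr_value, ctdi_vol_mGy):
    """Dose-normalised figure of merit, assuming FOM = CNR^2 / CTDIvol."""
    return cnr_value ** 2 / ctdi_vol_mGy
```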
Scatter correction of vessel dropout behind highly attenuating structures in 4D-DSA
James Hermus, Charles Mistretta, Timothy P. Szczykutowicz
In computed tomographic (CT) image reconstruction for four-dimensional digital subtraction angiography (4D-DSA), loss of vessel contrast has been observed behind highly attenuating anatomy, such as large contrast-filled aneurysms. Although this typically occurs only in a limited range of projection angles, the observed contrast time course can be altered. In this work we propose an algorithm to correct for highly attenuating anatomy, i.e. aneurysms, within the fill projection data. The algorithm uses a 3D-SA volume to create a correction volume that is multiplied by the 4D-DSA volume in order to correct for signal dropout within the 4D-DSA volume. The algorithm was designed to correct for highly attenuating material in the fill volume only; however, with alterations to a single step of the algorithm, artifacts due to highly attenuating materials in the mask volume (e.g. dental implants) can be mitigated as well. We successfully applied our algorithm to a case of vessel dropout due to the presence of a large attenuating aneurysm. The performance was assessed visually: the affected vessel no longer dropped out in the corrected 4D-DSA time frames. The correction was quantified by plotting the signal intensity along the vessel. Our analysis demonstrated that the correction does not alter vessel signal values outside of the dropout region but does increase the vessel values within the dropout region, as expected. We have demonstrated that this correction algorithm corrects vessel dropout in areas with highly attenuating materials.
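The multiplicative correction described above might be sketched as follows. How the correction factors are derived from the 3D-SA volume is simplified here to a placeholder threshold-and-boost step, so this is a conceptual illustration rather than the authors' algorithm.

```python
# Conceptual sketch: correction volume derived from the 3D-SA reconstruction and
# multiplied into every 4D-DSA time frame. The threshold/boost rule is a placeholder.
import numpy as np

def build_correction_volume(sa_volume, attenuation_threshold, boost):
    """Return a volume of multiplicative factors (1 outside the flagged region)."""
    correction = np.ones_like(sa_volume, dtype=float)
    correction[sa_volume > attenuation_threshold] = boost
    return correction

def correct_4d_dsa(dsa_frames, correction_volume):
    """Apply the same spatial correction to every 4D-DSA time frame."""
    return [frame * correction_volume for frame in dsa_frames]
```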
Region-of-interest cone beam computed tomography (ROI CBCT) with a high resolution CMOS detector
A. Jain, H. Takemoto, M. D. Silver, et al.
Cone beam computed tomography (CBCT) systems with rotational gantries that have standard flat panel detectors (FPD) are widely used for the 3D rendering of vascular structures using Feldkamp cone beam reconstruction algorithms. One of the inherent limitations of these systems is their limited resolution (<3 lp/mm). Systems with higher resolution are available, but their small FOV limits them to small-animal imaging. In this work, we report on region-of-interest (ROI) CBCT with a high resolution CMOS detector (75 μm pixels, 600 μm HR-CsI) mounted with a motorized detector changer on a commercial FPD-based C-arm angiography gantry (194 μm pixels, 600 μm HL-CsI). A cylindrical CT phantom and neuro stents were imaged with both detectors. For each detector a total of 209 images were acquired in a rotational protocol. The technique parameters chosen for the FPD by the imaging system were also used for the CMOS detector. The anti-scatter grid was removed, and the incident scatter was kept the same for both detectors with identical collimator settings. The FPD images were reconstructed for a 10 cm x 10 cm FOV and the CMOS images for a 3.84 cm x 3.84 cm FOV. Although the reconstructed images from the CMOS detector demonstrated contrast comparable to the FPD images, the reconstructed 3D images of the neuro stent clearly showed that the CMOS detector improved the delineation of small objects such as the stent struts (~70 μm) compared to the FPD. Further development and the potential for substantial clinical impact are suggested.
Volume-of-interest reconstruction from severely truncated data in dental cone-beam CT
Zheng Zhang, Budi Kusnoto D.D.S., Xiao Han, et al.
As cone-beam computed tomography (CBCT) has rapidly gained popularity in dental imaging applications over the past two decades, radiation dose in CBCT imaging remains a potential health concern for patients. It is common practice in dental CBCT imaging to illuminate only a small volume of interest (VOI) containing the teeth of interest, thus substantially lowering the imaging radiation dose. However, this yields data with severe truncation along both the transverse and longitudinal directions. Although images within the VOI reconstructed from truncated data can be of some practical utility, they are often compromised significantly by truncation artifacts. In this work, we investigate optimization-based reconstruction algorithms for VOI image reconstruction from CBCT data of dental patients containing severe truncation. In an attempt to further reduce imaging dose, we also investigate optimization-based image reconstruction from severely truncated data collected at substantially fewer projection views than are used in clinical dental applications. Results of our study show that appropriately designed optimization-based reconstruction can yield VOI images with reduced truncation artifacts, and that, when reconstructing from only one half, or even one quarter, of the clinical data, it can also produce VOI images comparable to clinical images.
Implementation of interior micro-CT on a carbon nanotube dynamic micro-CT scanner for lower radiation dose
Micro-CT is a high-resolution volumetric imaging tool that provides imaging evaluations for many preclinical applications. However, the relatively high cumulative radiation dose from micro-CT scans can detrimentally influence experimental outcomes or even damage specimens. Interior micro-computed tomography (micro-CT) produces exact tomographic images of an interior region-of-interest (ROI) embedded within an object from truncated projection data. It holds promise for many biomedical applications with significantly reduced radiation doses. Here, we present our first implementation of an interior micro-CT system using a carbon nanotube (CNT) field-emission microfocus x-ray source. The system has two modes, interior micro-CT and global micro-CT, realized with a detachable x-ray beam collimator at the source side. The interior mode has an effective field-of-view (FOV) of about 10 mm in diameter, while the FOV of the global mode is about 40 mm in diameter. We acquired CT data in these two modes from a mouse-sized phantom and compared the reconstructed image quality and the associated radiation exposure. Interior ROI reconstruction was achieved using our in-house reconstruction algorithm. Overall, interior micro-CT demonstrated image quality comparable to conventional global micro-CT. Radiation doses measured by an ion chamber show that interior micro-CT yielded a significant dose reduction (up to 83%).
Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm
In cone beam computed tomography (CBCT), the severity of cone beam artifacts increases as the cone angle increases. To reduce the cone beam artifacts, several modified FDK algorithms and compressed sensing based iterative algorithms have been proposed. In this paper, we used the two-pass algorithm and the Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce cone beam artifacts, and compared their performance using the structural similarity (SSIM) index. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by extreme-density (ED) objects; it therefore reproduces the cone beam artifacts (i.e., an error image) produced by the ED objects and then subtracts this error image from the original image. The GPBB algorithm is a compressed sensing based iterative algorithm which minimizes an energy function by calculating the gradient projection with the step size determined by the Barzilai-Borwein formulation, and it can therefore estimate the missing data responsible for the cone beam artifacts. To evaluate the performance of the two algorithms, we used test objects consisting of 7 ellipsoids separated along the z direction, and cone beam artifacts were generated using a 30-degree cone angle. Even though the FDK algorithm produced severe cone beam artifacts at this large cone angle, the two-pass algorithm reduced the cone beam artifacts, with small residual errors caused by inaccuracy of the ED objects. In contrast, the GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.
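The two-pass idea summarized above can be expressed compactly if generic reconstruction and forward-projection operators are assumed to be available; the callables below are placeholders, not a specific library API.

```python
# Conceptual sketch of two-pass cone beam artifact correction.
import numpy as np

def two_pass_correction(projections, reconstruct, forward_project, ed_threshold):
    """Reproduce the artifact caused by extreme-density (ED) objects and subtract it."""
    first_pass = reconstruct(projections)

    # Isolate the ED objects believed to cause the artifacts
    ed_only = np.where(first_pass > ed_threshold, first_pass, 0.0)

    # Error image: artifacts generated by the ED objects alone
    ed_reconstruction = reconstruct(forward_project(ed_only))
    error_image = ed_reconstruction - ed_only

    # Second pass: remove the reproduced artifact from the original reconstruction
    return first_pass - error_image
```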
A new multi-planar reconstruction method using voxel based beamforming for 3D ultrasound imaging
Hyunseok Ju, Jinbum Kang, Ilseob Song, et al.
For multi-planar reconstruction in 3D ultrasound imaging, direct and separable 3D scan conversion (SC) have been used to transform ultrasound data acquired in the 3D polar coordinate system into the 3D Cartesian coordinate system. These 3D SC methods can visualize an arbitrary plane from 3D ultrasound volume data. However, they suffer from blurring and blocking artifacts due to resampling during SC. In this paper, a new multi-planar reconstruction method based on voxel based beamforming (VBF) is proposed to reduce blurring and blocking artifacts. In VBF, unlike direct and separable 3D SC, each voxel on an arbitrary imaging plane is directly reconstructed by applying the focusing delay to the radio-frequency (RF) data, so that the blurring and blocking artifacts are removed. In a phantom study, the proposed VBF method showed higher contrast and less blurring than the separable and direct 3D SC methods. This result is consistent with the measured information entropy contrast (IEC) values, i.e., 98.9 vs. 42.0 vs. 47.9, respectively. In addition, the 3D SC and VBF methods were implemented on a high-end GPU using CUDA programming. The execution times for the VBF and the two 3D SC methods were 1656.1 ms, 1633.3 ms, and 1631.4 ms, respectively, and are I/O bound. These results indicate that the proposed VBF method can improve the image quality of 3D ultrasound B-mode imaging by removing the blurring and blocking artifacts associated with 3D scan conversion, and show the feasibility of pseudo-real-time operation.
Non-invasive thermal IR detection of breast tumor development in vivo
Jason R. Case, Madison A. Young, D. Dréau, et al.
Lumpectomy coupled with radiation therapy and/or chemotherapy comprises the treatment of breast cancer for many patients. We are developing an enhanced thermal IR imaging technique that can be used in real time to guide tissue excision during a lumpectomy. This novel enhanced thermal imaging method combines IR imaging (8-10 μm) with selective heating of blood (~0.5 °C) relative to surrounding water-rich tissue using LED sources at low powers. Post-acquisition processing of these images highlights temporal changes in temperature and is sensitive to the presence of vascular structures. In this study, fluorescent and enhanced thermal imaging modalities were used to estimate breast cancer tumor volumes as a function of time in 19 murine subjects over a 30-day study period. Tumor volumes calculated from fluorescent imaging follow an exponential growth curve for the first 22 days of the study; cell necrosis affected the tumor volume estimates based on the fluorescent images after Day 22. The tumor volumes estimated from enhanced thermal imaging show exponential growth over the entire study period. A strong correlation was found between tumor volumes estimated using fluorescent imaging and the enhanced IR images, indicating that enhanced thermal imaging is capable of monitoring tumor growth. Further, the enhanced IR images reveal a corona of bright emission along the edges of the tumor masses. This novel IR technique could be used to estimate tumor margins in real time during surgical procedures.
Slice profile distortions in single slice continuously moving table MRI
Saikat Sengupta, David S. Smith, E. Brian Welch
Continuously Moving Table (CMT) MRI is a rapid imaging technique that allows scanning of extended fields of view (FOVs), such as the whole body, in a single continuous scan.1 A highly efficient approach to CMT MRI is single slice imaging, where data are continuously acquired from a single axial slice at isocenter with concurrent movement of the patient table.2 However, the continuous motion of the scanner table and the supply of fresh magnetization into the excited slice can introduce deviations in the slice magnetization profile. The goal of this work is to investigate and quantify the distortion of the slice profile in CMT MRI. CMT MRI with a table speed of 20 mm/s was implemented on a 3 Tesla whole-body MRI scanner, with continuous radial data acquisition. Simulations were performed to characterize the transient and steady state slice profiles and magnetization effects. Simulated slice profiles were compared to slice profile measurements performed in the scanner. Both simulations and experiments revealed an asymmetric slice profile characterized by a skew towards the lagging edge of the moving table, in contrast to the nominal profiles associated with scanning a stationary object. The true excited slice width (FWHM) and the pitch of the acquisition were observed to depend on table velocity, with larger table speeds resulting in larger slice profile deviations from the nominal shape.
Investigation of optimal acquisition time of myocardial perfusion scintigraphy using cardiac focusing-collimator
Arisa Niwa, Shinji Abe, Naotoshi Fujita, et al.
Recently, myocardial perfusion SPECT imaging with a cardiac focusing collimator (CF) has been developed in the field of nuclear cardiology. Previously, we investigated the basic characteristics of CF using physical phantoms. This study aimed to determine the acquisition time for CF that yields SPECT images equivalent to those acquired by the conventional method in 201TlCl myocardial perfusion SPECT. A Siemens Symbia T6 was used with a torso phantom equipped with cardiac, pulmonary, and hepatic components. The left ventricular (LV) myocardium and liver were filled with 201TlCl solution. Each of the CF, the low-energy high-resolution collimator (LEHR), and the low-medium-energy general-purpose collimator (LMEGP) was mounted on the SPECT system. Data were acquired at various acquisition times, with the center of the phantom treated as the center of the heart for CF. Acquired data were reconstructed, and polar maps were created from the reconstructed images. The coefficient of variation (CV) was calculated from the mean counts and standard deviations determined on the polar maps. When CF was used, CV was lower at longer acquisition times. The CV calculated from the polar maps acquired using CF with a 2.83 min acquisition time was equivalent to the CV calculated from those acquired using LEHR over a 180° acquisition range with a 20 min acquisition time.
Modeling CZT/CdTe x-ray photon-counting detectors
Andrey Makeev, Miesher Rodrigues, Gin-Chung Wang, et al.
Software for modeling x-ray signals, as detected by a semiconductor radiation detector, has been developed. We model a generic signal generation/collection/processing sequence using Monte Carlo and finite-element analysis software. The proposed framework allows one to simulate the x-ray pulse-height spectrum and various triggering schemes, and can be used for detector optimization.
Statistical bias in material decomposition in low photon statistics region
We show that in material decomposition, statistical bias exists in the low-photon regime due to non-linearity, including but not limited to the log operation and polychromatic measurements. As new scan methods divide the total number of photons into an increasing number of measurements (e.g., energy bins, projection paths) and as developers seek to reduce radiation dose, the number of photons per measurement will decrease, and estimators should be robust against bias at low photon counts. We study bias as a function of total flux and spectral spread, which provides insight when parameters such as material thicknesses, number of energy bins, and number of projection views change. We find that the bias increases with lower photon counts, a wider spectrum, more energy bins, and more projection views. Our simulations, with ideal photon counting detectors, show biases of up to 2.4% in basis material images. We propose a bias correction method in projection space that uses a multidimensional lookup table. With the correction, the relative bias in CT images is within 0.5 ± 0.17%.
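A projection-space correction with a multidimensional lookup table, as proposed above, might look like the following sketch. The table grid, its contents, and the two-basis indexing are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: LUT-based bias correction of estimated basis-material line integrals.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def make_bias_corrector(grid_a1, grid_a2, bias_table_a1, bias_table_a2):
    """bias_table_a* hold pre-computed estimator bias on a (len(grid_a1), len(grid_a2)) grid."""
    interp1 = RegularGridInterpolator((grid_a1, grid_a2), bias_table_a1,
                                      bounds_error=False, fill_value=0.0)
    interp2 = RegularGridInterpolator((grid_a1, grid_a2), bias_table_a2,
                                      bounds_error=False, fill_value=0.0)

    def correct(a1_est, a2_est):
        # Subtract the interpolated bias from each estimated basis line integral
        pts = np.stack([a1_est.ravel(), a2_est.ravel()], axis=-1)
        a1_corr = a1_est - interp1(pts).reshape(a1_est.shape)
        a2_corr = a2_est - interp2(pts).reshape(a2_est.shape)
        return a1_corr, a2_corr

    return correct
```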
Reducing the formation of image artifacts during spectroscopic micro-CT acquisitions
Marcus Zuber, Thomas Koenig, Rubaiya Hussain, et al.
Spectroscopic micro-computed tomography using photon counting detectors is a technology that promises to deliver material-specific images in pre-clinical research. Inherent to such applications is the need for a high spatial resolution, which can only be achieved with small focal spot sizes in the micrometer range. This limits the achievable x-ray fluxes and implies long acquisitions, easily exceeding one hour, during which it is paramount to maintain a constant detector response. Given that photon-counting detectors are delicate systems, with each pixel hosting advanced analog and digital circuitry, this can be a challenging task. In this contribution, we illustrate our findings on how to reduce image artifacts in computed tomography reconstructions under these conditions, using a Medipix3RX detector featuring a cadmium telluride sensor. We find that maintaining a constant temperature is a prerequisite for guaranteeing energy threshold stability. More importantly, we identify varying sensor leakage currents as a significant source of artifact formation. We show that these leakage currents can render the corresponding images unusable if the ambient temperature fluctuates, as caused by air conditioning, for example. We conclude by demonstrating the necessity of adjustable leakage current compensation.
Investigation of a one-step spectral CT reconstruction algorithm for direct inversion into basis material images
Photon-counting detectors with pulse-height analysis have shown promise for improved spectral CT imaging. This study investigated a novel spectral CT reconstruction method that directly estimates basis-material images from the measured energy-bin data (i.e., 'one-step' reconstruction). The proposed algorithm can incorporate constraints to stabilize the reconstruction and potentially reduce noise. The algorithm minimizes the error between the measured energy-bin data and the data estimated from the reconstructed basis images. A total variation (TV) constraint was also investigated for additional noise reduction. The proposed one-step algorithm was applied to simulated data of an anthropomorphic phantom with heterogeneous tissue composition. Reconstructed water, bone, and gadolinium basis images were compared for the proposed one-step algorithm and the conventional 'two-step' method of decomposition followed by reconstruction. The unconstrained algorithm provided a 30% to 60% reduction in noise standard deviation compared to the two-step algorithm. The fTV = 0.8 constraint provided a small reduction in noise (~1%) compared to the unconstrained reconstruction. Images reconstructed with the fTV = 0.5 constraint demonstrated a 77% to 94% standard deviation reduction compared to the two-step reconstruction, however with increased blurring. There were no significant differences in the mean values reconstructed by the investigated algorithms. Overall, the proposed one-step spectral CT reconstruction algorithm provided three-material-decomposition basis images with reduced noise compared to the conventional two-step approach. When using a moderate TV constraint factor (fTV = 0.8), a 30%-60% reduction in noise standard deviation was achieved while preserving the edge profile for this simulated phantom.
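The data-fidelity term minimized by a 'one-step' algorithm can be sketched as below, with the basis images as the unknowns and the energy-bin data modeled directly from them. The forward projector, spectra, and least-squares form are placeholders for illustration; the TV constraint is omitted.

```python
# Hedged sketch of the one-step data model and fidelity term (TV constraint omitted).
import numpy as np

def estimated_bin_counts(basis_images, forward_project, spectra, mu_basis):
    """Model energy-bin counts from basis images.

    spectra:  list over bins of arrays S_b(E) (incident photons per energy sample)
    mu_basis: array (n_basis, n_energies) of basis-material attenuation coefficients
    """
    line_integrals = [forward_project(img) for img in basis_images]   # A_k
    bins = []
    for S_b in spectra:
        # Total attenuation exponent summed over basis materials, per energy sample
        atten = sum(np.multiply.outer(A_k, mu_basis[k])
                    for k, A_k in enumerate(line_integrals))
        bins.append(np.sum(S_b * np.exp(-atten), axis=-1))
    return bins

def data_error(measured_bins, modelled_bins):
    """Least-squares error between measured and modelled energy-bin data."""
    return sum(np.sum((m - e) ** 2) for m, e in zip(measured_bins, modelled_bins))
```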
A photon counting detector model based on increment matrices to simulate statistically correct detector signals
Sebastian Faby, Joscha Maier, David Simons M.D., et al.
We present a novel increment matrix concept to simulate the correlations in an energy-selective photon counting detector. Correlations between the energy bins of neighboring detector pixels are introduced by scattered and fluorescence photons, together with the broadening of the induced charge clouds as they travel towards the electrodes, leading to charge sharing. It is important to generate statistically correct detector signals for the different energy bins to be able to realistically assess the detector's performance in various tasks, e.g. material decomposition. Our increment matrix concept describes the counter increases in neighboring pixels at the single-event level. Advantages of our model are that far fewer random numbers are required than when simulating single photons, and that the increment matrices, together with their probabilities, need to be generated only once and can be stored for later use. The different occurring increment matrix sets and the corresponding probabilities are simulated using an analytic model of the photon-matter interactions, based on the photoelectric effect and Compton scattering, and of the charge cloud drift, featuring thermal diffusion and Coulomb expansion of the charge cloud. The results obtained with this model are evaluated in terms of the spectral response for different detector geometries and the resulting energy bin sensitivity. Comparisons to published measured data and a parameterized detector model show good qualitative and quantitative agreement. We also studied the resulting covariance of reconstructed energy bin images.
Photon-counting CT: modeling and compensating of spectral distortion effects
Jochen Cammin, Steffen Kappler, Thomas Weidinger, et al.
Spectral computed tomography (CT) with photon-counting detectors (PCDs) has the potential to substantially advance diagnostic CT imaging by reducing image noise and dose to the patient, by improving contrast and tissue specificity, and by enabling molecular and functional imaging. However, the current PCD technology is limited by two main factors: imperfect energy measurement (spectral response effects, SR) and count rate non-linearity (pulse pileup effects, PP, due to detector deadtimes) resulting in image artifacts and quantitative inaccuracies for material specification. These limitations can be lifted with image reconstruction algorithms that compensate for both SR and PP. A prerequisite for this approach is an accurate model of the count losses and spectral distortions in the PCD. In earlier work we developed a cascaded SR-PP model and evaluated it using a physical PCD. In this paper we show the robustness of our approach by modifying the cascaded SR-PP model for a faster PCD with smaller pixels and a different pulse shape. We compare paralyzable and non-paralyzable detector models. First, the SR-PP model is evaluated at low and high count rates using two sets of attenuators. Then, the accuracy of the compensation is evaluated by estimating the thicknesses of three basis functions.
On filtration for high-energy phase-contrast x-ray imaging
Christian Riess, Ashraf Mohamed, Waldo Hinshaw, et al.
Phase-sensitive x-ray imaging promises unprecedented soft-tissue contrast and resolution. However, several practical challenges have to be overcome when using the setup in a clinical environment. The system design that is currently closest to clinical use is the grating-based Talbot-Lau interferometer (GBI).1-3

The requirements for patient imaging are low patient dose, fast imaging time, and high image quality. For GBI, these requirements can be met most successfully with a narrow-energy-width, high-flux spectrum. Additionally, to penetrate a human-sized object, the design energy of the system has to be well above 40 keV. To our knowledge, little research has been done so far to investigate optimal GBI filtration at such high x-ray energies.

In this paper, we study different filtration strategies and their impact on high-energy GBI. Specifically, we compare copper filtration at low peak voltage with equal-absorption, equal-imaging time K-edge filtration of spectra with higher peak voltage under clinically realistic boundary conditions. We specifically focus on a design energy of 59 keV and investigate combinations of tube current, peak voltage, and filtration that lead to equal patient absorption. Theoretical considerations suggest that the K edge of tantalum might provide a transmission pocket at around 59 keV, yielding a well-shaped spectrum. Although one can observe a slight visibility benefit when using tungsten or tantalum filtration, experimental results indicate that visibility benefits most from a low x-ray tube peak voltage.
Single-step, quantitative x-ray differential phase contrast imaging using spectral detection in a coded aperture setup
In this abstract we describe the first non-interferometric x-ray phase contrast imaging (PCI) method that uses only a single measurement step to retrieve absorption, phase, and differential phase with quantitative accuracy. Our approach is based on utilizing spectral information from photon counting spectral detectors in conjunction with a coded aperture PCI setup to simplify the x-ray "phase problem" to a one-step method. Because the method is single-step, with no motion of any component for a given projection image, it has significant potential to overcome the barriers currently faced by PCI.
Practicable phase contrast techniques with large spot sources
X-ray phase contrast can offer improved contrast in soft tissue imaging at clinical energies. Generating phase contrast in a clinical setting without precisely aligned gratings and multiple exposures has traditionally required specialized sources capable of producing x-ray spots on the order of 10 μm in diameter, which necessarily entail lengthy exposures due to the low intensity produced. We demonstrate results from two systems capable of overcoming this limitation. In the first, a polycapillary optic is employed to focus a typical clinical source into a small secondary source of the size required for phase contrast imaging. In the second, a grid of relatively large pitch is used along with Fourier processing to generate a phase contrast image with a large-spot-size source.
Statistical estimation of the directional dependency of subject in visibility-contrast imaging with the x-ray Talbot-Lau interferometer
Conventional x-ray images are formed by absorption contrast due to attenuation of the x-ray intensity. In recent years, the phase-contrast method, in which image contrast is determined by the phase shift of x-rays transmitted through an object, has attracted attention. The phase-contrast method is excellent for visualizing soft tissue, which is difficult to visualize with conventional x-ray imaging. A Talbot-Lau interferometer using the phase-contrast method was developed by Konica Minolta, Inc. Three types of images can be obtained with the Talbot-Lau interferometer: the absorption image, the differential phase-contrast image, and the visibility-contrast image. The visibility-contrast image reflects the reduction of coherence caused by the object's structures; its well-known feature is contrast due to x-ray small-angle scattering. In addition, for the visibility-contrast image, the relationship between the signal intensity and the direction of the subject's structure has been analyzed. Because it uses a one-dimensional grating, the Talbot-Lau interferometer detects phase shifts only along the periodic direction of the grating. In this study, we focused on how the signal intensity is affected by the direction of the subject structure, and analyzed the edge signal of the subject. We imaged acrylic, glass, and aluminum cylinders with the Talbot-Lau interferometer, rotating them from 0 to 90 degrees with respect to the periodic direction of the grating, and measured their edge signals. Moreover, we statistically estimated the angular function from the edge signals of the cylinders and compared it with the method of our previous study. The results correspond with high accuracy, supporting the accuracy of our previous method.
The quantitative evaluation of the correlation between the magnification and the visibility-contrast value
The Talbot-Lau interferometer, which consists of a conventional x-ray tube, an x-ray detector, and three gratings arranged between them, is a new x-ray imaging system using the phase-contrast method for excellent visualization of soft tissue. It is therefore expected to be applied to soft-tissue imaging in the medical field, such as mammography. The visibility-contrast image, one of the reconstructed images obtained with the Talbot-Lau interferometer, is known to reflect the reduction of coherence caused by x-ray small-angle scattering and x-ray refraction due to the object's structures. Previously, we did not distinguish between these two phenomena when evaluating the visibility signal quantitatively. However, we consider that they should be distinguished for a quantitative evaluation. In this study, to evaluate how much the magnification affects the visibility signal, we investigated the variability of the visibility signal for object positions at heights from 0 cm to 50 cm above the diffraction grating, examining the scattering signal and the refraction signal separately. We measured the edge signal of a glass sphere to examine the scattering signal, and the internal signal of the glass sphere and several kinds of sheet to examine the refraction signal. We show the difference in the variability rate between the edge signal and the internal signal, and propose an estimation method that uses the magnification.
Complex dark-field contrast in grating-based x-ray phase contrast imaging
Yi Yang, Xiangyang Tang
Without assuming that the sub-pixel microstructures of an object to be imaged distribute in space randomly, we investigate the influence of the object’s microstructures on grating-based x-ray phase contrast imaging. Our theoretical analysis and 3D computer simulation study based on the paraxial Fresnel-Kirchhoff theory show that the existing dark-field contrast can be generalized into a complex dark-field contrast in a way such that its imaginary part quantifies the effect of the object’s sub-pixel microstructures on the phase of intensity oscillations. A method based on the phase-attenuation duality that holds for soft tissues to be imaged at high x-ray energies is proposed to retrieve the imaginary part of the complex dark-field contrast for imaging. In comparison to the existing dark-field contrast, the imaginary part of complex dark-field contrast exhibits significantly stronger selectivity on the shape of the object’s sub-pixel microstructures. Thus the x-ray imaging corresponding to the imaginary part of complex dark-field contrast can provide additional and complementary information to that corresponding to the attenuation contrast, phase contrast and the existing dark-field contrast.
Improving depth resolution in digital breast tomosynthesis by iterative image reconstruction
Erin G. Roth, David N. Kraemer, Emil Y. Sidky, et al.
Digital breast tomosynthesis (DBT) is currently enjoying tremendous growth in its application to screening for breast cancer. This is because it addresses a major weakness of mammographic projection imaging; namely, a cancer can be hidden by overlapping fibroglandular tissue structures or the same normal structures can mimic a malignant mass. DBT addresses these issues by acquiring few projections over a limited angle scanning arc that provides some depth resolution. As DBT is a relatively new device, there is potential to improve its performance significantly with improved image reconstruction algorithms. Previously, we reported a variation of adaptive steepest descent - projection onto convex sets (ASD-POCS) for DBT, which employed a finite differencing filter to enhance edges for improving visibility of tissue structures and to allow for volume-of-interest reconstruction. In the present work we present a singular value decomposition (SVD) analysis to demonstrate the gain in depth resolution for DBT afforded by use of the finite differencing filter.
Physical characterization of photon-counting tomosynthesis
Karl Berggren, Mats Lundqvist, Björn Cederström, et al.
Tomosynthesis is emerging as a next-generation technology in mammography. Combined with photon-counting detectors with the ability for energy discrimination, a novel modality is enabled: spectral tomosynthesis. Further advantages of photon-counting detectors in the context of tomosynthesis include elimination of electronic noise, efficient scatter rejection (in some geometries), and no lag. Fourier-based linear-systems analysis is a well-established method for optimizing image quality in two-dimensional x-ray systems. The method has been successfully adapted to three-dimensional imaging, including tomosynthesis, but several areas need further investigation. This study focuses on two such areas: 1) adaptation of the methodology to photon-counting detectors, and 2) violation of the shift-invariance and stationarity assumptions in non-cylindrical geometries. We have developed a Fourier-based framework to study the image quality in a photon-counting tomosynthesis system, assuming locally linear, stationary, and shift-invariant system response. The framework includes a cascaded-systems model to propagate the modulation-transfer function (MTF) and noise-power spectrum (NPS) through the system. The model was validated by measurements of the MTF and NPS. High degrees of non-shift-invariance and non-stationarity were observed, in particular for the depth resolution, as the angle of incidence relative to the reconstruction plane varied throughout the imaging volume. The largest effects on image quality at a given point in space were caused by interpolation from the inherent coordinate system of the x-rays to the coordinate system used for reconstruction. This study is part of our efforts to fully characterize the spectral tomosynthesis system; we intend to extend the model further to include the detective quantum efficiency, observer modelling, and spectral effects.
Metal artifact reduction in tomosynthesis imaging
Zhaoxia Zhang, Ming Yan, Kun Tao, et al.
The utility of digital tomosynthesis has been shown for many clinical scenarios, including post orthopedic surgery applications. However, two kinds of metal artifacts can influence diagnosis: undershooting and ripple. In this paper, we describe a novel metal artifact reduction (MAR) algorithm to reduce both of these artifacts within the filtered backprojection framework. First, metal areas that are prone to cause artifacts are identified in the raw projection images. These areas are filled with values similar to those in the local neighborhood. During the filtering step, the filled projection is free of undershooting due to the resulting smooth transition near the metal edge. Finally, the filled area is fused with the filtered raw projection data to recover the metal. Since the metal areas are recognized during the backprojection step, anatomy and metal can be distinguished, reducing ripple artifacts. Phantom and clinical experiments were designed to evaluate the algorithm quantitatively and qualitatively. Based on phantom images with and without metal implants, the artifact spread function (ASF) was used to quantify image quality in the ripple artifact area. The tail of the ASF with MAR decreases from in-plane to out-of-plane, implying good artifact reduction, while the ASF without MAR remains high over a wider range. An intensity plot was used to analyze the edge of the undershooting areas. The results illustrate that MAR reduces undershooting while preserving the edge and size of the metal. Clinical images evaluated by physicists and technologists agreed with these quantitative results, further demonstrating the algorithm's effectiveness.
Distance driven back projection image reconstruction in digital tomosynthesis
In this paper, distance driven (DD) back projection image reconstruction was investigated for digital tomosynthesis. Digital tomosynthesis is an imaging technique that produces three-dimensional information about the object at low radiation dose. This paper presents our new study of DD back projection for image reconstruction in digital tomosynthesis. Since DD considers that both the image pixel and the detector cell have finite width, a convolution operation is used to calculate the DD coefficients. The approximation errors of some other methods, such as the ray-driven (RD) method, can thereby be avoided. A computer simulation of DD combined with the Maximum Likelihood Expectation Maximization (MLEM) tomosynthesis reconstruction algorithm was studied. The sequence of projection images was simulated with 25 projections and a total view angle of 48 degrees. DD with MLEM reconstruction results are demonstrated. A line profile along the x direction was used to compare the DD and RD methods. Compared with RD, the computation time of DD with MLEM was shorter, since the main loop of DD runs over x-y plane intercepts rather than over image pixels or detector cells. In clinical applications, both accuracy and computation speed are necessary requirements; DD back projection may satisfy these conditions.
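The MLEM part of the study can be illustrated with a short sketch in which the distance-driven forward and back projectors are abstracted as callables, since computing their coefficients is the subject of the paper itself.

```python
# Minimal MLEM sketch; forward_project and back_project are placeholder operators
# (e.g., distance-driven projectors), not a specific library API.
import numpy as np

def mlem(projections, forward_project, back_project, n_iters, eps=1e-8):
    """Classic MLEM update: x <- x * A^T(y / (A x)) / (A^T 1)."""
    sensitivity = back_project(np.ones_like(projections))   # A^T 1
    x = np.ones_like(sensitivity)                            # uniform initial volume
    for _ in range(n_iters):
        ratio = projections / (forward_project(x) + eps)
        x = x * back_project(ratio) / (sensitivity + eps)
    return x
```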
Detection of lung nodules in chest digital tomosynthesis (CDT): effects of the different angular dose distribution
Chest digital tomosynthesis (CDT) is a recently introduced imaging modality that offers better detection of high- and small-contrast lung nodules than conventional X-ray radiography. In a CDT system, several projection views are acquired over a limited angular range. Acquiring an insufficient number of projections can degrade the reconstructed image quality, and this degradation is easily affected by acquisition parameters such as the angular dose distribution, the number of projection views, and the reconstruction algorithm. To investigate the imaging characteristics, we evaluated the impact of the angular dose distribution on image quality in simulation studies with the Geant4 Application for Tomographic Emission (GATE). We designed different angular dose distribution conditions. The results showed that the contrast-to-noise ratio (CNR) improves when a higher dose is delivered at the central projection views than at the peripheral views. Increasing the dose at the central views improved lung nodule detectability, although the peripheral regions slightly suffered from image noise due to the lower dose. The improvement in CNR obtained with the proposed acquisition technique suggests possible directions for further improving CDT systems for lung nodule detection with high-quality imaging capabilities.
Evaluation of effective dose with chest digital tomosynthesis system using Monte Carlo simulation
A chest digital tomosynthesis (CDT) system has recently been introduced and studied. It offers the potential for a substantial improvement over conventional chest radiography for lung nodule detection, and it reduces the radiation dose by using a limited angular range. The PC-based Monte Carlo program (PCXMC) simulation toolkit (STUK, Helsinki, Finland) is widely used to evaluate radiation dose in CDT systems. However, this toolkit has two significant limitations: it cannot describe a model for every individual patient, and it does not describe the X-ray beam spectrum accurately. In contrast, the Geant4 Application for Tomographic Emission (GATE) can model phantoms of various sizes for individual patients and a proper X-ray spectrum. However, few studies have evaluated effective dose in CDT systems with a Monte Carlo simulation toolkit using GATE.

The purpose of this study was to evaluate the effective dose to a virtual infant chest phantom in the posterior-anterior (PA) view of a CDT system using GATE simulation. We obtained the effective dose at different tube angles by applying the dose actor function in GATE, which is commonly used for medical radiation dosimetry. The results indicated that GATE simulation is useful for estimating the distribution of absorbed dose. Consequently, we obtained an acceptable distribution of effective dose at each projection. These results indicate that GATE simulation can be an alternative method for calculating effective dose in CDT applications.
Optimization of a coded aperture coherent scatter spectral imaging system for medical imaging
Coherent scatter X-ray imaging is a technique that provides spatially-resolved information about the molecular structure of the material under investigation, yielding material-specific contrast that can aid medical diagnosis and inform treatment. In this study, we demonstrate a coherent-scatter imaging approach based on the use of coded apertures (known as coded aperture coherent scatter spectral imaging1, 2) that enables fast, dose-efficient, high-resolution scatter imaging of biologically-relevant materials. Specifically, we discuss how to optimize a coded aperture coherent scatter imaging system for a particular set of objects and materials, describe and characterize our experimental system, and use the system to demonstrate automated material detection in biological tissue.
Concept and setup for intraoperative imaging of tumorous tissue via Attenuated Total Reflection spectroscopy with Quantum Cascade Lasers
Florian B. Geiger, Martin Koerdel, Anton Schick, et al.
A major challenge in tumor surgery is the differentiation between normal and malignant tissue. Since an incompletely resected tumor easily leads to recurrence, the gold standard is to remove malignant tissue with a sufficient safety margin and send it to pathology for examination with histopathological techniques (rapid section diagnosis). This approach, however, has several disadvantages: the removal of additional tissue (safety margin) means additional stress for the patient; the correct interpretation of proper tumor excision relies on the pathologist's experience; and the waiting time between resection and the pathological result can exceed 45 minutes. This last aspect implies unnecessary occupation of cost-intensive operating room staff as well as longer anesthesia for the patient. Various research groups state that hyperspectral imaging in the mid-infrared, especially in the so-called "fingerprint region", allows spatially resolved discrimination between normal and malignant tissue. All of these experiments, though, took place in a laboratory environment and were conducted on dried, ex vivo tissue at a microscopic scale. Our aim is therefore to develop a system with the following properties: intraoperatively and in vivo applicable, measurement time shorter than one minute, based on mid-infrared spectroscopy, providing both spectral and spatial information, and requiring no external fluorescence markers. Theoretical assessment of different concepts and experimental studies show that a setup based on a tunable Quantum Cascade Laser and Attenuated Total Reflection is feasible for in vivo tissue discrimination via imaging. This is confirmed by experiments with a first demonstrator.
Scatter-free breast imaging using a monochromator coupled to a pixellated spectroscopic detector
F. H. Green, M. C. Veale, M. D. Wilson, et al.
This project uses the combination of a spectroscopic detector and a monochromator to produce scatter-free images for use in mammography. Reducing scatter is vital in mammography, where typical structures have either low contrast or small dimensions. The usual method to reduce scatter is the anti-scatter grid, which has the drawback of absorbing a fraction of the primary beam as well as the scattered radiation; an increase in dose is then required to compensate. Compton-scattered X-rays have lower energy than the primary beam. When using a monochromatic beam and a spectroscopic detector, the scattered radiation therefore appears at lower energies than the primary beam in the detected spectrum. If the spectrum of the detected X-rays is available, the scattered component can be windowed out of the spectrum, essentially producing a scatter-free image. The monochromator used in this study is made from a Highly Orientated Pyrolytic Graphite (HOPG) crystal with a mosaic spread of 0.4°±0.1°. The detector is a pixellated spectroscopic detector made from a 2 cm x 2 cm x 0.1 cm CdTe crystal with a pixel pitch of 250 μm and an energy resolution of 0.8 keV at 59.5 keV. This work presents the characterisation of the monochromator and initial imaging data. The work shows a contrast increase of 20% with the removal of the low-energy Compton-scattered X-rays.
Comparison of two CDMAM generations with respect to dose sensitivity
Johann Hummel, Marcus Kaar, Marianne Floor, et al.
A contrast-detail phantom such as the CDMAM phantom (Artinis Medical Systems, Zetten, NL) is recommended by the 'European protocol for the quality control of the physical and technical aspects of mammography screening' for evaluating the image quality of digital mammography systems. In a recent paper the commonly used CDMAM 3.4 was evaluated for its dose sensitivity in comparison to other phantoms. The successor phantom (CDMAM 4.0) features different disc diameters and thicknesses, adapted to match more closely the image quality found in modern mammography systems. It is therefore natural to compare these two generations of phantoms with respect to a potential improvement. The tube current-time product was varied within a range of clinically used values (40-160 mAs). Image evaluation was performed using the automatic evaluation software provided by Artinis. The relative dose sensitivity was compared as a function of disc diameter. Additionally, the IQFinv parameter, which averages over the diameters, was computed to obtain a more global conclusion. We found that the dose dependence is considerably smoother with the CDMAM 4.0 phantom, and the IQFinv parameter also shows a more linear behaviour than with the CDMAM 3.4. As the automatic evaluation gives different results for the two phantoms, the conversion factors from automatic to human readouts have to be adapted accordingly.
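For readers unfamiliar with the IQFinv parameter mentioned above, the following Python sketch shows one common definition used in CDMAM analysis; the exact normalisation constant, units, and the example threshold values are assumptions for illustration and are not taken from this study.

```python
# Hedged sketch: one common form of the inverse image quality figure,
# IQF_inv = 100 / sum_i(D_i * T_i), where D_i is a disc diameter (mm) and
# T_i the threshold gold thickness (um) detected at that diameter. The exact
# convention depends on the CDMAM analysis protocol in use; the values below
# are hypothetical, not measurements from this study.

def iqf_inv(threshold_thickness_by_diameter: dict) -> float:
    """threshold_thickness_by_diameter: {diameter_mm: threshold_thickness_um}."""
    weighted_sum = sum(d * t for d, t in threshold_thickness_by_diameter.items())
    return 100.0 / weighted_sum

example = {0.1: 1.2, 0.25: 0.45, 0.5: 0.25, 1.0: 0.12, 2.0: 0.06}  # hypothetical
print(f"IQF_inv = {iqf_inv(example):.2f}")
```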
Dose and image quality measurements for contrast-enhanced dual energy mammography systems
J. M. Oduko, P. Homolka, V. Jones, et al.
The results of patient dose surveys of two contrast-enhanced dual energy mammography systems are presented, showing mean glandular doses for both the low and high energy components of the exposures. For one system the distribution of doses follows an unusual pattern, very different from that normally measured in patient dose surveys. For this system, the contribution of the high energy component of the exposure to the total is shown to be about 20% of that of the low energy component; it is about 33% for the other system, for which the distribution of doses is similar to previously published surveys. A phantom containing disks with a range of different iodine contents was used, together with tissue-equivalent materials, to investigate the properties of one dual energy system. The iodine signal-difference-to-noise ratio is suggested as a measure of image quality. It was found to remain practically constant as phantom thickness was varied, and increased only slowly (with a power relationship) as air kerma increased. Other measurements showed good reproducibility of the iodine signal difference, and that it was proportional to the iodine concentration in the phantom. The iodine signal difference was found to be practically the same for a wide range of phantom thicknesses and glandularities.
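To make the proposed figure of merit concrete, the Python sketch below computes an iodine signal-difference-to-noise ratio from region-of-interest statistics; the image, ROI positions, and recombination step are assumptions for illustration and do not reproduce the authors' measurement protocol.

```python
import numpy as np

# Hedged sketch of an iodine signal-difference-to-noise ratio (SDNR) from ROI
# statistics in a (recombined) dual-energy image. The synthetic image and ROI
# choices below are placeholders, not data from the study.

def sdnr(image, iodine_roi, background_roi):
    """SDNR = (mean_iodine - mean_background) / std_background.
    Each ROI is a (row_slice, col_slice) tuple."""
    signal_difference = image[iodine_roi].mean() - image[background_roi].mean()
    noise = image[background_roi].std(ddof=1)
    return signal_difference / noise

img = np.random.normal(100.0, 5.0, size=(256, 256))   # synthetic background
img[100:130, 100:130] += 20.0                          # synthetic iodine disk
print(sdnr(img,
           (slice(100, 130), slice(100, 130)),
           (slice(10, 60), slice(10, 60))))
```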
Method for inserting noise in digital mammography to simulate reduction in radiation dose
Lucas R. Borges, Helder C. R. de Oliveira, Polyana F. Nunes, et al.
The quality of clinical x-ray images is closely related to the radiation dose used in the imaging study. The general principle for selecting the radiation dose is ALARA (“as low as reasonably achievable”); the practical optimization, however, remains challenging. It is well known that reducing the radiation dose increases the quantum noise, which can compromise image quality. To conduct studies of dose reduction in mammography, it would be necessary to acquire repeated clinical images of the same patient at different dose levels; such a practice would, however, be unethical due to the radiation-related risks. One solution is to simulate the effects of dose reduction in clinical images. This work proposes a new method, based on the Anscombe transformation, which simulates dose reduction in digital mammography by inserting quantum noise into clinical mammograms acquired with the standard radiation dose. It is thus possible to simulate different radiation dose levels without exposing the patient to additional radiation. Results showed that the quality of simulated images generated with our method matches that of other methods found in the literature, with the novelty of using the Anscombe transformation to convert signal-independent Gaussian noise into signal-dependent quantum noise.
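The sketch below illustrates the general idea of Anscombe-domain dose-reduction simulation under simplifying assumptions (purely Poisson, offset-corrected data in photon counts); it is a minimal reconstruction of the concept, not the authors' full method, which also has to account for detector gain, offset, and electronic noise.

```python
import numpy as np

# Hedged sketch of Anscombe-based dose-reduction simulation. Assumes the input
# image is offset-corrected and expressed in quantum (photon) counts, i.e.
# purely Poisson-distributed; generalized-Anscombe handling of electronic
# noise is deliberately omitted.

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0

def simulate_reduced_dose(counts, dose_factor, rng=np.random.default_rng()):
    """Simulate an image at dose_factor (0 < dose_factor < 1) of the original dose.
    After scaling, the quantum noise in the Anscombe domain has approximate
    variance dose_factor; adding signal-independent Gaussian noise of variance
    (1 - dose_factor) restores unit variance, so the inverse transform yields
    approximately Poisson statistics at the reduced exposure."""
    scaled = dose_factor * counts
    stabilized = anscombe(scaled)
    stabilized += rng.normal(0.0, np.sqrt(1.0 - dose_factor), size=counts.shape)
    return inverse_anscombe(stabilized)

full_dose = np.random.poisson(2000.0, size=(512, 512)).astype(float)
half_dose = simulate_reduced_dose(full_dose, dose_factor=0.5)
```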
Relationship between radiation dose and reduced X-ray sensitivity surrounding breast region using CR stimulable phosphor plate for mammography
Computed radiography (CR) systems for digital mammography use a photostimulable phosphor plate (imaging plate; IP) as the sensor. In clinical mammography the breast is almost always exposed over the same region of the IP, and the surrounding regions, which receive the unattenuated (direct) x-ray beam, therefore suffer from reduced x-ray sensitivity. Consequently, a difference in x-ray sensitivity arises between the breast region and the unattenuated x-ray region. However, the radiation dose at which this loss of x-ray sensitivity occurs is not known. In this study, we imaged a breast phantom under fixed conditions and investigated the pixel-value differences between the breast region and the unattenuated x-ray region. We measured the entrance air kerma using 550 glass-dosimeter sensing elements, arranged in 22 x 25 lines, placed at the surface of the cassette containing the IP. To measure the x-ray sensitivity, breast-phantom images were acquired before the exposure trials and after 500, 1,000, 1,350, and 1,500 trials. Pixel values were measured at four points in the breast region and in the unattenuated x-ray region, and the ratio of these pixel values was compared with the cumulative exposure dose. The ratio was nearly constant up to 1,000 trials, but a significant reduction was observed after 1,350 trials. Furthermore, in the image obtained after the 1,500th trial, the outline of the breast phantom could be observed, supporting the conclusion that the x-ray sensitivity was lowered in the unattenuated x-ray region. The difference in pixel value between the breast region and the unattenuated x-ray region was observed beyond 1,000 exposures, at 100,000 mAs.
Physics of a novel magnetic resonance and electrical impedance combination for breast cancer diagnosis
Maria Kallergi, John J. Heine, Ernest Wollin
A new technique is proposed and experimentally validated for breast cancer detection and diagnosis. The technique combines magnetic resonance with electrical impedance measurements and has the potential to increase the specificity of magnetic resonance mammography (MRM), thereby reducing false positive biopsy rates. The new magnetic resonance electrical impedance mammography (MREIM) technique adds a time-varying electric field during a supplementary sequence to a standard MRM examination, with an apparatus that is “invisible” to the patient. The applied electric field produces a current that creates an additional magnetic field with a component aligned with the bore magnetic field, which can alter the native signal in areas of higher electrical conductivity. The justification for adding the electric field is that the electrical conductivity of cancerous breast tissue is approximately 3-40 times higher than that of normal breast tissue; the conductivity of malignant tissue therefore represents a known clinical disease biomarker. In a pilot study with custom-made phantoms and experimental protocols, it was demonstrated that MREIM can produce, as theoretically predicted, a detectable differential signal in areas of higher electrical conductivity (tumor surrogate regions); the evidence indicates that the differential signal is produced by the confluence of two different effects at full image resolution without gadolinium chelate contrast agent injection, without extraneous reconstruction techniques, and without cumbersome multi-positioned patient electrode configurations. This paper describes the theoretical model that predicts and explains the observed experimental results, which were also confirmed by simulation studies.
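As a simplified, textbook-level illustration of the mechanism described above (this is not the paper's full theoretical model), the conductivity-dependent signal change can be sketched as follows:

```latex
% Induced current density and the additional magnetic field it generates:
\mathbf{J} = \sigma \mathbf{E}, \qquad \nabla \times \Delta\mathbf{B} = \mu_0 \mathbf{J}.
% The component \Delta B_z parallel to the bore field B_0 shifts the local
% Larmor frequency and accumulates extra phase over a sequence interval \tau:
\Delta\omega = \gamma\, \Delta B_z, \qquad
\Delta\phi = \gamma \int_0^{\tau} \Delta B_z(t)\, \mathrm{d}t,
% so regions of higher conductivity \sigma (e.g. malignant tissue) acquire a
% measurably different signal than the surrounding tissue.
```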
Dual-energy (MV/kV) CT with probabilistic attenuation mapping for IGRT applications
Erik Pearson, Xiaochuan Pan, Charles Pelizzari
Imaging plays an important role in the delivery of external beam radiation therapy. It is used to confirm the setup of the patient and to ensure accurate targeting and delivery of the therapeutic radiation dose. Most modern linear accelerators come equipped with a flat panel detector opposite the MV source as well as an independent kV imaging system, typically mounted perpendicular to the MV beam. kV imaging provides superior soft tissue contrast and typically delivers a lower dose to the patient than MV imaging; however, it can suffer from artifacts caused by metallic objects such as implants and immobilization devices. In addition to being less prone to artifacts, MV imaging provides a direct measure of the attenuation of the MV beam, which is useful for computing the therapeutic dose distributions. Furthermore, either system requires a large angular coverage, which is slow on large linear accelerators. We present a method for reconstructing tomographic images from data acquired at multiple x-ray beam energies using a statistical model of inherent physical properties of the imaged object. This approach can produce image quality superior to traditional techniques in the case of limited measurement data (angular sampling range, sampling density or truncated projections) and/or conditions in which the lower energy image would typically suffer from corrupting artifacts such as the presence of metals in the object. Both simulation and real data results are shown.
Interventional C-arm tomosynthesis for vascular imaging: initial results
David A. Langan, Bernhard E. H. Claus, Omar Al Assad, et al.
As percutaneous endovascular procedures address more complex and broader disease states, there is an increasing need for intra-procedure 3D vascular imaging. In this paper, we investigate C-Arm 2-axis tomosynthesis (“Tomo”) as an alternative to C-Arm Cone Beam Computed Tomography (CBCT) for workflow situations in which the CBCT acquisition may be inconvenient or prohibited. We report on our experience in performing tomosynthesis acquisitions with a digital angiographic imaging system (GE Healthcare Innova 4100 Angiographic Imaging System, Milwaukee, WI). During a tomo acquisition the detector and tube orbit on planes above and below the table, respectively. The tomo orbit may be circular or elliptical, and the tomographic half-angle in our studies varied from approximately 16 to 28 degrees as a function of orbit period. The trajectory, geometric calibration, and gantry performance are presented. We give an overview of a multi-resolution iterative reconstruction employing compressed-sensing techniques to mitigate artifacts associated with incomplete-data reconstructions. In this work, we focus on the reconstruction of small high-contrast objects such as iodinated vasculature and interventional devices. We evaluate the overall performance of the acquisition and reconstruction through phantom acquisitions and a swine study. Both tomo and comparable CBCT acquisitions were performed during the swine study, thereby enabling the use of CBCT as a reference in the evaluation of tomo vascular imaging. We close with a discussion of potential clinical applications for tomo, reflecting on the imaging and workflow results achieved.
Usefulness of an energy-binned photon-counting x-ray detector for dental panoramic radiographs
Tatsumasa Fukui, Akitoshi Katsumata D.D.S., Koichi Ogawa, et al.
A newly developed dental panoramic radiography system is equipped with a photon-counting semiconductor detector. This photon-counting detector acquires the transmitted X-rays by dividing them into several energy bands. We developed a method to identify dental materials in the patient’s teeth by means of X-ray energy analysis of panoramic radiographs. We tested various dental materials including gold alloy, dental amalgam, dental cement, and titanium. The results of this study suggest that X-ray energy scattergram analysis could be used to identify a range of dental materials in a patient’s panoramic radiograph.