Aviation security x-ray detection challenges
Author(s):
T. Harvey
This paper reviews the background of and key drivers for X-ray screening in aviation security. Key considerations are highlighted, along with the impacts of image-based and signature-based approaches. The role of information theory is discussed, together with recent work that may influence the technical direction by posing the question: “what measurements, parameters and metrics should be considered in future system design?” A path forward should be grounded in information theory; however, screening machines will likely interface with human operators and be cost-driven, so solutions must ultimately weigh parameters beyond technical performance factors alone.
Detecting liquid threats with x-ray diffraction imaging (XDi) using a hybrid approach to navigate trade-offs between photon count statistics and spatial resolution
Author(s):
Sondre Skatter;
Sebastian Fritsch;
Jens-Peter Schlomka
The performance limits were explored for an X-ray Diffraction based explosives detection system for baggage scanning. This XDi system offers 4D imaging comprising three spatial dimensions, with voxel sizes on the order of ~(0.5 cm)³, and one spectral dimension for material discrimination. Because only a very small number of photons is observed for an individual voxel, material discrimination cannot work reliably at the voxel level. Therefore, an initial 3D reconstruction is performed, which allows the identification of objects of interest. By combining all the measured photons that scattered within an object, more reliable spectra are determined at the object level. As a case study we looked at two liquid materials, one threat and one innocuous, with very similar spectral characteristics but a 15% difference in electron density. Simulations showed that Poisson statistics alone reduce material discrimination performance to undesirable levels when photon counts drop to 250. When additional, uncontrolled variation sources are considered, the photon count plays a less dominant role in detection performance but still limits performance at photon counts of 500 and higher. Experimental data confirmed the presence of such non-Poisson variation sources in the XDi prototype system as well, which suggests that the present system can still be improved without necessarily increasing the photon flux, by better controlling and accounting for these variation sources. When the classification algorithm was allowed to use spectral differences in the experimental data, the discrimination between the two materials improved significantly, demonstrating the potential of X-ray diffraction for liquid materials as well.
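The Poisson-statistics effect described above can be illustrated with a toy Monte-Carlo sketch. This is not the authors' simulation: the midpoint-threshold classifier and trial count are assumptions, and only the 15% electron-density contrast is taken from the abstract.

```python
# Illustrative sketch (not the authors' code): how Poisson counting noise
# alone degrades two-class discrimination as the photon budget shrinks.
import numpy as np

rng = np.random.default_rng(0)

def error_rate(mean_counts, contrast=0.15, trials=100_000):
    """Monte-Carlo error of a midpoint-threshold classifier on Poisson counts."""
    mu_a = mean_counts                      # innocuous material
    mu_b = mean_counts * (1.0 + contrast)   # threat material, 15% denser
    thresh = 0.5 * (mu_a + mu_b)            # assumed midpoint decision rule
    a = rng.poisson(mu_a, trials)
    b = rng.poisson(mu_b, trials)
    return 0.5 * ((a >= thresh).mean() + (b < thresh).mean())

for n in (100, 250, 500, 1000):
    print(f"{n:5d} photons -> error rate {error_rate(n):.3f}")
```

With this toy model the error rate falls steeply as the photon count grows, mirroring the abstract's observation that counting statistics alone become limiting near 250 photons.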
Snapshot full-volume coded aperture x-ray diffraction tomography
Author(s):
Joel A. Greenberg;
David J. Brady
X-ray diffraction tomography (XRDT) is a well-established technique that makes it possible to identify the material composition of an object throughout its volume. We show that using coded apertures to structure the measured scatter signal gives rise to a family of imaging architectures that enables snapshot XRDT in up to four dimensions. We consider pencil, fan, and cone beam snapshot XRDT and show results from both experimental and simulation-based studies. We find that, while lower-dimensional systems typically result in higher imaging fidelity, higher-dimensional systems can perform adequately for a specific task at orders-of-magnitude faster scan times.
Absorption-phase duality in structured illumination transport of intensity (TIE) phase imaging
Author(s):
Yunhui Zhu
Robust phase retrieval is obtained by combining structured illumination with the Transport of Intensity Equation (SI-TIE). By imposing a proportional relationship between the attenuation coefficient and the refractive index (known as phase-absorption duality), we can reformulate the SI-TIE propagation equation to properly address both the transmission and diffraction signals using only a single-shot intensity measurement. The correlation between phase and attenuation fixes the low-frequency instability, resulting in robust phase imaging with enhanced sensitivity.
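For context, the conventional transport-of-intensity equation that SI-TIE builds on relates the transverse phase gradient to the axial intensity derivative (standard paraxial form; the duality-constrained SI-TIE reformulation itself is not reproduced here):

```latex
% Standard transport of intensity equation (TIE), paraxial regime.
% I(x,y): intensity, \phi(x,y): phase, k = 2\pi/\lambda: wavenumber,
% z: propagation axis, \nabla_\perp: transverse gradient.
\nabla_\perp \cdot \bigl( I \, \nabla_\perp \phi \bigr) = -k \, \frac{\partial I}{\partial z}
```

Under phase-absorption duality, the phase is (roughly) proportional to the log-attenuation, since both track the projected material density; this extra constraint is what removes the low-frequency null space that makes unconstrained TIE inversion unstable.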
Multi-view coded aperture coherent scatter tomography
Author(s):
Andrew D. Holmgren;
Ikenna Odinaka;
Joel A. Greenberg;
David J. Brady
We use coded apertures and multiple views to create a compressive coherent scatter computed tomography (CSCT) system. Compared with other CSCT systems, we reduce object dose and scan time. Previous work on coded aperture tomography resulted in a resolution anisotropy that caused poor or unusable momentum transfer resolution in certain cases. Complementary and multiple views resolve these resolution issues while, through added sources and detectors, still providing the ability to perform snapshot tomography.
Information-theoretic analysis of x-ray scatter and phase architectures for anomaly detection
Author(s):
David Coccarelli;
Qian Gong;
Razvan-Ionut Stoian;
Joel A. Greenberg;
Michael E. Gehm;
Yuzhang Lin;
Liang-Chih Huang;
Amit Ashok
Conventional performance analysis of detection systems confounds the effects of the system architecture (sources, detectors, system geometry, etc.) with the effects of the detection algorithm. Previously, we introduced an information-theoretic approach to this problem by formulating a performance metric, based on Cauchy-Schwarz mutual information, that is analogous to the channel capacity concept from communications engineering. In this work, we discuss the application of this metric to study novel screening systems based on x-ray scatter or phase. Our results show how effective use of this metric can impact design decisions for x-ray scatter and phase systems.
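As a deliberately tiny illustration of the metric's flavor, the sketch below computes the Cauchy-Schwarz mutual information of a discrete joint distribution, following the standard definition as the Cauchy-Schwarz divergence between the joint and the product of its marginals. The binary noisy "channel" is an invented stand-in for a measurement architecture, not the simulation machinery used in the paper.

```python
import numpy as np

def cs_mutual_information(pxy):
    """Cauchy-Schwarz mutual information of a discrete joint pmf:
    the Cauchy-Schwarz divergence between p(x,y) and p(x)p(y).
    Zero iff X and Y are independent."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    q = px * py                               # product of marginals
    cross = (pxy * q).sum()
    return -np.log(cross**2 / ((pxy**2).sum() * (q**2).sum()))

# Toy "measurement channel": object class X -> detector reading Y,
# with flip probability eps standing in for measurement noise.
def bsc_joint(eps):
    return np.array([[(1 - eps) / 2, eps / 2],
                     [eps / 2, (1 - eps) / 2]])

for eps in (0.5, 0.25, 0.05):
    print(f"eps={eps:.2f}  I_CS={cs_mutual_information(bsc_joint(eps)):.4f}")
```

At eps = 0.5 the reading carries no information about the class and the metric is zero; as the channel cleans up, the metric grows, which is the sense in which it ranks candidate architectures.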
Phase and coherent scatter imaging for improved discrimination of low-density materials
Author(s):
Jonathan C. Petruccelli;
Danielle Hayden;
Sean Starr-Baier;
Sajjad Tahir;
Mahboob Ur Rehman;
Laila Hassan;
C. A. MacDonald
Phase contrast and coherent scatter imaging have the potential to improve the detection of materials of interest in x-ray screening. While attenuation is dependent on atomic number, phase is highly dependent on electron density, and thus offers an additional discriminant. A major limitation of phase imaging has been the required spatial coherence of the x-ray illumination, which typically requires a small (10-50 μm) source or multiple images captured with precision gratings, both of which present challenges for high-throughput image acquisition. An alternative approach uses a single coarse mesh. This significantly relaxes the source spot size requirement, improving acquisition times, and allows near-real-time phase extraction using Fourier processing of the acquired images. Diffraction signatures provide a third approach, yielding another set of information to identify materials. Specific angles characteristic of target materials are selected through broad slot apertures for rapid throughput. Depth information can be extracted from stereoscopic imaging using multiple slots. A system capable of simultaneous phase, coherent scatter, and absorption imaging was constructed. Discrimination of materials on the basis of both phase and coherent scatter signatures is demonstrated.
CT dual-energy decomposition into x-ray signatures Rho-e and Z-e
Author(s):
Harry E. Martz;
Issac M. Seetho;
Kyle E. Champley;
Jerel A. Smith;
Stephen G. Azevedo
In a recent journal article [IEEE Trans. Nuc. Sci., 63(1), 341-350, 2016], we introduced a novel method that decomposes dual-energy X-ray CT (DECT) data into electron density (ρe) and a new effective-atomic-number called Ze in pursuit of system-independent characterization of materials. The Ze of a material, unlike the traditional Zeff, is defined relative to the actual X-ray absorption properties of the constituent atoms in the material, which are based on published X-ray cross sections. Our DECT method, called SIRZ (System-Independent ρe, Ze), uses a set of well-known reference materials and an understanding of the system spectral response to produce accurate and precise estimates of the X-ray-relevant basis variables (ρe, Ze) regardless of scanner or spectra in diagnostic energy ranges (30 to 200 keV). Potentially, SIRZ can account for and correct spectral changes in a scanner over time and, because the system spectral response is included in the technique, additional beam-hardening correction is not needed. Results show accuracy (<3%) and precision (<2%) values that are much better than prior methods on a wide range of spectra. In this paper, we will describe how to convert DECT system output into (ρe, Ze) features and we present our latest SIRZ results compared with ground truth for a set of materials.
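The decomposition step can be caricatured with an idealized monoenergetic two-basis model: given attenuation values at two effective energies and known basis-material attenuation coefficients, the basis loadings follow from a 2x2 linear solve. This is a hedged sketch only: SIRZ models the full system spectral response and reports (ρe, Ze) rather than these generic basis coefficients, and all numbers below are invented.

```python
# Much-simplified dual-energy decomposition sketch (not SIRZ itself).
import numpy as np

# Assumed mass-attenuation values of two basis materials at two
# effective energies (made-up numbers for the sketch, cm^2/g).
M = np.array([[0.35, 0.25],    # basis 1, basis 2 at low energy
              [0.20, 0.18]])   # basis 1, basis 2 at high energy

def decompose(mu_low, mu_high):
    """Recover basis-material coefficients from two attenuation values."""
    return np.linalg.solve(M, np.array([mu_low, mu_high]))

# Forward-project a known mixture, then invert it.
true_coeffs = np.array([1.2, 0.4])
mu_low, mu_high = M @ true_coeffs
print(decompose(mu_low, mu_high))   # recovers [1.2, 0.4]
```

In practice the measurements are polychromatic path integrals, which is why SIRZ folds the measured system spectral response into the estimation rather than assuming two clean effective energies as done here.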
High precision, medium flux rate CZT spectroscopy for coherent scatter imaging
Author(s):
Joel A. Greenberg;
Mehadi Hassan;
David J. Brady;
Kris Iniewski
CZT detectors are primary candidates for many next-generation X-ray imaging systems. These detectors are typically operated in either a high precision, low flux spectroscopy mode or a low precision, high flux photon counting mode. We demonstrate a new detector configuration that enables operation in a high precision, medium flux spectroscopy mode, which opens the potential for a variety of new applications in medical imaging, non-destructive testing and baggage scanning. In particular, we describe the requirements of a coded aperture coherent scattering X-ray system that can perform fast imaging with accurate material discrimination.
Information-theoretic analysis of x-ray photoabsorption based threat detection system for check-point
Author(s):
Yuzhang Lin;
Genevieve G. Allouche;
James Huang;
Amit Ashok;
Qian Gong;
David Coccarelli;
Razvan-Ionut Stoian;
Michael E. Gehm
In this work we present an information-theoretic framework for a systematic study of checkpoint x-ray systems using photoabsorption measurements. Conventional system performance analysis of threat detection systems confounds the effect of the system architecture choice with the performance of a threat detection algorithm. However, our system analysis approach enables a direct comparison of the fundamental performance limits of disparate hardware architectures, independent of the choice of a specific detection algorithm. We compare photoabsorptive measurements from different system architectures to understand the effect of system geometry (angular views) and spectral resolution on the fundamental limits of system performance.
High frame-rate real-time x-ray imaging of in situ high-velocity rifle bullets
Author(s):
Lawrence J. D'Aries;
Stuart R. Miller;
Rob Robertson;
Bipin Singh;
Vivek V. Nagarkar
High frame-rate imaging is a valuable tool for non-destructive evaluation (NDE) as well as for ballistic impact studies (terminal ballistics), in-flight projectile imaging, studies of exploding ordnance, and characterization of other high-speed phenomena. Imaging systems currently exist for these studies; however, none has the ability to perform in-barrel characterization (in-bore ballistics) to image the kinetics of the moving projectile before it exits the barrel.
The system uses an intensified high-speed CMOS camera coupled to a specially designed scintillator to serve as the X-ray detector. The X-ray source is a sequentially fired portable pulsed unit synchronized with the detector integration window; the system is able to acquire 3,600 frames per second (fps) at megapixel spatial resolution and up to 500,000 fps at reduced pixel resolution. This paper discusses our results imaging .30 caliber bullets traveling at ~1,000 m/s while still in the barrel. Information on bullet deformation, pitch, yaw, and integrity is the main goal of this experimentation. Planned future upgrades for imaging large-caliber projectiles will also be discussed.
Shape threat detection via adaptive computed tomography
Author(s):
Ahmad Masoudi;
Ratchaneekorn Thamvichai;
Mark A. Neifeld
X-ray Computed Tomography (CT) is used widely for screening purposes. Conventional x-ray threat detection systems employ image reconstruction and segmentation algorithms prior to making threat/no-threat decisions. We find that in many cases these pre-processing steps can degrade detection performance. Therefore, in this work we investigate methods that operate directly on the CT measurements. We analyze a fixed-gantry system containing 25 x-ray sources and 2200 photon counting detectors, and present a new method for improving threat detection performance. This new method is a so-called greedy adaptive algorithm, which at each time step uses information from previous measurements to design the next measurement. We utilize sequential hypothesis testing (SHT) in order to derive both the optimal "next measurement" and the stopping criterion to ensure a target probability of error Pe. We find that by selecting the next x-ray source according to such a greedy adaptive algorithm, we can reduce Pe by a factor of 42.4 relative to the conventional measurement sequence employing all 25 sources in turn.
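A minimal sketch of the sequential-hypothesis-testing machinery (not the authors' implementation) is shown below: Poisson log-likelihood ratios accumulate measurement by measurement until Wald's thresholds for a target error probability are crossed, and the "greedy" step is reduced to firing sources in order of per-shot Kullback-Leibler divergence. The five-source count models are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented expected counts at 5 hypothetical sources under each hypothesis.
mu_threat = np.array([ 8.0, 12.0,  5.0, 20.0,  9.0])
mu_benign = np.array([10.0, 10.0,  6.0, 14.0,  9.0])

def kl_poisson(mu1, mu0):
    """Expected per-shot log-likelihood ratio when mu1 is the truth."""
    return mu1 * np.log(mu1 / mu0) - mu1 + mu0

def sprt(truth_mu, p_err=1e-3, max_steps=200):
    """Wald SPRT with a greedy, KL-ranked source-firing order."""
    a = np.log(p_err / (1.0 - p_err))        # lower (benign) threshold
    b = np.log((1.0 - p_err) / p_err)        # upper (threat) threshold
    order = np.argsort(-kl_poisson(mu_threat, mu_benign))
    llr, step = 0.0, 0
    while a < llr < b and step < max_steps:
        s = order[step % len(order)]         # most informative sources first
        y = rng.poisson(truth_mu[s])
        # Poisson LLR; the log-factorial terms cancel between hypotheses.
        llr += y * np.log(mu_threat[s] / mu_benign[s]) - (mu_threat[s] - mu_benign[s])
        step += 1
    return llr >= b, step

decision, n_shots = sprt(mu_threat)
print(f"declared threat: {decision} after {n_shots} measurements")
```

The stopping thresholds encode the target Pe directly, which is the property the abstract exploits to trade measurement count against error probability.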
Performance analysis of model based iterative reconstruction with dictionary learning in transportation security CT
Author(s):
Eri Haneda;
Jiajia Luo;
Ali Can;
Sathish Ramani;
Lin Fu;
Bruno De Man
In this study, we implement and compare model based iterative reconstruction (MBIR) with dictionary learning (DL) against MBIR with pairwise pixel-difference regularization, in the context of transportation security. DL is a technique of sparse signal representation using an overcomplete dictionary that has provided promising results in image processing applications including denoising [1], as well as medical CT reconstruction [2]. It has been previously reported that DL produces promising results in terms of noise reduction and preservation of structural details, especially for low-dose and few-view CT acquisitions [2].
A distinguishing feature of transportation security CT is that scanned baggage may contain items with a wide range of material densities. While medical CT typically scans soft tissues, blood with and without contrast agents, and bones, luggage typically contains more high-density materials (e.g., metals and glass), which can produce severe distortions such as metal streaking artifacts. Important factors in security CT are image-quality requirements such as resolution, contrast, noise level, and CT number accuracy for target detection. While MBIR has shown exemplary performance in the trade-off between noise reduction and resolution preservation, we demonstrate that DL may further improve this trade-off. In this study, we used KSVD-based DL [3] combined with the MBIR cost-minimization framework and compared results to Filtered Back Projection (FBP) and MBIR with pairwise pixel-difference regularization. We performed a parameter analysis to show the image-quality impact of each parameter. We also investigated few-view CT acquisitions, where DL can show an additional advantage relative to pairwise pixel-difference regularization.
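The sparse-coding step that any DL regularizer repeatedly invokes can be sketched as follows. This is a generic illustration, not the paper's implementation: the overcomplete DCT dictionary, noise level, and sparsity target are all assumptions, and KSVD would additionally update the dictionary atoms themselves between sparse-coding passes.

```python
import numpy as np

def dct_dictionary(n, k):
    """Overcomplete 1-D DCT-style dictionary: k unit-norm atoms of length n."""
    D = np.cos(np.outer(np.arange(n) + 0.5, np.arange(k)) * np.pi / k)
    return D / np.linalg.norm(D, axis=0)

def omp(D, y, sparsity):
    """Orthogonal matching pursuit: greedy sparse approximation of y."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

rng = np.random.default_rng(2)
D = dct_dictionary(16, 32)
clean = 2.0 * D[:, 3] - 1.5 * D[:, 10]           # a truly 2-sparse "patch"
noisy = clean + 0.05 * rng.standard_normal(16)   # additive noise
support, coef = omp(D, noisy, sparsity=2)
denoised = D[:, support] @ coef                  # sparse reconstruction
print(support, np.round(coef, 3))
```

Inside MBIR, a term penalizing the distance between image patches and such sparse reconstructions serves as the regularizer, which is what couples the denoising strength of DL to the data-fitting term.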
Model-based reconstruction for x-ray diffraction imaging
Author(s):
Venkatesh Sridhar;
Sherman J. Kisner;
Sondre Skatter;
Charles A. Bouman
In this paper, we propose a novel 4D model-based iterative reconstruction (MBIR) algorithm for low-angle scatter X-ray Diffraction (XRD) that can substantially increase the SNR. Our forward model is based on a Poisson photon counting model that incorporates a spatial point-spread function, detector energy response and energy-dependent attenuation correction. Our prior model uses a Markov random field (MRF) together with a reduced spectral bases set determined using non-negative matrix factorization. We demonstrate the effectiveness of our method with real data sets.
2.5D dictionary learning based computed tomography reconstruction
Author(s):
Jiajia Luo;
Eri Haneda;
Ali Can;
Sathish Ramani;
Lin Fu;
Bruno De Man
A computationally efficient 2.5D dictionary learning (DL) algorithm is proposed and implemented in the model-based iterative reconstruction (MBIR) framework for low-dose CT reconstruction. MBIR is based on the minimization of a cost function containing data-fitting and regularization terms to control the trade-off between data fidelity and image noise. Due to the strong denoising performance of DL, it has previously been considered as a regularizer in MBIR, and both 2D and 3D DL implementations are possible. Compared to the 2D case, 3D DL keeps more spatial information and generates images with better quality, although it requires more computation. We propose a novel 2.5D DL scheme, which leverages the computational advantage of 2D DL while attempting to maintain reconstruction quality similar to 3D DL. We demonstrate the effectiveness of this new 2.5D DL scheme for MBIR in low-dose CT.
By applying the 2D DL method in three different orthogonal planes and calculating the sparse coefficients accordingly, much of the 3D spatial information can be preserved without incurring the computational penalty of the 3D DL method. For performance evaluation, we use baggage phantoms with different numbers of projection views. In order to quantitatively compare the performance of the different algorithms, we use PSNR, SSIM, and region-based standard deviation to measure the noise level, and use the edge response to calculate the resolution. Experimental results with full-view datasets show that the different DL-based algorithms have similar performance, with 2.5D DL having the best resolution. Results with sparse-view datasets show that 2.5D DL outperforms both 2D and 3D DL in terms of noise reduction. We also compare the computational costs; 2.5D DL shows a strong advantage over 3D DL in both the full-view and sparse-view cases.
Extraction and classification of 3D objects from volumetric CT data
Author(s):
Samuel M. Song;
Junghyun Kwon;
Austin Ely;
John Enyeart;
Chad Johnson;
Jongkyu Lee;
Namho Kim;
Douglas P. Boyd
We propose an Automatic Threat Detection (ATD) algorithm for an Explosive Detection System (EDS) using our multi-stage Segmentation and Carving (SC) step followed by a Support Vector Machine (SVM) classifier. The multi-stage SC step extracts all suspect 3-D objects. A feature vector is then constructed for each extracted object and classified by an SVM previously trained on a set of ground-truth threat and benign objects. The trained SVM classifier has been shown to be effective in classifying different types of threat materials.
The proposed ATD algorithm robustly deals with CT data that are prone to artifacts due to scatter, beam hardening as well as other systematic idiosyncrasies of the CT data. Furthermore, the proposed ATD algorithm is amenable for including newly emerging threat materials as well as for accommodating data from newly developing sensor technologies.
Efficacy of the proposed ATD algorithm with the SVM classifier is demonstrated by the Receiver Operating Characteristic (ROC) curve, which plots Probability of Detection (PD) as a function of Probability of False Alarm (PFA). Tests performed using CT data of passenger bags show excellent performance characteristics.
Tackling the x-ray cargo inspection challenge using machine learning
Author(s):
Nicolas Jaccard;
Thomas W. Rogers;
Edward J. Morton;
Lewis D. Griffin
The current infrastructure for non-intrusive inspection of cargo containers cannot accommodate exploding commerce volumes and increasingly stringent regulations. There is a pressing need to develop methods to automate parts of the inspection workflow, enabling expert operators to focus on a manageable number of high-risk images. To tackle this challenge, we developed a modular framework for automated X-ray cargo image inspection. Employing state-of-the-art machine learning approaches, including deep learning, we demonstrate high performance for empty container verification and specific threat detection. This work constitutes a significant step towards the partial automation of X-ray cargo image inspection.
CT reconstruction via denoising approximate message passing
Author(s):
Alessandro Perelli;
Michael A. Lexa;
Ali Can;
Mike E. Davies
In this paper, we adapt and apply a compressed sensing based reconstruction algorithm to the problem of computed tomography reconstruction for luggage inspection. Specifically, we propose a variant of the denoising generalized approximate message passing (D-GAMP) algorithm and compare its performance to the performance of traditional filtered back projection and to a penalized weighted least squares (PWLS) based reconstruction method. D-GAMP is an iterative algorithm that at each iteration estimates the conditional probability of the image given the measurements and employs a non-linear "denoising" function which implicitly imposes an image prior. Results on real baggage show that D-GAMP is well-suited to limited-view acquisitions.
Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications
Author(s):
Carl Bosch;
Soysal Degirmenci;
Jason Barlow;
Assaf Mesika;
David G. Politte;
Joseph A. O'Sullivan
X-ray computed tomography reconstruction for medical, security, and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10], and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9], and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from the accelerated and strictly convergent algorithms.
Rapid GPU-based simulation of x-ray transmission, scatter, and phase measurements for threat detection systems
Author(s):
Qian Gong;
David Coccarelli;
Razvan-Ionut Stoian;
Joel Greenberg;
Esteban Vera;
Michael Gehm
To support the statistical analysis of x-ray threat detection, we developed a very high-throughput x-ray modeling framework based upon GPU technologies and have created three different versions focusing on transmission, scatter, and phase. The simulation of transmission imaging is based on a deterministic photo-absorption approach. This initial transmission approach is then extended to include scatter effects that are computed via the Born approximation. For phase, we modify the transmission framework to propagate complex ray amplitudes rather than radiometric quantities. The highly-optimized NVIDIA OptiX API is used to implement the required ray-tracing in all frameworks, greatly speeding up code execution. In addition, we address volumetric modeling of objects via a hierarchical representation structure of triangle-mesh-based surface descriptions. We show that the x-ray transmission and phase images of complex 3D models can be simulated within seconds on a desktop computer, while scatter images take approximately 30-60 minutes as a result of the significantly greater computational complexity.
Domain and range decomposition methods for coded aperture x-ray coherent scatter imaging
Author(s):
Ikenna Odinaka;
Yan Kaganovsky;
Joseph A. O'Sullivan;
David G. Politte;
Andrew D. Holmgren;
Joel A. Greenberg;
Lawrence Carin;
David J. Brady
Coded aperture X-ray coherent scatter imaging is a novel modality for ascertaining the molecular structure of an object. Measurements from different spatial locations and spectral channels in the object are multiplexed through a radiopaque material (coded aperture) onto the detectors. Iterative algorithms such as penalized expectation maximization (EM) and fully separable spectrally-grouped edge-preserving reconstruction have been proposed to recover the spatially-dependent coherent scatter spectral image from the multiplexed measurements. Such image recovery methods fall into the category of domain decomposition methods, since they recover independent pieces of the image at a time. Ordered subsets has also been utilized in conjunction with penalized EM to accelerate its convergence; it is a range decomposition method because it uses parts of the measurements at a time to recover the image. In this paper, we analyze domain and range decomposition methods as they apply to coded aperture X-ray coherent scatter imaging using a spectrally-grouped edge-preserving regularizer, and discuss the implications of the increased availability of parallel computational architectures on the choice of decomposition method. We present results of applying the decomposition methods to experimental coded aperture X-ray coherent scatter measurements. Based on the results, an underlying observation is that updating different parts of the image or using different parts of the measurements in parallel decreases the rate of convergence, whereas using the parts sequentially can accelerate it.
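The range-decomposition idea can be made concrete with an ordered-subsets variant of the Poisson ML-EM update, in which each sub-iteration uses only a subset of the measurement rows. This toy uses a random nonnegative forward matrix rather than a coded-aperture scatter model, so it illustrates only the update structure.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.uniform(5.0, 50.0, size=(40, 10))   # toy nonnegative forward operator
x_true = rng.uniform(0.5, 2.0, size=10)
y = rng.poisson(A @ x_true).astype(float)   # Poisson-noised measurements

def os_em(A, y, n_subsets=4, n_iters=50):
    """Ordered-subsets ML-EM: each sub-iteration uses one row subset."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iters):
        for rows in subsets:                 # range decomposition step
            As, ys = A[rows], y[rows]
            x *= (As.T @ (ys / (As @ x))) / As.sum(axis=0)
    return x

x_hat = os_em(A, y)
print(np.round(x_hat, 2))
```

Because each subset update reuses the freshly updated image, the subsets act sequentially, which is the mechanism behind the convergence acceleration the abstract attributes to sequential (rather than parallel) use of measurement parts.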
Figures of merit for optimizing imaging systems on joint estimation/detection tasks
Author(s):
Eric Clarkson
Previously published work on joint estimation/detection tasks has focused on the area under the Estimation Receiver Operating Characteristic (EROC) curve as a figure of merit for these tasks in imaging. A brief discussion of this concept and the corresponding ideal observer is included here, but the main focus is on three new approaches for system optimization on these joint tasks. One of these approaches is a generalization of Shannon Task Specific Information (TSI) to this setting. The form of this TSI is used to show that a system optimized for the joint task will not in general be optimized for the detection task alone. Another figure of merit for these joint tasks is the Bayesian risk, where a cost is assigned to all detection outcomes and to the estimation errors, and then averaged over all sources of randomness in the object ensemble and the imaging system. The ideal observer in this setting, which minimizes the risk, is shown to be the same as the ideal EROC observer, which maximizes the area under the EROC curve. It is also shown that scaling the estimation cost function upwards, i.e., making the estimation task more important, degrades the performance of this ideal observer on the detection component of the joint task. Finally, we generalize these concepts to the idea of Estimation/Detection Information Tradeoff (EDIT) curves, which can be used to quantify the tradeoff between estimation performance and detection performance in system design.
Information optimal compressive x-ray threat detection
Author(s):
James Huang;
Amit Ashok
We present an information-theoretic approach to X-ray measurement design for threat detection in passenger bags. Unlike existing X-ray systems that rely on a large number of sequential tomographic projections for threat detection based on 3D reconstruction, our approach exploits statistical priors on the shape/material of items comprising the bag to optimize multiplexed measurements that can be used directly for threat detection without an intermediate 3D reconstruction. Simulation results show that the optimal multiplexed design achieves a higher probability of detection for a given false alarm rate and a lower probability of error for a range of exposure (photon) budgets, relative to non-multiplexed measurements. For example, a 99% detection probability is achieved by the optimal multiplexed design with 4x fewer measurements than the non-multiplexed design.
Estimation and detection information trade-off for x-ray system optimization
Author(s):
Johnathan B. Cushing;
Eric W. Clarkson;
Sagar Mandava;
Ali Bilgin
X-ray Computed Tomography (CT) systems perform complex imaging tasks involving both detection and estimation of system parameters; a baggage imaging system, for example, performs threat detection while also generating reconstructions. This creates a desire to optimize both the detection and estimation performance of a system, but most metrics focus on only one of these aspects. When making design choices there is a need for a concise metric that considers both detection and estimation information and provides the user with the collection of possible optimal outcomes.
In this paper a graphical analysis of the Estimation and Detection Information Trade-off (EDIT) will be explored. EDIT produces curves which allow a decision to be made for system optimization based on design constraints and the costs associated with estimation and detection. EDIT analyzes the system in the estimation-information and detection-information space, where the user is free to pick their own method of calculating these measures. The user of EDIT can choose any desired figure of merit for detection information and estimation information, and the EDIT curves will then provide the collection of optimal outcomes.
The paper first looks at two methods of creating EDIT curves. The curves can be calculated by considering a wide variety of systems and finding the optimal system by maximizing a figure of merit. EDIT can also be found as an upper bound on the information from a collection of systems. These two methods allow the user to choose the method of calculation which best fits the constraints of their actual system.
Robust x-ray based material identification using multi-energy sinogram decomposition
Author(s):
Yaoshen Yuan;
Brian Tracey;
Eric Miller
Show Abstract
There is growing interest in developing X-ray computed tomography (CT) imaging systems with an improved ability to discriminate material types, going beyond the attenuation imaging provided by most current systems. Dual-energy CT (DECT) systems can partially address this problem by estimating the Compton and photoelectric (PE) coefficients of the materials being imaged, but DECT is greatly degraded by the presence of metal or other materials with high attenuation. Here we explore the advantages of multi-energy CT (MECT) systems based on photon-counting detectors. The utility of MECT has been demonstrated in medical applications, where photon-counting detectors allow for the resolution of absorption K-edges. Our primary concern is aviation security applications, where K-edges are rare. We simulate phantoms with differing amounts of metal (high, medium, and low attenuation), both for switched-source DECT and for MECT systems, and include a realistic model of detector energy resolution. We extend the DECT sinogram decomposition method of Ying et al. to MECT, allowing estimation of separate Compton and photoelectric sinograms. We furthermore introduce a weighting, based on a quadratic approximation to the Poisson likelihood function, that deemphasizes energy bins with low signal. Simulation results show that the proposed approach succeeds in estimating material properties even in high-attenuation scenarios where the DECT method fails, improving the signal-to-noise ratio of reconstructions by over 20 dB for the high-attenuation phantom. Our work demonstrates the potential of using photon-counting detectors to stably recover material properties even when high attenuation is present, thus enabling the development of improved scanning systems.
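The per-ray decomposition idea can be sketched as a weighted least-squares fit of Compton and photoelectric basis functions to multi-bin log-attenuation data, with each bin weighted by its photon count so that starved bins contribute little. This is a hedged stand-in: the bin energies, counts, and line-integral values below are invented, and the paper's weighting derives from a quadratic approximation of the Poisson likelihood rather than the simple square-root weights used here.

```python
import numpy as np

E = np.linspace(30, 120, 8)          # assumed energy-bin centers (keV)

def klein_nishina(E, E0=511.0):
    """Energy dependence of the total Klein-Nishina cross section."""
    a = E / E0
    t1 = ((1 + a) / a**2) * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
    t2 = np.log(1 + 2 * a) / (2 * a)
    t3 = (1 + 3 * a) / (1 + 2 * a) ** 2
    return t1 + t2 - t3

# Two-column basis: Compton (Klein-Nishina) and photoelectric (~E^-3).
B = np.column_stack([klein_nishina(E), E ** -3.0])

def decompose_ray(log_atten, counts):
    """Weighted LS for (a_compton, a_photoelectric) along one ray."""
    w = np.sqrt(counts)              # deemphasize low-count bins
    coef, *_ = np.linalg.lstsq(w[:, None] * B, w * log_atten, rcond=None)
    return coef

a_true = np.array([2.0, 4.0e4])      # made-up Compton / PE line integrals
meas = B @ a_true                    # noiseless forward model
counts = np.full_like(E, 5000.0)
print(decompose_ray(meas, counts))   # noiseless case recovers a_true
```

With noisy bins, the count-based weights keep nearly empty bins (e.g., behind metal) from dominating the fit, which is the stabilizing effect the abstract describes.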
Spectral feature variations in x-ray diffraction imaging systems
Author(s):
Scott D. Wolter;
Joel A. Greenberg
Show Abstract
Materials with different atomic or molecular structures give rise to unique scatter spectra when measured by X-ray diffraction. The details of these spectra, though, can vary based on both intrinsic (e.g., degree of crystallinity or doping) and extrinsic (e.g., pressure or temperature) conditions. While this sensitivity is useful for detailed characterization of material properties, these dependencies make it difficult to perform more general classification tasks, such as explosives threat detection in aviation security. A number of challenges therefore currently exist for reliable substance detection, including the similarity in spectral features among some categories of materials combined with spectral feature variations arising from materials processing and environmental factors. These factors complicate the creation of a material dictionary and the implementation of conventional classification and detection algorithms. Herein, we report on two prominent factors that lead to variations in spectral features: crystalline texture and temperature. Spectral feature comparisons between material categories will be described for solid metallic sheet, aqueous liquid, polymer sheet, and metallic, organic, and inorganic powder specimens. While liquids are largely immune to texture effects, they are susceptible to temperature changes that can modify their density or produce phase changes. We will describe in situ temperature-dependent measurements of aqueous-based commercial goods over the temperature range of -20°C to 35°C.
Impact of detector geometry for compressive fan beam snapshot coherent scatter imaging
Author(s):
Mehadi Hassan;
Andrew Holmgren;
Joel A. Greenberg;
Ikenna Odinaka;
David Brady
Show Abstract
Previous realizations of coded-aperture X-ray diffraction tomography (XRDT) based on pencil beams image one line through an object per measurement, but require raster scanning the object in multiple dimensions. Fan beam approaches are able to image the spatial extent of the object while retaining the ability to perform material identification. Building on these approaches, we present a system concept and geometry that combines a fan beam with energy-sensitive, photon-counting detectors and a coded aperture to capture both spatial and spectral information about an object at each voxel. Using this system, we image slices via snapshot measurements for four different detector configurations and compare their results.
Partially observable Markov decision processes for risk-based screening
Author(s):
Alex Mrozack;
Xuejun Liao;
Sondre Skatter;
Lawrence Carin
Show Abstract
A long-term goal for checked baggage screening in airports has been to include passenger information, or at least a predetermined passenger risk level, in the screening process. One method for including that information is to treat the checked baggage screening process as a system of systems. This would allow an optimized policy builder, such as one trained using the methodology of partially observable Markov decision processes (POMDPs), to navigate the different sensors available for screening. In this paper we describe the steps necessary to tailor a POMDP for baggage screening, as well as results of simulations for specific screening scenarios.
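The core step inside any such POMDP policy is maintaining a belief over the hidden state as sensor observations arrive. The following is a hedged two-state sketch, not the paper's model: states "threat"/"clear", one binary alarm per sensor with known detection and false-alarm rates, and a Bayes update of the threat probability.

```python
# Hedged sketch (hypothetical parameters): Bayes belief update over a
# two-state space {threat, clear} given one sensor's alarm/no-alarm reading.
def update_belief(p_threat, alarm, p_alarm_given_threat, p_alarm_given_clear):
    """Posterior threat probability after observing one sensor reading."""
    if alarm:
        num = p_alarm_given_threat * p_threat
        den = num + p_alarm_given_clear * (1 - p_threat)
    else:
        num = (1 - p_alarm_given_threat) * p_threat
        den = num + (1 - p_alarm_given_clear) * (1 - p_threat)
    return num / den

# A bag with prior risk 1% alarms on a sensor with a 90% detection rate
# and a 5% false-alarm rate: the posterior rises to roughly 15%.
b = update_belief(0.01, True, 0.9, 0.05)
```

A POMDP policy would then map this belief (together with per-sensor costs) to the next action: clear the bag, flag it, or route it to another sensor; the passenger risk level enters naturally as the prior.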
Data sinogram sparse reconstruction based on steering kernel regression and filtering strategies
Author(s):
Miguel A. Marquez;
Edson Mojica;
Henry Arguello
Show Abstract
Computed tomography images play a role in many applications, medicine among them. Recently, compressed sensing-based acquisition strategies have been proposed in order to reduce the X-ray radiation dose. However, these methods lose critical information from the sinogram. In this paper, a method for reconstructing sparse sinogram measurements is proposed. The proposed approach takes advantage of the redundancy of similar patches in the sinogram and estimates a target pixel using a weighted average of its neighbors. Simulation results show that the proposed method obtained a gain of up to 2 dB with respect to an l1-minimization algorithm.
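The core idea, estimating a target pixel as a patch-similarity-weighted average of its neighbors, can be sketched as follows. This is a hedged, non-local-means-style illustration, not the authors' exact steering-kernel estimator; patch size, search window, and the bandwidth `h` are made-up parameters.

```python
import numpy as np

def estimate_pixel(sino, r, c, patch=1, search=3, h=10.0):
    """Estimate sino[r, c] from similar patches in a local search window.

    Each neighbor (i, j) is weighted by how similar the patch around it
    is to the patch around the target; the estimate is the weighted mean.
    Assumes the target pixel has been pre-filled (e.g., by interpolation)
    and that the search window stays inside the sinogram.
    """
    ref = sino[r - patch:r + patch + 1, c - patch:c + patch + 1]
    num = den = 0.0
    for i in range(r - search, r + search + 1):
        for j in range(c - search, c + search + 1):
            if (i, j) == (r, c):
                continue  # do not let the target vote for itself
            cand = sino[i - patch:i + patch + 1, j - patch:j + patch + 1]
            w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
            num += w * sino[i, j]
            den += w
    return num / den
```

A steering-kernel estimator refines this by shaping the weights along the local gradient direction of the sinogram rather than using an isotropic patch distance, which better preserves the sinusoidal traces that carry the projection data.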