The boundary of x-ray and electron tomography
Author(s):
Zachary H. Levine
Samples a few micrometers in total size offer a challenge to both x-ray and electron tomography. X-ray tomography originated in imaging the human body with millimeter resolution, but the resolution has been reduced by over 7 orders of magnitude by the use of synchrotron sources and Fresnel zone plates, leading to an achieved resolution of 20 nm in favorable cases. Further progress may require phase retrieval. Electron tomography originated with very thin samples (perhaps 100 nm thick), but recently samples over 1 micrometer thick have been studied with conventional instruments. The study of thicker samples requires understanding tomography in the multiple scattering regime.
Seismic image reconstruction using complex wavelets
Author(s):
Mark A. Miller;
Nick G. Kingsbury;
Richard W. Hobbs
Marine seismic imaging involves reconstructing subsurface reflectivity from scattered acoustic data, generally observed near the ocean surface. The procedure can be framed as a linearized inverse scattering problem and is often called least-squares migration (LSM). LSM has been shown to be effective in optimizing the reconstruction of subsurface reflectivity, particularly in cases of missing or undersampled data or uneven subsurface illumination.
In standard LSM, the reflectivity model parameters are usually defined as a grid of point scatterers over the area or volume to be migrated. We propose an approach to pre-stack LSM using the Dual Tree Complex Wavelet Transform (DT-CWT) as a basis for the reflectivity.
Wavelet bases have a reputation for decorrelating or diagonalizing a range of non-stationary signals. In LSM, diagonalization of the model space affords a more accurate but practical representation of prior information about the subsurface reflectivity model parameters. The DT-CWT is chosen for its key advantages compared to other wavelet transforms. These include shift invariance, directional selectivity, perfect reconstruction, limited redundancy and efficient computation.
A complex wavelet based LSM algorithm, derived in a Bayesian framework, is presented. Minimization of the least-squares cost function is performed in the wavelet domain rather than the standard reflectivity model domain.
Electrical resistance tomography for real-time mapping of the solid-liquid interface in tanks containing optically opaque fluids
Author(s):
Amar Madupu;
Anindra Mazumdar;
Jinsong Zhang;
David Roelant;
Rajiv Srivastava
The visualization of settled solid layers in vessels has many applications; of interest here is facilitating the efficient retrieval of high-level radioactive waste (HLW) from underground storage tanks at Department of Energy sites. Visualization of the solids interface beneath an opaque liquid cannot be accomplished by standard optical imaging methods, hence our interest in using Electrical Resistance Tomography (ERT). The ideal arrangement for 3-D ERT imaging inside tanks is a multiple-ring electrode system, which is complex and expensive. This research describes ERT imaging done with a single linear array as a benchmark study to ascertain the viability of imaging the interface. Experiments focused on systematic analysis of many ERT tomograms of two simple settled-solids layers (horizontal and inclined at 30°) formed from pulverized kaolin clay (10 μm diameter) and water. Visualization was done using commercial ERT software. Injection current and electrode orientation were the two system parameters varied and analyzed. The reproducibility, accuracy, and reliability of this ERT system will be presented.
Implementation and evaluation of the ultrasonic TOF tomography for the NDT of concrete structures
Author(s):
Junghyun Kwon;
Sang-Jin Choi;
Samuel Moon-Ho Song
In contrast to X-rays, ultrasound propagates along a curved path due to spatial variations in the refractive index of the medium. Thus, for ultrasonic TOF tomography, the propagation path of the ultrasound must be known to correctly reconstruct the slice image. In this paper, we propose a new path determination algorithm, which is essentially a numerical solution of the eikonal equation viewed as a boundary value problem. Due to the curved propagation path of ultrasound, the image reconstruction algorithm takes an algebraic approach, for instance ART or SART. Note that the image reconstruction step requires the propagation paths, and the paths can be determined only if the image is known. Thus, an iterative approach is taken to resolve this apparent dilemma. First, the slice image is reconstructed assuming straight propagation paths. Then the paths are computed from the most recently reconstructed image using our path determination algorithm and used to update the reconstructed image. The process of image reconstruction and path determination repeats until convergence. This is the approach taken in this paper, and it is tested using both simulated data and a real concrete structure scanned by a mechanical scanner.
Imaging of oscillatory behavior in event-related MEG studies
Author(s):
Dimitrios Pantazis;
Darren L. Weber;
Corby L. Dale;
Thomas E. Nichols;
Gregory V. Simpson;
Richard M. Leahy
Since event-related components in MEG (magnetoencephalography) studies are often buried in background brain activity and environmental and sensor noise, it is a standard noise-reduction technique to average over multiple stimulus-locked responses or “epochs”. However, this also removes event-related changes in oscillatory activity that are not phase-locked to the stimulus. To overcome this problem, we combine time-frequency analysis of individual epochs with cortically constrained imaging to produce dynamic images of brain activity on the cerebral cortex in multiple time-frequency bands. While the SNR in individual epochs is too low to see any but the strongest components, we average signal power across epochs to find event-related components on the cerebral cortex in each frequency band. To determine which of these components are statistically significant within an individual subject, we threshold the cortical images to control for false positives. This involves testing thousands of hypotheses (one per surface element and time-frequency band) for significant experimental effects. To control the number of false positives over all tests, we must therefore apply multiplicity adjustments by controlling the familywise error rate, i.e. the probability of one or more false positive detections across the entire cortex. Applying this test to each frequency band produces a set of cortical images showing significant event-related activity in each band of interest. We demonstrate this method in applications to high-density MEG studies of visual attention.
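As an illustrative aside (not taken from the paper), one common way to control the familywise error rate over many surface elements is a permutation test on the maximum statistic. The sketch below assumes baseline-corrected power values per epoch and a sign-flipping null; all names and parameters are hypothetical.

    import numpy as np

    def fwer_threshold(power_epochs, n_perm=1000, alpha=0.05, seed=0):
        # power_epochs: (n_epochs, n_elements) baseline-corrected power values (hypothetical input)
        rng = np.random.default_rng(seed)
        n_epochs = power_epochs.shape[0]

        def t_stat(data):
            return data.mean(0) / (data.std(0, ddof=1) / np.sqrt(n_epochs))

        observed = t_stat(power_epochs)
        max_null = np.empty(n_perm)
        for b in range(n_perm):
            signs = rng.choice([-1.0, 1.0], size=(n_epochs, 1))   # sign-flip each epoch under the null
            max_null[b] = t_stat(signs * power_epochs).max()      # maximum statistic over all elements
        threshold = np.quantile(max_null, 1.0 - alpha)            # controls P(any false positive) <= alpha
        return observed, threshold, observed > threshold

Thresholding every surface element in a band at this value keeps the probability of one or more false positives across the whole cortex at approximately alpha.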
Domain decomposition method for diffuse optical tomography
Author(s):
Kiwoon Kwon;
Il-young Son;
Birsen Yazici
Diffuse optical tomography is modelled as an optimization problem to find the absorption and scattering coefficients that minimize the error between the measured photon density function and the approximated one computed using the coefficients. The problem is composed of two steps: the forward solver, which computes the photon density function and its Jacobian (with respect to the coefficients), and the inverse solver, which updates the coefficients based on the photon density function and its Jacobian attained in the forward solver. The resulting problem is nonlinear and highly ill-posed; thus, it requires a large amount of computation to produce a high-quality image. As such, for real-time application, it is highly desirable to reduce the amount of computation needed. In this paper, a domain decomposition method is adopted to decrease the computational complexity of the problem. A two-level multiplicative overlapping domain decomposition method is used to compute the photon density function and its Jacobian in the inner loop, and is extended to compute the estimated changes in the coefficients in the outer loop. Local convergence of the two-level space decomposition for the outer loop is shown for the case when the variance of the coefficients is small.
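To give a flavor of multiplicative overlapping domain decomposition (this is not the authors' two-level DOT solver), the sketch below applies a one-level, two-subdomain Schwarz sweep to a 1-D diffusion-type equation; the coarse-grid correction of a two-level method is omitted and all parameters are illustrative.

    import numpy as np

    def schwarz_diffusion(q, D=1.0, mu=0.1, h=0.05, overlap=6, n_sweeps=50):
        # solves -D u'' + mu u = q with zero Dirichlet boundaries by alternating
        # direct solves on two overlapping subdomains (multiplicative Schwarz)
        n = q.size
        u = np.zeros(n)
        mid = n // 2

        def local_solve(lo, hi):
            m = hi - lo
            A = (np.diag(np.full(m, 2 * D / h**2 + mu))
                 + np.diag(np.full(m - 1, -D / h**2), 1)
                 + np.diag(np.full(m - 1, -D / h**2), -1))
            rhs = q[lo:hi].astype(float).copy()
            if lo > 0:                      # Dirichlet data from the current global iterate
                rhs[0] += D / h**2 * u[lo - 1]
            if hi < n:
                rhs[-1] += D / h**2 * u[hi]
            u[lo:hi] = np.linalg.solve(A, rhs)

        for _ in range(n_sweeps):           # multiplicative: subdomains are updated in sequence
            local_solve(0, mid + overlap)
            local_solve(mid - overlap, n)
        return u

    q = np.zeros(101); q[50] = 1.0          # point source in the middle of the domain
    print(schwarz_diffusion(q).max())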
Signal recovery from random projections
Author(s):
Emmanuel J. Candes;
Justin K. Romberg
Can we recover a signal f ∈ R^N from a small number of linear measurements? A series of recent papers developed a collection of results showing that it is surprisingly possible to reconstruct certain types of signals accurately from limited measurements. In a nutshell, suppose that f is compressible in the sense that it is well-approximated by a linear combination of M vectors taken from a known basis Ψ. Then, not knowing anything in advance about the signal, f can (very nearly) be recovered from only about M log N generic nonadaptive measurements. The recovery procedure is concrete and consists of solving a simple convex optimization program.
In this paper, we show that these ideas are of practical significance. Inspired by theoretical developments, we propose a series of practical recovery procedures and test them on a series of signals and images which are known to be well approximated in wavelet bases. We demonstrate that it is empirically possible to recover an object from about 3M-5M projections onto generically chosen vectors with an accuracy which is as good as that obtained by the ideal M-term wavelet approximation. We briefly discuss possible implications in the areas of data compression and medical imaging.
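As a hedged illustration of recovering a signal from random projections, the sketch below uses iterative soft thresholding for an l1-regularized least-squares program, applied to a signal that is sparse in the identity basis rather than a wavelet basis; it is not necessarily the authors' exact recovery procedure, and all sizes and parameters are arbitrary.

    import numpy as np

    def ista(A, y, lam=0.05, n_iter=500):
        # iterative soft thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1
        L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - y) / L               # gradient step on the quadratic term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return x

    rng = np.random.default_rng(0)
    N, M = 512, 20                                       # signal length and sparsity
    x_true = np.zeros(N)
    x_true[rng.choice(N, M, replace=False)] = rng.standard_normal(M)
    K = 5 * M                                            # roughly 5M random projections
    A = rng.standard_normal((K, N)) / np.sqrt(K)
    x_hat = ista(A, A @ x_true)
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))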
Exact 3D cone-beam reconstruction from two short-scans using a C-arm imaging system
Author(s):
Krishnakumar Ramamurthi;
Norbert Strobel;
Jerry L. Prince
In this paper we present a source path for the purpose of exact cone-beam reconstruction using a C-arm X-ray imaging system. The proposed path consists of two intersecting segments, each of which is a short-scan. Any C-arm capable of a short-scan sweep can thus be used to obtain data on our proposed source path as well, since it only requires an additional sweep on a tilted plane. This tilt can be achieved by either using the propeller axis of mobile C-arms, or the vertical axis of ceiling-mounted C-arms. While the individual segments are only capable of exact reconstruction in their mid-plane, we show that the combined path is capable of exact reconstruction within an entire volumetric region. In fact, we show that the largest sphere that can be captured in the field of view of the C-arm can be exactly reconstructed if the tilt between the planes is at least equal to the cone-angle of the system. For the purpose of cone-beam inversion we use a generalized cone-beam filtered backprojection algorithm (CB-FBP). The exactness of this method relies on the design of a set of redundancy weights, which we explicitly evaluate for the proposed dual short-scan source path.
Prewarping techniques in imaging: applications in nanotechnology and biotechnology
Author(s):
Amyn Poonawala;
Peyman Milanfar
In all imaging systems, the underlying process introduces undesirable distortions that cause the output signal to be a warped version of the input. When the input to such systems can be controlled, pre-warping techniques can be employed which consist of systematically modifying the input such that it cancels out (or compensates for) the process losses. In this paper, we focus on the mask (reticle) design problem for 'optical micro-lithography', a process similar to photographic printing used for transferring binary circuit patterns onto silicon wafers. We use a pixel-based mask representation and model the above process as a cascade of convolution (aerial image formation) and thresholding (high-contrast recording) operations. The pre-distorted mask is obtained by minimizing the norm of the difference between the 'desired' output image and the 'reproduced' output image. We employ the regularization framework to ensure that the resulting masks are close-to-binary as well as simple and easy to fabricate.
Finally, we provide insight into two additional applications of pre-warping techniques. First is 'e-beam lithography', used for fabricating nano-scale structures, and second is 'electronic visual prosthesis' which aims at providing limited vision to the blind by using a prosthetic retinally implanted chip capable of electrically stimulating the retinal neuron cells.
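A minimal sketch of the pixel-based prewarping idea for a convolution-plus-threshold model follows, assuming a Gaussian point-spread function and a smooth sigmoid in place of the hard threshold; the regularization terms described in the paper are omitted and all parameters are illustrative.

    import numpy as np
    from scipy.signal import fftconvolve

    def prewarp_mask(desired, sigma=2.0, a=25.0, t=0.5, step=2.0, n_iter=300):
        # gradient descent on ||desired - sigmoid(a*(h*m - t))||^2 for a pixel-based mask m
        r = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
        h = np.exp(-(r[:, None]**2 + r[None, :]**2) / (2 * sigma**2))
        h /= h.sum()                                      # assumed Gaussian point-spread function
        m = desired.astype(float).copy()                  # start from the target pattern itself
        for _ in range(n_iter):
            aerial = fftconvolve(m, h, mode="same")       # aerial image formation (convolution)
            out = 1.0 / (1.0 + np.exp(-a * (aerial - t))) # smooth threshold (resist response)
            err = out - desired
            grad = fftconvolve(err * a * out * (1.0 - out), h, mode="same")  # chain rule (h symmetric)
            m = np.clip(m - step * grad, 0.0, 1.0)        # keep mask transmission in [0, 1]
        return m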
Ray casting approach for boundary extraction and Fourier shape descriptor characterization
Author(s):
Joel Rosiene;
Xin Liu;
Celina Imielinska
There are many significant applications of Fourier Shape Descriptor characterization of boundaries of regions in images. Whenever it is desirable to compare two shapes independently of rotation, starting point, or magnification, Fourier Shape Descriptors (FSDs) have merit. FSDs have been proposed for the automatic assessment of packaging, for checking the alignment of objects in automation, for characterizing visual objects in video coding, and for comparing regions in biomedical images. This paper presents a technique to parameterize the boundary of the region of interest (ROI) that utilizes the casting of rays from the center of mass of the region of interest outward to points in the image that lie on the edge of the ROI. This is essentially another technique to obtain the R-S parametrization. At each step the process utilizes the sections of the boundary whose radii are a simple function of theta. The procedure then merges these simple boundary sections to create a periodic complex-valued function of the boundary parameterized by a parameter s that is not required to be a function of theta. Once the complex periodic sequence is obtained, the Fourier Transform is taken, resulting in the corresponding Fourier Shape Descriptors. Since the technique seeks the intersection of a known ray with the boundary (it is not boundary following), the worst-case behavior of the technique is easily calculated, making it suitable for real-time applications. The technique is robust to incomplete boundaries of objects, and can be readily extended to three-dimensional datasets (spherical harmonics). A simpler version of the technique is currently being used in the automatic selection of the axis of symmetry in Magnetic Resonance Images of the brain, and we will demonstrate the application of the technique on these types of datasets, although the technique has general application.
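A rough sketch of the ray-casting parameterization followed by Fourier descriptors is given below, assuming a star-shaped binary region so that a single intersection per ray suffices; the paper's merging of multiple boundary sections is not reproduced.

    import numpy as np

    def fourier_descriptors(mask, n_rays=256, n_coeffs=16):
        ys, xs = np.nonzero(mask)
        cy, cx = ys.mean(), xs.mean()                    # center of mass of the region
        r_max = np.hypot(*mask.shape)
        pts = []
        for theta in np.linspace(0.0, 2*np.pi, n_rays, endpoint=False):
            r_hit = 0.0
            for r in np.arange(0.0, r_max, 0.5):         # march outward along the ray
                y = int(round(cy + r*np.sin(theta))); x = int(round(cx + r*np.cos(theta)))
                if 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1] and mask[y, x]:
                    r_hit = r                            # last sample still inside the region
            pts.append((cx + r_hit*np.cos(theta)) + 1j*(cy + r_hit*np.sin(theta)))
        F = np.fft.fft(np.asarray(pts))[1:n_coeffs + 1]  # drop the DC (translation) term
        return np.abs(F) / np.abs(F[0])                  # scale-normalized magnitudes: rotation/start-point invariant

    yy, xx = np.mgrid[:128, :128]
    mask = ((yy - 64)/40.0)**2 + ((xx - 64)/25.0)**2 <= 1.0   # an elliptical test region
    print(fourier_descriptors(mask)[:5])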
Fast Huber-Markov edge-preserving image restoration
Author(s):
Ruimin Pan;
Stanley J. Reeves
In general, image restoration problems are ill posed and need to be regularized. For applications such as real-time video, fast restorations are also needed to keep up with the frame rate. Restoration based on 2D FFTs provides a fast implementation assuming a constant regularization term over the image. Unfortunately, this assumption creates significant ringing artifacts on edges as well as blurrier edges in the restored image. On the other hand, shift-variant regularization will reduce edge artifacts and provide better quality, but it destroys the structure that makes use of the 2D FFT possible, and thus no longer has the computational efficiency of the FFT. In this paper, we use a Bayesian approach, maximum a posteriori (MAP) estimation, to compute an estimate of the original image given the blurred image. To avoid the smoothing of edges, shift-variant regularization must be used. The Huber-Markov random field model is applied to preserve the discontinuities on edges. For fast minimization of the above model, a new algorithm involving the Sherman-Morrison matrix inversion lemma is proposed. This results in a restored image with good edge preservation and less computation. Experiments show restored images with sharper edges. Convergence is fast, and the computational speed can be improved considerably by breaking the image into subimages.
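For illustration only, here is a plain gradient-descent MAP restoration with a Huber penalty on pixel differences; a Gaussian blur stands in for the true PSF, and the paper's fast Sherman-Morrison-based minimization is not reproduced.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def huber_deriv(d, T):
        # derivative of the Huber penalty: quadratic near zero, linear beyond T
        return np.where(np.abs(d) <= T, d, T * np.sign(d))

    def map_restore_huber(y, blur_sigma=2.0, lam=0.1, T=0.05, step=0.5, n_iter=200):
        x = y.astype(float).copy()
        for _ in range(n_iter):
            # data term gradient: H^T (H x - y), with a Gaussian blur (self-adjoint) as H
            g = gaussian_filter(gaussian_filter(x, blur_sigma) - y, blur_sigma)
            # prior term gradient: Huber penalty on horizontal and vertical pixel differences
            for axis in (0, 1):
                p = huber_deriv(np.diff(x, axis=axis), T)
                gp = np.zeros_like(x)
                if axis == 0:
                    gp[1:, :] += p
                    gp[:-1, :] -= p
                else:
                    gp[:, 1:] += p
                    gp[:, :-1] -= p
                g += lam * gp
            x -= step * g
        return x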
An efficient multiresolution algorithm for compensating density-dependent media blurring
Author(s):
Suhail S. Saquib;
William T. Vetterling
The sharpness of a printed image may suffer due to the presence of material layers above and below the dye layers. These layers contribute to scattering and surface reflections that make the degradation in sharpness density-dependent. We present data that illustrate this effect, and model the phenomenon numerically. A digital non-linear sharpening filter is proposed to compensate for this density-dependent blurring. The support and shape of this filter are constrained to lie in a space spanned by a set of basis filters that can be computed efficiently. Burt and Adelson's Laplacian pyramid is used to develop an efficient scale-recursive algorithm in which the image is decomposed into the high-pass basis images in a fine-to-coarse scale sweep, and the sharpened image along with a local density image is subsequently synthesized by a coarse-to-fine scale sweep using these basis images. The local density image is employed, in combination with a scale-dependent gain function, to modulate the high-pass basis images in a space-varying fashion. A robust method is proposed for the estimation of the gain functions directly from measured data. Experimental results demonstrate that the proposed algorithm successfully compensates for media-related density-dependent blurring.
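A compact sketch of a Laplacian-pyramid sharpening loop with a density-dependent gain follows; the gain law here is an arbitrary stand-in for the gain functions estimated from data in the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def expand(img, shape):
        # upsample to 'shape' and smooth; the same operator is used when building and
        # collapsing the pyramid, so reconstruction is exact when all gains are 1
        up = zoom(img, (shape[0] / img.shape[0], shape[1] / img.shape[1]), order=1)
        return gaussian_filter(up, 1.0)

    def pyramid_sharpen(img, levels=3, max_gain=2.0):
        G = [img.astype(float)]
        for _ in range(levels):                               # fine-to-coarse analysis sweep
            G.append(gaussian_filter(G[-1], 1.0)[::2, ::2])
        L = [G[i] - expand(G[i + 1], G[i].shape) for i in range(levels)]  # high-pass basis images
        out = G[-1]
        for i in reversed(range(levels)):                     # coarse-to-fine synthesis sweep
            density = expand(out, G[i].shape)                 # local density proxy from the coarser scale
            d = (density - density.min()) / (density.max() - density.min() + 1e-8)
            gain = 1.0 + (max_gain - 1.0) * d                 # assumed: larger boost at higher density
            out = expand(out, G[i].shape) + gain * L[i]
        return out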
Local image registration: an adaptive filtering framework
Author(s):
Gulcin Caner;
Ahmet Murat Tekalp;
Gaurav Sharma;
Wendi Heinzelman
We present a novel local image registration method based on adaptive filtering techniques. The proposed method utilizes an adaptive filter to track smooth, locally varying changes in the motion field between the images. Image pixels are traversed following a scanning order established by Hilbert curves to preserve contiguity in the 2-D image plane. We have performed experiments using both simulated images and real images captured by a digital camera. The proposed adaptive filtering framework has been shown by experimental results to give superior performance compared to global 2-D parametric registration and the Lucas-Kanade optical flow technique when the image motion consists of mostly translational motion. The simulation experiments show that the proposed image registration technique can also handle small amounts of rotation, scale, and perspectivity in the motion field.
Multichannel image deblurring of raw color components
Author(s):
Mejdi Trimeche;
Dmitry Paliy;
Markku Vehvilainen;
Vladimir Katkovnic
This paper presents a novel multi-channel image restoration algorithm. The main idea is to develop practical approaches to reduce optical blur from noisy observations produced by the sensor of a camera phone. An iterative deconvolution is applied separately to each color channel directly on the raw data obtained from the camera sensor. We use a modified iterative Landweber algorithm combined with an adaptive denoising technique. The employed adaptive denoising is based on Local Polynomial Approximation (LPA) operating on data windows, which are selected by the rule of Intersection of Confidence Intervals (ICI). In order to avoid false coloring due to independent component filtering in RGB space, we have integrated a novel regularization mechanism that smoothly attenuates the high-pass filtering near saturated regions. Through simulations, it is shown that the proposed filtering is robust with respect to errors in point-spread function (PSF) and approximated noise models. Experimental results show that the proposed processing technique produces significant improvement in perceived image resolution.
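A minimal sketch of an iterative Landweber deconvolution with an interleaved denoising step is shown below, using a Gaussian PSF and a median filter as a crude stand-in for the LPA-ICI denoiser described in the paper; parameters are illustrative.

    import numpy as np
    from scipy.ndimage import gaussian_filter, median_filter

    def landweber_denoise(y, psf_sigma=1.5, beta=1.0, n_iter=30):
        x = y.astype(float).copy()
        for _ in range(n_iter):
            residual = y - gaussian_filter(x, psf_sigma)          # y - H x
            x = x + beta * gaussian_filter(residual, psf_sigma)   # x + beta * H^T r (H self-adjoint)
            x = median_filter(x, size=3)                          # crude surrogate for adaptive LPA-ICI denoising
            x = np.clip(x, 0.0, None)                             # keep intensities non-negative
        return x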
Morphological study of cortical surfaces with principal component analysis
Author(s):
Fijoy Vadakkumpadan;
Yunxia Tong;
Yinlong Sun
Studies in experimental neuroscience have found some evidence showing that the shapes of cortical surfaces of human brains might have certain connection with the neural functioning. This paper presents a morphological study of the cortical surfaces. The work consists of four major elements. First, we collect a sufficient number of 3D MRI datasets of brains that belong to different categories of people. Second, we extract the cortical surfaces from the 3D MRI datasets. Third, we apply statistical analysis to characterize the morphological features of the cortical surfaces. The last component is 3D visualization to illustrate the shapes and characteristics of cortical surfaces in an interactive environment.
Detection of mass tumors in mammograms using SVD subspace analysis
Author(s):
Eugene T. Lin;
Yuxin Liu;
Edward J. Delp III
In this paper, we propose a new region-based method for detecting mass tumors in digital mammograms. Our method uses principal component analysis (PCA) techniques to reduce the image data into a subspace with significantly reduced dimensionality using an optimal linear transformation. After the transformation, classification in the subspace is performed using a nearest neighbor classifier. We consider the detection of only mass abnormalities in this study. Microcalcifications, spiculated lesions, and other abnormalities are not considered. We implemented our method and achieved a 93% correct detection rate for mass abnormalities in our tests.
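For illustration, a bare-bones PCA-subspace projection followed by nearest-neighbor classification of image patches is sketched below; the inputs and parameter choices are hypothetical and this is not the authors' trained detector.

    import numpy as np

    def pca_nn_classify(train_patches, train_labels, test_patches, n_components=20):
        # train_patches/test_patches: stacks of equal-size ROI patches; train_labels: array of class labels
        X = train_patches.reshape(len(train_patches), -1).astype(float)
        mean = X.mean(axis=0)
        Xc = X - mean
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)      # principal directions of the training set
        basis = Vt[:n_components]                              # (n_components, n_pixels)
        train_feat = Xc @ basis.T
        test_feat = (test_patches.reshape(len(test_patches), -1) - mean) @ basis.T
        # 1-nearest-neighbour assignment in the reduced subspace
        d = np.linalg.norm(test_feat[:, None, :] - train_feat[None, :, :], axis=2)
        return np.asarray(train_labels)[np.argmin(d, axis=1)]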
Increasing the depth of focus in medical ultrasound B-scan
Author(s):
Yibin Zheng;
Seth D. Silverstein
Obtaining high-quality ultrasound images at high frame rates has great medical importance, especially in applications where tissue motion is significant (e.g., the beating heart). Dynamic focusing and dynamic apodization can improve image quality significantly, and they have been implemented on the receive beam in state-of-the-art medical ultrasound systems. However, implementing dynamic focusing and dynamic apodization on the transmit beam compromises frame rate. We present a novel transmit apodization scheme in which a continuum of focal points can be obtained in one transmission, and uniform sensitivity and a uniform point spread function can be achieved over a very large range without reducing frame rate. Preliminary simulations demonstrate the significant promise of the new technique.
Markov chain Monte Carlo method for tracking myocardial borders
Author(s):
Robert Janiczek;
N. Ray;
Scott Thomas Acton;
R. Jack Roy;
Brent A. French;
F. H. Epstein
Cardiac magnetic resonance studies have led to a greater understanding of the pathophysiology of ischemic heart disease. Manual segmentation of myocardial borders, a major task in the data analysis of these studies, is a tedious and time-consuming process subject to observer bias. Automated segmentation reduces the time needed to process studies and removes observer bias. We propose an automated segmentation algorithm that uses an active surface to capture the endo- and epicardial borders of the left ventricle in a mouse heart. The surface is initialized as an ellipsoid corresponding to the maximal gradient inverse coefficient of variation (GICOV) value. The GICOV is the mean divided by the normalized standard deviation of the image intensity gradient in the outward normal direction along the surface. The GICOV is maximal when the surface lies along strong, constant gradients. The surface is then evolved until it maximizes the GICOV value subject to shape constraints. The problem is formulated in a Bayesian framework and is implemented using a Markov Chain Monte Carlo technique.
Surface color perception as an inverse problem in biological vision
Author(s):
Laurence T. Maloney;
Huseyin Boyaci;
Katja Doerschner
The spectral power distribution (SPD) of the light reflected from a matte surface patch in a three-dimensional complex scene depends not only on the surface reflectance of the patch but also on the SPD of the light incident on the patch. When there are multiple light sources in the scene that differ in location, SPD, and spatial extent, the SPD of the incident light depends on the location and the orientation of the patch. Recently, we have examined how well observers can recover surface color in rendered, binocularly-viewed scenes with more than one light source. To recover intrinsic surface color, observers must solve an inverse problem, effectively estimating the light sources present in the scene and the light from each that reaches the surface patch. We will formulate the forward and inverse problems for surface color perception in three-dimensional scenes and present experimental evidence that human observers can solve such problems [1-3]. We will also discuss how human observers estimate the spatial distribution of light sources and their chromaticities from the scene itself.
[1] Boyaci, Doerschner, Maloney (2004), Journal of Vision, 4, 664-679.
[2] Doerschner, Boyaci, Maloney (2004), Journal of Vision, 4, 92-105.
[3] Boyaci, Doerschner, Maloney (2004), AIC’05, submitted.
Regularization model of human binocular vision
Author(s):
Zygmunt Pizlo;
Yunfeng Li;
Moses W Chan
Binocular reconstruction of a 3D shape is an ill-conditioned inverse problem: in the presence of visual and oculomotor noise, reconstructions based solely on visual data are very unstable. A question therefore arises about the nature of a priori constraints that would lead to accurate and stable solutions. Our previous work showed that planarity of contours, symmetry of an object, and minimum variance of angles are useful priors in binocular reconstruction of polyhedra. Specifically, our algorithm begins by producing a 3D reconstruction from one retinal image by applying priors. The second image (binocular disparity) is then used to correct the monocular reconstruction. In our current study, we performed psychophysical experiments to test the importance of these priors. The subjects were asked to recognize shapes of 3D polyhedra from unfamiliar views. Hidden edges of the polyhedra were removed. The recognition performance, measured by the detectability measure d′, was high when shapes satisfied regularity constraints, and was low otherwise. Furthermore, the binocular recognition performance was highly correlated with the monocular one. The main aspects of our model will be illustrated by a demo in which binocular disparity and monocular priors are put in conflict.
Simulating the effect of illumination using color transformations
Author(s):
Maya R. Gupta;
Stephen Upton;
Jayson Bowen
We investigate design and estimation issues in using the standard color management profile architecture for general custom image enhancement. Color management profiles are a flexible architecture for describing a mapping from an original colorspace to a new colorspace. We investigate the use of this same architecture for describing color enhancements that could be defined by a non-technical user using samples of the mapping, just as color management is based on samples of a mapping between an original colorspace and a new colorspace. As an example enhancement, we work with photos of the 24-patch Macbeth color chart under different illuminations, with the goal of defining transformations that would take, for example, a studio D65 image and reproduce it as though it had been taken during a particular sunset. The color management profile architecture includes a look-up table and interpolation. We concentrate on the estimation of the look-up-table points from a minimal number of color enhancement samples (comparing interpolative and extrapolative statistical learning techniques), and evaluate the feasibility of using the color management architecture for custom enhancement definitions.
Bayesian edge-preserving color image reconstruction from color filter array data
Author(s):
Manu Parmar;
Stanley J. Reeves;
Thomas S. Denney Jr.
Digital still cameras typically use a single optical sensor overlaid with RGB color filters to acquire a scene. Only one of the three primary colors is observed at each pixel and the full color image must be reconstructed (demosaicked) from available data. We consider the problem of demosaicking for images sampled in the commonly used Bayer pattern.
The full color image is obtained from the sampled data as a MAP estimate. To exploit the greater sampling rate in the green channel in defining the presence of edges in the blue and red channels, a Gaussian MRF model that considers the presence of edges in all three color channels is used to define a prior. Pixel values and edge estimates are computed iteratively using an algorithm based on Besag's iterated conditional modes (ICM) algorithm. The reconstruction algorithm iterates alternately to perform edge detection and spatial smoothing. The proposed algorithm is applied to a variety of test images and its performance is quantified by using the CIELAB delta E measure.
A real-time multiresolution algorithm for correcting distortions produced by thermal printers
Author(s):
Suhail S. Saquib;
William T. Vetterling
As printing proceeds in a thermal printer, heat from previously printed lines of image data accumulates in the print head and alters the thermal state of the heating elements. This fluctuating state of the heating elements manifests itself as a distortion in the printed image. We have modeled the heat diffusion within the thermal printer and the density response of the receiver medium to derive a computationally efficient inverse thermal printer model. In this model, the heat diffusion problem for the moving receiver is simplified by showing that it is equivalent to a stationary medium with lower conductivity. The thermal print head is modeled as having a finite number of discrete layers with differing time constants. The layer temperature updates can be decoupled and are time recursive if expressed in relative rather than absolute temperatures, and this decoupling allows the layers to be updated at multiple spatial and temporal resolutions. The inverse printer model then reduces to an elegant algorithm that comprises three interleaved recursions; namely, absolute temperature propagation from coarse-to-fine scale, energy propagation from fine-to-coarse scale and relative temperature update in time. Experimental results demonstrate that the proposed algorithm successfully corrects the distortion produced by thermal printers.
Multiresolution order-statistic CFAR techniques for radar target detection
Author(s):
Michael F. Rimbert;
Mark R. Bell
Order-Statistic Constant False-Alarm Rate (OS-CFAR) processing provides an adaptive threshold to distinguish targets from clutter returns in radar detection. In traditional OS-CFAR, ordered statistics from a fixed-size reference window surrounding the cell under test (CUT) provide an estimate of the mean clutter power. We investigate adapting the reference window size as a function of the observed data in order to obtain robust detection performance in nonhomogeneous clutter environments. Goodness-of-fit tests are used to select the adaptive reference window size. Unlike traditional OS-CFAR, computationally efficient multiscale OS-CFAR based on this approach must be modified to include the CUT in the reference window. The effects of CUT inclusion are investigated. Preliminary results suggest that CUT-inclusive OS-CFAR with adaptive window size performs well in nonhomogeneous clutter environments of varying size. These results point to the feasibility of computationally efficient multiscale OS-CFAR.
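As a point of reference, here is a sketch of classical fixed-window OS-CFAR in one dimension; the adaptive-window, CUT-inclusive variant studied in the paper is not reproduced, and all parameters are illustrative.

    import numpy as np

    def os_cfar(x, window=16, guard=2, k=12, scale=6.0):
        # the k-th order statistic of the reference cells estimates the clutter level;
        # the cell under test is declared a target if it exceeds scale * estimate
        n = x.size
        detections = np.zeros(n, dtype=bool)
        half = window // 2
        for i in range(half + guard, n - half - guard):
            ref = np.concatenate([x[i - half - guard:i - guard],      # leading reference cells
                                  x[i + guard + 1:i + guard + 1 + half]])  # trailing reference cells
            clutter = np.sort(ref)[k - 1]            # k-th order statistic of the reference window
            detections[i] = x[i] > scale * clutter
        return detections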
Parametric reconstruction of kinetic PET data with plasma function estimation
Author(s):
Mustafa E. Kamasak;
Charles A. Bouman;
Evan D. Morris;
Ken D. Sauer
It is often necessary to analyze the time response of a tracer. A common way of analyzing the tracer time response is to use a compartment model and estimate the model parameters. The model parameters are generally physiologically meaningful and called "kinetic parameters". In this paper, we simultaneously estimate both the kinetic parameters at each voxel and the model-based plasma input function directly from the sinogram data. Although the plasma model parameters are not our primary interest, they are required for accurate reconstruction of kinetic parameters. The plasma model parameters are initialized with an image domain method to avoid local minima, and multiresolution optimization is used to perform the required reconstruction. Good initial guesses for the plasma parameters are required for the algorithm to converge to the correct answer. Therefore, we devised a preprocessing step involving clustering of the emission images by temporal characteristics to find a reasonable plasma curve that was consistent with the kinetics of the multiple tissue types. We compare the root mean squared error (RMSE) of the kinetic parameter estimates with the measured (true) plasma input function and with the estimated plasma input function.
Tests using a realistic rat head phantom and a real plasma input function show that we can simultaneously estimate the kinetic parameters of the two-tissue compartment model and plasma input function. The RMSE of the kinetic parameters increased for some parameters and remained the same or decreased for other parameters.
Motion-compensated fully 4D reconstruction of gated cardiac sequences
Author(s):
Erwan Gravier;
Yongyi Yang
In this paper we investigate the benefits of a spatio-temporal approach for reconstruction of cardiac image sequences. We introduce a temporal prior based on motion compensation to enforce temporal correlations along the curved trajectories that follow the cardiac motion. The image frames in a sequence are reconstructed simultaneously through maximum a posteriori (MAP) estimation. We evaluated the performance of our algorithm using the 4D gated mathematical cardiac-torso (gMCAT) D1.01 phantom to simulate gated SPECT perfusion imaging with Tc-99m-sestamibi. Our experimental results show that the proposed approach can significantly improve the accuracy of reconstructed images without causing the cross-frame blurring that may arise from the cardiac motion.
Recursive estimation methods for tracking of localized perturbations in absorption using diffuse optical tomography
Author(s):
Amine Hamdi;
Eric L. Miller;
David Boas;
Maria Angela Franceschini;
Misha Elena Kilmer
Analysis of the quasi-sinusoidal temporal signals measured by a Diffuse Optical Tomography (DOT) instrument can be used to determine both quantitative and qualitative characteristics of functional brain activity arising from visual and auditory stimulation, motor activities, and cognitive task performance. Once the activated regions in the brain are resolved using DOT, the temporal resolution of this modality is such that one can track the spatial evolution (both the location and morphology) of these regions over time. In this paper, we explore a state-estimation approach using Extended Kalman Filters to track the dynamics of functionally activated brain regions. We develop a model to determine the size, shape, location, and contrast of an area of activity as a function of time. Under the assumption that previously acquired MRI data has provided us with a segmentation of the brain, we restrict the location of the area of functional activity to the thin cortical sheet. To describe the geometry of the region, we employ a mathematical model in which the projection of the area of activity onto the plane of the sensors is assumed to be describable by a low-dimensional algebraic curve. In this study, we consider in detail the case where the perturbations in optical absorption parameters arising due to activation are confined to independent regions in the cortex layer. We estimate the geometric parameters (axis lengths, rotation angle, center positions) defining the best-fit ellipse for the activation area's projection onto the source-detector plane. At a single point in time, an adjoint-field-based nonlinear inversion routine is used to extract the activated area's information. Examples of the utility of the method will be shown using synthetic data.
Computer simulation of light scattering and propagation in the imaging process of biological confocal microscopy
Author(s):
Yinlong Sun;
Zhen Huang
Confocal optical microscopy is one of the most significant advances in optical microscopy in the 20th century and has become a widely accepted tool for biological imaging. This technique can obtain 3D volume information through non-invasive optical sectioning and scanning of 2D confocal planes inside the specimen. In this paper, we conduct a physically based computer simulation of light scattering and propagation in the biological specimen during the imaging process. We implement an efficient Monte Carlo technique to simulate light scattering by biological particles, trace the entire light propagation within the scattering medium, produce fluorescence at the fluorescent dyes, and record light intensity collected at the detector. This study will not only help to verify analytic modeling of light scattering in biological media, but also be useful to improve the design of optical imaging systems.
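A toy Monte Carlo photon-transport sketch for a homogeneous scattering slab with isotropic scattering is given below for orientation; real confocal-microscopy simulations would add an anisotropic phase function, fluorescence, and the confocal detection geometry, and all parameters here are arbitrary.

    import numpy as np

    def mc_slab(n_photons=20000, mu_s=10.0, mu_a=0.5, thickness=0.2, seed=0):
        rng = np.random.default_rng(seed)
        mu_t = mu_s + mu_a
        albedo = mu_s / mu_t
        transmitted = reflected = 0.0
        for _ in range(n_photons):
            z, uz, w = 0.0, 1.0, 1.0                   # depth, direction cosine, photon weight
            while True:
                z += uz * rng.exponential(1.0 / mu_t)  # free path to the next interaction
                if z < 0.0:
                    reflected += w
                    break
                if z > thickness:
                    transmitted += w
                    break
                w *= albedo                            # deposit the absorbed fraction of the weight
                if w < 1e-4:                           # terminate very weak photons
                    break
                uz = rng.uniform(-1.0, 1.0)            # isotropic scattering: new polar direction cosine
        T, R = transmitted / n_photons, reflected / n_photons
        return T, R, 1.0 - T - R                       # transmittance, reflectance, absorbed fraction

    print(mc_slab())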
Maximum likelihood 3D reconstruction of multiple viruses from mixtures of cryo electron microscope images
Author(s):
Junghoon Lee;
Yili Zheng;
Peter C. Doerschuk;
Jinghua Tang;
John E. Johnson
A statistical model for cryo electron microscopy image formation, a maximum likelihood 3-D reconstruction algorithm, and a parallel computer implementation of the algorithm are described. The essential character of this reconstruction problem is that it concerns 2-D projections of a 3-D object where the projection orientations are unknown and the 2-D projections have SNR less than 1. A key component of a variety of algorithms is integration over projection orientation. A fast algorithm is described for performing such integrations.
Frequency domain simultaneous algebraic reconstruction techniques: algorithm and convergence
Author(s):
Jiong Wang;
Yibin Zheng
We propose an algebraic reconstruction technique (ART) in the frequency domain for linear imaging problems. This algorithm has the advantage of efficiently incorporating pixel correlations in an a priori image model. First it is shown that the generalized ART algorithm converges to the minimum weighted norm solution, where the weights represent a priori knowledge of the image. Then an implementation in the frequency domain is described. The performance of the new algorithm is demonstrated with a fan beam computed tomography (CT) example. Compared to the traditional ART, the new algorithm offers superior image quality, fast convergence, and moderate complexity.
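For reference, a sketch of classical image-domain ART (Kaczmarz row-action updates) is shown below; the frequency-domain, weighted-norm variant proposed in the paper is not reproduced here.

    import numpy as np

    def art(A, b, n_sweeps=20, relax=0.5):
        # A: (n_rays, n_pixels) system matrix, b: measured projections
        x = np.zeros(A.shape[1])
        row_norms = (A * A).sum(axis=1)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):                  # cycle through the projection rays
                if row_norms[i] > 0:
                    x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        return x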
Incremental matrix orthogonalization with an application to curve fitting
Author(s):
Matthew Harker;
Paul O'Leary;
Paul Zsombor-Murray
A new method for fitting implicit curves to scattered data is proposed. The method is based on orthogonal matrix projections and singular value decomposition. The incremental aspect of the algorithm deals with each order of data individually in an incrementing manner, whereby a matrix approximation procedure is applied at each level. This determines the fit quality at each step, and hence provides co-linearity detection of each polynomial order. The best implicit polynomial fit of minimal order is provided, which essentially combines object identification and classification with object fitting.
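A small sketch of implicit-curve fitting by SVD at a single polynomial order (a conic) follows; the incremental order-by-order orthogonalization described in the paper is not reproduced.

    import numpy as np

    def fit_conic(x, y):
        # least-squares implicit conic: right singular vector with the smallest singular value
        D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
        _, s, Vt = np.linalg.svd(D, full_matrices=False)
        return Vt[-1], s[-1]              # coefficients (unit norm) and residual (fit quality)

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 2*np.pi, 200)
    x = 3.0*np.cos(t) + 0.01*rng.standard_normal(t.size)
    y = 2.0*np.sin(t) + 0.01*rng.standard_normal(t.size)
    c, resid = fit_conic(x, y)
    print(c / c[0], resid)                # ~[1, 0, 2.25, 0, 0, -9]: the ellipse x^2/9 + y^2/4 = 1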
Inversion of flow fields from sensor network data
Author(s):
Animesh Khemka;
Charles A. Bouman;
Mark R. Bell
We consider the problem of monitoring the concentration and dispersion of pollutants in the atmosphere using a collection of randomly scattered sensors. The sensors are capable of indicating only that the concentration has exceeded a randomly selected threshold and providing this information to a central hub. We consider the case when the dispersion occurs in a general wind velocity field. In this case, the dispersion is modelled by a PDE which in general does not have a closed-form solution. We find the maximum likelihood estimate of the concentration as well as the time and location of the pollutant source. Frechet derivatives are used to optimize the cost function. The wind velocity field is estimated as a nuisance parameter.
New inverse method for simultaneous reconstruction of object buried beneath rough ground and the ground surface structure using SAMM forward model
Author(s):
Reza Firoozabadi;
Eric L. Miller;
Carey M. Rappaport;
Ann W. Morgenthaler
A new inverse scattering method is presented to estimate both the boundaries of the rough interface separating air and ground and the object buried beneath this rough interface. This method is based on a state-of-the-art forward solver. Simultaneous reconstruction of surface and object boundaries is posed as a linear least squares optimization problem for a parametric representation of the shape of the object as well as the boundary of the interface with a cost function defined by the misfit of modeled to measured data.
We make use of a newly-developed forward model, the Semi-Analytic Mode-Matching method (SAMM) within the context of the inversion procedure where a moderately low-order superposition of cylindrical modes (in 2-D configuration) satisfying the Helmholtz wave equation are used to represent the scattered fields for the object under the rough surface.
The proposed inverse method is a combined analytical-numerical algorithm that decreases the cost function by optimizing the boundary control parameters in an iterative procedure. The shape of the object as well as the interface is defined in low-dimensional parametric forms. The object boundary is modeled by a B-spline curve which is parameterized by a collection of “control points”. The accuracy and reliability of this method are verified by numerical experiments.
Inter-update Metz filtering as regularization for variable block-ART in PET reconstruction
Author(s):
Mustapha Sadki;
Maite Trujillo San-Martin
Positron Emission Tomography (PET) is a technology that uses short-lived radionuclides to image processes that are altered by disease and that precede changes that can be visualized by cross-sectional imaging. Over the last decade, this technique has become an important clinical tool for the detection of tumors, treatment follow-up, and drug research, providing an understanding of dynamic physiological processes. Since PET needs improved reconstruction algorithms to facilitate clinical diagnosis, we investigate an improved iterative algorithm.
Amongst current algorithms applied for PET reconstruction, ART was first proposed as a method of reconstruction from CT projections. With appropriate tuning, the convergence of these algorithms could be very fast indeed. However, the quality of reconstruction using these methods has not been thoroughly investigated. We study a variant of these algorithms.
We present the state of the art, review well-known ART and investigate an optimum dynamically-changing block structure for the not yet fully explored variable-Block ART, which uses jointly the Inter-Update Metz filter for regularization and exploits the full symmetries in PET scanners. This reveals significant acceleration of initial convergence to an acceptable reconstruction of inconsistent cases. To assess the quality and analyze any discrepancy of the reconstructed images, two figures of merit (FOMs) are used to evaluate two 3D Data phantoms acquired on a GE-Advance scanner for high statistics.
Face Player: a qualitative sonification for the visually impaired
Author(s):
Oisin Curran;
Seamus Torpey;
Michael Schukat;
Andy Shearer
This paper describes the development of a system for mapping the features of the human face to sound. In order to determine how best to express these qualities, magnitude estimation experiments are performed with young visually impaired students. The data dimensions used are overall head ratio (as a size measure) and distance between key facial features. The display dimensions are frequency and tempo.
Approach to reduce the computational image processing requirements for a computer vision system using sensor preprocessing and the Hotelling transform
Author(s):
Thomas R. Schei;
Cameron H. G. Wright;
Daniel J. Pack
We describe a new development approach to computer vision for a compact, low-power, real-time system whereby we take advantage of preprocessing in a biomimetic vision sensor and a computational strategy using subspace methods and the Hotelling transform in an effort to reduce the computational imaging load. The approach is two-pronged: 1) design the imaging sensor to reduce the computational load as much as possible up front, and 2) employ computational algorithms that efficiently complete the remaining image processing steps needed for computer vision. This strategy works best if the sensor design and the computational algorithm design evolve together as a synergistic, mutually optimized pair. Our system uses the biomimetic “fly-eye” sensor described in previous papers that offers significant preprocessing. However, the format of the image provided by the sensor is not a traditional bitmap and therefore requires innovative computational manipulations to make best use of this sensor. The remaining computational algorithms employ eigenspace object models derived from Principal Component Analysis, and the Hotelling transform to simplify the models. The combination of sensor preprocessing and the Hotelling transform provides an overall reduction in the computational imaging requirements that would allow real-time computer vision in a compact, low-power system.
Subpixel target detection in hyperspectral data using higher order statistics source separation algorithms
Author(s):
Stefan A. Robila
Hyperspectral data is modeled as an unknown mixture of original features (such as the materials present in the scene). The goal is to find the unmixing matrix and to perform the inversion in order to recover them. Unlike first- and second-order techniques (such as PCA), higher-order statistics (HOS) methods assume the data has non-Gaussian behavior and are able to represent much subtler differences among the original features. The HOS algorithms transform the data such that the resulting components are uncorrelated and their non-Gaussianity is maximized (the resulting components are statistically independent). Subpixel targets in a natural background can be seen as anomalies of the image scene. They exhibit strongly non-Gaussian behavior and correspond to independent components, leading to their detection when HOS techniques are employed. The methods start by preprocessing the hyperspectral image through centering and sphering. The resulting bands are transformed using gradient-based optimization of the HOS measure. Next, the data are reduced through a selection of the components associated with small targets using the changes of slope in the scree graph of the non-Gaussianity values. The targets are filtered using histogram-based analysis. The end result is a map of the pixels associated with small targets.
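As a rough illustration of the centering/sphering step followed by a fixed-point search for non-Gaussian components (a generic one-unit, kurtosis-based FastICA-style iteration, not necessarily the authors' algorithm), with X an array of spectral bands by pixels:

    import numpy as np

    def whiten(X):
        # centre and sphere: zero mean and identity covariance across bands
        Xc = X - X.mean(axis=1, keepdims=True)
        cov = Xc @ Xc.T / Xc.shape[1]
        d, E = np.linalg.eigh(cov)
        return E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ Xc

    def kurtosis_ica(X, n_components=3, n_iter=200, seed=0):
        rng = np.random.default_rng(seed)
        Z = whiten(X)                                   # (bands, pixels), sphered
        n = Z.shape[0]
        W = np.zeros((n_components, n))
        for k in range(n_components):
            w = rng.standard_normal(n); w /= np.linalg.norm(w)
            for _ in range(n_iter):
                w_new = (Z * (w @ Z) ** 3).mean(axis=1) - 3.0 * w   # kurtosis fixed-point update
                w_new -= W[:k].T @ (W[:k] @ w_new)                  # deflate against found directions
                w_new /= np.linalg.norm(w_new)
                if abs(abs(w_new @ w) - 1.0) < 1e-8:
                    w = w_new; break
                w = w_new
            W[k] = w
        return W @ Z, W                                 # component images (flattened) and unmixing rows

Components with the largest absolute kurtosis would then be screened as candidate small-target maps.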
Nonlinear image restoration methods for marker extraction in 3D fluorescent microscopy
Author(s):
Aleh Kryvanos;
Juergen Hesser;
Gabriele Steidl
Localization of biological markers in images obtained by fluorescent microscopy is a relevant problem in biological research. Due to blurring from imaging and noise, the analysis of supra-molecular structures can be improved by image restoration. In this paper, we compare various deblurring algorithms with and without regularization. In the first group we consider the EM (Expectation Maximization) and the JVC (Jansson-van-Cittert) algorithms, and we examine the effect of the Tikhonov and the TV (Total Variation) regularization in the second group. The last approach uses the I-divergence as similarity measure. As a solution method for our new I-divergence-TV model we propose a nonlinear projective conjugate gradient algorithm with inexact line search. Optimal regularization parameters were found by shape analysis of the corresponding L-curves.
Contour-based image mosaicking in the presence of moving objects
Author(s):
Sung-Yong Jung;
Yoon-Hee Choi;
Tae Sun Choi
Most previous image mosaicking techniques deal with stationary images that do not contain moving objects. Moving objects cause serious errors in global motion estimation, the core process of image mosaicking, since the global motion estimate is biased by the local motions they introduce. Some techniques have been proposed to effectively eliminate local motions and obtain precise global motion parameters, but each has its own drawbacks.
In this paper a contour-based approach for mosaicking images that contain moving objects is presented. First, we extract contours from each image to be mosaicked and then estimate an initial global motion. The key task of our work is to eliminate local motions and obtain a precise global motion between two input images. To do this, we use three kinds of consistency checks: shape similarity consistency, scale consistency, and rigid transformation consistency. In these checks, local motions are detected because their motion vectors differ greatly from the dominant one, and they are removed in an iterative way. In addition, since we use contour information for image mosaicking, our approach is robust against global gray-level changes between input images. Experimental results demonstrate the performance of our algorithm.
Multigrid inversion algorithms for Poisson noise model-based tomographic reconstruction
Author(s):
Seungseok Oh;
Charles A. Bouman;
Kevin J. Webb
A multigrid inversion approach is proposed to solve Poisson noise model-based inverse problems. The algorithm works by moving up and down in resolution with a set of coarse scale cost functions, which incorporates a coarse scale Poisson mean defined in low resolution data and image spaces. Applications of the approach to Bayesian reconstruction algorithms in transmission and emission tomography are presented. Simulation results indicate that the proposed multigrid approach results in significant improvement in convergence speed compared to the fixed-grid iterative coordinate descent (ICD) method.
Model-based automatic calculation and evaluation of camera positions for industrial machine vision
Author(s):
Marc M. Ellenrieder;
Hitoshi Komoto
One of the key issues for a successful inspection process is the determination of the necessary number of cameras and their respective positions given a specific inspection task and a geometric model of the inspected work-piece and its surroundings.
In the last decades, a number of approaches concerning camera positioning strategies have been proposed. Generally, these approaches define an inspection task in terms of good visibility of certain features on the surface of the inspected objects. However, these approaches neither provide general means to include arbitrary inspection requirements, nor do they minimize the number of required cameras. Others use only hard constraints to determine the area of feasibility for certain task requirements. To overcome these shortcomings, we propose a model-based approach to optimize one or more camera positions by optimizing cost-functions derived from the inspection task. The goal is to use a minimum number of cameras / camera positions to fulfill the inspection task. Feature-visibility is represented using a novel concept: the visibility map. It can be calculated quickly by using a projective approach, consumes little storage memory and allows for quick feature-visibility checks. The system is evaluated on several real-world examples using real inspection tasks from current production processes.
Implementation of alternating minimization algorithms for fully 3D CT imaging
Author(s):
David G. Politte;
Shenyu Yan;
Joseph A. O'Sullivan;
Donald L. Snyder;
Bruce R. Whiting
Algorithms based on alternating minimization (AM) have recently been derived for computing maximum-likelihood images in transmission CT, incorporating accurate models of the transmission-imaging process. In this work we report the first fully three-dimensional implementation of these algorithms, intended for use with multi-row detector spiral CT systems. The most demanding portion of the computations, the three-dimensional projections and backprojections, are calculated using a precomputed lookup table containing a discretized version of the point-spread function that maps between the measurement and image spaces. This table accounts for the details of the scanner. A cylindrical phantom with cylindrical and spherical inserts of known attenuation was scanned with a Siemens Sensation 16, which was employed in a rapid, spiral acquisition mode with 16 active detector rows. These data were downsampled and reconstructed using a monoenergetic version of our AM algorithm. The estimated attenuation coefficients closely match the known coefficients for the cylinder and the embedded objects. We are investigating methods for further accelerating these computations by using a combination of techniques that reduce the time of each iteration and that increase the convergence of the log-likelihood from iteration to iteration.
Super-resolution image synthesis using projections onto convex sets in the frequency domain
Author(s):
Frederick W. Wheeler;
Ralph T. Hoctor;
Eamon B. Barrett
Optical imaging systems are often limited in resolution, not by the imaging optics, but by the light intensity sensors on the image formation plane. When the sensor size is larger than the optical spot size, the effect is to smooth the image with a rectangular convolving kernel with one sample at each non-overlapping kernel position, resulting in aliasing. In some such imaging systems, there is the possibility of collecting multiple images of the same scene. The process of reconstructing a de-aliased high-resolution image from multiple images of this kind is referred to as “super-resolution image reconstruction.” We apply the POCS method to this problem in the frequency domain. Generally, frequency domain methods have been used when component images were related by subpixel shifts only, because rotations of a sampled image do not correspond to a simple operation in the frequency domain. This algorithm is structured to accommodate rotations of the source relative to the imaging device, which we believe helps in producing a well-conditioned image synthesis problem. A finely sampled test image is repeatedly resampled to align with each observed image. Once aligned, the test and observed images are readily related in the frequency domain and a projection operation is defined.
Model selection in cognitive science as an inverse problem
Author(s):
Jay I. Myung;
Mark A. Pitt;
Daniel J. Navarro
How should we decide among competing explanations (models) of a cognitive phenomenon? This problem of model selection is at the heart of the scientific enterprise. Ideally, we would like to identify the model that actually generated the data at hand. However, this is an unachievable goal, as it is fundamentally ill-posed. Information in a finite data sample is seldom sufficient to point to a single model. Multiple models may provide equally good descriptions of the data, a problem that is exacerbated by the presence of random error in the data. In fact, model selection bears a striking similarity to perception, in that both require solving an inverse problem. Just as perceptual ambiguity can be addressed only by introducing external constraints on the interpretation of visual images, the ill-posedness of the model selection problem requires us to introduce external constraints on the choice of the most appropriate model. Model selection methods differ in how these external constraints are conceptualized and formalized. In this review we discuss the development of the various approaches, the differences between them, and why the methods perform as they do. An application example of selection methods in cognitive modeling is also discussed.
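As a concrete, if simplified, illustration of formalizing such an external complexity constraint, the sketch below uses the Bayesian Information Criterion to choose among polynomial models of increasing order; the polynomial family is a toy stand-in for the cognitive models discussed in the paper.

    import numpy as np

    def bic_select(x, y, max_degree=6):
        n = len(y)
        scores = []
        for d in range(max_degree + 1):
            coeffs = np.polyfit(x, y, d)
            sigma2 = np.mean((y - np.polyval(coeffs, x)) ** 2)
            # BIC = n*log(residual variance) + k*log(n); the second term penalizes complexity
            scores.append(n * np.log(sigma2) + (d + 1) * np.log(n))
        return int(np.argmin(scores)), scores

    rng = np.random.default_rng(1)
    x = np.linspace(-1.0, 1.0, 50)
    y = 1.0 - 2.0 * x + 0.5 * x**2 + 0.1 * rng.standard_normal(50)   # true model is quadratic
    print(bic_select(x, y)[0])    # typically selects degree 2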