Proceedings Volume 6065

Computational Imaging IV

View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 2 February 2006
Contents: 11 Sessions, 44 Papers, 0 Presentations
Conference: Electronic Imaging 2006
Volume Number: 6065

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Hierarchical and Graph-based Image Analysis
  • Reconstruction from Sparse Data
  • Microscopy
  • Inverse Problems
  • Keynote Presentation II
  • Image and Video Analysis
  • Biomedical Imaging
  • Tomography
  • Color
  • Image Modeling and Analysis
  • Poster Session and Demonstrations
Hierarchical and Graph-based Image Analysis
Modeling hierarchical structure of images with stochastic grammars
Wiley Wang, Tak-Shing Wong, Ilya Pollak, et al.
We construct a hierarchical image model based on stochastic grammars and apply it to document images. An efficient maximum a posteriori probability estimation algorithm for this model produces accurate segmentations of document images and classifications of image parts.
Multiresolution analysis of digital images using the continuous extension of discrete group transforms
Mickaël Germain, Jiri Patera
A new technique is presented for multiresolution analysis (MRA) of digital images. In 2D, it has four variants, two of which are applicable on square lattices; the Discrete Cosine Transform (DCT) is the simpler of the two. The remaining variants can be used in the same way on triangular lattices. The property of the Continuous Extension of the Discrete Group Transform (CEDGT) is used to analyse data for each level of decomposition. The MRA principle is obtained by increasing the data grid for each level of decomposition, and by using an adapted low-pass filter to reduce irregularities due to noise effects. Compared to some stationary wavelet transforms, image analysis with a multiresolution CEDGT gives better results. In particular, a wavelet transform is capable of providing a local representation at multiple scales, but some local details disappear due to the use of the low-pass filter and the reduction of the spatial resolution at high levels of decomposition. This problem is avoided with CEDGT. The smooth interpolation used by the multiresolution CEDGT gives interesting results for coarse-to-fine segmentation algorithms and other analysis processes.
Modeling multiscale differential pixel statistics
The statistics of natural images play an important role in many image processing tasks. In particular, statistical assumptions about differences between neighboring pixel values are used extensively in the form of prior information for many diverse applications. The most common assumption is that these pixel difference values can be described by either a Laplace or a generalized Gaussian distribution. The statistical validity of these two assumptions is investigated formally in this paper by means of chi-squared goodness-of-fit tests. The Laplace and generalized Gaussian distributions are seen to deviate from real images, with the main source of error being the large number of zero and near-zero pixel difference values. These values correspond to the relatively uniform areas of the image. A mixture distribution is proposed to retain the edge-modeling ability of the Laplace or generalized Gaussian distribution and to improve the modeling of the effects introduced by smooth image regions. The chi-squared tests of fit indicate that the mixture distribution offers a significant improvement in fit.
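The kind of test the authors describe can be prototyped in a few lines. The sketch below, which assumes SciPy is available, bins horizontal pixel differences, fits a candidate distribution by maximum likelihood, and computes the chi-squared statistic against the expected bin counts; the bin count and minimum-expected-count rule are illustrative choices, not the paper's settings.
```python
import numpy as np
from scipy import stats

def chi2_fit_statistic(diffs, dist=stats.laplace, n_bins=51):
    """Chi-squared goodness-of-fit statistic for a pixel-difference model.

    dist can be stats.laplace or stats.gennorm (generalized Gaussian).
    """
    params = dist.fit(diffs)                      # maximum-likelihood fit
    edges = np.linspace(diffs.min(), diffs.max(), n_bins + 1)
    observed, _ = np.histogram(diffs, bins=edges)
    expected = len(diffs) * np.diff(dist.cdf(edges, *params))
    ok = expected > 5                             # usual chi-squared rule of thumb
    return np.sum((observed[ok] - expected[ok]) ** 2 / expected[ok])

# Example: horizontal differences of a toy "image"
img = np.random.rand(256, 256)
d = (img[:, 1:] - img[:, :-1]).ravel()
print(chi2_fit_statistic(d))
```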
Graph-based 3D object classification
Sajjad Baloch, Hamid Krim
In this paper, we propose a novel method for the classification of 3D shapes based on topo-geometric shape descriptors. Topo-geometric models have an advantage over existing shape descriptors in that they capture complete shape information: topology through skeletal graphs and geometry via edge weights. The resulting weighted graph representation allows shape classification by establishing error-correcting subgraph isomorphisms between the test graph and model graphs, where the best match is the one that corresponds to the largest subgraph isomorphism. We propose various cost assignments for graph edit operations for error correction, which in turn take into account shape variations arising from noise and measurement errors.
Reconstruction from Sparse Data
Compressed sensing in noisy imaging environments
Jarvis Haupt, Rui Castro, Robert Nowak
Compressive Sampling, or Compressed Sensing, has recently generated a tremendous amount of excitement in the image processing community. Compressive Sampling involves taking a relatively small number of non-traditional samples in the form of projections of the signal onto random basis elements or random vectors (random projections). Recent results show that such observations can contain most of the salient information in the signal. It follows that if a signal is compressible in some basis, then a very accurate reconstruction can be obtained from these observations. In many cases this reconstruction is much more accurate than is possible using an equivalent number of conventional point samples. This paper motivates the use of Compressive Sampling for imaging, presents theory predicting reconstruction error rates, and demonstrates its performance in electronic imaging with an example.
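As a concrete illustration of the sensing-and-reconstruction idea (not the authors' estimator, which is designed for noisy observations), the sketch below measures a sparse signal through random projections and recovers it with a plain orthogonal matching pursuit loop.
```python
import numpy as np

rng = np.random.default_rng(0)

# A k-sparse signal observed through m random projections
n, m, k = 256, 64, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection matrix
y = Phi @ x                                      # compressive measurements

# Orthogonal matching pursuit: greedily build the support, refit by least squares
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ r))))
    A = Phi[:, support]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("reconstruction error:", np.linalg.norm(x - x_hat))
```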
A new compressive imaging camera architecture using optical-domain compression
Dharmpal Takhar, Jason N. Laska, Michael B. Wakin, et al.
Compressive Sensing is an emerging field based on the revelation that a small number of linear projections of a compressible signal contain enough information for reconstruction and processing. It has many promising implications and enables the design of new kinds of Compressive Imaging systems and cameras. In this paper, we develop a new camera architecture that employs a digital micromirror array to perform optical calculations of linear projections of an image onto pseudorandom binary patterns. Its hallmarks include the ability to obtain an image with a single detection element while sampling the image fewer times than the number of pixels. Other attractive properties include its universality, robustness, scalability, progressivity, and computational asymmetry. The most intriguing feature of the system is that, since it relies on a single photon detector, it can be adapted to image at wavelengths that are currently impossible with conventional CCD and CMOS imagers.
Microscopy
A fast algorithm for 3D reconstruction from unoriented projections and cryo electron microscopy of viruses
In a cryo electron microscopy experiment, the data are noisy 2-D projection images of the 3-D electron scattering intensity, where the orientations of the projections are not known. In previous work we have developed a solution for this problem based on a maximum likelihood estimator that is computed by an expectation maximization algorithm. In the expectation maximization algorithm the expensive step is the expectation, which requires numerical evaluation of 3- or 5-dimensional integrations of a square matrix of dimension equal to the number of Fourier series coefficients used to describe the 3-D reconstruction. By taking advantage of the rotational properties of spherical harmonics, we can reduce the integrations of a matrix to integrations of a scalar. The key property is that a rotated spherical harmonic can be expressed as a linear combination of the other harmonics of the same order and that the weights in the linear combination factor so that each of the three factors is a function of only one of the Euler angles describing the orientation of the projection.
Spatially adaptive 3D inverse for optical sectioning
Dmitriy Paliy, Vladimir Katkovnik, Karen Egiazarian
In this paper, we propose a novel nonparametric approach to the reconstruction of three-dimensional (3D) objects from 2D blurred and noisy observations, which is a problem of computational optical sectioning. This approach is based on an approximate image formation model that takes into account the depth-varying nature of the blur, described by a matrix of shift-invariant 2D point-spread functions (PSFs) of an optical system. The proposed restoration scheme incorporates the matrix regularized inverse and matrix regularized Wiener inverse algorithms in combination with a novel spatially adaptive denoising. This technique is based on special statistical rules for selection of the adaptive size and shape of the neighbourhood used for the local polynomial approximation of the 2D image intensity. Simulations on a phantom 3D object show the efficiency of the developed approach. Objective results are evaluated in terms of quadratic-error criteria.
On soft clipping of Zernike moments for deblurring and enhancement of optical point spread functions
Nico Becherer, Hanna Jödicke, Gregor Schlosser, et al.
Blur and noise originating from the physical imaging processes degrade microscope data. Accurate deblurring techniques require, however, an accurate estimate of the underlying point-spread function (PSF). A good representation of PSFs can be achieved with Zernike polynomials, since they offer a compact representation in which low-order coefficients represent typical aberrations of optical wavefronts while noise is represented in higher-order coefficients. A quantitative description of the noise distribution (Gaussian) over the Zernike moments of various orders is given, which is the basis for the new soft-clipping approach for denoising of PSFs. Instead of discarding moments beyond a certain order, those Zernike moments that are more sensitive to noise are dampened according to the measured distribution and the present noise model. Further, a new scheme to combine experimental and theoretical PSFs in Zernike space is presented. According to our experimental reconstructions, using the new improved PSF the correlation between the reconstructed and original volume is raised by 15% in average cases and by up to 85% in the case of thin fibre structures, compared to reconstructions in which a non-improved PSF was used. Finally, we demonstrate the advantages of our approach on 3D images from confocal microscopes by generating visually improved volumes. Additionally, we present a method to render the reconstructed results using a new volume rendering method that is almost artifact-free. This approach is based on a shear-warp technique, wavelet data encoding techniques, and a recent approach to approximate the gray value distribution by a super-spline model.
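The soft-clipping idea, dampening rather than discarding noise-sensitive moments, can be sketched as below; the exponential weighting and the order-dependent scale are placeholders for illustration, not the weighting derived in the paper.
```python
import numpy as np

def soft_clip_zernike(coeffs, radial_orders, noise_sigma, order_scale=4.0):
    """Dampen Zernike moments that are more sensitive to noise.

    Each coefficient is multiplied by a weight that decays with its radial
    order and with the estimated noise level, instead of truncating all
    moments above a cut-off order. (Illustrative weighting only.)
    """
    coeffs = np.asarray(coeffs, dtype=float)
    radial_orders = np.asarray(radial_orders, dtype=float)
    weights = np.exp(-noise_sigma * (radial_orders / order_scale) ** 2)
    return coeffs * weights

# Example: 15 moments with radial orders 0..4 (Noll-style ordering assumed)
orders = np.array([0, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4])
coeffs = np.random.randn(15)
print(soft_clip_zernike(coeffs, orders, noise_sigma=0.5).round(3))
```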
Adaptive sampling for atomic force microscopy
In atomic force microscopy, a 3-D image of a substrate is obtained. When the total number of samples remains constant, there is a trade-off between the size of the scanned image and the resolution. For the scanning mechanism, the time needed to image an area depends mainly on the number of samples and the size of the image. It is desirable to improve the imaging speed with limited impact on the effective resolution of the portion of the substrate that is of interest. To improve the imaging speed, there are two options: 1) increase the data processing rate or 2) reduce the amount of data. One key issue in reducing the amount of data is maintaining acceptable image fidelity. To address this issue, we need to classify the sample area into regions based on importance. For high-importance regions, a higher resolution is needed. For regions of less importance, a coarse sample density is employed. In this study, we propose a new adaptive sampling scheme that is leveraged from image compression. By adapting the sampling resolution to the substrate profile, the proposed method can decrease the scanning time by reducing the amount of data while maintaining the desired image fidelity.
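One simple way to realize this coarse-then-fine idea is sketched below (an illustrative scheme, not the compression-derived scheme of the paper): scan a coarse grid, then rescan at finer pitch only in blocks whose height variation exceeds a threshold.
```python
import numpy as np

def adaptive_sample(surface, coarse=8, fine=2, var_thresh=0.05):
    """Two-pass adaptive sampling: coarse grid everywhere, fine grid only in
    blocks whose corner heights vary by more than var_thresh."""
    h, w = surface.shape
    sampled = np.full(surface.shape, np.nan)
    sampled[::coarse, ::coarse] = surface[::coarse, ::coarse]
    for i in range(0, h - coarse, coarse):
        for j in range(0, w - coarse, coarse):
            corners = surface[i:i + coarse + 1:coarse, j:j + coarse + 1:coarse]
            if corners.max() - corners.min() > var_thresh:   # "important" block
                sampled[i:i + coarse:fine, j:j + coarse:fine] = \
                    surface[i:i + coarse:fine, j:j + coarse:fine]
    return sampled

z = np.add.outer(np.linspace(0, 1, 128), np.linspace(0, 1, 128))  # toy substrate
print(f"{np.isnan(adaptive_sample(z)).mean():.2%} of pixels left unsampled")
```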
Inverse Problems
Bayesian image reconstruction from Fourier-domain samples using prior edge information: convergence and parameter sensitivity
Image reconstruction from Fourier-domain measurements is a specialized problem within the general area of image reconstruction using prior information. The structure of the equations in Fourier imaging is challenging, since the observation equation matrix is non-sparse in the spatial domain but diagonal in the Fourier domain. Recently, the Bayesian image reconstruction with prior edges (BIRPE) algorithm has been proposed for image reconstruction from Fourier-domain samples using edge information automatically extracted from a high-resolution prior image. In the BIRPE algorithm, the maximum a posteriori (MAP) estimate of the reconstructed image and edge variables involves high-dimensional, non-convex optimization, which can be computationally prohibitive. The BIRPE algorithm performs this optimization by iteratively updating the estimate of the image and then updating the estimate of the edge variables. In this paper, we propose two techniques for updating the image based on fixed edge variables: one based on iterated conditional modes (ICM) and the other based on Jacobi iteration. ICM is guaranteed to converge but, depending on the structure of the Fourier-domain samples, can be computationally prohibitive. The Jacobi iteration technique is more computationally efficient but does not always converge. In this paper, we study the convergence properties of the Jacobi iteration technique and its parameter sensitivity.
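For readers unfamiliar with the second update rule, a generic Jacobi iteration for a linear system is shown below; the BIRPE image-update system itself is not reproduced, and convergence is only guaranteed under conditions such as diagonal dominance, which is what makes the convergence study above necessary.
```python
import numpy as np

def jacobi(A, b, iters=100):
    """Plain Jacobi iteration x_{k+1} = D^{-1} (b - (A - D) x_k)."""
    d = np.diag(A)
    R = A - np.diag(d)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / d
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant, so Jacobi converges
b = np.array([1.0, 2.0])
print(jacobi(A, b), np.linalg.solve(A, b))
```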
Thin digital imaging systems using focal plane coding
With this work we show the use of focal plane coding to produce nondegenerate data between subapertures of an imaging system. Subaperture data is integrated to form a single high resolution image. Multiple apertures generate multiple copies of a scene on the detector plane. Placed in the image plane, the focal plane mask applies a unique code to each of these sub-images. Within each sub-image, each pixel is masked so that light from only certain optical pixels reaches the detector. Thus, each sub-image measures a different linear combination of optical pixels. Image reconstruction is achieved by inversion of the transformation performed by the imaging system. Registered detector pixels in each sub-image represent the magnitude of the projection of the same optical information onto different sampling vectors. Without a coding element, the imaging system would be limited by the spatial frequency response of the electronic detector pixel. The small mask features allow the imager to broaden this response and reconstruct higher spatial frequencies than a conventional coarsely sampling focal plane.
3D reconstructions from spherically averaged Fourier transform magnitude and solution x-ray scattering experiments
Youngha Hwang, Peter C. Doerschuk
Measuring the scattering of a beam of x-rays off a solution of identical particles gives data that is the spherically averaged magnitude of the Fourier transform of the electron number density in the particle. Although the 1-D data provides only limited information for a 3-D reconstruction of the particle, this approach is still attractive because it does not require that the particle be crystallized for x-ray crystallography or frozen for cryo electron microscopy. We describe ongoing work using two mathematical models of the particle, a piecewise constant model and an orthonormal expansion model, and a variety of specialized optimization tools to determine the 3-D reconstruction of the particle from a weighted nonlinear least squares problem.
Computed spectroscopy using segmented apertures
A novel technique for imaging spectroscopy is introduced. The technique makes use of an optical imaging system with a segmented aperture and intensity detector array on the imaging plane. The point spread function (PSF) of such a system can be adjusted by modifying the path lengths from the subapertures to the image plane, and the shape of the resulting point spread function will vary as a function of wavenumber. An image reconstruction approach is taken to convert multiple recorded pan-chromatic images with different wavenumber-varying point spread functions into a hyperspectral data set. Thus, the technique described here is a new form of computed imaging.
Preconditioned conjugate gradient without linesearch: a comparison with the half-quadratic approach for edge-preserving image restoration
Christian Labat, Jérôme Idier
Our contribution deals with image restoration. The adopted approach consists in minimizing a penalized least squares (PLS) criterion. Here, we are interested in efficient algorithms to carry out such a task. The minimization of PLS criteria can be addressed using a half-quadratic (HQ) approach. However, the nontrivial inversion of a linear system is needed at each iteration. In practice, it is often proposed to approximate this inversion using a truncated preconditioned conjugate gradient (PCG) method. However, we point out that theoretical convergence is not proved for such approximate HQ algorithms, referred to here as HQ+PCG. In the proposed contribution, we rely on a different scheme, also based on PCG and HQ ingredients and referred to as PCG+HQ1D. General linesearch methods ensuring convergence of PCG-type algorithms are difficult to code and to tune. Therefore, we propose to replace the linesearch step by a truncated scalar HQ algorithm. Convergence is established for any finite number of HQ1D sub-iterations. Compared to the HQ+PCG approach, we show that our scheme is preferable on both theoretical and practical grounds.
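A textbook preconditioned conjugate gradient loop (the PCG ingredient referred to above, with a simple diagonal preconditioner; the half-quadratic scalar linesearch that replaces the usual step-size rule in the paper is not implemented here):
```python
import numpy as np

def pcg(A, b, M_inv, iters=50, tol=1e-10):
    """Preconditioned conjugate gradient for A x = b with A symmetric positive
    definite and preconditioner applied as z = M_inv @ r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(iters):
        Ap = A @ p
        alpha = rz / (p @ Ap)        # exact step for a quadratic criterion
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M_inv = np.diag(1.0 / np.diag(A))    # Jacobi (diagonal) preconditioner
print(pcg(A, b, M_inv), np.linalg.solve(A, b))
```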
Keynote Presentation II
Computational methods for image restoration, image segmentation, and texture modeling
Ginmo Chung, Triet M. Le, Linh H. Lieu, et al.
This work is devoted to new computational models for image segmentation, image restoration and image decomposition. In particular, we partition an image into piecewise-constant regions using energy minimization and curve evolution approaches. Applications of denoising-segmentation in polar coordinates (motivated by impedance tomography) and of segmentation of brain images will be presented. Also, we decompose a natural image into a cartoon or geometric component and an oscillatory or texture component using a variational approach and dual functionals. Thus, new computational methods will be presented for denoising, deblurring and texture modeling.
Image and Video Analysis
An adaptive model for restoration of optically distorted video frames
Dalong Li, Mark J. T. Smith, Russell Mersereau
Atmospheric turbulence is a common problem in astronomy and long distance surveillance applications. It can lead to optical distortions that can significantly degrade the quality of the captured images and video. Quality improvement can be achieved through digital restoration methods that effectively suppress the effects of optical distortion. In this paper, atmospheric optical distortion is modeled as having two components: a dispersive component and a time-varying distortion component. A new restoration algorithm is introduced that compensates for dispersion using a fourth-order statistic and employs a new adaptive warping algorithm to suppress turbulent motion effects. The new algorithm is able to improve quality significantly and is able to handle difficult cases involving panning, zooming, and natural motion.
Resource-driven content adaptation
Recent trends have created new challenges in the presentation of multimedia information. First, large, high-resolution video displays are increasingly popular. Meanwhile, many mobile devices, such as PDAs and mobile telephones, can display images and videos on small screens. One obvious issue is that content designed for a large display is inappropriate for a small display. Moreover, wireless bandwidth and battery lifetime are precious resources for mobile devices. In order to provide useful content across systems with different resources, we propose "resource-driven content adaptation" by augmenting the content with metadata that can be used to display or render the content based on the available resources. We are investigating several problems related to resource-driven content adaptation. These include adaptation of the presented content based on available resources: display resolution, bandwidth, processor speed, quality of service, and energy. Content adaptation may add or remove information based on available resources. Adaptive content can utilize resources more effectively but also presents challenges in resource management, content creation, transmission, and user perception.
Improving the numerical stability of structure from motion by algebraic elimination
Mireille Boutin, Ji Zhang, Daniel G. Aliaga
Structure from motion (SFM) is the problem of reconstructing the geometry of a scene from a stream of images on which features have been tracked. In this paper, we consider a projective camera model and assume that the internal parameters of the camera are known. Our goal is to reconstruct the geometry of the scene up to a rigid motion (i.e., Euclidean reconstruction). It has been shown that estimating the pose of the camera from the images is an ill-conditioned problem, as variations in the camera orientation and camera position cannot be distinguished. Unfortunately, the camera pose parameters are an intrinsic part of current formulations of SFM. This leads to numerical instability in the reconstruction of the scene. Using algebraic methods, we obtain a basis for a new formulation of SFM which does not involve pose estimation and thus eliminates this cause of instability.
A maximum entropy kernel density estimator with applications to function interpolation and texture segmentation
In this paper, we develop a new algorithm to estimate an unknown probability density function from a finite data sample using a tree-shaped kernel density estimator. The algorithm formulates an integrated-squared-error-based cost function which minimizes the quadratic divergence between the kernel density and the Parzen density estimate. The cost function reduces to a quadratic programming problem which is minimized within the maximum entropy framework. The maximum entropy principle acts as a regularizer which yields a smooth solution. A smooth density estimate enables better generalization to unseen data and offers distinct advantages in high dimensions and cases where there is limited data. We demonstrate applications of the hierarchical kernel density estimator to function interpolation and texture segmentation problems. When applied to function interpolation, the kernel density estimator improves performance considerably in situations where the posterior conditional density of the dependent variable is multimodal. The kernel density estimator allows flexible nonparametric modeling of textures, which improves performance in texture segmentation algorithms. We demonstrate performance on a text labeling problem, which illustrates the behavior of the algorithm in high dimensions. The hierarchical nature of the density estimator enables multiresolution solutions depending on the complexity of the data. The algorithm is fast and has at most quadratic scaling in the number of kernels.
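The Parzen estimate that the cost function is matched against is the standard kernel density estimate; a minimal version with Gaussian kernels is sketched below (a baseline only; the paper's maximum-entropy, tree-shaped estimator adds a quadratic program on top of this).
```python
import numpy as np

def parzen_kde(x_query, samples, bandwidth=0.3):
    """Standard Parzen (kernel) density estimate with Gaussian kernels."""
    samples = np.asarray(samples, dtype=float)
    u = (x_query[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / bandwidth

data = np.concatenate([np.random.randn(200) - 2, np.random.randn(200) + 2])
xs = np.linspace(-6, 6, 13)
print(parzen_kde(xs, data).round(3))     # bimodal density estimate
```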
Multiple watermarking: a vector space projections approach
Oktay Altun, Gaurav Sharma, Mark Bocko
We present a new paradigm for the insertion of multiple watermarks in images. Instead of an explicitly defined embedding process, the watermark embedding is achieved implicitly by determining a feasible image meeting multiple desired constraints. The constraints are designed to ensure that the watermarked image is visually indistinguishable from the original and produces a positive detection result when subjected to detectors for the individual watermarks even in the presence of signal processing operations, particularly compression. We develop useful mathematical definitions of constraint sets for different visual models, for transform domain compression, and for both spread-spectrum and quantization index modulation (QIM) watermark detection scenarios. Using the constraints with a generalized vector space projections method (VSPM), we determine a watermarked signal. Experimental results demonstrate the flexibility and usefulness of the presented methodology in addressing multiple watermarking scenarios while providing implicit shaping of the watermark power to meet visual requirements.
Biomedical Imaging
Spherical harmonics for shape-based inverse problems as applied to electrical impedance tomography
Electrical Impedance Tomography (EIT) is an ill-posed inverse problem. In a 3-D volume, too many parameters are required to obtain stable estimates with good spatial resolution and good accuracy. One approach to such problems that has been presented recently in a number of reports, when the relevant constituent parameters can be modeled as isotropic and piecewise continuous or homogeneous, is to use shape-based solutions. In this work, we report on a method, based on a spherical harmonics expansion, that allows us to parameterize the 3-D objects which constitute the conductivity inhomogeneities in the interior; for instance, we could assume that the general shape of piecewise constant inhomogeneities is known but that their conductivities and their exact location and shape are not. Using this assumption, we have developed a 3-stage optimization algorithm that allows us to iteratively estimate the location of the inhomogeneous objects, to find their external boundaries, and to estimate their internal conductivities. The performance of the proposed method is illustrated via simulation in a realistic torso model, as well as via experimental data from a tank phantom.
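The shape parameterization can be pictured as a low-order spherical-harmonic expansion of the boundary radius of a star-shaped inclusion; the sketch below (assuming SciPy, and using real parts of the complex harmonics for simplicity) illustrates that idea rather than the authors' exact parameterization.
```python
import numpy as np
from scipy.special import sph_harm

def boundary_radius(coeffs, theta, phi, l_max=2):
    """Radius r(theta, phi) of a star-shaped boundary as a truncated
    spherical-harmonic series; coeffs is a dict keyed by (l, m)."""
    r = np.zeros_like(theta, dtype=float)
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            c = coeffs.get((l, m), 0.0)
            # SciPy convention: sph_harm(m, l, azimuthal, polar)
            r += c * np.real(sph_harm(m, l, theta, phi))
    return r

theta = np.linspace(0, 2 * np.pi, 8)          # azimuth
phi = np.linspace(0.1, np.pi - 0.1, 8)        # polar angle
T, P = np.meshgrid(theta, phi)
coeffs = {(0, 0): 2.0, (2, 0): 0.3}           # a slightly elongated sphere
print(boundary_radius(coeffs, T, P).round(2))
```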
Adaptation of fast marching methods to intracellular signaling
Aristide C. Chikando, Jason M. Kinser
Imaging of signaling phenomena within the intracellular domain is a well-studied field. Signaling is the process by which all living cells communicate with their environment and with each other. In the case of signaling calcium waves, numerous computational models based on solving homogeneous reaction-diffusion equations have been developed. Typically, the reaction-diffusion approach consists of solving systems of partial differential equations at each update step. The traditional methods used to solve these reaction-diffusion equations are very computationally expensive since they must employ small time steps in order to reduce the computational error. The presented research suggests the application of fast marching methods to imaging signaling calcium waves, more specifically fertilization calcium waves, in Xenopus laevis eggs. The fast marching approach provides a fast and efficient means of tracking the evolution of monotonically advancing fronts. A model that employs biophysical properties of intracellular calcium signaling and adapts fast marching methods to tracking the propagation of signaling calcium waves is presented. The developed model is used to reproduce simulation results obtained with a reaction-diffusion-based model. Results obtained with our model agree both with the results obtained with reaction-diffusion-based models and with confocal microscopy observations from in vivo experiments. The adaptation of fast marching methods to intracellular protein or macromolecule trafficking is also briefly explored.
Machine learning of human responses to images
The human user is an often ignored component of the imaging chain. In medical diagnostic tasks, the human observer plays the role of the decision-maker, forming opinions based on visual assessment of images. In content-based image retrieval, the human user is the ultimate judge of the relevance of images recalled from a database. We argue that data collected from human observers should be used in conjunction with machine-learning algorithms to model and optimize performance in tasks that involve humans. In essence, we treat the human observer as a nonlinear system to be identified. In this paper, we review our work in two applications of this general idea. In the first, a learning machine is trained to predict the accuracy of human observers in a lesion detection task for purposes of assessing image quality. In the second, a learning machine is trained to predict human users' perception of the similarity of two images for purposes of content-based image retrieval from a database. In both examples, it is shown that a nonlinear learning machine can accurately identify the nonlinear human system that maps images into numerical values, such as detection performance or image similarity.
Tomography
Image reconstruction algorithms for a novel PET system with a half-ring insert
Debashish Pal, Yuan-Chuan Tai, Martin Janecek, et al.
Breast cancer continues to be the most common malignancy of women in the United States. Nuclear imaging techniques such as positron emission tomography (PET) have been widely used for the staging of cancer. The primary limitations of PET for breast cancer diagnosis are the lack of a highly specific radiotracer and the limited resolution of imaging systems. The sensitivity for detecting small lesions is very low. Many groups are developing positron emission mammography (PEM) systems dedicated to breast imaging using high resolution detectors. Although image resolution is significantly improved compared to whole-body PET systems, the clinical value of a PEM system is yet to be proven. Most PET systems have limitations in imaging tissues near the chest walls and lymph nodes. The proposed system addresses the sampling requirements specific to breast imaging and achieves high resolution in PET images of the breast and thorax.
A Bayesian approach to tomography of multiply scattered beams
Recently, Levine, Kearsley, and Hagedorn proposed a generalization of the generalized Gaussian Markov random field (GGMRF) model developed by Bouman and Sauer. The principal components of the Bouman-Sauer formulation are a quadratic approximation to the log-likelihood, assuming a Poisson distribution and a Beer's-law interaction, and a prior distribution which penalizes deviations of the values in a neighborhood raised to a user-defined power in the interval (1, 2]. The generalization removes the restriction that the transmission function follows Beer's law, and instead admits any functional form for the transmission-thickness relation, such as those arising in transmission electron microscopy of thick samples. Several illustrative examples are given in this paper.
Progress in multiple-image radiography
Miles N. Wernick, Jovan G. Brankov, Dean Chapman, et al.
Conventional mammography is one of the most widely used diagnostic imaging techniques, but it has serious and well-known shortcomings, which are driving the development of innovative alternatives. Our group has been developing an x-ray imaging approach called multiple-image radiography (MIR), which shows promise as a potential alternative to conventional x-ray imaging (radiography). Like computed tomography (CT), MIR is a computed imaging technique, in which the images are not directly observed, but rather computed algorithmically. Whereas conventional radiography produces just one image depicting absorption effects, MIR simultaneously produces three images, showing separately the effects of absorption, refraction, and ultra-small-angle x-ray scattering. The latter two effects are caused by refractive-index variations in the object, which yield fine image details not seen in standard radiographs. MIR has the added benefits of dramatically lessening radiation dose, virtually eliminating scatter degradation, and lessening the importance of compressing the breast during imaging. In this paper we review progress to date on the MIR technique, focusing on the basic physics and signal-processing issues involved in this new imaging method.
A recursive filter for noise reduction in statistical iterative tomographic imaging
Computed Tomography (CT) screening and pediatric imaging, among other applications, demand the development of more efficient reconstruction techniques to diminish radiation dose to the patient. While many methods are proposed to limit or modulate patient exposure to x-ray at scan time, the resulting data is excessively noisy, and generates image artifacts unless properly corrected. Statistical iterative reconstruction (IR) techniques have recently been introduced for reconstruction of low-dose CT data, and rely on the accurate modeling of the distribution of noise in the acquired data. After conversion from detector counts to attenuation measurements, however, noisy data usually deviate from simple Gaussian or Poisson representation, which limits the ability of IR to generate artifact-free images. This paper introduces a recursive filter for IR, which conserves the statistical properties of the measured data while pre-processing attenuation measurements. A basic framework for inclusion of detector electronic noise into the statistical model for IR is also presented. The results are shown to successfully eliminate streaking artifacts in photon-starved situations.
Branchless distance driven projection and backprojection
This paper presents a variation on our Distance Driven projection and backprojection method in which the inner loop is essentially branchless. The new inner loop structure is highly parallelizable and amenable to vectorization or highly pipelined implementations. We demonstrate that the new loop structure computes the same results as the original method to within numerical precision.
Cupping artifacts analysis and correction for a FPD-based cone-beam CT
Cupping artifacts are among the most serious problems in low-to-middle-energy X-ray flat-panel-detector (FPD) based cone-beam CT systems. Both beam hardening effects and scatter can induce cupping artifacts in reconstructions and degrade image quality. In this paper, a two-step cupping-correction method is proposed to eliminate cupping: 1) scatter removal; 2) beam hardening correction. By experimental measurement using a Beam Stop Array (BSA), the X-ray scatter distribution of a specific object is estimated in the projection image. After interpolation and subtraction, the primary intensity of the projection image is computed. The scatter distribution can also be obtained by convolution with a low-pass filter as the kernel. Linearization is used as the beam hardening correction method for one-material objects. For two-material cylindrical objects, a new approach without iteration is presented. There are three steps in this approach. First, the raw projections are corrected by the mapping function of the outer material. Second, the cross-section image is reconstructed from the modified projections. Finally, the image is scaled by a simple weighting function. After scatter removal and beam hardening correction, the cupping artifacts are largely removed, and the contrast of the reconstructed image is remarkably improved.
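The two pre-correction steps can be sketched as follows: a convolution-based scatter estimate that is subtracted from the measured projection, and a polynomial linearization of the log-attenuation. The scatter fraction, kernel width, and calibration coefficients below are placeholders, not values from the paper.
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_scatter(projection, scatter_fraction=0.2, sigma=25.0):
    """Convolution-based scatter estimate and subtraction (illustrative only):
    the scatter field is taken as a heavily low-pass-filtered, scaled copy of
    the measured projection."""
    scatter = scatter_fraction * gaussian_filter(projection, sigma)
    return np.clip(projection - scatter, a_min=1e-6, a_max=None)

def linearize(primary, i0, poly_coeffs):
    """Beam-hardening 'linearization': map -log(I/I0) through a calibration
    polynomial so attenuation becomes proportional to path length."""
    p = -np.log(primary / i0)
    return np.polyval(poly_coeffs, p)

proj = np.random.rand(128, 128) * 0.8 + 0.1
corrected = linearize(remove_scatter(proj), i0=1.0, poly_coeffs=[0.05, 1.0, 0.0])
print(corrected.mean())
```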
Color
Estimation of color filter array data from JPEG images for improved demosaicking
On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.
Separation of irradiance and reflectance from observed color images by logarithmical nonlinear diffusion process
Takahiro Saito, Hiromi Takahashi, Takashi Komatsu
The Retinex theory was first proposed by Land and deals with the separation of irradiance from reflectance in an observed image. The separation problem is ill-posed. Land and others proposed various Retinex separation algorithms. Recently, Kimmel and others proposed a variational framework that unifies the previous Retinex algorithms, such as the Poisson-equation-type Retinex algorithms developed by Horn and others, and presented a Retinex separation algorithm based on the time-evolution of a linear diffusion process. However, Kimmel's separation algorithm cannot achieve physically rational separation if the true irradiance varies among color channels. To cope with this problem, we introduce a nonlinear diffusion process into the time-evolution. Moreover, for the extension to color images, we present two approaches to treating the color channels: an independent approach that treats each color channel separately and a collective approach that treats all color channels collectively. The latter approach outperforms the former. Furthermore, we apply our separation algorithm to high-quality chroma keying, in which, before a foreground frame and a background frame are combined into an output image, the color of each pixel in the foreground frame is spatially adaptively corrected through transformation of the separated irradiance. Experiments demonstrate the superiority of our separation algorithm over Kimmel's separation algorithm.
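To make the separation task concrete, the toy sketch below performs a crude log-domain split of an image into a smooth irradiance estimate and a reflectance residual; the heavy Gaussian smoothing merely stands in for the diffusion process, which the paper replaces with a nonlinear diffusion applied jointly across the color channels.
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def separate_retinex(image, sigma=15.0):
    """Toy log-domain Retinex-style separation (not the authors' diffusion):
    a heavily smoothed log-image serves as the log-irradiance estimate and the
    residual as the log-reflectance."""
    s = np.log(np.clip(image, 1e-4, None))
    l = gaussian_filter(s, sigma)          # crude stand-in for the diffusion PDE
    r = s - l
    return np.exp(l), np.exp(r)            # irradiance, reflectance

img = np.random.rand(64, 64) * 0.9 + 0.05
irr, refl = separate_retinex(img)
print(irr.mean(), refl.mean())
```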
Novel scanner characterization method for color measurement and diagnostics applications
We propose a novel scanner characterization approach for applications requiring color measurement of hardcopy output in printer calibration, characterization, and diagnostic applications. It is assumed that a typical printed medium comprises the three basic colorants C, M, Y. The proposed method is particularly advantageous when additional colorants are used in the print (e.g., black (K)). A family of scanner characterization targets is constructed, each varying in C, M, Y at a fixed level of K. A corresponding family of 3-D scanner characterizations is derived, one for each level of K. Each characterization maps scanner RGB to a colorimetric representation such as CIELAB, using standard characterization techniques. These are then combined into a single 4-D characterization mapping RGBK to CIELAB. A refinement of the technique improves performance significantly by using a function of the scanned values for K (e.g., the scanner's green channel response to printed K) instead of the digital K value directly. First, this makes the new approach more robust with respect to variations in printed K over time. Second, it enables, with a single scanner characterization, accurate color measurement of prints from different printers within the same family. Results show that the 4-D characterization technique can significantly outperform standard 3-D approaches, especially in cases where the image being scanned is a patch target made up of unconstrained CMYK combinations. Thus the algorithm finds particular use in printer characterization and diagnostic applications. The method readily generalizes to printed media containing other (e.g., "hi-fi") colorants, and also to other image capture devices such as digital cameras.
Image Modeling and Analysis
Elastic surface registration by parameterization optimization in spectral space
This paper proposes a novel method to register 3D surfaces. Given two surface meshes, we formulate the registration as a problem of optimizing the parameterization of one mesh for the other. The optimal parameterization of the mesh is achieved in two steps. First, we find an initial solution close to the optimal solution. Second, we elastically modify the parameterization to minimize the cost function. The modification of the parameterization is expressed as a linear combination of a relatively small number of low-frequency eigenvectors of an appropriate mesh Laplacian. The minimization of the cost function uses a standard nonlinear optimization procedure that determines the coefficients of the linear combination. Constraints are added so that the parameterization validity is preserved during the optimization. The proposed method extends parametric registration of 2D images to the domain of 3D surfaces. This method is generic and capable of elastically registering surfaces with arbitrary geometry. It is also very efficient and can be fully automatic. We believe that this paper for the first time introduces eigenvectors of mesh Laplacians into the problem of surface registration. We have conducted experiments using real meshes that represent human cortical surfaces and the results are promising.
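The spectral ingredient mentioned above, expanding the parameterization update in the lowest-frequency eigenvectors of a mesh Laplacian, can be illustrated with a plain combinatorial graph Laplacian (a simplified stand-in for the mesh Laplacian actually used):
```python
import numpy as np

def low_frequency_basis(n_vertices, edges, k=4):
    """Return the k smallest-eigenvalue eigenvectors of a combinatorial graph
    Laplacian; these play the role of the low-frequency deformation basis."""
    L = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    return vals[:k], vecs[:, :k]

# Example: a 20-vertex cycle "mesh"
edges = [(v, (v + 1) % 20) for v in range(20)]
vals, basis = low_frequency_basis(20, edges, k=4)
print(vals.round(4))                      # 0 plus the lowest nonzero frequencies
```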
Mosaicking of astronomical images with MOPEX
David Makovoz, Iffat Khan, Frank Masci
We present MOPEX, a software package for mosaicking of astronomical images. MOPEX features image registration, background matching, a choice of several interpolation techniques, coaddition schemes, and robust and flexible outlier detection based on spatial and temporal filtering. Image registration is based on matching the positions and fluxes of common point sources in image overlap regions. This information is used to compute a set of image offset corrections by globally minimizing the cumulative point source positional difference. A similar approach is used for background matching in overlap regions: the cumulative pixel-by-pixel difference between the overlapping areas of all pairs of images is minimized with respect to the unknown constant offsets of the input images. The interpolation techniques used by MOPEX are area overlap, drizzle, grid, and bicubic interpolation. We compare the different interpolation techniques for their fidelity and speed. Robust outlier detection techniques allow for effective and reliable removal of the cosmic ray hits contaminating the detector array images. Efficient use of computer memory allows mosaicking of data sets of very deep coverage with thousands of images per pointing, as well as areas of sky covering many square degrees. MOPEX has been developed for the Spitzer Space Telescope.
Image processing on parallel GPU pixel units
GPUs have become a key component of most modern PCs. Their wide availability and programmable nature offer exceptional opportunities for accelerating many common image-processing and machine-vision applications, yet only a few image-processing professionals have implemented such algorithms. A brief explanation of GPU design is followed by application notes and a survey of fundamental algorithms for image processing on GPUs, providing guidance for best practices and future research in this field.
Partial shape similarity of contours is needed for object recognition
We will provide psychophysical evidence that recognition of parts of object contours is a necessary component of object recognition. It seems to be obvious that the recognition of parts of object contours is performed by applying a partial shape similarity measure to the query contour part and to the known contour parts. The recognition is completed once a sufficiently similar contour part is found in the database of known contour parts. We will derive necessary requirements for any partial shape similarity measure based on this scenario. We will show that existing shape similarity measures do not satisfy these requirements, and propose a new partial shape similarity measure.
Poster Session and Demonstrations
A block-iterative deterministic annealing algorithm for Bayesian tomographic reconstruction
We introduce a block-iterative method to accelerate edge-preserving Bayesian reconstruction algorithms for emission tomography. Most common Bayesian approaches to tomographic reconstruction involve assumptions on the local spatial characteristics of the underlying source. To explicitly model the existence of anatomical boundaries, the line-process model has often been used. The unobservable binary line processes in this case act to suspend smoothness constraints at sites where they are turned on. Deterministic annealing (DA) algorithms are known to provide an efficient means of handling the problems associated with mixed continuous and binary variable objectives. However, they are still computationally intensive and require many iterations to converge. In this work, to further improve the DA algorithm by accelerating its convergence, we use a block-iterative (BI) method, which is derived from the ordered subset algorithm. The BI-DA algorithm processes the data in blocks within each iteration, thereby accelerating the convergence of a standard DA algorithm by a factor proportional to the number of blocks. The net conclusion is that, with moderate numbers of blocks and properly chosen hyperparameters, the BI-DA algorithm provides good reconstructions as well as a significant acceleration.
Deinterlacing in spatial and temporal domain
A number of deinterlacing algorithms have been proposed; they can be divided into two categories: spatial interpolation methods and temporal interpolation methods. Each technique has its own advantages and limitations. Among the various deinterlacing techniques, temporal methods using motion compensation provide improved performance. However, their performance suffers if motion estimation is inaccurate, and they tend to yield undesirable results when rapid motion exists. Thus, a number of spatial interpolation methods have been used along with temporal deinterlacing using motion compensation. In this paper, we investigate the performance of several spatial interpolation methods when they are used with temporal deinterlacing methods.
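The simplest of the spatial methods, vertical line averaging, is sketched below (a baseline illustration, not one of the specific methods compared in the paper): the known field lines are kept and each missing line is the mean of its vertical neighbours.
```python
import numpy as np

def line_average_deinterlace(field):
    """Reconstruct a full frame from one interlaced field by line averaging.

    The even rows are taken as the known field lines; each odd row is the
    mean of the known lines above and below it (clamped at the bottom edge).
    """
    h, w = field.shape
    frame = np.zeros((2 * h, w))
    frame[0::2] = field                          # known field lines
    below = np.vstack([field[1:], field[-1:]])   # next known line, clamped
    frame[1::2] = 0.5 * (field + below)          # interpolated lines
    return frame

field = np.random.rand(120, 160)                 # one field of a 240x160 frame
print(line_average_deinterlace(field).shape)
```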
Cosine transform generalized to Lie groups SU(2)xSU(2), O(5), and SU(2)xSU(2)xSU(2): application to digital image processing
We propose to apply three of the multiple variants of the 2- and 3-dimensional cosine transform. We consider the Lie groups leading to square lattices, namely SU(2)xSU(2) and O(5) in the 2-dimensional space, and the cubic lattice SU(2)xSU(2)xSU(2) in the 3-dimensional space. We aim at evaluating the benefits of some Discrete Group Transform (DGT) techniques, in particular the Continuous Extension of the Discrete Cosine Transform (CEDCT), and at developing new techniques that refine image quality: this refinement is called the high-resolution process. This higher quality is useful to increase the effectiveness of standard feature extraction, fusion, and classification algorithms. All algorithms based on the 2- and 3-dimensional DGT have the advantage of giving the exact value of the original data at the points of the grid lattice and of interpolating the data values well between the grid points. The quality of the interpolation is comparable with that of the most efficient data interpolation methods currently used for image zooming. In our first application, we use DGT techniques to refine fully polarimetric radar images and to increase the effectiveness of standard feature extraction algorithms. In our second application, we apply DGT techniques to medical images extracted from a system and a Magnetic Resonance Imaging (MRI) system.
A prioritized and adaptive approach to volumetric seeded region growing using texture descriptors
Nathan J. Backman, Brian W. Whitney, Jacob D. Furst, et al.
The performance of segmentation algorithms often depends on numerous parameters such as initial seed and contour placement, threshold selection, and other region-dependent a priori knowledge. While necessary for successful segmentation, appropriate setting of these parameters can be difficult to achieve and requires a user experienced with the algorithm and knowledge of the application field. In order to overcome these difficulties, we propose a prioritized and adaptive volumetric region growing algorithm which will automatically segment a region of interest while simultaneously developing a stopping criterion. This algorithm utilizes volumetric texture extraction to establish the homogeneity criterion by which the analysis of the aggregating voxel similarities will, over time, define region boundaries. Using our proposed approach on a volume, derived from Computed Tomography (CT) images of the abdomen, we segmented three organs of interest (liver, kidney and spleen). We find that this algorithm is capable of providing excellent volumetric segmentations while also demanding significantly less user intervention than other techniques as it requires only one interaction from the user, namely the selection of a single seed voxel.
A fast MAP-based super-resolution algorithm for general motion
Masayuki Tanaka, Masatoshi Okutomi
We propose a fast MAP-based super-resolution algorithm for reconstructing a high-resolution image (HRI) by combining multiple low-resolution images (LRIs). The proposed algorithm optimizes a cost function with respect to the HRI in the frequency domain, whereas existing MAP algorithms optimize with respect to the HRI in the spatial domain. A comparison of the computational cost verifies that the proposed algorithm is much cheaper than the classical algorithm. Experiments using real images are also presented. They show that the proposed algorithm greatly accelerates the super-resolution process while reconstructing an HRI identical to that of the classical algorithm.
Image deblurring by the combined use of a super-resolution technique and inverse filtering
Yasuyuki Yamada, Koji Nakamae, Hiromu Fujioka
We deblur images by the combined use of a super-resolution technique (extrapolation by error energy reduction) and inverse filtering. The procedure is as follows. First, the blurring function is estimated from an observed image. Then the Fourier transforms of the observed image and the estimated blurring function are low-pass filtered by truncating them to zero outside a specified interval. The bandlimited image is divided by the bandlimited blurring function to obtain the bandlimited estimate of the original image in the frequency domain (inverse filtering). By limiting the analysis to frequencies near the origin, we reduce the probability of encountering zero values in the inverse filtering. Lastly, by applying the error energy reduction extrapolation method to the bandlimited estimate of the original image, we can estimate the original, deblurred image. We applied the proposed method to a model image with noise and to a scanning electron microscope image. The quality of our results is superior to that of images restored with the Wiener filtering technique.
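The bandlimited inverse-filtering step can be sketched as below; the band radius and the toy blur are illustrative, the PSF is assumed known rather than estimated, and the error-energy-reduction extrapolation that recovers the missing high frequencies is not reproduced.
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bandlimited_inverse_filter(blurred, psf, radius=16):
    """Inverse filtering restricted to a low-frequency band around the origin,
    where the blurring transfer function is unlikely to vanish."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    fy = np.fft.fftfreq(blurred.shape[0]) * blurred.shape[0]
    fx = np.fft.fftfreq(blurred.shape[1]) * blurred.shape[1]
    band = (np.abs(fy)[:, None] <= radius) & (np.abs(fx)[None, :] <= radius)
    F = np.zeros_like(G)
    F[band] = G[band] / np.where(np.abs(H[band]) < 1e-8, 1e-8, H[band])
    return np.real(np.fft.ifft2(F))

# Toy example: circularly blurred image (the PSF is given here, not estimated)
img = np.random.rand(64, 64)
psf = np.zeros_like(img)
psf[0, 0] = 1.0
psf = gaussian_filter(psf, sigma=2.0, mode='wrap')
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = bandlimited_inverse_filter(blurred, psf)
print(np.abs(restored - img).mean())
```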
Interactive volume visualization of cellular structures
Qiqi Wang, Yinlong Sun, Bartek Rajwa, et al.
Modern optical imaging techniques such as confocal and multi-photon microscopy can acquire volumetric datasets of cellular structures. In this paper we propose an approach for interactive volume rendering of such cellular datasets. In the first stage, we create a set of 2D textures corresponding to the image stacks in the original dataset. These textures are generated through a transfer function that maps voxel intensities to colors and opacities, and they are stored in texture memory. In the second stage, by blending the textures with hardware support, we achieve interactive volume rendering, including rotation and zooming, on regular PCs. In addition, to generate good images for viewing in lateral directions, we use two additional sets of 2D textures for the two orthogonal lateral directions, and the texture resolutions can be adapted to the rendering requirements and the computer hardware. This approach offers an effective visualization environment for biologists to better understand and analyze measured cellular structures.