- Front Matter: Volume 8500
- Compressive Sensing and Sampling
- Inverse Problems
- Information from Projections and Microscopy
- Phase Retrieval and Deconvolution
- Coherent Diffraction Imaging
- Interferometry
- Compressive Sensing and Imaging
- Remote Sensing
Front Matter: Volume 8500
This PDF file contains the front matter associated with SPIE Proceedings Volume 8500, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
Compressive Sensing and Sampling
Optical motion tracking to improve image quality in MRI of the brain
Julian Maclaren,
Murat Aksoy,
Melvyn Ooi,
et al.
Magnetic resonance imaging (MRI) of the brain is highly sensitive to head motion. Prospective motion correction is a
promising new method to prevent artifacts resulting from this effect. The image volume is continuously updated based
on head tracking information, ensuring that the magnetic fields used for imaging maintain a constant geometric
relationship relative to the object. This paper reviews current developments and methods of performing prospective
correction. Optical tracking using cameras has major advantages over other methods used to obtain head pose
information, as it does not affect the MR imaging process or interfere with the sequence timing. Results show that
motion artifacts can be almost completely prevented for most imaging sequences. Despite this success, there are still
engineering challenges to be solved before the technique becomes widely accepted in the clinic. These include
improvements in miniaturization, marker fixation and MR compatibility.
MR images from fewer data
There is a strong motivation to reduce the amount of acquired data necessary to reconstruct clinically useful MR images,
since less data means faster acquisition sequences, less time for the patient to remain motionless in the scanner and better
time resolution for observing temporal changes within the body. We recently introduced an improvement in image quality
for reconstructing parallel MR images by incorporating a data ordering step with compressed sensing (CS) in an algorithm
named 'PECS'. That method requires a prior estimate of the image to be available. We are extending the algorithm to explore ways of utilizing the data ordering step without requiring a prior estimate. The method presented here first reconstructs an initial image x1 by compressed sensing (with sparsity enhanced by SVD), then derives a data ordering R'1 from x1, which ranks the voxels of x1 according to their value. A second reconstruction is then performed which incorporates minimization of the l1 norm of the estimate after ordering by R'1, resulting in a new reconstruction x2.
Preliminary results are encouraging.
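The two-stage idea can be illustrated with a minimal sketch (this is not the authors' PECS implementation; the solver, sizes and parameters are illustrative assumptions): an l1-regularised reconstruction via iterative soft thresholding (ISTA) produces a first estimate x1, and sorting by x1 yields a permutation under which the estimate becomes monotone, so its finite differences are sparse — the property the second reconstruction exploits.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 64, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)       # random sensing matrix
y = A @ x_true

x1 = ista(A, y)            # first-pass CS estimate
order = np.argsort(x1)     # the data ordering derived from x1
# after ordering, the estimate is monotone, so its finite differences are sparse
```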
Phase-space analysis of sparse signals and compressive sensing
Compressive sampling schemes for sparse signals are investigated in the framework of phase-space optics. Phase-space representations are used to identify signal sparsity and construct compressive sensing schemes. Both linear and nonlinear compressive sampling methods are interpreted as applications of Lukosz superresolution. For two iterative methods, the l1-magic algorithm and the CLEAN algorithm, numerical experiments are performed to determine the practical limits of sparse signal recovery. In addition, the phase-space interpretation is used to construct a phase retrieval algorithm for signals with a sparse phase space.
Compressive sampling methods for superresolution imaging
We investigate superresolution imaging using negative index metamaterials. Measurement of subwavelength scale
features in the image domain is tedious, and compressive sampling techniques are considered to alleviate this problem. A single detector (cf. a single-pixel camera geometry) is considered, from which a high-resolution image can be computed using structured illumination for coding.
Inverse Problems
A Gibbs sampler for conductivity imaging and other inverse problems
Colin Fox
Gibbs samplers have many desirable theoretical properties, but also have the pesky requirement that conditional distributions be available. We show how conditional densities can be evaluated for the posterior distribution in conductivity imaging - virtually for free in coordinate directions and very cheaply in other ‘special’ directions. The analysis actually applies to a broad class of non-invasive imaging techniques that utilize strong scattering of energy, and leads to efficient iterative algorithms whether implementing inference or optimization. The resulting Gibbs sampler draws an independent conductivity image in only a little more compute time than required for optimization.
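As a toy illustration of the coordinate-direction conditional updates the abstract refers to (not the conductivity-imaging sampler itself), here is a two-variable Gibbs sampler where both conditionals are available in closed form; the target and its correlation are illustrative:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples=20000, seed=1):
    """Gibbs sampler for a zero-mean, unit-variance bivariate normal with
    correlation rho; each coordinate is drawn from its exact conditional."""
    rng = np.random.default_rng(seed)
    s = np.sqrt(1.0 - rho**2)        # conditional standard deviation
    x = y = 0.0
    out = np.empty((n_samples, 2))
    for i in range(n_samples):
        x = rng.normal(rho * y, s)   # x | y  ~  N(rho*y, 1 - rho^2)
        y = rng.normal(rho * x, s)   # y | x  ~  N(rho*x, 1 - rho^2)
        out[i] = (x, y)
    return out

samples = gibbs_bivariate_normal(0.8)
```

The empirical correlation of the chain approaches the target correlation, which is the sense in which cheap conditional draws yield samples from the full posterior.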
Imaging from scattered fields: limited data and degrees of freedom
We describe how the number of degrees of freedom associated with a scattering experiment provides a guide to the
minimum number of source and receiver locations required to image the scattering target. Since the number of degrees
of freedom is approximately fixed, additional measurements do not necessarily improve the image fidelity in the absence
of any prior knowledge. We illustrate these observations using a fast nonlinear inverse scattering method.
Joint time-frequency analysis of EEG signals based on a phase-space interpretation of the recording process
Time-frequency transforms are used to identify events in clinical EEG data. Data are recorded as part of a
study for correlating the performance of human subjects during a memory task with pathological events in the
EEG, called spikes. The spectrogram and the scalogram are reviewed as tools for evaluating spike activity. A
statistical evaluation of the continuous wavelet transform across trials is used to quantify phase-locking events.
For simultaneously improving the time and frequency resolution, and for representing the EEG of several channels
or trials in a single time-frequency plane, a multichannel matching pursuit algorithm is used. Fundamental
properties of the algorithm are discussed as well as preliminary results, which were obtained with clinical EEG
data.
Superresolved image reconstruction from incomplete data
A finite thickness slab of a metamaterial having a refractive index close to n = -1 can be used for sub-wavelength scale
imaging. In the image domain, the measured fields contain evanescent wave contributions from subwavelength scale
features in the object but these have to be related to the intrinsic parameters describing the scatterer such as refractive
index or permittivity. For weak scatterers there can be a simple relationship between the field distribution and the
permittivity profile. However, for strong (multiple) scatterers and, more importantly, for objects for which
subwavelength features contribute to the scattered (near) field, there is no simple relationship between the measured data
and the permittivity profile. This is a significant inverse scattering problem for which no immediate solution exists and
given the metamaterial slab’s limitations one cannot assume that either angle or wavelength diversity will be available to
apply an inverse scattering algorithm. We consider wavelength diversity in this paper to acquire the measured data
necessary to estimate a superresolved solution to the inverse scattering problem.
Information from Projections and Microscopy
Multi-axial CT reconstruction from few view projections
This paper focuses on tomographic reconstruction from a smaller number of projections than usual. Whereas traditional CT scanners are based on sequential X-ray sources, the methodology proposed in this work is based on simultaneous X-ray sources for each projection. Simulations have shown that only four projections, captured simultaneously, are needed to reconstruct a slice, offering a drastic reduction of image capture time. Algebraic Reconstruction
Technique (ART) has been used for reconstruction. Although ART has many advantages over the established
methods, it remained unpopular due to its high computational cost, and most importantly due to the artefacts caused by
the patient's movement during image capture. The simultaneity of the projections helps to overcome this serious
shortcoming of ART.
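ART can be sketched as a cyclic Kaczmarz sweep over the ray equations; the toy system below (a 2x2 "image" probed by its row and column sums) is an illustrative assumption, not geometry from the paper:

```python
import numpy as np

def art(A, b, n_sweeps=500, relax=1.0):
    """ART (cyclic Kaczmarz): project the current estimate onto each ray
    equation a_i . x = b_i in turn; converges for consistent systems."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)   # squared norm of each row
    for _ in range(n_sweeps):
        for i in range(len(b)):
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# toy 2x2 "image" [[1,2],[3,4]], flattened, probed by row sums and column sums
img = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([[1., 1., 0., 0.],   # row sums
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],   # column sums
              [0., 1., 0., 1.]])
b = A @ img
x = art(A, b)
```

Starting from zero, the iterates stay in the row space of A, so ART converges to the minimum-norm consistent solution, which here coincides with the original image.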
Using inverse fringe projection to speed up the detection of local and global geometry defects on free-form surfaces
Inverse fringe projection can be seen as an improvement to the classical fringe projection method that significantly speeds up the measurement of geometry defects of optically cooperative workpieces, requiring no hardware changes to the
classical setup. The CAD model of an ideal specimen is used in a virtual fringe projection system to generate a single
sophisticated inverse fringe projection pattern which is then projected onto the surface of the real workpiece.
Subsequently, 3D-geometry defects can be extracted directly and very quickly from a single image captured by the real
camera using elaborate 2D-algorithms. This allows for verification of allowed geometry tolerances with a significantly
reduced latency time.
Study of optical microscanning reconstruction
Xunjie Zhao,
Chengjin Li
Infrared images provide valuable information for many applications. However, compared to a visible image, the image quality is poor and the spatial resolution is limited because focal plane arrays cannot be made dense enough to yield a sufficiently high spatial sampling frequency, which consequently leads to image blurring. Optical micro-scanning
technique has been proven to be an effective method to increase the resolution of images. This technique is able to
produce high resolution (HR) images from a set of optically shifted images of low-resolution (LR). Over the last decade,
optical micro-scanning has become an active topic of research, and among its aspects, super-resolution (SR) reconstruction algorithms are the focus. This paper starts with the basic principle of SR reconstruction. Several high-precision motion registration methods and SR reconstruction algorithms are then introduced. This
study particularly focuses on the more recent development in motion estimation methods. Furthermore, an algorithm
based on sub-pixel image registration that estimates the displacements of the LR image is presented. The critical steps in
image registration are collecting feature points and estimating a spatial transformation especially when outliers are
present. In this paper, the Harris corner detector is used to find the feature points and then the point feature is described
by the neighborhood difference in order to reduce the sensitivity to illumination variations. Moreover, the Random
Sample Consensus (RANSAC) algorithm is employed to build a transformation model. Simulation results demonstrate
that the method can estimate the displacements accurately.
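The RANSAC step can be sketched for the simplest motion model, a pure translation between matched feature points; the Harris detection and descriptor stages are omitted, and all point sets and thresholds below are illustrative assumptions:

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=0.5, seed=0):
    """RANSAC for a pure-translation model mapping src -> dst: a single
    correspondence proposes the model, all others vote, and the best
    consensus set is refit by averaging."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                                # minimal-sample model
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)  # refit on consensus
    return t, best_inliers

rng = np.random.default_rng(3)
src = rng.uniform(0, 100, (40, 2))
true_t = np.array([5.0, -2.5])
dst = src + true_t + rng.normal(0, 0.1, src.shape)  # inliers with small noise
dst[:10] = rng.uniform(0, 100, (10, 2))             # 25% gross outliers
t, inliers = ransac_translation(src, dst)
```

Despite 25% gross outliers, the consensus vote isolates the correct correspondences and the refit recovers the translation to sub-pixel accuracy.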
Phase Retrieval and Deconvolution
Characteristics of iterative projection algorithms
A brief description of various iterative projection algorithms and the relationships between them is given, along
with some possible reasons for their ability to solve non-convex problems. An empirical model of their behaviour
when applied to non-convex problems is also described.
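A minimal instance of an iterative projection algorithm — alternating projections between a line and the (non-convex) unit circle — illustrates the fixed-point mechanism; this is a generic sketch, not any specific algorithm from the paper:

```python
import numpy as np

def alternating_projections(x, n_iter=500):
    """Alternate projections onto the line x + y = 1 and the (non-convex)
    unit circle; a fixed point lies in their intersection."""
    for _ in range(n_iter):
        x = x - (x.sum() - 1.0) / 2.0    # project onto the line x + y = 1
        x = x / np.linalg.norm(x)        # project onto the unit circle
    return x

x = alternating_projections(np.array([2.0, 0.5]))
# converges to a point satisfying both constraints, here near (1, 0)
```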
Support estimation for phase retrieval image reconstruction from sparse-aperture interferometry data
Imaging interferometry suffers from sparse Fourier measurements and, at visible wavelengths, a lack of phase information, creating a need for an image reconstruction algorithm. A support constraint is useful for optimization but is often not known a priori. The two-point rule for finding an object support from the autocorrelation is limited in usefulness by the sparsity and non-uniformity of the Fourier data and is insufficient for image reconstruction. Compactness, a common prior, does not require knowledge of the support. Compactness penalizes solutions that have bright pixels away from the center, favoring soft-edged objects with a bright center and darker extremities. With regard to imaging hard-edged objects such as satellites, a support constraint is desired but unknown, and compactness may be unfavorable. Combining various techniques, a method of simultaneously estimating the object’s support and the object’s intensity distribution is presented. Though all the optimization parameters are in the image domain, we are effectively performing phase retrieval at the measurement locations and interpolation between the sparse data points.
Single-image, spatially variant, out-of-focus blur removal
Stanley H. Chan,
Truong Q. Nguyen
This paper addresses the problem of two-layer out-of-focus blur removal from a single image, in which either the
foreground or the background is in focus while the other is out of focus. To recover details from the blurry parts,
the existing blind deconvolution algorithms are insufficient as the problem is spatially variant. The proposed
method exploits the invariant structure of the problem by first predicting the occluded background. Then a
blind deconvolution algorithm is applied to estimate the blur kernel and a coarse estimate of the image is found
as a side product. Finally, the blurred region is recovered using total variation minimization, and fused with the
sharp region to produce the final deblurred image.
Adaptive binary material classification of an unknown object using polarimetric images degraded by atmospheric turbulence
An improved binary material-classification algorithm using passive polarimetric imagery degraded by atmospheric turbulence is presented. The technique implements a modified version of an existing polarimetric blind-deconvolution algorithm in order to remove atmospheric distortion and correctly classify the unknown object. The classification decision, dielectric or metal in this case, is based on degree of linear polarization (DoLP) estimates provided by the blind-deconvolution algorithm augmented by two DoLP priors – one statistically modeling the polarization behavior of metals and the other statistically modeling the polarization behavior of dielectrics. The DoLP estimate which maximizes the log-likelihood function determines the image pixel's classification. The method presented here significantly improves upon a similar published polarimetric classification method by adaptively updating the DoLP priors as more information becomes available about the scene. This new adaptive method significantly extends the range of validity of the existing polarimetric classification technique to near-normal collection geometries where most polarimetric material classifiers perform poorly. In this paper, brief reviews of the polarimetric blind-deconvolution algorithm and the functional forms of the DoLP priors are provided. Also provided is the methodology for making the algorithm adaptive including three techniques for updating the DoLP priors using in-progress DoLP estimates. Lastly, the proposed technique is experimentally validated by comparing classification results of two dielectric and metallic samples obtained using the new method to those obtained using the existing technique.
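The DoLP quantity that the classifier thresholds follows directly from the first three Stokes parameters; a minimal helper (illustrative, not the paper's estimator):

```python
import numpy as np

def dolp(s0, s1, s2):
    """Degree of linear polarization from the first three Stokes parameters:
    DoLP = sqrt(S1^2 + S2^2) / S0, ranging from 0 (unpolarized) to 1."""
    return np.sqrt(s1**2 + s2**2) / s0

p = dolp(1.0, 0.6, 0.0)   # 60% linearly polarized along the horizontal axis
```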
Coherent Diffraction Imaging
Novel algorithms in coherent diffraction imaging using x-ray free-electron lasers
Chunhong Yoon
The emergence of X-ray Free-Electron Lasers has enabled coherent diffraction imaging of single nanoparticles by
outrunning radiation damage with an intense ultrafast X-ray pulse. A 3D reconstruction from an ensemble of 2D
diffraction patterns requires recovery of the orientations of individual diffraction patterns. This assumes that each
diffraction pattern comes from a slice of a common diffraction volume. In the presence of particle heterogeneity, this
assumption does not hold and the recovered structure is severely degraded. In this paper, I review a couple of emerging
algorithms useful for dealing with conformational changes in a heterogeneous sample. A simulated case study of a
“particle in motion” is included to demonstrate the algorithms and also show that these novel algorithms work in the
presence of missing Fourier regions caused by new detector geometries at XFEL facilities.
Phase retrieval in nanocrystallography
Protein X-ray crystallography is a method for determining the three-dimensional structures of large biological molecules by analysing the amplitudes of X-rays scattered from a crystalline specimen of the molecule under study. Conventional structure determination in protein crystallography requires chemical modification to the sample and collection of additional data in order to solve the corresponding phase problem. There is an urgent need for a direct (digital) low-resolution phasing method that does not require modified specimens. Whereas diffraction from large crystals corresponds to samples (so-called Bragg samples) of the amplitude of the Fourier transform of the scattering density, the diffraction from very small crystals allows measurement of the diffraction amplitude between the Bragg samples. Although highly attenuated, these additional measurements offer the possibility of iterative phase retrieval without the use of ancillary experimental data. In this study we examine the noise characteristics of small-crystal diffraction and propose a data selection strategy to improve the quality of reconstructions using iterative phase retrieval algorithms. Simulation results verify that a higher noise level can be tolerated by using such a data selection strategy.
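Iterative phase retrieval of the kind discussed here can be sketched with Fienup's error-reduction iteration on a 1-D toy signal; the nanocrystallography specifics (Bragg sampling, data selection, noise modelling) are omitted, and all sizes and parameters are illustrative assumptions:

```python
import numpy as np

def error_reduction(mag, support, n_iter=200, seed=0):
    """Fienup error-reduction: alternately impose the measured Fourier
    magnitudes and the object-domain (real, supported) constraint; returns
    the final iterate and the Fourier-magnitude error history."""
    rng = np.random.default_rng(seed)
    x = rng.random(len(mag)) * support
    errs = []
    for _ in range(n_iter):
        X = np.fft.fft(x)
        errs.append(np.linalg.norm(np.abs(X) - mag))
        X = mag * np.exp(1j * np.angle(X))     # Fourier-magnitude projection
        x = np.fft.ifft(X).real * support      # support (and realness) projection
    return x, np.array(errs)

rng = np.random.default_rng(1)
true = np.zeros(64)
true[:8] = rng.random(8) + 0.5                 # compact, positive test object
support = np.arange(64) < 8
mag = np.abs(np.fft.fft(true))
x, errs = error_reduction(mag, support)
```

The error-reduction error is provably non-increasing, which is the property that makes data selection (discarding the noisiest measurements) directly improve the achievable fixed point.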
Effects of extraneous noise in cryptotomography
N. Duane Loh
X-ray pulses produced by free-electron lasers can be focused to produce high-resolution diffraction signals from single nanoparticles before the onset of considerable radiation damage [1-3]. These two-dimensional (2D) diffraction patterns are inherently noisy and have no direct means of signal-averaging because the particles themselves are currently injected at random, unknown three-dimensional (3D) orientations into the particle-radiation interaction region. Simulations have successfully recovered 3D reconstructions from such remarkably noisy and fully unoriented 2D diffraction data [4]. However, actual experimental data [5] show that extraneous noise (either from background scattering or detector noise) can limit the resolution of the reconstruction or even jeopardize reconstruction attempts. This paper studies the second and more severe of these two effects through a simplified version of this reconstruction problem. A straightforward consideration of conditional probabilities [4,6] can help define when the extraneous noise overwhelms reconstruction attempts. Nevertheless, an ensemble of data with considerable numbers of bright fluctuations may still reconstruct successfully. Incidentally, we also extend a specialized reconstruction algorithm [4,6] to recover distinct species within an ensemble of illuminated samples. We expect our simplified simulations to provide insights that would have taken considerably longer to develop when restricted to the full 3D reconstruction problem.
Interferometry
Wide field imaging for the Square Kilometre Array
Wide-field radio interferometric telescopes such as the Square Kilometre Array now being designed are subject to a number of aberrations. One particularly pernicious aberration is that due to non-coplanar baselines, whereby long baselines incur a quadratic image-plane phase error. There are numerous algorithms for dealing with the non-coplanar baselines effect. As a result of our experience with developing processing software for the Australian Square Kilometre Array Pathfinder, we advocate the use of a hybrid algorithm, called w snapshots, based on a combination of w projection and snapshot imaging. This hybrid overcomes some of the deficiencies of each and has advantages from both. Compared to pure w projection, w snapshots uses less memory and execution time, and compared to pure snapshot imaging, w snapshots uses less memory and is more accurate. At the asymptotes, w snapshots devolves to w projection and to snapshots.
High-dynamic range interferometric astronomical imaging in the presence of direction dependent effects
Modern high-sensitivity radio interferometric telescopes use ultra-wideband receivers on a large number of antenna elements to achieve the capability of imaging dynamic ranges in excess of 1:1,000,000. In practice, the imaging performance is limited by instrumental and ionospheric/atmospheric effects that corrupt the recorded data. Many of these effects are directionally dependent and vary with time and frequency. Correcting for them is therefore fundamentally more difficult, and these effects have been ignored in classical image reconstruction algorithms. The few attempts made in the past to correct for these effects in the image domain did not deliver the required accuracy. Recent developments in new algorithms that can account for such direction-dependent effects show promising results. In this paper I give a general mathematical description of these techniques, show that the resulting algorithms are closer to optimal in terms of imaging performance and computing requirements, and present some results.
Radio interferometric imaging of spatial structure that varies with time and frequency
Urvashi Rau
The spatial-frequency coverage of a radio interferometer is increased by combining samples acquired at different times and observing frequencies. However, astrophysical sources often contain complicated spatial structure that varies within the time range of an observation, or the bandwidth of the receiver being used, or both. Image reconstruction algorithms can be designed to model time and frequency variability in addition to the average intensity distribution, providing an improvement over traditional methods that ignore all variability. This paper describes an algorithm designed for such structures, and evaluates it in the context of reconstructing three-dimensional time-varying structures in the solar corona from radio interferometric measurements between 5 GHz and 15 GHz using existing telescopes such as the EVLA, at angular resolutions better than that allowed by traditional multi-frequency analysis algorithms.
Maximum subarray algorithms for use in optical and radio astronomy
The maximum subarray algorithm has been implemented within a field programmable gate array as an efficient centroiding method for wavefront slope estimation. However, a convenient platform for this work is a graphics processing unit (GPU). Translation of the maximum subarray algorithm to a GPU has been performed and shows significant performance gains compared to a single-core CPU. Recently, this algorithm has been applied to radio telescope images acquired for the Australian Square Kilometre Array Pathfinder project. This paper provides an overview of the maximum subarray algorithm and shows how it can be utilized for optical and radio telescope applications.
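The 2-D maximum subarray problem reduces to repeated 1-D Kadane scans over row strips; a minimal CPU reference sketch (the image values are illustrative, not telescope data):

```python
import numpy as np

def kadane(a):
    """1-D maximum subarray sum (Kadane's algorithm), O(n)."""
    best = cur = a[0]
    for v in a[1:]:
        cur = max(v, cur + v)
        best = max(best, cur)
    return best

def max_subarray_2d(img):
    """Maximum-sum rectangle: for each pair of rows, collapse the strip
    between them into per-column sums and run Kadane's scan on the strip,
    giving O(rows^2 * cols) overall."""
    best = float('-inf')
    for top in range(img.shape[0]):
        strip = np.zeros(img.shape[1])
        for bottom in range(top, img.shape[0]):
            strip += img[bottom]
            best = max(best, kadane(strip))
    return best

img = np.array([[ 1., -2.,  3.],
                [-1.,  4., -1.],
                [ 2., -3.,  1.]])
```

The independent row-pair strips are what make the algorithm parallelize naturally on a GPU.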
Compressive Sensing and Imaging
Tomographic compressive holographic reconstruction of 3D objects
Compressive holography with multiple projection tomography is applied to solve the inverse ill-posed problem
of reconstruction of 3D objects with high axial accuracy. To visualize the 3D shape, we propose Digital
Tomographic Compressive Holography (DiTCH), where projections from more than one direction as in
tomographic imaging systems can be employed, so that a 3D shape with better axial resolution can be
reconstructed. We compare DiTCH with single-beam holographic tomography (SHOT) which is based on
Fresnel back-propagation. A brief theory of DiTCH is presented, and experimental results of 3D shape
reconstruction of objects using DiTCH and SHOT are compared.
Remote Sensing
Determining wind fields in atmospheric mountain waves using sailplane flight data
The problem of estimating wind velocities from limited flight data recordings is considered, with application to
sailplane flights in high-altitude atmospheric mountain waves. Sailplane flight recorders routinely measure only
GPS position and the problem is highly underdetermined. The nature of this problem is studied and a maximum
a posteriori estimator is developed using prior information on the wind velocity and the sailplane airspeed and
heading. The method is tested by simulation and by application to sailplane flight data.
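For a linear-Gaussian model the MAP estimator has a closed form, and a toy version shows how prior information resolves the underdetermination; the single groundspeed measurement, the two unknowns (airspeed deviation and along-track wind) and all numbers below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def map_estimate(A, y, sigma_noise, prior_cov):
    """MAP estimate for y = A x + N(0, sigma^2 I) with prior x ~ N(0, prior_cov):
    x_map = (A^T A / sigma^2 + prior_cov^{-1})^{-1} A^T y / sigma^2."""
    H = A.T @ A / sigma_noise**2 + np.linalg.inv(prior_cov)
    return np.linalg.solve(H, A.T @ y / sigma_noise**2)

# one groundspeed measurement, two unknowns: groundspeed = airspeed dev. + wind
A = np.array([[1.0, 1.0]])
y = np.array([30.0])
prior_cov = np.diag([1.0, 100.0])   # airspeed known tightly, wind only loosely
x = map_estimate(A, y, sigma_noise=0.5, prior_cov=prior_cov)
```

The single measurement cannot separate the two unknowns, but the tight airspeed prior forces the estimator to attribute most of the observed groundspeed to wind.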
Kepler mission exoplanet transit data analysis using fractal imaging
The Kepler mission is designed to survey a fist-sized patch of the sky within the Milky Way galaxy for the
discovery of exoplanets, with emphasis on near Earth-size exoplanets in or near the habitable zone. The Kepler
space telescope detects the brightness fluctuations of a host star and extracts periodic dimming in the lightcurve caused by exoplanets that cross in front of their host star. The photometric data of a host star could be
interpreted as an image where fractal imaging would be applicable. Fractal analysis could elucidate the
incomplete data limitation posed by the data integration window. The fractal dimension difference between the
lower and upper halves of the image could be used to identify anomalies associated with transits and stellar
activity as the buried signals are expected to be in the lower half of such an image. Using an image fractal
dimension resolution of 0.04 and defining the whole image fractal dimension as the Chi-square expected value of
the fractal dimension, a p-value can be computed and used to establish a numerical threshold for decision
making that may be useful in further studies of lightcurves of stars with candidate exoplanets. Similar fractal
dimension difference approaches would be applicable to the study of photometric time series data via the
Higuchi method. The correlated randomness of the brightness data series could be used to support inferences
based on image fractal dimension differences. Fractal compression techniques could be used to transform a
lightcurve image, resulting in a new image with a new fractal dimension value, but this method has been found
to be ineffective for images with high information capacity. The three studied criteria could be used together to
further constrain the Kepler list of candidate lightcurves of stars with possible exoplanets that may be planned
for ground-based telescope confirmation.
A high-resolution lightfield camera with dual-mask design
In this paper, we present a new design for lightfield acquisition. In comparison with the conventional lightfield acquisition
techniques, the key characteristic of our system is its ability to achieve a higher resolution lightfield given a fixed sensor. In
particular, the system architecture employs two attenuation masks respectively positioned at the aperture stop and the optical
path of the camera, so that the four-dimensional (4D) lightfield spectrum is encoded and sampled by a two-dimensional
(2D) camera sensor in a single snapshot. In post-processing, by exploiting the coherence embedded in a lightfield, we are
able to retrieve the desired 4D lightfield of a higher resolution using inverse imaging. We demonstrate the performance of
our proposed method with simulations based on an actual lightfield dataset.