- Front Matter: Volume 8296
- Special Session on Microscopy and Information Modeling
- Reconstruction
- Classification and Detection
- Enhancement, Denoising, and Restoration I
- Enhancement, Denoising, and Restoration II
- Computer Vision and 3D Modeling
- Interactive Paper Session
Front Matter: Volume 8296
This PDF file contains the front matter associated with SPIE Proceedings Volume 8296, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Special Session on Microscopy and Information Modeling
Image sequence segmentation combining global labeling and local relabeling and its application to materials science images
Jarrell W. Waggoner,
Jeff Simmons,
Song Wang
Accurately segmenting a series of 2D serial-sectioned images for multiple, contiguous 3D structures has important
applications in medical image processing, video sequence analysis, and materials science image segmentation.
While 2D structure topology is largely consistent across consecutive serial sections, it may vary locally because
a 3D structure of interest may not span the entire 2D sequence. In this paper, we develop a new approach to
address this challenging problem by considering both the global consistency and possible local inconsistency of the
2D structural topology. In this approach, we repeatedly propagate a 2D segmentation from one slice to another,
and we formulate each step of this propagation as an optimal labeling problem that can be efficiently solved
using the graph-cut algorithm. Specifically, we divide the optimal labeling into two steps: a global labeling that
enforces topology consistency, and a local labeling that identifies possible topology inconsistency. We justify the
effectiveness of the proposed approach by using it to segment a sequence of serial-section microscopic images of an alloy widely used in materials science, and we compare its performance against several existing image segmentation methods.
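The propagation step can be sketched as an energy minimization. The paper solves each labeling exactly with graph cuts; here a simple iterated-conditional-modes (ICM) relaxation stands in for it, with illustrative data, propagation, and smoothness costs (all weights below are hypothetical, not the paper's):

```python
import numpy as np

def propagate_labels(prev_labels, intensity, means, lam=1.0, beta=0.5, n_iter=5):
    """Label the next slice by minimizing: data cost (squared distance to the
    per-label mean intensity) + lam * disagreement with the previous slice's
    labels (global topology consistency) + beta * disagreement with the
    4-neighborhood (local smoothness)."""
    labels = prev_labels.copy()
    h, w = labels.shape
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                nbrs = [labels[a, b] for a, b in
                        ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < h and 0 <= b < w]
                costs = [(intensity[i, j] - m) ** 2
                         + lam * (k != prev_labels[i, j])
                         + beta * sum(k != n for n in nbrs)
                         for k, m in enumerate(means)]
                labels[i, j] = int(np.argmin(costs))
    return labels
```

On a two-phase slice whose right half brightens, this keeps the left region's labels and relabels the right region, illustrating global consistency with local relabeling.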
Computer-aided fiber analysis for crime scene forensics
The forensic analysis of fibers is currently completely manual and therefore time consuming. Automating analysis steps can significantly support forensic experts and reduce the time required for an investigation. Moreover, subjective expert belief is extended by objective machine estimation. This work proposes a pattern recognition pipeline comprising the digital acquisition of a fiber medium, pre-processing for fiber segmentation, and the extraction of distinctive fiber characteristics. Currently, basic geometrical features such as the width, height, and area of optically dominant fibers are investigated. To support the automatic classification of fibers, supervised machine learning algorithms are evaluated. The experimental setup includes a car seat and two pieces of clothing of different fabrics. As preliminary work, acrylic (synthetic) and sheep wool (natural) fibers are chosen for classification. While sitting on the seat, a test person leaves textile fibers behind. The test aims at automatically distinguishing the clothes through the fiber traces lifted from the seat with adhesive tape. Digitization of the fiber samples is provided by a contactless chromatic white light sensor. First test results showed that two optically very different fibers can be properly assigned to their corresponding fiber types. The best classifier achieves an accuracy of 75 percent correctly classified samples on our suggested features.
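As a minimal stand-in for the supervised classification stage, a nearest-centroid rule over the kind of basic geometric features the abstract mentions (width, height, area) can be sketched as follows; the feature values and class labels are fabricated for illustration, and the paper evaluates standard learners rather than necessarily this one:

```python
import numpy as np

def fit_centroids(X, y):
    """Return one mean feature vector per class label."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```

A sample with small width/area lands near the "acrylic" centroid, a coarse one near "wool", mirroring the two-class acrylic-versus-wool setup.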
3D reconstruction based on single-particle cryo electron microscopy images as a random signal in noise problem
Instances of biological macromolecular complexes that have identical chemical constituents may not have the
same geometry due to, for example, flexibility. Cryo electron microscopy provides one noisy projection image
of each of many instances of a complex where the projection directions for the different instances are random.
The noise is sufficiently severe (SNR << 1) that the projection direction for a particular image cannot be easily
estimated from the individual image. The goal is to determine the 3-D geometry of the complex (the 3-D
distribution of electron scattering intensity) which requires fusing information from these many images of many
complexes. In order to describe the geometric heterogeneity of the complexes, the complex is described as a
weighted sum of basis functions where the weights are random. In order to get tractable algorithms, the weights
are modeled as Gaussian random variables with unknown statistics and the noise is modeled as additive Gaussian
random variables with unknown covariance. The statistics of the weights and the statistics of the noise are jointly
estimated by maximum likelihood by a generalized expectation maximization algorithm. The method has been
developed to the point where it appears able to solve problems of interest to the structural biology community.
Highly scalable methods for exploiting a label with unknown location in order to orient a set of single-particle cryo electron microscopy images
A highly scalable method for determining the projection orientation of each image in a set of cryo electron
microscopy images of a labeled particle is proposed. The method relies on the presence of a label that is a
sufficiently strong scatterer such that its 2-D location in each image can be restricted to at most a small number
of sites by processing applied to each image individually. It is not necessary to know the 3-D location of the
label on the particle. After first determining the possible locations of the label in the 2-D images in parallel, the
information from all images is fused to determine the 3-D location of the label on the particle and then the 3-D
location is used to determine the projection orientation for each image by processing each image individually.
With projection orientations, many algorithms exist for computing the 3-D reconstruction. The performance of
the algorithm is studied as a function of the label SNR.
Reconstruction
Image reconstruction using projections from a few views by discrete steering combined with DART
In this paper, we propose an algebraic reconstruction technique (ART) based discrete tomography method to reconstruct
an image accurately using projections from a few views. We specifically consider the problem of reconstructing an
image of bottles filled with various types of liquids from X-ray projections. By exploiting the fact that bottles are usually
filled with homogeneous material, we show that it is possible to obtain accurate reconstruction with only a few
projections by an ART based algorithm. In order to deal with various types of liquids in our problem, we first introduce
our discrete steering method which is a generalization of the binary steering approach for our proposed multi-valued
discrete reconstruction. The main idea of the steering approach is to use slowly varying thresholds instead of fixed
thresholds. We further improve reconstruction accuracy by reducing the number of variables in ART by combining our
discrete steering with the discrete ART (DART) that fixes the values of interior pixels of segmented regions considered
as reliable. By simulation studies, we show that our proposed discrete steering combined with DART yields superior
reconstruction than both discrete steering only and DART only cases. The resulting reconstructions are quite accurate
even with projections using only four views.
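A toy sketch of the idea, assuming a Kaczmarz-style ART sweep interleaved with a steering step whose threshold slowly varies so that values are gradually snapped to the discrete material levels; the schedule and the test system are illustrative, not the paper's:

```python
import numpy as np

def art_sweep(A, b, x, relax=1.0):
    """One Kaczmarz (ART) sweep over all projection equations."""
    for i in range(A.shape[0]):
        a = A[i]
        x = x + relax * (b[i] - a @ x) / (a @ a) * a
    return x

def steer(x, levels, tol):
    """Snap every value within `tol` of a discrete level to that level."""
    x = x.copy()
    for v in levels:
        x[np.abs(x - v) < tol] = v
    return x

def discrete_art(A, b, levels, n_outer=20):
    x = np.zeros(A.shape[1])
    for k in range(n_outer):
        x = art_sweep(A, b, x)
        # the threshold slowly loosens, steering more and more aggressively
        x = steer(x, levels, tol=min(0.05 * (k + 1), 0.4))
    return x
```

Once ART brings a pixel close enough to a level, steering fixes it exactly, after which the remaining equations are consistent and the value no longer moves, which is the intuition behind combining steering with DART's fixing of reliable interior pixels.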
One-dimensional control grid interpolation-based demosaicing and color image interpolation
We recently reported good results with our image interpolation algorithm, One-Dimensional Control Grid Interpolation
(1DCGI), in the context of grayscale images. 1DCGI has high quantitative accuracy, flexibility with
respect to scaling factor, and low computational cost relative to similarly performing methods. Here we look to
extend our method to the demosaicing of Bayer-patterned images. 1DCGI-based demosaicing performs quantitatively
better than the gradient-corrected linear interpolation method of Malvar. We also demonstrate effective
interpolation of full color images.
Limited view angle iterative CT reconstruction
Computed Tomography (CT) is widely used for transportation security to screen baggage for potential threats.
For example, many airports use X-ray CT to scan the checked baggage of airline passengers. The resulting
reconstructions are then used for both automated and human detection of threats. Recently, there has been
growing interest in the use of model-based reconstruction techniques for application in CT security systems.
Model-based reconstruction offers a number of potential advantages over more traditional direct reconstruction
such as filtered backprojection (FBP). Perhaps one of the greatest advantages is the potential to reduce reconstruction
artifacts when non-traditional scan geometries are used. For example, FBP tends to produce very
severe streaking artifacts when applied to limited view data, which can adversely affect subsequent processing
such as segmentation and detection.
In this paper, we investigate the use of model-based reconstruction in conjunction with limited-view scanning
architectures, and we illustrate the value of these methods using transportation security examples. The advantage of limited-view architectures is that they have the potential to reduce the cost and complexity of a scanning system, but their disadvantage is that limited-view data can result in structured artifacts in reconstructed images. Our
method of reconstruction depends on the formulation of both a forward projection model for the system, and a
prior model that accounts for the contents and densities of typical baggage. In order to evaluate our new method,
we use realistic models of baggage with randomly inserted simple simulated objects. Using this approach, we
show that model-based reconstruction can substantially reduce artifacts and improve important metrics of image
quality such as the accuracy of the estimated CT numbers.
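In this spirit, model-based reconstruction amounts to minimizing a data-fit term under the forward projection model plus a prior term. A minimal sketch with a quadratic smoothness prior (the paper's baggage prior is more elaborate), solved here by plain gradient descent on ||Ax - y||^2 + lam ||Dx||^2:

```python
import numpy as np

def map_reconstruct(A, y, lam=0.1, step=None, n_iter=500):
    """Quadratic MAP estimate: minimize ||Ax - y||^2 + lam * ||Dx||^2,
    where D is a first-difference (smoothness) operator."""
    n = A.shape[1]
    D = (np.eye(n) - np.eye(n, k=1))[:-1]      # first differences
    H = A.T @ A + lam * D.T @ D                # normal-equations matrix
    if step is None:
        step = 1.0 / np.linalg.norm(H, 2)      # safe gradient step
    g = A.T @ y
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x - step * (H @ x - g)
    return x
```

With a well-posed A and tiny lam, this reduces to ordinary least squares; with a limited-view (row-deficient) A, the prior term is what suppresses the structured artifacts the abstract describes.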
Variational semi-blind sparse image reconstruction with application to MRFM
This paper addresses the problem of joint image reconstruction and point spread function (PSF) estimation when the PSF of the imaging device is only partially known. To solve this semi-blind deconvolution problem, prior distributions are specified for the PSF and the 3D image. Joint image reconstruction and PSF estimation is then performed within a Bayesian framework, using a variational algorithm to estimate the posterior distribution. The image prior distribution imposes an explicit atomic measure that corresponds to image sparsity. Simulation results demonstrate that the semi-blind deconvolution algorithm compares favorably with a previous Markov chain Monte Carlo (MCMC) version of myopic sparse reconstruction. It also outperforms non-myopic algorithms that rely on perfect knowledge of the PSF. The algorithm is illustrated on real data from magnetic resonance force microscopy (MRFM).
Classification and Detection
Moon search algorithms for NASA's Dawn Mission to asteroid Vesta
A moon or natural satellite is a celestial body that orbits a planetary body such as a planet, dwarf planet, or an asteroid.
Scientists seek to understand the origin and evolution of our solar system by studying the moons of these bodies.
Additionally, searches for satellites of planetary bodies can be important to protect the safety of a spacecraft as it
approaches or orbits a planetary body. If a satellite of a celestial body is found, the mass of that body can also be
calculated once its orbit is determined. Ensuring the Dawn spacecraft's safety on its mission to the asteroid (4) Vesta
primarily motivated the work of Dawn's Satellite Working Group (SWG) in summer of 2011. Dawn mission scientists
and engineers utilized various computational tools and techniques for Vesta's satellite search. The objectives of this
paper are to 1) introduce the natural satellite search problem, 2) present the computational challenges, approaches, and
tools used when addressing this problem, and 3) describe applications of various image processing and computational
algorithms for performing satellite searches to the electronic imaging and computer science community. Furthermore,
we hope that this communication would enable Dawn mission scientists to improve their satellite search algorithms and
tools and be better prepared for performing the same investigation in 2015, when the spacecraft is scheduled to approach
and orbit the dwarf planet (1) Ceres.
Multichannel hierarchical image classification using multivariate copulas
This paper focuses on the classification of multichannel images. The proposed supervised Bayesian classification
method applied to histological (medical) optical images and to remote sensing (optical and synthetic aperture
radar) imagery consists of two steps. The first step introduces the joint statistical modeling of the coregistered
input images. For each class and each input channel, the class-conditional marginal probability density functions
are estimated by finite mixtures of well-chosen parametric families. For optical imagery, the normal distribution
is a well-known model. For radar imagery, we have selected generalized gamma, log-normal, Nakagami and
Weibull distributions. Next, the multivariate d-dimensional Clayton copula, where d can be interpreted as the
number of input channels, is applied to estimate multivariate joint class-conditional statistics. As a second step,
we plug the estimated joint probability density functions into a hierarchical Markovian model based on a quadtree
structure. Multiscale features are extracted by discrete wavelet transforms, or by using input multiresolution
data. To obtain the classification map, we integrate an exact estimator of the marginal posterior mode.
Enhancement, Denoising, and Restoration I
Denoising and deblurring of Fourier transform infrared spectroscopic imaging data
Fourier transform infrared (FT-IR) spectroscopic imaging is a powerful tool to obtain chemical information from
images of heterogeneous, chemically diverse samples. Significant advances in instrumentation and data processing
in the recent past have led to improved instrument design and relatively widespread use of FT-IR imaging, in a
variety of systems ranging from biomedical tissue to polymer composites. Various techniques for improving signal
to noise ratio (SNR), data collection time and spatial resolution have been proposed previously. In this paper
we present an integrated framework that addresses all these factors comprehensively. We utilize the low-rank nature of the data and model the instrument point spread function to denoise the data, and then simultaneously deblur and estimate unknown information from the images, using a Bayesian variational approach. We show that
more spatial detail and improved image quality can be obtained using the proposed framework. The proposed
technique is validated through experiments on a standard USAF target and on prostate tissue specimens.
Iterative weighted risk estimation for nonlinear image restoration with analysis priors
Image acquisition systems invariably introduce blur, which necessitates the use of deblurring algorithms
for image restoration. Restoration techniques involving regularization require appropriate
selection of the regularization parameter that controls the quality of the restored result. We focus
on the problem of automatic adjustment of this parameter for nonlinear image restoration using
analysis-type regularizers such as total variation (TV). For this purpose, we use two variants of
Stein's unbiased risk estimate (SURE), Predicted-SURE and Projected-SURE, that are applicable
for parameter selection in inverse problems involving Gaussian noise. These estimates require
the Jacobian matrix of the restoration algorithm evaluated with respect to the data. We derive
analytical expressions to recursively update the desired Jacobian matrix for a fast variant of the
iterative reweighted least-squares restoration algorithm that can accommodate a variety of regularization
criteria. Our method can also be used to compute a nonlinear version of the generalized
cross-validation (NGCV) measure for parameter tuning. We demonstrate using simulations that
Predicted-SURE, Projected-SURE, and NGCV-based adjustment of the regularization parameter
yields near-MSE-optimal results for image restoration using TV, an analysis-type ℓ1-regularization,
and a smooth convex edge-preserving regularizer.
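For a simple denoising operator, SURE reduces to Stein's classical closed form. As a toy stand-in for the paper's iterative TV restoration, the snippet below selects the threshold of a soft-thresholding denoiser by minimizing SURE; the sparse signal model and the grid of candidate thresholds are illustrative only:

```python
import numpy as np

def soft(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure_soft(y, t, sigma):
    """Unbiased estimate of E||soft(y, t) - x||^2 for y = x + N(0, sigma^2):
    ||f(y) - y||^2 - n*sigma^2 + 2*sigma^2 * divergence of f."""
    n = y.size
    return (np.minimum(np.abs(y), t) ** 2).sum() - n * sigma ** 2 \
        + 2.0 * sigma ** 2 * np.count_nonzero(np.abs(y) > t)

rng = np.random.default_rng(1)
x = np.zeros(10000)
x[:500] = rng.normal(0.0, 5.0, 500)          # sparse clean signal
sigma = 1.0
y = x + rng.normal(0.0, sigma, x.size)

ts = np.linspace(0.0, 3.0, 61)
t_sure = ts[np.argmin([sure_soft(y, t, sigma) for t in ts])]
```

Because SURE needs only the data and the divergence of the estimator, the same principle extends to iterative restorations once the Jacobian can be tracked, which is exactly the recursion the abstract derives.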
Nonlocal transform-domain denoising of volumetric data with groupwise adaptive variance estimation
We propose an extension of the BM4D volumetric filter to the denoising of data corrupted by spatially nonuniform
noise. BM4D implements the grouping and collaborative filtering paradigm, where similar cubes of voxels
are stacked into a four-dimensional "group". Each group undergoes a sparsifying four-dimensional transform,
that exploits the local correlation among voxels in each cube and the nonlocal correlation between corresponding
voxels of different cubes. Thus, signal and noise are effectively separated in transform domain. In this work
we take advantage of the sparsity induced by the four-dimensional transform to provide a spatially adaptive
estimation of the local noise variance by applying a robust median estimator of the absolute deviation to the
spectrum of each filtered group. The adaptive variance estimates are then used during coefficient shrinkage.
Finally, the inverse four-dimensional transform is applied to the filtered group, and each individual cube estimate
is adaptively aggregated at its original location.
Experiments on medical data corrupted by spatially varying Gaussian and Rician noise demonstrate the
efficacy of the proposed approach in volumetric data denoising. In the case of magnetic resonance signals, the adaptive variance estimate can also be used to compensate for the estimation bias due to the non-zero-mean errors of the Rician-distributed data.
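The groupwise variance estimation builds on the standard MAD rule, sigma ≈ median(|c|) / 0.6745, applied to high-frequency transform coefficients; BM4D applies it per 4-D group, but the estimator itself is shown here on a one-dimensional coefficient band for brevity:

```python
import numpy as np

def sigma_mad(coeffs):
    """Robust estimate of the Gaussian noise standard deviation from
    (approximately noise-only) transform coefficients: median absolute
    deviation divided by the Gaussian consistency constant 0.6745."""
    return np.median(np.abs(coeffs)) / 0.6745
```

Because the median ignores the few large signal-bearing coefficients, the estimate stays accurate inside each group even when the noise level varies across the volume.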
Non-uniform contrast and noise correction for coded source neutron imaging
Since the first application of neutron radiography in the 1930s, the field of neutron radiography has matured
enough to develop several applications. However, advances in the technology are far from concluded. In general,
the resolution of scintillator-based detection systems is limited to the 10 μm range, and the relatively low neutron count rate of neutron sources compared to other illumination sources restricts time-resolved measurements.
One path toward improved resolution is the use of magnification; however, to date neutron optics are inefficient,
expensive, and difficult to develop. There is a clear demand for cost-effective scintillator-based neutron imaging systems that achieve resolutions of 1 μm or less. Such an imaging system would dramatically extend the applications of neutron imaging. For such purposes a coded source imaging system is under development. The current challenge is to reduce artifacts in the reconstructed coded source images. Artifacts are generated by non-uniform illumination of the source, gamma rays, dark current at the imaging sensor, and system noise from the reconstruction kernel. In this paper, we describe how to pre-process the coded signal to reduce noise and non-uniform illumination, and how to reconstruct the coded signal with three reconstruction methods: correlation, maximum likelihood estimation, and the algebraic reconstruction technique. We illustrate our results with experimental examples.
Image enhancement and quality measures for dietary assessment using mobile devices
Measuring accurate dietary intake is considered to be an open research problem in the nutrition and health fields. We
are developing a system, known as the mobile device food record (mdFR), to automatically identify and quantify foods
and beverages consumed based on analyzing meal images captured with a mobile device. The mdFR makes use of a
fiducial marker and other contextual information to calibrate the imaging system so that accurate amounts of food can be
estimated from the scene. Food identification is a difficult problem since foods can dramatically vary in appearance. Such
variations may arise not only from non-rigid deformations and intra-class variability in shape, texture, color and other
visual properties, but also from changes in illumination and viewpoint. To address the color consistency problem, this
paper describes illumination quality assessment methods implemented on a mobile device and three post color correction
methods.
Enhancement, Denoising, and Restoration II
Subjective evaluations of example-based, total variation, and joint regularization for image processing
We report on subjective experiments comparing example-based regularization, total variation regularization,
and the joint use of both regularizers. We focus on the noisy deblurring problem, which generalizes image
superresolution and denoising. Controlled subjective experiments suggest that joint example-based and total variation regularization can provide subjective gains over total variation regularization alone, particularly when the example images contain structural elements similar to those of the test image. We also investigate whether the regularization parameters can be trained by cross-validation, and we compare the reconstructions using cross-validation judgments made by humans or by fully automatic image quality metrics. Experiments showed that of the five image quality metrics tested, the structural similarity index (SSIM) correlates best with human judgment of image quality and can be profitably used to cross-validate regularization parameters. However, there is a significant quality gap between images restored using human versus automatic parameter cross-validation.
Removal of haze and noise from a single image
Images of outdoor scenes often contain degradation due to haze, resulting in contrast reduction and color fading.
For many reasons one may need to remove these effects. Unfortunately, haze removal is a difficult problem due to the inherent ambiguity between the haze and the underlying scene. Furthermore, all images contain some noise
due to sensor (measurement) error that can be amplified in the haze removal process if ignored.
A number of methods have been proposed for haze removal from images. Existing literature that has also addressed the issue of noise has relied on multiple images, either for denoising prior to dehazing [1] or in the dehazing process itself [2, 3]. However, multiple images are not always available. Recent single image approaches, one of the most successful being the "dark channel prior" [4], have not yet considered the issue of noise.
Accordingly, in this paper we propose two methods for removing both haze and noise from a single image.
The first approach is to denoise the image prior to dehazing. This serial approach essentially treats haze and
noise separately, and so a second approach is proposed to simultaneously denoise and dehaze using an iterative,
adaptive, non-parametric regression method. Experimental results for both methods are then compared.
Our findings show that when the noise level is precisely known a priori, simply denoising prior to dehazing
performs well. When the noise level is not given, latent errors from either "under"-denoising or "over"-denoising
can be amplified, and in this situation, the iterative approach can yield superior results.
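For reference, the dark channel prior mentioned above computes, at each pixel, the minimum over color channels and a local window; haze-free outdoor patches have dark channels near zero, so large values signal haze. A simple sliding-window sketch, in which the patch size is a free parameter:

```python
import numpy as np

def dark_channel(img, patch=3):
    """img: H x W x 3 float array in [0, 1]. Returns the H x W dark channel:
    min over RGB, then min over a patch x patch neighborhood."""
    mins = img.min(axis=2)                    # per-pixel min over channels
    h, w = mins.shape
    r = patch // 2
    padded = np.pad(mins, r, mode="edge")     # replicate borders
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

A saturated, haze-free region (some channel near zero) yields a dark channel near zero, while a flat gray hazy region keeps its full brightness, which is what the prior exploits to estimate haze thickness.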
Computer Vision and 3D Modeling
Finding saliency in noisy images
Recently, many computational saliency models have been introduced [2, 5, 7, 13, 23] to transform a given image into a scalar-valued map that represents the visual saliency of the input image. These approaches, however, generally assume the given image is clean. Fortunately, most methods implicitly suppress the noise before calculating the saliency by blurring and downsampling the input image, and therefore tend to be apparently rather insensitive to noise [11]. However, a fundamental and explicit treatment of saliency in noisy images is missing from the literature.
Indeed, as we will show, the price for this apparent insensitivity to noise is that the overall performance over a
large range of noise strengths is diminished. Accordingly, the question is how to compute saliency in a reliable
way when a noise-corrupted image is given. To address this problem, we propose a novel and statistically sound
method for estimating saliency based on a non-parametric regression framework. The proposed estimate of
the saliency at a pixel is a data-dependent weighted average of dissimilarities between a center patch and its
surrounding patches. This aggregation of the dissimilarities is simple and more stable despite the presence of
noise. For comparison's sake, we apply a state-of-the-art denoising approach before attempting to calculate the saliency map, which obviously produces much more stable results for noisy images. Despite the advantage of preprocessing, we still found that our method consistently outperforms the other state-of-the-art methods [2, 13] over a large range of noise strengths.
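The proposed estimate can be sketched as follows, with an illustrative Gaussian weighting kernel and non-overlapping surrounding patches standing in for the paper's exact configuration:

```python
import numpy as np

def _patch(img, i, j, r):
    return img[i - r:i + r + 1, j - r:j + r + 1]

def saliency_map(img, r=1, h=0.5):
    """Saliency at each pixel = data-dependent weighted average of squared
    dissimilarities between the center patch and its surrounding patches."""
    hgt, wid = img.shape
    out = np.zeros((hgt, wid))
    step = 2 * r + 1
    for i in range(r, hgt - r):
        for j in range(r, wid - r):
            c = _patch(img, i, j, r)
            acc = wsum = 0.0
            for di in (-step, 0, step):
                for dj in (-step, 0, step):
                    if di == dj == 0:
                        continue
                    ii, jj = i + di, j + dj
                    if r <= ii < hgt - r and r <= jj < wid - r:
                        d = float(np.mean((c - _patch(img, ii, jj, r)) ** 2))
                        w = np.exp(-d / h)    # similar patches weigh more
                        acc += w * d
                        wsum += w
            if wsum > 0:
                out[i, j] = acc / wsum
    return out
```

Because the estimate aggregates whole-patch dissimilarities rather than single-pixel differences, independent noise largely averages out, which is the stability property the abstract claims.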
Automatic loop closure detection using multiple cameras for 3D indoor localization
Automated 3D modeling of building interiors is useful in applications such as virtual reality and environment
mapping. We have developed a human operated backpack data acquisition system equipped with a variety of
sensors such as cameras, laser scanners, and orientation measurement sensors to generate 3D models of building
interiors, including uneven surfaces and stairwells. An important intermediate step in any 3D modeling system,
including ours, is accurate 6 degrees of freedom localization over time. In this paper, we propose two approaches
to improve localization accuracy over our previously proposed methods. First, we develop an adaptive localization
algorithm which takes advantage of the environment's floor planarity whenever possible. Secondly, we show that
by including all the loop closures resulting from two cameras facing away from each other, it is possible to reduce
localization error in scenarios where parts of the acquisition path are retraced. We experimentally characterize
the performance gains due to both schemes.
An information theoretic trackability measure
There exists no measure to quantify the difficulty of a video tracking problem. Such difficulty depends upon the quality
of the video and upon the ability to distinguish the target from the background and from other potential targets. We
define a trackability measure in an information theoretic framework. The tools of information theory allow a measure of
trackability that seamlessly combines the video-dependent aspects with the target-dependent aspects of tracking
difficulty using measures of rate and information content. Specifically, video quality is encapsulated into a term that
measures spatial resolution, temporal resolution and signal-to-noise ratio by way of a Shannon-Hartley analysis. Then,
the ability to correctly match a template to a target is evaluated through an analysis of the mutual information between
the template, the detected signal and the interfering clutter. The trackability measure is compared to the performance of a
recent tracker based on scale space features computed via connected filters. The results show high Spearman correlation
magnitude between the trackability measure and actual performance.
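The Shannon-Hartley component of the measure is the classical capacity formula C = B log2(1 + SNR); a hypothetical per-second information budget combining spatial and temporal sampling rates, in the spirit of the video-quality term (the exact combination in the paper differs), might look like:

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity in bits per second: B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

def video_quality_bits(pixels_per_frame, frames_per_sec, snr_linear):
    """Hypothetical information budget of a video stream: spatial times
    temporal sample rate, each sample carrying log2(1 + SNR) bits."""
    return channel_capacity(pixels_per_frame * frames_per_sec, snr_linear)
```

Halving the resolution, the frame rate, or the SNR each shrinks this budget, matching the intuition that any of the three degradations makes a target harder to track.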
Text replacement on cylindrical surfaces: a semi-automatic approach
Image-based customization that incorporates personalized text strings into photorealistic images in a natural
and appealing way has been of great interest lately. We describe a semi-automatic approach for replacing text
on cylindrical surfaces in images of natural scenes or objects. The user is requested to select a boundary for the
existing text and align a pair of edges for the sides of the cylinder. The algorithm erases the existing text, and
instantiates a 3-D cylinder forward projection model to render the new text. The parameters of the forward
projection model are estimated by optimizing a carefully designed cost function. Experimental results show that
the text-replaced images look natural and appealing.
Registration and integration of multiple depth images using signed distance function
Daniel B. Kubacki,
Huy Q. Bui,
S. Derin Babacan,
et al.
The depth camera is a new technology that has the potential to radically change the way humans record the world and interact with 3D virtual environments. With a depth camera, one has access to depth information at up to 30 frames per second, which is much faster than previous 3D scanners. This speed enables new applications, in that objects are no longer required to be static for 3D sensing. There is, however, a trade-off between the speed and the quality of the results. Depth images acquired with current depth cameras are noisy and have low resolution, which poses a real obstacle to incorporating the new 3D information into computer vision techniques. To overcome these limitations, the speed of the depth camera can be leveraged to combine data from multiple depth
frames together. Thus, we need a good registration and integration method that is specifically designed for such
low quality data. To achieve that goal, in this paper we propose a new method to register and integrate multiple
depth frames over time onto a global model represented by an implicit moving least square surface.
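A common baseline for this registration-and-integration step is a running weighted average of per-frame signed distances at each voxel (the classic Curless-Levoy update); the paper instead maintains an implicit moving least squares surface, so treat this as a simplified stand-in:

```python
import numpy as np

class SDFVolume:
    def __init__(self, shape):
        self.d = np.zeros(shape)   # accumulated signed distance per voxel
        self.w = np.zeros(shape)   # accumulated weight per voxel

    def integrate(self, sdf, weight):
        """Fold one frame's signed distances into the running average."""
        self.d = (self.w * self.d + weight * sdf) / np.maximum(self.w + weight, 1e-12)
        self.w += weight

    def surface_mask(self, eps=1e-3):
        """Voxels near the zero crossing approximate the surface."""
        return np.abs(self.d) < eps
```

Averaging many noisy low-resolution frames this way is exactly how the speed of the depth camera compensates for per-frame quality: independent depth errors cancel while the zero crossing stabilizes.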
Image reconstruction from nonuniformly spaced samples in Fourier domain optical coherence tomography
In this work, we use inverse imaging for object reconstruction from nonuniformly-spaced samples in Fourier
domain optical coherence tomography (FD-OCT). We first model the FD-OCT system with a linear system of
equations, where the source power spectrum and the nonuniformly-spaced sample positions are represented accurately.
Then, we reconstruct the object signal directly from the nonuniformly-spaced wavelength measurements.
With the inverse imaging method, we directly estimate the 2D cross-sectional object image instead of a set of
independent A-line signals. By using the Total Variation (TV) as a constraint in the optimization process, we
reduce the noise in the 2D object estimation. Besides TV, object sparsity is also used as a regularization for
the signal reconstruction in FD-OCT. Experimental results demonstrate the advantages of our method, as we
compare it with other methods.
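The key point is that nonuniformly spaced spectral samples can enter the forward model directly instead of first being resampled to a uniform grid. A one-dimensional sketch without the TV and sparsity regularizers (sizes and wavenumbers are illustrative, and a real A-line model would use the source spectrum as well):

```python
import numpy as np

n = 32
z = np.arange(n)                                 # depth positions
rng = np.random.default_rng(0)
k = np.sort(rng.uniform(0.0, 2.0 * np.pi, 64))   # nonuniform wavenumbers
A = np.exp(-1j * np.outer(k, z))                 # 64 x 32 forward model

obj = np.zeros(n)
obj[5], obj[20] = 1.0, 0.5                       # two reflectors in depth
b = A @ obj                                      # nonuniformly spaced spectrum
est, *_ = np.linalg.lstsq(A, b, rcond=None)      # direct linear inversion
```

Because the measurement positions are represented exactly in A, no interpolation error is introduced; the regularized 2D version in the paper adds TV and sparsity penalties on top of this linear model to suppress noise.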
Interactive Paper Session
Analysis of the practical coverage of uniform motions to approximate real camera shakes
Motion blur is usually modeled as the convolution of a latent image with a motion blur kernel, and most current deblurring methods limit the types of motion blur to be uniform under this convolution model. However, real motion blurs are often non-uniform, and consequently such methods may fail to remove real motion blurs caused by camera shake. To utilize the existing methods in practice, it is necessary to understand how well uniform motions (i.e., translations) can approximate real camera shakes. In this paper, we analyze the
displacement of real camera motions on image pixels and present the practical coverage of uniform motions (i.e.,
translations) to approximate complicated real camera shakes. We first analyze mathematically the difference of
the motion displacement between the optical axis and image boundary under real camera shakes, then derive
the practical coverage of uniform motion deblurring methods when used for real blurred images. The coverage
can effectively guide how much one can utilize the existing uniform motion deblurring methods, and informs the
need to model real camera shakes accurately rather than assuming uniform motions.
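The uniform (spatially invariant) convolution model that these methods assume can be written down directly. The toy helper below is our own illustration of that model, not the paper's analysis; the horizontal translation kernel in the usage is likewise illustrative.

```python
import numpy as np

def uniform_blur(image, kernel):
    """Uniform motion blur: the same kernel is convolved with every
    pixel, which is the model most deblurring methods assume. Real
    camera shake violates this because the effective kernel varies
    with image position (larger displacement near the boundary).
    Kernel dimensions are assumed odd."""
    kh, kw = kernel.shape
    pad = np.pad(image, ((kh // 2,), (kw // 2,)), mode='edge')
    out = np.zeros_like(image, dtype=float)
    flipped = kernel[::-1, ::-1]  # flip for true convolution
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * flipped)
    return out
```

A 1x3 averaging kernel, for example, models a small horizontal translation: a point source spreads equally over three neighboring pixels, everywhere in the frame.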
Real-time computational camera system for high-sensitivity imaging by using combined long/short exposure
Satoshi Sato,
Yusuke Okada,
Takeo Azuma
In this study, we realize a high-resolution (4K-format), small-size (1.43 x 1.43 μm pixel pitch with a single imager),
high-sensitivity (four times the sensitivity of conventional imagers) video camera system. Our proposed
system is a real-time computational camera system that combines long-exposure green pixels with short-exposure
red/blue pixels. We demonstrate that our proposed camera system is effective even under low
illumination.
Color correction with edge preserving and minimal SNR decrease using multi-layer decomposition
This paper describes a method for correcting color distortion in color imaging. Color images acquired from
CMOS or CCD sensors can suffer from color distortion, meaning that the sensor image differs from the original
image in color space. The main causes are cross-talk between adjacent pixels, mismatch between the color
pigment characteristics and human perception, and infrared (IR) influx into the visible red, green, blue (RGB)
channels due to IR cut-off filter imperfection. To correct this distortion, existing methods multiply each color
channel by a gain coefficient, and this multiplication can boost noise and lose detail information. This paper
proposes a novel method that preserves the color-correction ability while suppressing noise boost and loss of
detail in the correction of IR-corrupted pixels. For pixels without IR corruption, using the image before color
correction in place of the IR image makes the method applicable as well. Specifically, the color and the
low-frequency luminance information are extracted from the color-corrected image, while the high-frequency
information comes from the IR image or the image before color correction. The low- and high-frequency
information is extracted by multi-layer decomposition using edge-preserving filters.
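The layer merge described above can be sketched in a few lines. Everything here is a hypothetical illustration: the function names are ours, and a naive box blur stands in for the edge-preserving filter (e.g. a guided or bilateral filter) that the paper's decomposition would use.

```python
import numpy as np

def box_blur(x, r=1):
    """Naive box filter, a placeholder for an edge-preserving filter."""
    pad = np.pad(x, r, mode='edge')
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = pad[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def merge_layers(corrected_luma, detail_src_luma, smooth=box_blur):
    """Low-frequency structure from the color-corrected luminance plus
    high-frequency detail from the image before correction (or the IR
    image), so the color-correction gains do not amplify detail noise."""
    base = smooth(corrected_luma)                       # corrected low frequency
    detail = detail_src_luma - smooth(detail_src_luma)  # pre-gain high frequency
    return base + detail
```

The point of the split is that only the base layer passes through the noise-boosting gain path, while edges and texture are taken from a cleaner source.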
Bayesian image superresolution for hyperspectral image reconstruction
This study presents a novel method that applies superresolution to hyperspectral image reconstruction in order
to achieve more efficient spectral imaging. Theories of spectral reflectance estimation, such as Wiener
estimation, have reduced the time and difficulty of spectral imaging. Recently, Wiener estimation has
been extended to increase not only the spectral resolution but also the spatial resolution of a hyperspectral
image by incorporating image deblurring methods. However, there is demand for still more efficient spectral
imaging techniques. This study extends Wiener estimation further to achieve superresolution beyond simple
deblurring, since superresolution offers additional advantages: higher attainable spatial resolution and
automatic registration of the multispectral images. Maximization of the marginal likelihood is employed
to reconstruct the high-resolution hyperspectral image on the basis of Bayesian image superresolution.
The clear effect of superresolution was validated through an experiment using acquired multispectral
images of a traditional Japanese painting.
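Classical Wiener estimation, the baseline this abstract extends, has a standard closed form: the reflectance estimate is a linear map built from the camera sensitivities and prior correlation statistics. The sketch below shows only that baseline, not the paper's superresolution extension; the matrix names are our own.

```python
import numpy as np

def wiener_estimate(S, Rs, Rn, g):
    """Wiener (minimum-mean-square-error) spectral reflectance estimate.

    S  : (channels x wavelengths) camera spectral sensitivity matrix
    Rs : (wavelengths x wavelengths) prior reflectance correlation
    Rn : (channels x channels) sensor noise correlation
    g  : (channels,) observed camera response for one pixel
    """
    W = Rs @ S.T @ np.linalg.inv(S @ Rs @ S.T + Rn)
    return W @ g
```

In the noiseless, well-conditioned case the estimator inverts the imaging model exactly; the prior correlation `Rs` is what lets it interpolate a full spectrum from only a few channels in practice.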
ToF depth image motion blur detection using 3D blur shape models
Seungkyu Lee,
Hyunjung Shim,
James D. K. Kim,
et al.
Time-of-flight (ToF) cameras produce 3D geometry, enabling faster and easier 3D scene capture. The depth
camera, however, suffers from motion blur when either the camera or the scene moves. Unlike other noise,
depth motion blur is hard to eliminate with general filtering methods and yields serious distortion in 3D
reconstruction, typically causing uneven object boundaries and blurs. In this paper, we provide a thorough
analysis of ToF depth motion blur and a modeling method used to detect motion-blurred regions in a depth
image. We show that the proposed method correctly detects blur regions using the set of all possible
motion artifact models.
Computational imaging of defects in commercial substrates for electronic and photonic devices
Computational defect imaging has been performed on commercial substrates for electronic and photonic devices
by combining transmission profiles acquired with an imaging-type linear polariscope with a computational
algorithm that extracts small amounts of birefringence. The computational images of phase retardation δ revealed
spatial inhomogeneity of defect-induced birefringence in GaP, LiNbO3, and SiC substrates that conventional
'visual inspection' based on simple optical refraction or transmission could not detect because of its poor
sensitivity. The typical imaging time was less than 30 seconds for a 3-inch-diameter substrate at a spatial
resolution of 200 μm, whereas a scanning polariscope required 2 hours for the same resolution. Since the
proposed technique achieves high sensitivity, short imaging time, and wide coverage of substrate materials
(practical advantages over laboratory-scale apparatus such as X-ray topography and electron microscopy), it is
useful for nondestructive inspection of various commercial substrates in the production of electronic and
photonic devices.
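For a dark-field linear polariscope, the standard photoelasticity relation links transmitted intensity to retardation as I = I0 sin²(δ/2) (sample fast axis at 45° to the polarizer). The helper below is a simplified stand-in based on that textbook relation, not the paper's actual extraction algorithm.

```python
import numpy as np

def retardation_map(I, I0):
    """Phase retardation delta from a crossed linear polariscope image,
    inverting I = I0 * sin^2(delta / 2). Assumes the sample's fast axis
    is at 45 degrees to the polarizer and delta is in [0, pi]."""
    ratio = np.clip(np.asarray(I, dtype=float) / I0, 0.0, 1.0)
    return 2.0 * np.arcsin(np.sqrt(ratio))
```

Because defect-induced birefringence is tiny, δ is small and I is correspondingly faint, which is why careful computational extraction rather than simple visual inspection is needed.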
Nondestructive three-dimensional measurement of gas temperature distribution by phase tomography
Satoshi Tomioka,
Shusuke Nishiyama
This study presents a nondestructive three-dimensional (3D) measurement of the gas temperature distribution
around a heater. The distribution is obtained by coupling optical interferometry with computed tomography
(CT). Since gas temperature is related to refractive index, once a series of two-dimensional (2D) phase
modulations, each an integral of the refractive index along an optical path, is obtained, the 3D gas
temperature distribution can in principle be determined in the same way that conventional CT determines a
distribution of attenuation factors. However, the series of 2D phase images is incomplete: phase images from
certain directions cannot be obtained because of limitations of the measurement system. Furthermore, the 2D
phase-modulation images are not observed directly, since the interferometer detects only a 2D intensity
distribution called a fringe pattern. To retrieve the phase modulation from the fringe pattern, both digital
holography and a phase unwrapping algorithm are applied. To obtain the 3D gas temperature distribution from
such incomplete data sets, we apply a localized-compensator phase unwrapping algorithm to obtain the 2D
modulation maps, and maximum-likelihood tomography for the 3D reconstruction. The accuracy of each method is
compared with that of conventional methods.
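The link between refractive index and gas temperature that underlies this measurement is commonly the Gladstone-Dale relation, n - 1 = K·ρ, combined with the ideal-gas law at constant pressure, which makes (n - 1) proportional to 1/T. A minimal sketch of that final conversion step follows; the reference values (roughly air at 20 °C) are illustrative and not taken from the paper.

```python
def temperature_from_index(n, n_ref=1.000271, T_ref=293.15):
    """Gas temperature [K] from reconstructed refractive index.

    Uses Gladstone-Dale (n - 1 proportional to density) plus the
    ideal-gas law at constant pressure, so (n - 1) ~ 1/T:
        T = T_ref * (n_ref - 1) / (n - 1)
    n_ref, T_ref: reference index and temperature of the ambient gas.
    """
    return T_ref * (n_ref - 1.0) / (n - 1.0)
```

Halving (n - 1), for instance, corresponds to doubling the absolute temperature, which is why hot gas around the heater shows up as a region of reduced optical path length.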
Closed-form inverses for the mixed pixel/multipath interference problem in AMCW lidar
We present two new closed-form methods for mixed pixel/multipath interference separation in AMCW lidar
systems. The mixed pixel/multipath interference problem arises from the violation of a standard range-imaging
assumption that each pixel integrates over only a single, discrete backscattering source. While a numerical
inversion method has previously been proposed, no closed-form inverses have been posited. The first
new method models reflectivity as a Cauchy distribution over range and uses four measurements at different
modulation frequencies to determine the amplitude, phase and reflectivity distribution of up to two component
returns within each pixel. The second new method uses attenuation ratios to determine the amplitude and phase
of up to two component returns within each pixel. The methods are tested on both simulated and real data and
shown to produce a significant improvement in overall error. While this paper focuses on the AMCW mixed
pixel/multipath interference problem, the algorithms contained herein are applicable to the reconstruction of
a sparse one-dimensional signal from an extremely limited number of discrete samples of its Fourier transform.
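Recovering up to two component returns per pixel from measurements at a few modulation frequencies is an instance of recovering a sparse signal from few Fourier samples. The sketch below is a generic Prony-style separation under an idealized model m[k] = a1·z1^k + a2·z2^k at equally spaced frequencies; it illustrates the problem class only, and is neither the paper's Cauchy-distribution method nor its attenuation-ratio method.

```python
import numpy as np

def two_return_separation(m):
    """Separate two complex returns from four measurements m[0..3]
    at equally spaced modulation frequencies, assuming
    m[k] = a1 * z1**k + a2 * z2**k with z_j = exp(1j * phi_j).
    Returns (z, a): per-return phasors and complex amplitudes.
    """
    m = np.asarray(m, dtype=complex)
    # linear prediction: m[k+2] = c1*m[k+1] + c0*m[k]
    M = np.array([[m[1], m[0]], [m[2], m[1]]])
    c = np.linalg.solve(M, m[2:4])
    # z1, z2 are roots of z^2 - c1*z - c0
    z = np.roots([1.0, -c[0], -c[1]])
    # amplitudes from the Vandermonde system V @ a = m
    V = np.vander(z, 4, increasing=True).T  # shape (4, 2)
    a, *_ = np.linalg.lstsq(V, m, rcond=None)
    return z, a
```

With noiseless data from exactly two returns this inverts the model in closed form; with noise or more returns, the robustness questions the paper addresses (e.g. reflectivity priors over range) become essential.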