- Front Matter: Volume 8000
- Stereovision
- Size/Shape Measurements
- Interferometry
- Advanced Tools
- Imaging Systems and Sensors
- Data Fusion
- Motion/Registration
- Texture and Surface Inspection
- Poster Session
Front Matter: Volume 8000
Front Matter: Volume 8000
This PDF file contains the front matter associated with SPIE Proceedings Volume 8000, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Stereovision
Illumination control in view of dynamic (re)planning of 3D reconstruction tasks
The accuracy of 3D vision-based reconstruction depends both on the complexity of the analyzed objects and on good viewing and illumination conditions, which ensure image quality and thereby minimize measurement errors after the acquired images are processed. In this contribution, as a complement to an autonomous cognitive vision system that automates 3D reconstruction and uses Situation Graph Trees (SGTs) as a planning and control tool, these graphs are optimized in two steps. The first (off-line) step addresses the placement of the lighting sources, with the aim of finding positions that minimize processing errors during the subsequent reconstruction steps. In the second step, on-line application of the SGT-based control module focuses on adjusting the illumination conditions (e.g., intensity), possibly leading to process re-planning, and further enabling optimal extraction of the contour data required for 3D reconstruction. The whole illumination optimization procedure has been fully automated and included in the dynamic (re-)planning tool for vision-based reconstruction tasks, e.g., in view of quality control applications.
Using variable homography to measure emergent fibers on textile fabrics
A fabric's smoothness is a key factor in determining the quality of finished textile products and has a great influence on the functionality of industrial textiles and high-end textile products. With the popularization of the 'zero defect' industrial concept, identifying and measuring defective material in the early stages of production is of great interest to the industry. In the current market, many systems can automatically monitor and control fabric, paper, and nonwoven material during the entire production process; however, online measurement of hairiness is still an open topic and highly desirable for industrial applications.

In this paper we propose a computer vision approach, based on variable homography, which can be used to measure the length of emergent fibers on textile fabrics. The main challenges addressed in this paper are the application of variable homography to textile monitoring and measurement, as well as the accuracy of the estimated calculation. We propose that a fibrous structure can be considered as a two-layer structure and then show how variable homography can estimate the length of the fiber defects. Simulations are carried out to show the effectiveness of this method for measuring the length of emergent fibers. The true lengths of selected fibers are measured precisely using a digital optical microscope, and then the same fibers are tested by our method. Our experimental results suggest that variable homography is an accurate and robust method for monitoring smoothness in the quality control of industrially important fabrics.
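The two-layer idea can be sketched in code. The following is a minimal illustration, not the authors' calibrated pipeline: the homography matrices, the point lists, and the way the length difference would encode fiber height are all assumptions made for the example.

```python
# Hypothetical sketch: mapping image points through a 3x3 planar homography
# and measuring the length of a traced fiber contour. The matrices used
# below are illustrative, not calibrated values from the paper.

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography given as nested lists."""
    out = []
    for x, y in pts:
        xh = H[0][0] * x + H[0][1] * y + H[0][2]
        yh = H[1][0] * x + H[1][1] * y + H[1][2]
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append((xh / w, yh / w))
    return out

def polyline_length(pts):
    """Euclidean length of a polyline, e.g. a traced fiber contour."""
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# Under a two-layer model, the fiber would be mapped once with the
# fabric-plane homography and once with the homography of a plane at
# height h; comparing the two mapped lengths is what makes the fiber
# height observable.
```

A simple sanity check: an identity homography leaves lengths unchanged, while a uniform scaling homography scales them proportionally.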
Defect detection of electronic devices by single stereo vision
Guaranteeing the quality of industrial products by means of visual inspection is very important. In order to reduce soldering defects involving terminal deformation and terminal burr in the manufacturing process, this paper proposes a 3D visual inspection system based on single-camera stereo vision.

The baseline of this single-camera stereo setup was precisely calibrated by an image processing procedure, and an original algorithm reduces the error in extracting the measuring-point coordinates used to compute disparity. Comparing its performance with that of human inspection using an industrial microscope shows that the proposed 3D inspection can be an alternative in both precision and processing cost. Since the practical specification for 3D precision is less than 0.02 mm and the experimental performance was at around the same level, it was demonstrated that the proposed system reduces soldering defects with terminal deformation and terminal burr, especially in 3D inspection.

Toward inline inspection, this paper also suggests how human inspection of the products could be modeled and implemented by a computer system in the manufacturing process.
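The 3D precision discussed above follows from the standard stereo triangulation relation, which a single-camera stereo setup also obeys once its baseline is calibrated. A generic sketch, not the authors' system; the focal length and baseline values below are illustrative:

```python
def depth_from_disparity(f_px, baseline_mm, disparity_px):
    """Standard stereo triangulation: Z = f * B / d.
    f_px: focal length in pixels; baseline_mm: stereo baseline;
    disparity_px: horizontal disparity of a matched point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_mm / disparity_px

def depth_error_per_pixel(f_px, baseline_mm, disparity_px):
    """Depth change caused by a one-pixel disparity error: this shrinks
    as disparity grows, which is why accurate extraction of the
    measuring-point coordinates directly drives 3D precision."""
    return abs(depth_from_disparity(f_px, baseline_mm, disparity_px)
               - depth_from_disparity(f_px, baseline_mm, disparity_px + 1))
```

For example, with f = 1000 px and B = 10 mm, a point at 100 px disparity lies at 100 mm, and a one-pixel disparity error costs far less depth accuracy there than at 50 px disparity.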
Size/Shape Measurements
Recognizing overlapped particles during a crystallization process from in situ video images for measuring their size distributions
This paper presents a method to recognize polygonal-shaped particles (i.e. rectangles, regular/irregular prisms)
among agglomerated crystals from in situ images during a crystallization process. The aim is to measure the
particle size distributions (PSD), which are key measurements needed for the purification operations and the
quality control of chemical products or drugs. The method is first based on detecting the geometric features
of the particles identified by their salient corners. A clustering technique is then applied by grouping three
correspondent salient corners belonging to the same polygon. The efficiency of the proposed method is tested
on particles of Ammonium Oxalate during batch crystallization in pure water. Particle size distributions are
calculated, and a quantitative comparison between automatic and manual sizing is performed.
Extracting the ridge set as a graph for quantification of actin filament images obtained by confocal laser scanning microscopy
Harald Birkholz
Progress in image acquisition techniques provides the life sciences with an abundance of data, and image analysis facilitates its assessment. The actin cytoskeleton plays a crucial role in understanding the behavior of osteoblastic cells on biomaterials. It can be visualized by confocal laser scanning microscopy, where it appears as a dense network of bright ridges that has so far only been assessed qualitatively. Quantification requires ridge detection techniques that provide a geometrical description of this graph-like feature. State-of-the-art methods do not cope with the systematic degradation of the brightness information. This paper presents the key part of a ridge tracking technique that makes more efficient use of context information. Two random models confirm the accuracy of the method and highlight areas for further improvement.
Distance maps and inscribed convex sets for shape classification applied to road signs
Frédérique Robert-Inacio
This paper presents an algorithm for detecting disks in color images. The proposed method is based on a basic color segmentation giving a preliminary binary image. Distance mapping is then used to determine candidate circles, and finally circle location is combined with color information in order to find the best-fitting disk. Furthermore, disks are, in a wide sense, sets of points whose distance to a particular point, called the center, is lower than or equal to a given radius. The proposed method can therefore also detect squares, octagons, and other shapes that happen to be disks under a given distance, such as the Euclidean, chessboard, Manhattan, or chamfer distances. An application to pattern recognition for road sign interpretation is also presented, illustrating that road sign shape is a useful and significant piece of information in the sign interpretation process.
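The distance-mapping step can be illustrated with a two-pass city-block (Manhattan) distance transform, one of the distances mentioned in the abstract; the best-fitting inscribed disk center is then simply the maximum of the distance map. A minimal sketch, not the paper's implementation:

```python
# Illustrative sketch: two-pass Manhattan (city-block) distance transform
# on a binary mask, then the largest inscribed disk (in that metric) is
# found at the maximum of the distance map.

def manhattan_distance_map(mask):
    """mask: list of lists, 1 = shape, 0 = background.
    Returns the city-block distance of each pixel to the background."""
    h, w = len(mask), len(mask[0])
    INF = h + w  # larger than any possible distance in the image
    d = [[0 if mask[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    for y in range(h):                       # forward pass
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):           # backward pass
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

def best_inscribed_disk(mask):
    """Center and radius of the largest city-block disk inside the shape."""
    d = manhattan_distance_map(mask)
    r, cy, cx = max((d[y][x], y, x)
                    for y in range(len(d)) for x in range(len(d[0])))
    return (cy, cx), r
```

Swapping the neighborhood used in the two passes changes the metric (e.g. 8-connected updates give the chessboard distance), which is exactly what lets the same machinery detect squares or octagons as "disks".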
Interferometry
Nano-level 3D shape measurement system using color analysis method of RGB interference fringes
Nano-level 3D measurement is one of the key technologies for current and future generations of production systems for semiconductors, LCDs, and nano-devices. To address these applications, a wide-range nano-level 3D shape measurement method using a combination of RGB lights has been developed. It measures the height of nano-objects from the interference of the combined RGB LED lights. To analyze the combination of RGB lights, a color analysis method on the xy-color plane has been introduced; in this method, color changes on the xy-color plane correspond to height changes. An experimental system covering a three-micrometer height range has been developed and succeeded in measuring 50 nm and 1000 nm step samples. The method has also been applied to measure a nano-device, a contact needle for measurement, and the shape of the needle was extracted successfully.
Multiwavelength single-shot interferometry without carrier fringe introduction
As a single-shot interferometric technique, spatial carrier interferometry has been thoroughly investigated and has been shown to have some problems, such as low spatial resolution. To overcome these problems, we propose a novel single-shot surface profiling technique that does not require carrier introduction. It is based on a model-fitting algorithm and simultaneously estimates the model parameters and the heights of multiple points from their multi-wavelength intensity data. The validity of the proposed method is demonstrated by computer simulations and actual experiments.
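For intuition on why multiple wavelengths help, the classical two-wavelength relation extends the unambiguous height range to half the synthetic wavelength Λ = λ₁λ₂/|λ₁−λ₂|. The sketch below shows this generic textbook relation, not the paper's model-fitting algorithm; the wavelength values and the reflective-geometry phase convention φ = 4πh/λ are assumptions for the example:

```python
import math

def synthetic_wavelength(lam1, lam2):
    """Equivalent (synthetic) wavelength of a two-wavelength measurement."""
    return lam1 * lam2 / abs(lam1 - lam2)

def height_from_phases(phi1, phi2, lam1, lam2):
    """Surface height from two measured phases (radians), assuming a
    reflective geometry (phi = 4*pi*h/lam) and lam1 < lam2 so that the
    phase difference is positive within the unambiguous range.
    The unambiguous range grows from lam/2 to Lambda/2."""
    lam_s = synthetic_wavelength(lam1, lam2)
    dphi = (phi1 - phi2) % (2 * math.pi)
    return dphi / (2 * math.pi) * lam_s / 2
```

With λ₁ = 532 nm and λ₂ = 633 nm, Λ ≈ 3334 nm, so heights up to roughly 1.6 µm can be recovered without phase ambiguity, which is consistent in spirit with the multi-wavelength intensity data the method exploits.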
Advanced Tools
Combining high productivity and high performance in image processing using Single Assignment C
In this paper the problem of high-performance software engineering is addressed in the context of image processing, with regard to productivity and optimized exploitation of hardware resources. To this end, we introduce the functional array processing language Single Assignment C (SaC), which relies on a hardware virtualization concept for automated, parallel machine code generation. An illustrative benchmarking example demonstrates both the utility and the adequacy of SaC for image processing.
Imaging Systems and Sensors
Creation of an artifact database and experimental measurement of their detectability thresholds in noises of different spectra, in the context of quality control of an x-ray imager
J.M. Vignolle,
L. Debize,
I. Bensaid,
et al.
In order to assess the quality of an X-ray imager it is necessary to measure the visibility of any artifact that might be present in the image, and several methods have been proposed in the literature to calculate this visibility. To predict the performance of these methods in the context of quality control of X-ray imagers, a database of 10 artifacts as different as possible in shape and aspect (a pixel, a line, a step, various spots and noises) has been created. The amplitude at which each artifact has a 50% probability of being detected has been determined: the artifacts were observed merged with three noises of different spectra ("white" noise, "high-frequency" noise, and "low-frequency" noise), and a variant of the two-alternative forced choice (2AFC) procedure was used to determine the 50% detection-probability amplitudes. It has been checked that the measurement exploitation method is unbiased and that its precision is sufficient. The dispersion of results between testers, around 15% on average, is also satisfactory. These results are a solid and objective basis for checking the relevance and limits of the visibility measurement methods described in the literature, applied to the domain of quality control of X-ray imagers.
Evaluation of the reasons why freshly appearing citrus peel fluoresces during automatic inspection by fluorescent imaging technique
Md. Abdul Momin,
Naoshi Kondo,
Makoto Kuramoto,
et al.
Defective unshu oranges (Citrus reticulata Blanco var. unshu) were sorted using a fluorescent imaging technique in a commercial packinghouse, but fresh-appearing unshu were rejected because of fluorescence appearing on their peel. We studied the various visible patterns based on color, fluorescence, and microscopic images, where even areas of the peel that are not obviously damaged can fluoresce, in order to categorize the reasons for fluorescence. The categories were: 1) hole and flow; 2) peel influenced by damaged or rotten fruits that have released peel oil onto it; 3) immature or poor peel quality; 4) whitish fluorescence due to agro-chemicals; and 5) variation of the growing season. The identification of such fluorescence patterns may help the citrus grading industry take initiatives to make the entire automated system more efficient.
Secondary radiations in CBCT: a simulation study
Accurate quantitative reconstruction in kV cone-beam computed tomography (CBCT) is challenged by the presence of secondary radiations (scattered, fluorescence, and bremsstrahlung photons) coming both from the object and from the flat-panel detector itself. This paper presents a simulation study of the CBCT imaging chain as a first step towards the development of a comprehensive correction algorithm. A layer model of the detector is built in a Monte Carlo environment in order to help localize and analyze the secondary radiations. The contribution of these events to the final image is estimated with a forced-detection scheme to speed up the Monte Carlo simulation without loss of accuracy. We assess more specifically to what extent a 2D description of the flat-panel detector is sufficient for the forward model (i.e., the image formation process) of an iterative correction algorithm, both in terms of energy and incidence angle of the incoming photons. A convolution model accounting for detector secondary radiations is presented and validated. Results show that both object and detector secondary radiations have to be considered in CBCT.
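A convolution-based description of detector secondary radiation can be illustrated in one dimension. The toy sketch below is an assumption made for illustration, not the paper's validated model: the measured signal is taken to be the primary plus the primary convolved with a low-amplitude spread kernel, and a fixed-point iteration subtracts the estimated spread:

```python
# Toy 1D illustration of a convolution model for detector secondary
# radiation. The kernel shape and the fixed-point correction scheme are
# hypothetical, chosen only to show the structure of such a model.

def convolve_same(signal, kernel):
    """'Same'-size linear convolution with a centered odd-length kernel."""
    n, k = len(signal), len(kernel)
    half = k // 2
    return [sum(signal[i + half - j] * kernel[j]
                for j in range(k) if 0 <= i + half - j < n)
            for i in range(n)]

def add_detector_scatter(primary, kernel):
    """Forward model: measured = primary + primary (*) kernel."""
    spread = convolve_same(primary, kernel)
    return [p + s for p, s in zip(primary, spread)]

def correct_detector_scatter(measured, kernel, iterations=5):
    """Fixed-point estimate of the primary: p <- measured - kernel (*) p.
    Converges when the kernel's total weight is well below 1."""
    p = list(measured)
    for _ in range(iterations):
        spread = convolve_same(p, kernel)
        p = [m - s for m, s in zip(measured, spread)]
    return p
```

In this toy setup a sharp primary peak acquires low-level side lobes through the kernel, and the iteration recovers the peak; a real correction would of course use a kernel derived from the Monte Carlo detector model.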
A proposal of virtual lens model by using multi-camera array
The purpose of this study is to construct a model of an optical lens by using a multi-camera array. It is known that virtually focused images can be produced by synthetic aperture focusing techniques; however, there is a difference between the blur of such a virtually focused image and the blur of an image produced by an optical lens. We suggest a method to correct this difference. Using our method, it is possible to create images that have multiple discrete focus depths, something that is impossible with an optical lens. Basic experiments were conducted, and the effectiveness of the approach was demonstrated.
Data Fusion
Automatic classification of 3D segmented CT data using data fusion and support vector machine
Three-dimensional X-ray computed tomography (3D-CT) has proven successful as an inspection method in non-destructive testing. The 3D volume generated using high-efficiency reconstruction algorithms contains all the inner structures of the inspected part. Segmentation of this volume reveals suspicious regions, which then need to be classified into defects or false alarms. This paper deals with the classification step, using data fusion theory and a support vector machine. The results achieved are very promising and demonstrate the effectiveness of data fusion theory as a method for building a stronger classifier.
Fusion of geometric and thermographic data for the automatic inspection of forgings and castings
Many workpieces produced in large numbers and with a large variety of sizes and geometries, e.g. castings and forgings, have to be 100% inspected: on the one hand, geometric tolerances need to be examined; on the other hand, material defects and surface cracks have to be detected. In this paper a fully automated non-destructive testing technique is presented, in which the workpiece is continuously moved and two measurements are carried out during this movement. First, a thermographic measurement combined with inductive heating is performed, in which an infrared camera records the temperature distribution at the surface of the workpiece in order to localize material defects and surface cracks. In the second step a light sectioning measurement is carried out to measure the 3D geometry of the piece. With the help of registration, the data from the two different sources are fused (merged) and evaluated together. The advantage of this technique is that a more reliable decision can be made about the failures and their possible causes. The same registration technique can also be used for comparing different pieces to a 'golden', defect-free piece, and therefore for localizing different failure types.
Motion/Registration
Automatic quantitative evaluation of image registration techniques with the E dissimilarity criterion in the case of retinal images
Yann Gavet,
Mathieu Fernandes,
Jean-Charles Pinoli
In human retina observation (with non-mydriatic optical microscopes), a registration process is often employed to enlarge the field of view; for the ophthalmologist, this is a way to save time browsing all the images. Many techniques have been proposed to perform this registration, and the question of how to properly evaluate them naturally arises.

This article presents the use of the ε dissimilarity criterion to evaluate and compare some classical feature-based image registration techniques. The problem of retinal image registration is used as an example, but the criterion could also be applied elsewhere. The images are first segmented, and these segmentations are registered. The quality of this registration is evaluated with the dissimilarity criterion for 25 pairs of images with a manual selection of control points. This study can be useful both for choosing the type of registration method and for evaluating the results of a new one.
Mobile robot control using 3D hand pose estimation
We propose a mobile robot control method using 3D hand pose estimation, without sensors or controllers. The proposed hand pose estimation reduces the number of image features per data set, which makes the construction of a large-scale database possible and enables estimation of the 3D hand poses of unspecified users with individual differences, without sacrificing estimation accuracy. The system involves constructing in advance a large database comprising three elements: hand joint information including the wrist, low-order proportional information on the hand images indicating the rough hand shape, and hand pose data comprising low-order image features per data set.
Non-rigid registration for quality control of printed materials
This paper presents a new approach to non-rigid elastic registration. The method is applied to hyperspectral imaging data for the automatic quality control of decorative foils, which are subject to deformation during lamination. A new image decimation procedure based on Savitzky-Golay smoothing is presented and applied in a multiresolution pyramid. Modified Fourier basis functions, implemented by projection onto the orthogonal complement of a truncated Gram polynomial basis, are presented. The modified functions are used to compute spectra in which the Gibbs error associated with local gradients in the image is reduced. The paper also presents the first direct linear solution to weighted tensor product polynomial approximation. This method is used to regularize the patch coordinates; the solution is equivalent to a Galerkin-type solution to a partial differential equation. The new solution is applied to a published standard data set and to data acquired in a production environment. The speed of the new solution justifies explicit mention: the present solution, implemented in MATLAB, requires approximately 1.3 s to register an image of size 800 × 500 pixels. This is approximately a factor of 10 to 100 faster than previously published results for the same data set.
Texture and Surface Inspection
A template matching approach based on the discrepancy norm for defect detection on regularly textured surfaces
In this paper we introduce a novel algorithm for automatic fault detection in textures. We study the problem of finding a defect in regularly textured images with an approach based on a template matching principle.

We aim at registering patches of an input image against a defect-free reference sample according to some admissible transformations. This approach becomes feasible by introducing the so-called discrepancy norm as the fitness function, which exhibits particular behavior such as monotonicity and a Lipschitz property. The proposed approach relies on only a few parameters, which makes it an easily adaptable algorithm for industrial applications and, above all, avoids complex tuning of configuration parameters.

Experiments demonstrate the feasibility and reliability of the proposed algorithm with textures from real-world applications in the context of quality inspection of woven textiles.
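For reference, the discrepancy norm of a 1D signal is the largest absolute sum over any index interval, computable in linear time from prefix sums; this interval structure is what gives it the monotonic, Lipschitz behavior exploited as a fitness function. A minimal sketch (the paper's 2D patch-matching machinery is omitted):

```python
# Discrepancy norm of a 1D signal x:
#   ||x||_D = max over all intervals [a, b] of |sum(x[a..b])|
# which equals (max prefix sum) - (min prefix sum), with the empty
# prefix (value 0) included. Unlike the L2 norm, this grows smoothly
# and monotonically as a pattern is shifted away from alignment,
# which is what makes it attractive for template matching.

def discrepancy_norm(x):
    prefix, lo, hi = 0.0, 0.0, 0.0
    for v in x:
        prefix += v
        lo = min(lo, prefix)
        hi = max(hi, prefix)
    return hi - lo
```

For instance, an alternating residual like [1, -1, 1, -1] has discrepancy norm 1 (no interval accumulates more), while [1, 1, -3, 1] has norm 3, driven by the single large deviation.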
Unsupervised segmentation based on Von Mises circular distributions for orientation estimation in textured images
This paper deals with textured images, and more particularly with directional textures. We propose a new parametric technique to estimate the orientation field of textures. It consists in partitioning the image into regions with homogeneous orientations and then estimating the orientation inside each of these regions, which allows us to maximize the size of the samples used to estimate the orientation without being corrupted by the presence of frontiers between regions. Once the local, and hence noisy, orientations of the texture have been estimated using small filters (3×3 pixels), image partitioning is based on minimizing the stochastic complexity (Minimum Description Length principle) of the orientation field. The orientation fluctuations are modeled with von Mises probability density functions, leading to a fast and unsupervised partitioning algorithm. The accuracy of the orientations estimated with the proposed method is then compared with other approaches on synthetic images. An application to the processing of real images is finally addressed.
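One piece of this approach is easy to make concrete: for a von Mises model, the maximum-likelihood estimate of the mean direction is the circular mean. The sketch below applies it to axial data (orientations modulo π) via the standard angle-doubling trick; it is an illustration of that single step, not the paper's partitioning algorithm:

```python
import math

# Circular mean of orientations. Texture orientations live on [0, pi),
# so angles are doubled before averaging (the angle-doubling trick for
# axial data) and halved afterwards. For a von Mises sample this is the
# maximum-likelihood estimate of the mean direction.

def mean_orientation(thetas):
    """Circular mean of axial data (orientations modulo pi), in radians."""
    s = sum(math.sin(2 * t) for t in thetas)
    c = sum(math.cos(2 * t) for t in thetas)
    return (math.atan2(s, c) / 2) % math.pi
```

The axial treatment matters: naively averaging 0.1 and π − 0.1 gives roughly π/2, whereas the circular mean correctly returns an orientation near 0 (equivalently near π), since the two orientations are almost identical modulo π.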
Algorithms for microindentation measurement in automated Vickers hardness testing
Current algorithms for automated indentation measurement in the context of Vickers microindentation hardness testing suffer from a lack of robustness with respect to entirely missed indentation corner points when applied to real-world data sets. Four original algorithms are proposed, discussed, and evaluated on a significant data set of indentation images. Three out of these four exhibit accuracy close to that of human-operated hardness testing, which was conducted as a reference technique.
Measuring image sharpness for a computer vision-based Vickers hardness measurement system
A large variety of autofocus functions used for assessing image sharpness is evaluated for application in a passive autofocus system in the context of microindentation-based Vickers hardness testing. The functions are evaluated on a significant dataset of microindentation images with respect to the accuracy of sharpness assessment, their robustness to downsampling of the image data, and their computational demand. Experiments suggest that the simple Brenner autofocus function is the best compromise between accuracy and computational effort in the considered application context.
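The Brenner function singled out here is simple enough to state in a few lines: it sums squared differences between pixels two positions apart, so sharper images score higher. A pure-Python sketch for a grayscale image given as a list of rows:

```python
# Brenner autofocus function: a classical sharpness measure summing the
# squared intensity differences between pixels two columns apart. Sharp
# edges produce large two-pixel differences, so in-focus images score
# higher than blurred ones.

def brenner_sharpness(img):
    """img: grayscale image as a list of rows of numbers."""
    return sum((row[x + 2] - row[x]) ** 2
               for row in img for x in range(len(row) - 2))
```

In a passive autofocus loop, the same scene is scored at a range of focus positions and the lens is driven to the maximum of this function; its low cost (one subtraction, one multiply per pixel) is what makes it attractive compared with more elaborate sharpness measures.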
Poster Session
An algorithm for a progressive acquisition image sensor
In this paper, we present a novel approach to adaptive and progressive image acquisition, based on the progressive transmission of an image decomposed into compositions and superpositions of monovariate functions. The monovariate functions are iteratively constructed from the acquired data to progressively reconstruct the final image: the transmission is performed directly in the 1D space of the monovariate functions, independently of any statistical properties of the image. Each monovariate function contains only a fraction of the pixels of the image, and each newly transmitted monovariate function adds data to the previously transmitted ones. After each partial acquisition, by using the updated monovariate functions, the image is reconstructed at an increased resolution. Finally, once all the monovariate functions have been transmitted, the original image is reconstructed exactly, at the maximum resolution of the sensor. This approach is characterized by its flexibility: any number of intermediate transmissions and reconstructions is possible. Moreover, the intermediate images can be reconstructed at any resolution, and for any number of intermediate reconstructions the original image will be exactly reconstructed. Finally, the quantity of data is independent of the number and resolutions of the intermediate reconstructions.

Our contributions include the application of a flexible progressive transmission scheme to provide progressive and flexible acquisition at various resolutions. Moreover, the accuracy of the full-resolution image is preserved, and the acquired data are encrypted and resilient to packet loss.
Shape matching by affine movement estimation for 3D reconstruction
M. Saidani,
F. Ghorbel
In this work, we introduce a matching algorithm for 3D reconstruction of a planar object immersed in a three-dimensional scene from a stereo pair of images. The algorithm starts by matching closed contours having the same affine shape. This is achieved by applying a complete and stable set of absolute affine invariants, computed after an affine-invariant re-sampling procedure based on a normalized affine arc length. For each pair of corresponding contours, we introduce an efficient and robust affine motion estimation using the pseudo-inverse method. Robustness comes from the contribution of all contour points to the estimation of the geometrical parameters used to reconstruct the apparent movement. We propose this idea to consolidate conventional matching algorithms in the context of stereovision, assuming that the objects are placed far enough from the sensors. In this paper, we illustrate the efficiency of the proposed algorithm by applying it in an archaeological context.