Proceedings Volume 8300

Image Processing: Machine Vision Applications V

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 28 February 2012
Contents: 6 Sessions, 22 Papers, 0 Presentations
Conference: IS&T/SPIE Electronic Imaging 2012
Volume Number: 8300

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 8300
  • Systems
  • Algorithms
  • Detection and Tracking
  • Applications
  • Interactive Paper and Symposium Demonstration Session
Front Matter: Volume 8300
This PDF file contains the front matter associated with SPIE Proceedings Volume 8300, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Systems
Sensor placement optimization in buildings
Simone Bianco, Francesco Tisato
In this work we address the problem of optimal sensor placement for a given region and task. An important issue in designing sensor arrays is placing the sensors so that they achieve a predefined goal. Many problems could be considered in the placement of multiple sensors; here we focus on the four problems identified by Hörster and Lienhart. To solve them, we propose an algorithm based on Direct Search (DS), which is able to approach the globally optimal solution within reasonable time and memory consumption. The algorithm is experimentally evaluated on two real floorplans. The results show that our DS algorithm improves on the best-performing heuristic from that earlier work. The algorithm is then extended to continuous solution spaces and to 3D problems.
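As an illustration only (not the authors' implementation), the following Python sketch shows a compass-style direct search placing a single sensor on a 2D floorplan; the coverage model (fixed sensing radius, no visibility constraints) and all names are assumptions made for this sketch.

    # Illustrative pattern (direct) search for placing one sensor on a 2D floorplan.
    import numpy as np

    def coverage(pos, targets, radius=3.0):
        """Fraction of target points within sensing radius of the sensor."""
        d = np.linalg.norm(targets - pos, axis=1)
        return np.mean(d <= radius)

    def direct_search(targets, start, step=2.0, tol=1e-2):
        """Compass-style direct search: probe the four axis directions,
        move to the best improving point, otherwise shrink the step."""
        pos, best = np.asarray(start, float), coverage(start, targets)
        while step > tol:
            probes = [pos + step * d for d in
                      (np.array([1, 0]), np.array([-1, 0]),
                       np.array([0, 1]), np.array([0, -1]))]
            scores = [coverage(p, targets) for p in probes]
            if max(scores) > best:
                best, pos = max(scores), probes[int(np.argmax(scores))]
            else:
                step *= 0.5          # no improvement: refine the mesh
        return pos, best

    # toy floorplan: points to be covered inside a 10x10 room
    rng = np.random.default_rng(0)
    targets = rng.uniform(0, 10, size=(200, 2))
    print(direct_search(targets, start=(0.0, 0.0)))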
Optical feature extraction with illumination-encoded linear functions
Robin Gruna, Jürgen Beyerer
The choice of an appropriate illumination design is one of the most important steps in creating successful machine vision systems for automated inspection tasks. In a popular technique, multiple inspection images are captured under angular-varying illumination directions over the hemisphere, which yields a set of images referred to as illumination series. However, most existing approaches are restricted in that they use rather simple patterns like point- or sector-shaped illumination patterns on the hemisphere. In this paper, we present an illumination technique which reduces the effort for capturing inspection images for each reflectance feature by using linear combinations of basis light patterns over the hemisphere as feature-specific illumination patterns. The key idea is to encode linear functions for feature extraction as angular-dependent illumination patterns, and thereby to compute linear features from the scene's reflectance field directly in the optical domain. In the experimental part, we evaluate the proposed illumination technique on the problem of optical material type classification of printed circuit boards (PCBs).
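For orientation, a minimal numpy sketch of the underlying idea under a simple linear image-formation assumption (pixel value = weighted sum of per-direction reflectances): a linear feature can be computed either numerically from a full illumination series or optically from two exposures under weight-encoded illumination patterns. All variable names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)
    K, H, W = 16, 4, 4                    # 16 basis light directions, tiny image
    R = rng.uniform(0, 1, size=(K, H, W)) # per-pixel reflectance for each direction
    w = rng.normal(size=K)                # linear feature weights to be encoded

    # (a) conventional route: capture K images, combine numerically
    series = R                            # one image per basis illumination
    feat_numeric = np.tensordot(w, series, axes=1)

    # (b) optical route: encode w as two non-negative illumination patterns
    w_pos, w_neg = np.clip(w, 0, None), np.clip(-w, 0, None)
    img_pos = np.tensordot(w_pos, R, axes=1)   # single exposure under pattern w+
    img_neg = np.tensordot(w_neg, R, axes=1)   # single exposure under pattern w-
    feat_optical = img_pos - img_neg           # two captures instead of K

    print(np.allclose(feat_numeric, feat_optical))   # True under this linear model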
Algorithms
An illumination-invariant phase-shifting algorithm for three-dimensional profilometry
Fuqin Deng, Chang Liu, Wuifung Sze, et al.
Uneven illumination is a common problem in real optical systems for machine vision applications, and it contributes significant errors when using phase-shifting algorithms (PSA) to reconstruct the surface of a moving object. Here, we propose an illumination-reflectivity-focus (IRF) model to characterize this uneven illumination effect on phase-measuring profilometry. With this model, we separate the illumination factor effectively, and then formulate the phase reconstruction as an optimization problem. To simplify the optimization process, we calibrate the uneven illumination distribution beforehand, and then use the calibrated illumination information during surface profilometry. After calibration, the degrees of freedom are reduced. Accordingly, we develop a novel illumination-invariant phase-shifting algorithm (II-PSA) to reconstruct the surface of a moving object under an uneven illumination environment. Experimental results show that the proposed algorithm can improve the reconstruction quality both visually and numerically. Therefore, using this IRF model and the corresponding II-PSA, not only can we handle uneven illumination in a real optical system with a large field of view (FOV), but we also develop a robust and efficient method for reconstructing the surface of a moving object.
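The following is not the authors' II-PSA, only a minimal classical four-step phase-shifting reconstruction in which a pre-calibrated illumination map is divided out, assuming the uneven illumination acts multiplicatively; it illustrates how calibrated illumination information can enter the phase computation.

    import numpy as np

    def four_step_phase(images, illum=None, eps=1e-8):
        """images: list of 4 frames with phase shifts 0, pi/2, pi, 3*pi/2.
        illum: calibrated illumination map (same shape), or None."""
        I0, I1, I2, I3 = (np.asarray(im, float) for im in images)
        if illum is not None:
            L = np.asarray(illum, float) + eps
            I0, I1, I2, I3 = I0 / L, I1 / L, I2 / L, I3 / L
        return np.arctan2(I3 - I1, I0 - I2)     # wrapped phase in (-pi, pi]

    # synthetic check: a tilted phase plane imaged under uneven illumination
    x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    phi = 2 * np.pi * x
    L = 0.5 + 0.5 * y                           # uneven illumination field
    frames = [L * (1.0 + 0.8 * np.cos(phi + n * np.pi / 2)) for n in range(4)]
    phi_hat = four_step_phase(frames, illum=L)
    print(np.allclose(np.angle(np.exp(1j * (phi_hat - phi))), 0, atol=1e-6))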
Fusing shape and texture features for pose-robust face recognition
Thorsten Gernoth, Rolf-Rainer Grigat
Unconstrained environments with variable ambient illumination and changes of head pose are still challenging for many face recognition systems. To recognize a person independent of pose, we separate shape from texture information using an active appearance model. We do not directly use the texture information from the active appearance model for recognition. Instead we extract local texture features from a shape- and pose-free representation of facial images, obtained with a smooth warp function. We also compensate the shape information for head pose changes and fuse the results of separate classifiers for shape features and local texture features. We analyze the influence of the individual contributions of shape and texture information on the recognition performance and show that fusing shape and texture information can boost the recognition performance in an access control scenario.
Automated inspection of tubular material based on magnetic particle inspection
An automatic industrial surface inspection methodology based on Magnetic Particle Inspection is developed, from image acquisition to defect classification. First the acquisition system is optimized; tubular material images are then acquired, reconstructed, and stored. The characteristics of crack-like defects, in terms of their geometric model and curvature, are used as a priori knowledge for mathematical morphology and linear filtering. After segmentation and binarization of the image, a large number of defect candidates remain. Classification is finally performed with a decision tree learning algorithm, chosen for its robustness and speed. The parameters for the mathematical morphology, linear filtering, and classification steps are analyzed and optimized with a Design of Experiments based on the Taguchi approach; the most significant parameters can then be analyzed and tuned further. Experiments are performed on tubular materials and evaluated for accuracy and robustness by comparing processed images with ground truth. The results are promising, with a 97% true positive rate and only a 0.01% false positive rate on the test set.
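A hedged sketch of such a pipeline (kernel size, thresholds and feature set are assumptions, not the optimized Taguchi parameters of the paper), using OpenCV morphology to enhance thin bright indications and a scikit-learn decision tree on simple geometric features of the candidate regions.

    import cv2
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def candidate_features(gray):
        """Return (features, stats) for bright, elongated candidate regions."""
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
        tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
        _, binary = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
        feats = []
        for i in range(1, n):                       # label 0 is background
            x, y, w, h, area = stats[i]
            elongation = max(w, h) / max(1, min(w, h))
            fill = area / float(w * h)
            feats.append([area, elongation, fill])
        return np.array(feats), stats[1:]

    # training uses candidates from annotated images, labels 1 = crack, 0 = false alarm
    # X_train, y_train = ...   (built from ground-truth annotated candidates)
    # clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
    # X_test, _ = candidate_features(cv2.imread("tube.png", cv2.IMREAD_GRAYSCALE))
    # is_defect = clf.predict(X_test)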
Detection and Tracking
Runway hazard detection in poor visibility conditions
Bo Jiang, Zia-ur Rahman
Research on enhancing the situational awareness of pilots, especially in poor-visibility flight conditions, has attracted growing interest in recent years. Since pilots may not be able to spot the runway clearly in poor visibility caused by fog, smoke, haze, or dim lighting, landing hazards can arise from the unexpected presence of objects on the runway. The many instruments, switches, and buttons, together with sudden events, already demand the pilot's attention during the landing approach. We therefore investigate an automatic hazard detection approach that combines non-linear multi-scale retinex (MSR) image enhancement, edge detection with basic edge pattern analysis, and image analysis. The enhancement step makes the image of the runway largely independent of the poor atmospheric conditions. The subsequent edge detection step extracts edge information, which also reduces storage space, comparison and retrieval time, and the effect of sensor noise. After analyzing the edge differences occurring in the runway area with digital image processing techniques, potential hazards are localized and labeled. Experimental results show that the proposed approach is effective for runway hazard detection in poor visibility conditions.
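For illustration, a minimal single-channel multi-scale retinex in the common formulation MSR = sum_i w_i (log I - log(G_sigma_i * I)); the scales, equal weights and the final rescaling are assumptions, not the paper's settings.

    import cv2
    import numpy as np

    def multiscale_retinex(gray, sigmas=(15, 80, 200), eps=1.0):
        img = gray.astype(np.float64) + eps
        msr = np.zeros_like(img)
        for sigma in sigmas:
            blur = cv2.GaussianBlur(img, (0, 0), sigma)      # surround estimate
            msr += (np.log(img) - np.log(blur)) / len(sigmas)
        # stretch to 8 bit for display / further edge detection
        msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-12)
        return (255 * msr).astype(np.uint8)

    # enhanced = multiscale_retinex(cv2.imread("runway.png", cv2.IMREAD_GRAYSCALE))
    # edges = cv2.Canny(enhanced, 50, 150)   # edges then analysed for hazards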
Application of image processing to track twin boundary motion in magnetic shape memory alloys
Adrian Rothenbuhler, Elisa H. Barney Smith, Peter Müllner
Materials scientists make increasing use of image processing tools as technology advances and the volume of data to be analyzed grows. We propose a method to optically measure magnetic field induced strain (MFIS) as well as twin boundary movement in Ni2MnGa single crystal shape memory alloys, to facilitate spatially resolved tracking of deformation. Current magneto-mechanical experiments used to measure MFIS can measure strain only in one direction and do not provide information about the movement of individual twin boundaries. A sequence of images captured from a high resolution camera is analyzed by a boundary detection algorithm to provide strain data in multiple directions. Subsequent motion detection and Hough feature extraction provide quantitative information about the location and movement of active twin boundaries.
A new point process model for trajectory-based events annotation
Nicolas Ballas, Bertrand Delezoide, Françoise Prêteux
Human action annotation in videos has received increasing attention from the scientific community in recent years, mainly because of its importance for many computer vision applications. The current leading paradigm for human action annotation is based on local features: local features robust to geometric transformations and occlusion are extracted from a video and aggregated to obtain a global video signature. However, current aggregation schemes such as Bag-of-Words or spatio-temporal grids retain no, or only limited, information about the spatio-temporal localization of the local features in the video, and it has been shown that this localization can be helpful for detecting a concept or an action. In this work we improve the aggregation step by embedding the spatio-temporal information of local features in the final video representation through a point process model. We propose an event recognition system involving two main steps: (1) local feature extraction based on robust point trajectories, and (2) a global action representation capturing spatio-temporal context information through an innovative point process clustering. A point process indeed provides a well-defined formalism to characterize the localization of local features along with their interactions. Results are evaluated on the Hollywood Human Actions (HOHA) dataset, showing an improvement over the state of the art.
Face detection and eyeglasses detection for thermal face recognition
Thermal face recognition has become an active research direction in human identification because it does not rely on illumination conditions. Face detection and eyeglasses detection are necessary steps prior to face recognition using thermal images. Infrared light cannot pass through glasses, so glasses appear as dark areas in a thermal image; one possible solution is to detect the eyeglasses and exclude those areas before face matching. For thermal face detection, a projection profile analysis algorithm is proposed: region growing and morphology operations are used to segment the body of a subject, and the derivatives of two projections (horizontal and vertical) are then calculated and analyzed to locate a minimal rectangle containing the face area. The search region for eyeglasses is restricted to the detected face area. The eyeglasses detection algorithm produces either a binary mask if eyeglasses are present, or an empty set if they are not. The proposed eyeglasses detection algorithm employs block processing, region growing, and a priori knowledge (i.e., low mean and variance within glasses areas, and the typical shapes and locations of eyeglasses). The results of face detection and eyeglasses detection are quantitatively measured and analyzed against manually defined ground truths (for both face and eyeglasses). Our experimental results show that the proposed face detection and eyeglasses detection algorithms perform very well with respect to these ground truths.
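A rough sketch of projection-profile face localization on a thermal image; the thresholding rule, derivative peak picking and face aspect ratio below are assumptions made for this sketch, not the authors' exact procedure.

    import numpy as np

    def face_box_from_projections(thermal):
        """Locate a rough face bounding box in a thermal image."""
        body = thermal > np.percentile(thermal, 80)        # crude warm-body mask
        col_sums = body.sum(axis=0).astype(float)          # projection onto x
        row_sums = body.sum(axis=1).astype(float)          # projection onto y
        dcol, drow = np.diff(col_sums), np.diff(row_sums)
        left, right = int(np.argmax(dcol)), int(np.argmin(dcol))  # steepest rise/fall
        top = int(np.argmax(drow))                         # top of head/shoulders
        width = max(1, right - left)
        bottom = min(thermal.shape[0] - 1, top + int(1.3 * width))  # assumed aspect
        return top, bottom, left, right

    # eyeglasses are then searched only inside this box, e.g. as compact regions
    # of low mean and low variance (glass is opaque in the thermal band).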
Applications
Strain analysis by regularized non-rigid registration
This paper presents a new approach to optical material stress analysis which eliminates the need to apply a random dot pattern to the surface of the sample being tested. A multi-resolution hierarchical sub-division is implemented, with a consistent polynomial decimation applied at each layer of the tree; the degree of decimation must be selected depending on the structure of the surface of the sample being tested. At each layer the individual patches are registered using a modified normalized phase correlation, whereby the Fourier basis functions are projected onto the orthogonal complement of a low-degree Gram polynomial basis. This reduces the effect of the Gibbs error on the local registration. The registration positions are then subjected to a regularization via an entropy-weighted tensor-polynomial approximation. The Gram polynomial basis is used for the tensor product, since it is orthonormal and models the continuous deformation associated with an elastic deformation. The stability of the proposed method is demonstrated in real measurements, and the results with and without the application of the random pattern are compared.
Combining spatial and spectral information to improve crop/weed discrimination algorithms
L. Yan, G. Jones, S. Villette, et al.
Reducing herbicide spraying is an important key to improving weed management both environmentally and economically. To achieve this, remote sensors such as imaging systems are commonly used to detect weed plants. We developed spatial algorithms that detect the crop rows in order to discriminate crop from weeds; these algorithms have been thoroughly tested and provide robust and accurate results without a learning process, but their detection is limited to inter-row areas. Crop/weed discrimination using spectral information can detect intra-row weeds but generally needs a prior learning process. We propose a method based on both spatial and spectral information to enhance the discrimination and overcome the limitations of each approach: the classification from the spatial algorithm is used to build the training set for the spectral discrimination method. With this approach we are able to extend weed detection to the entire field (inter- and intra-row). To test the efficiency of these algorithms, a database of virtual images generated with the SimAField model and combined with the LOPEX93 spectral database has been used. The developed method is evaluated and compared with the initial method, showing a substantial improvement in weed detection from 86% to more than 95%.
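A hedged sketch of the combination step, assuming per-pixel reflectance spectra and an SVM as the spectral classifier (the actual classifier and feature layout in the paper may differ): labels from the spatial, row-based discrimination train a spectral model that is then applied to the whole field.

    import numpy as np
    from sklearn.svm import SVC

    def train_spectral_from_spatial(spectra, spatial_labels, confident_mask):
        """spectra: (N, B) pixel reflectances; spatial_labels: 0 = crop, 1 = weed,
        valid only where confident_mask is True (inter-row area)."""
        X, y = spectra[confident_mask], spatial_labels[confident_mask]
        return SVC(kernel="rbf", gamma="scale").fit(X, y)

    # clf = train_spectral_from_spatial(spectra, spatial_labels, inter_row_mask)
    # full_field_labels = clf.predict(spectra)   # now covers inter- and intra-row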
Automated parasites detection in clams by transillumination imaging and pattern classification
Miguel Soto, Pablo Coelho, Jose Soto, et al.
Quality control of clams includes the detection of foreign objects such as shell pieces, sand, and even parasites. In particular, Mulinia edulis clams are susceptible to infection by the isopod Edotea magellanica, which represents a serious commercial problem commonly addressed by manual inspection. In this work a machine vision system capable of automatically detecting the parasite from a clam image is presented. Visualization of the parasite inside the clam is achieved by an optoelectronic imaging system based on a transillumination technique. Automatic parasite detection in the clam image is then accomplished by a pattern recognition system designed to quantitatively describe candidate parasite zones. The extracted features are used to predict the presence of the parasite by means of a binary decision tree classifier. A real sample dataset of more than 155,000 patterns of candidate parasite zones was generated using 190 shell-off cooked clams from the Chilean South Pacific coast. This data collection was used to train and test the classifier using cross-validation. Preliminary results show a mean parasite detection rate of 85% and a mean total correct classification rate of 87%, which represents a substantial improvement over existing solutions.
Vision-based in-line fabric defect detection using yarn-specific shape features
Dorian Schneider, Til Aach
We develop a methodology for automatic in-line flaw detection in industrial woven fabrics. Whereas state-of-the-art detection algorithms apply texture analysis methods to low-resolution (~200 ppi) image data, we describe here a process flow to segment single yarns in high-resolution (~1000 ppi) textile images. Four yarn shape features are extracted, allowing precise detection and measurement of defects. The degree of precision reached allows a classification of detected defects according to their nature, providing an innovation in the field of automatic fabric flaw detection. The design has been carried out to meet real-time requirements and to cope with adverse conditions caused by loom vibrations and dirt. The entire process flow is discussed, followed by an evaluation using a database of real-life industrial fabric images. This work pertains to the construction of an on-loom defect detection system to be used in manufacturing practice.
3D temperature mapping of turboshaft components using thermal paints and color recognition
S. Guérin, C. Lempereur, P. Brevet
In order to enhance turboshaft lifespan and increase thermal efficiency, aeronautical manufacturers have to optimize the temperature of engine components in operation. Dedicated combustion tests are undertaken and specific techniques are developed to measure surface temperatures. Thermal paints, interpreted by skilled operators, have been used for several years as a valuable means of obtaining peak temperature profiles. This article describes major advances in the analysis process, based on supervised color-to-temperature classification and digitization of the outer shape of the components to obtain 3D temperature maps. A non-contact scanner acquires both a color image and a 3D mesh of the component. The color image is processed with a classification algorithm to produce a temperature image: different colorimetric distances are tested to compare each pixel to the database and find the best matching temperature, which is then associated with a node of the 3D mesh. The value of the method is that it increases temperature resolution and robustness and allows more reliable comparisons between numerical simulation and bench test measurements. This system is currently implemented in the engine development process at Turbomeca.
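For illustration, a minimal per-pixel colour-to-temperature lookup using a plain Euclidean distance in Lab; the paper compares several colorimetric distances, so this particular choice and the variable names are assumptions.

    import numpy as np

    def temperature_map(lab_image, db_lab, db_temp):
        """lab_image: (H, W, 3) Lab pixels; db_lab: (M, 3) calibrated paint
        colours; db_temp: (M,) corresponding temperatures."""
        pixels = lab_image.reshape(-1, 3)
        # distance of every pixel to every database colour
        d = np.linalg.norm(pixels[:, None, :] - db_lab[None, :, :], axis=2)
        nearest = np.argmin(d, axis=1)
        return db_temp[nearest].reshape(lab_image.shape[:2])

    # each temperature value is then attached to the corresponding 3D mesh node.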
Interactive Paper and Symposium Demonstration Session
Efficient local approximation of perceptual color differences for color inspection
Reinhold Huber-Mörk
We suggest a local approximation of perceptual color differences in a device-dependent color space, e.g. an RGB space. The approximation is computed efficiently by measuring the Euclidean color distance in the device-dependent color space, combined with an associative memory data structure. Established measures of color difference are considered. Results for small perceptual color differences in a color inspection setup are given.
Modified fuzzy c-means applied to a Bragg grating-based spectral imager for material clustering
Aida Rodríguez, Juan Luis Nieves, Eva Valero, et al.
We have modified the Fuzzy C-Means algorithm for an application related to segmentation of hyperspectral images. The classical fuzzy c-means algorithm uses the Euclidean distance to compute sample membership to each cluster. We have introduced a different distance metric, the Spectral Similarity Value (SSV), in order to have a more suitable similarity measure for reflectance information. The SSV metric considers both magnitude differences (through the Euclidean distance) and spectral shape (through the Pearson correlation). Experiments confirmed that introducing this metric improves the quality of hyperspectral image segmentation, creating spectrally denser clusters and increasing the number of correctly classified pixels.
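One common formulation of the SSV, shown here as a hedged sketch that may differ in normalisation from the paper's definition: the Euclidean distance term captures magnitude differences and the (1 - r^2) term captures shape differences.

    import numpy as np

    def ssv(s1, s2):
        """Spectral Similarity Value between two reflectance spectra."""
        s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
        de = np.linalg.norm(s1 - s2) / np.sqrt(s1.size)   # magnitude term
        r = np.corrcoef(s1, s2)[0, 1]                     # shape term
        return np.sqrt(de ** 2 + (1.0 - r ** 2))

    # inside fuzzy c-means, ssv(sample, cluster_centre) would replace the
    # Euclidean distance when updating memberships.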
Robust recognition of 1D barcodes using Hough transform
John Dwinell, Peng Bian, Long Xiang Bian
In this paper we present an algorithm for the recognition of 1D barcodes using the Hough transform which is highly robust to the typical degradations of real images. The algorithm addresses various common image distortions, such as inhomogeneous illumination, reflections, damaged barcodes, and blurriness; further problems arise from low-quality printing (low contrast or poor ink receptivity). Traditional approaches are unable to provide a fast solution for handling such complex and mixed noise factors. A multi-level method offers a better way to manage the competing constraints of complex noise and fast decoding. At the lowest level, images are processed in gray scale. At the middle level, the image is transformed into the Hough domain. At the top level, global results, including missing information, are processed within a global context including domain heuristics as well as OCR. The three levels work closely together by passing information up and down between levels.
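As a simplified illustration of the middle level only (thresholds and voting parameters are assumptions, not the multi-level algorithm of the paper), the dominant bar orientation can be recovered from the Hough domain before a scanline is decoded.

    import cv2
    import numpy as np

    def barcode_orientation(gray):
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=80)
        if lines is None:
            return None
        thetas = lines[:, 0, 1]                 # angle of each detected line
        hist, bin_edges = np.histogram(thetas, bins=180, range=(0, np.pi))
        dominant = bin_edges[np.argmax(hist)]   # bars share one orientation
        return float(dominant)

    # a scanline taken perpendicular to the bars at this angle is then decoded
    # (bar/space widths), with heuristics and OCR available as a fallback.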
Estimating the coordinates of pillars and posts in the parking lots for intelligent parking assist system
Jae Hyung Choi, Jung Gap Kuk, Young Il Kim, et al.
This paper proposes an algorithm for the detection of pillars and posts in video captured by a single camera mounted in front of the rear-view mirror of a car. The main purpose of this algorithm is to complement the weakness of current ultrasonic parking assist systems, which do not find the exact position of pillars well and do not recognize narrow posts. The proposed algorithm consists of three steps: straight line detection, line tracking, and estimation of the 3D position of pillars. In the first step, strong lines are found with the Hough transform. The second step combines detection and tracking, and the third calculates the 3D position of each line by analyzing the trajectory of its relative positions together with the camera parameters. Experiments on synthetic and real images show that the proposed method successfully locates and tracks the position of pillars, which helps the ultrasonic system correctly locate the edges of pillars. We believe the proposed algorithm can also serve as a basic element of vision-based autonomous driving systems.
Recognizing human gestures using a novel SVM tree
Hitesh Jain, Abhik Chatterjee, Sanjeev Kumar, et al.
In this paper, a novel support vector machine (SVM) tree is proposed for gesture recognition from silhouette images. A skeleton-based strategy is adopted to extract features from a video sequence representing a human gesture. In our binary tree implementation of SVMs, the number of binary classifiers required is reduced: instead of grouping different classes together to train a global classifier, we select two classes for training at every node of the tree and use probability theory to assign the remaining classes based on their similarities and differences to the two training classes. This process is carried out recursively, randomly selecting two classes for training at a node, creating two child nodes, and assigning the remaining classes to them. In the classification phase, we start at the root node; at each node of the tree a binary decision is made assigning the input data point to the group represented by either the left or the right sub-tree, each of which may contain multiple classes. This is repeated recursively downward until a leaf node is reached, which represents the class to which the input data point belongs. Finally, the proposed framework is tested on various data sets to check its efficiency, and encouraging results are achieved in terms of classification accuracy.
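A simplified sketch of a binary SVM tree: seed classes are taken as the first two rather than selected randomly, and the probabilistic assignment of the paper is replaced by mean decision values; scikit-learn's SVC is assumed.

    import numpy as np
    from sklearn.svm import SVC

    class SVMTreeNode:
        def __init__(self, X, y):
            self.classes = np.unique(y)
            if len(self.classes) == 1:
                self.leaf = self.classes[0]
                return
            self.leaf = None
            a, b = self.classes[0], self.classes[1]      # two seed classes
            seed = np.isin(y, [a, b])
            self.svm = SVC(kernel="rbf").fit(X[seed], (y[seed] == b).astype(int))
            # seeds anchor the two sides; other classes follow their mean decision value
            left_classes, right_classes = [a], [b]
            for c in self.classes:
                if c == a or c == b:
                    continue
                side = self.svm.decision_function(X[y == c]).mean()
                (right_classes if side > 0 else left_classes).append(c)
            left = np.isin(y, left_classes)
            self.left = SVMTreeNode(X[left], y[left])
            self.right = SVMTreeNode(X[~left], y[~left])

        def predict_one(self, x):
            if self.leaf is not None:
                return self.leaf
            side = self.svm.decision_function(x.reshape(1, -1))[0]
            return (self.right if side > 0 else self.left).predict_one(x)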
Fabric defect detection using the wavelet transform in an ARM processor
J. A. Fernández, S. A. Orjuela, J. Álvarez, et al.
Small devices used in daily life are built on powerful architectures that can be used for industrial applications requiring portability and communication facilities. We present in this paper an example of the use of an embedded system, the Zeus epic 520 single board computer, for defect detection in textiles using image processing. We implement the Haar wavelet transform using the eMbedded Visual C++ 4.0 compiler for Windows CE 5. The algorithm was tested for defect detection using images of fabrics with five types of defects. An average of 95% correct defect detection was obtained, a performance similar to that of processors with floating-point arithmetic.
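For reference, a single-level 2D Haar decomposition in plain numpy using only integer additions, subtractions and divisions, similar in spirit to what runs efficiently without floating-point hardware; the defect rule on the detail sub-bands is an assumption.

    import numpy as np

    def haar2d(img):
        """img: 2D array with even height and width. Returns (LL, LH, HL, HH)."""
        a = img.astype(np.int32)
        # rows: sums and differences of neighbouring columns
        lo = a[:, 0::2] + a[:, 1::2]
        hi = a[:, 0::2] - a[:, 1::2]
        # columns: repeat on the row-transformed data, normalize roughly by 4
        LL = (lo[0::2, :] + lo[1::2, :]) // 4
        LH = (lo[0::2, :] - lo[1::2, :]) // 4
        HL = (hi[0::2, :] + hi[1::2, :]) // 4
        HH = (hi[0::2, :] - hi[1::2, :]) // 4
        return LL, LH, HL, HH

    # a fabric defect typically shows up as locally high energy in the detail
    # sub-bands: defect_map = (np.abs(LH) + np.abs(HL) + np.abs(HH)) > threshold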
Orthophotoplan segmentation based on region merging for roof detection
Y. El Merabet, C. Meurie, Y. Ruichek, et al.
In this paper, we propose an orthophotoplan segmentation method for roof detection, based on the watershed algorithm combined with an efficient region merging strategy. The preliminary segmentation is obtained with the watershed algorithm using a couple of colorimetric invariant and color gradient optimized for the application; the appropriate invariant/gradient couple limits the effect of illumination changes (shadows, brightness, etc.) on the images. Although the watershed results are good, the images are over-segmented, which is why a region merging procedure is proposed. This procedure uses a merging criterion based on 2D modeling of roof ridges and on region features adapted to the particularities of the orthophotoplan. The proposed strategy is evaluated on 100 real roof images with ground-truth segmentations in order to demonstrate the effectiveness and reliability of the proposed approach.
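A much simplified stand-in for the proposed method (scikit-image assumed available; the roof-ridge merging criterion is replaced by a plain mean-colour threshold): watershed over-segmentation of a gradient image followed by greedy merging of adjacent regions.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import sobel
    from skimage.segmentation import watershed
    from skimage.color import rgb2gray

    def oversegment(rgb, marker_threshold=0.05):
        gradient = sobel(rgb2gray(rgb))
        markers, _ = ndi.label(gradient < marker_threshold)   # flat areas as seeds
        return watershed(gradient, markers)

    def merge_similar(labels, rgb, tol=10.0):
        """Greedy merge of 4-adjacent regions with close mean colours (union-find)."""
        ids = np.unique(labels)
        means = {l: rgb[labels == l].mean(axis=0) for l in ids}
        parent = {l: l for l in ids}

        def find(l):
            while parent[l] != l:
                parent[l] = parent[parent[l]]
                l = parent[l]
            return l

        # pairs of labels that touch horizontally or vertically
        pairs = set(map(tuple, np.vstack([
            np.c_[labels[:, :-1].ravel(), labels[:, 1:].ravel()],
            np.c_[labels[:-1, :].ravel(), labels[1:, :].ravel()]])))
        for a, b in pairs:
            ra, rb = find(a), find(b)
            if ra != rb and np.linalg.norm(means[ra] - means[rb]) < tol:
                parent[rb] = ra
        return np.vectorize(find)(labels)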