Proceedings Volume 1194

Optics, Illumination, and Image Sensing for Machine Vision IV

Donald J. Svetkoff
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 1 April 1990
Contents: 1 Session, 29 Papers, 0 Presentations
Conference: 1989 Symposium on Visual Communications, Image Processing, and Intelligent Robotics Systems
Volume Number: 1194

Table of Contents

All Papers
A TV Camera System Which Extracts Feature Points For Non-Contact Eye Movement Detection
Akira Tomono, Muneo Iida, Yukio Kobayashi
This paper proposes a highly efficient camera system which extracts, irrespective of background, feature points such as the pupil, corneal reflection image and dot-marks pasted on a human face in order to detect human eye movement by image processing. Two eye movement detection methods are suggested: one utilizing face orientation as well as pupil position, the other utilizing pupil and corneal reflection images. A method of extracting these feature points using LEDs as illumination devices and a new TV camera system designed to record eye movement are proposed. Two kinds of infra-red LEDs are used. These LEDs are set up a short distance apart and emit polarized light of different wavelengths. One light source beams from near the optical axis of the lens and the other is some distance from the optical axis. The LEDs are operated in synchronization with the camera. The camera includes 3 CCD image pick-up sensors and a prism system with 2 boundary layers. Incident rays are separated into 2 wavelengths by the first boundary layer of the prism. One set of rays forms an image on CCD-3. The other set is split by the half-mirror layer of the prism and forms an image including the regularly reflected component by placing a polarizing filter in front of CCD-1, and another image not including that component by omitting the polarizing filter in front of CCD-2. Thus, three images with different reflection characteristics are obtained by three CCDs. Through experiment, it is shown that two kinds of subtraction operations between the three images output from the CCDs accentuate three kinds of feature points: the pupil and corneal reflection images and the dot-marks. Since the S/N ratio of the subtracted image is extremely high, the thresholding process is simple and allows reducing the intensity of the infra-red illumination. A high speed image processing apparatus using this camera system is described. Real-time processing of the subtraction, thresholding and gravity position calculation of the feature points is possible.
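The subtraction, thresholding and gravity-position steps can be sketched as below; the synthetic images, background level, and threshold are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hypothetical stand-ins for two of the three CCD images: the pupil region
# is bright only under near-axis illumination (bright-pupil effect).
img_near_axis = np.full((64, 64), 40.0)   # common background level
img_off_axis = np.full((64, 64), 40.0)
img_near_axis[20:28, 30:38] += 120.0      # pupil appears in one image only

# Subtraction cancels the shared background, leaving the feature point
diff = img_near_axis - img_off_axis

# The high S/N of the difference image makes a fixed threshold sufficient
mask = diff > 60.0

# "Gravity position" (intensity-weighted centroid) of the feature point
ys, xs = np.nonzero(mask)
w = diff[ys, xs]
cy = (ys * w).sum() / w.sum()
cx = (xs * w).sum() / w.sum()
```

Each step is a simple per-pixel or reduction operation, which is what makes the real-time hardware implementation the abstract mentions feasible.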
Extraction of the "Time to Contact" from Real Visual Data
Daniel Raviv
This paper presents algorithms to extract the "time to contact" parameter from a sequence of images. This parameter which is essential for animals' behavior can be used to control mobile robots. We concentrate on two practical methods. One is based on observing the change of the projected area of the object, and the other is based on a new technique for indirect and robust computation of the optic flow. The methods are independent of the shape of the object. An implementation in a real-time closed-loop robotic system is discussed.
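The area-based method rests on a simple relation: under constant-velocity approach the projected area A grows as 1/Z², so the time to contact is 2A/(dA/dt). A minimal sketch under those assumptions (not the paper's implementation):

```python
def time_to_contact(area_prev, area_curr, dt):
    """Estimate time to contact from growth of the projected image area.
    For approach at constant speed, A ~ 1/Z^2, hence
    tau = Z / |dZ/dt| = 2 * A / (dA/dt)."""
    dA_dt = (area_curr - area_prev) / dt
    return 2.0 * area_curr / dA_dt

# Synthetic approach: Z(t) = 10 - t (arbitrary units), area = 1 / Z^2
dt = 0.01
a0 = 1.0 / 10.0 ** 2      # area at t = 0
a1 = 1.0 / 9.99 ** 2      # area one frame later
tau = time_to_contact(a0, a1, dt)   # should be close to 10
```

Note that the estimate needs no knowledge of the object's true size or shape, which is the independence property the abstract claims.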
Position Decoupled Optical Inspection Relay System
Kevin G. Harding, Leonard H. Bieman
Applying optical inspection to process control problems can often become encumbered by the required size and weight of the optical system and lens. Although it may be desirable to mount the camera and lens off of the machinery being controlled for mechanical stability, this creates a problem in relating the machine position to the image of the part being processed. By employing a relay optical system on the machine or other moving apparatus, in conjunction with a mechanically decoupled imaging system on the camera, the image can be coupled with the machine motion, while still producing a stable image on the separately mounted video camera. The relay system can translate in X, Y, or Z without moving the video image. Another potential application of this optical design is the overlay of two separated images, each captured with a separate relay system. The resulting image will show the two scenes stably superimposed on top of one another on the video camera, independent of the relative position between the two relay systems. We will discuss variations of the optical design, and the limitations imposed by the use of this design on the overall optical performance.
TDI Imaging In Industrial Inspection
David L. Gilblom
Time delay and integration (TDI) imaging was first applied in the early 1970's to aerial photoreconnaissance as an electronic replacement for photographic film. Since that time, TDI has been applied elsewhere infrequently, with bare printed circuit board inspection and document digitizing as the only well-known recent applications. Now, TDI has been effectively brought to bear on a ubiquitous problem, that of inspection of moving webs. Although other imaging techniques have been and are being used for web inspection, none shows the broad adaptability and ease of use of TDI-based imaging. Herein is a review of the TDI technique, a specific implementation of TDI in a unique web-imaging-compatible camera and a comparison of the TDI technique with other current technologies.
A Time Delay And Integration Camera For Machine Vision
Don W. Lake, Satoru C. Tanaka
The continual challenge for machine vision is speed and illumination. Imaging capability and sophistication are not reasons enough to slow a production line. Camera performance must meet the demands of imaging precisely detailed features occurring on high speed lines. A Time Delay and Integration (TDI) camera provides an advanced solution to those requirements. The TDI operation provides two basic benefits. First is an increase in the speed of operation. Second is the decrease in required light level. The remainder of this paper discusses the principle of TDI operation, how the benefits are achieved, and how TDI is realized in an industrial camera.
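The charge-domain summation behind these benefits can be illustrated with a toy simulation; the stage count, signal level, and noise model below are illustrative assumptions, not the camera's specifications:

```python
import numpy as np

rng = np.random.default_rng(0)

def tdi_readout(signal, n_stages, noise_sigma):
    """Toy TDI model: each scene point is exposed n_stages times as its
    charge packet is clocked along the array in step with the image, and
    the exposures are summed in the charge domain before readout."""
    exposures = signal[None, :] + rng.normal(0.0, noise_sigma,
                                             (n_stages, len(signal)))
    return exposures.sum(axis=0)

flat_field = np.full(10000, 5.0)            # arbitrary scene level
single = tdi_readout(flat_field, 1, 1.0)    # single-line camera
tdi96 = tdi_readout(flat_field, 96, 1.0)    # 96-stage TDI camera

# Signal grows by N while uncorrelated noise grows only by sqrt(N),
# so SNR improves by roughly sqrt(96) ~ 9.8
snr_gain = (tdi96.mean() / tdi96.std()) / (single.mean() / single.std())
```

The sqrt(N) SNR gain is what lets a TDI line run faster, or under less light, than a single-line sensor at the same image quality.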
Special Scanning Modes In CCD Cameras
David L. Gilblom
Although the vast majority of television cameras using charge-coupled (CCD) imagers produce video signals compatible with one of the familiar scanning standards, the internal structure of these imagers may often permit their use in non-standard scan modes offering additional useful functions. Some special scan modes may result in production of non-standard output signals while others preserve the usual output. This is a survey of some of these scanning modes, the effect they have on the output signal format, the artifacts which each may generate and the precautions to be taken when each is used.
Characterization And Correction Of Image Acquisition System Response For Machine Vision
Jay McClellan
Non-ideal characteristics of image acquisition systems are discussed. A model is formulated which describes the behavior of an image acquisition system, including effects of the camera, lens and frame grabber. Procedures for measuring the model parameters are developed. A procedure for inverting the model to obtain luminance measurements from frame-grabber values is given, along with a discussion of its limitations. This image correction procedure is tested for two solid-state cameras, and the results are analyzed. System noise measurements provide estimates of the reliability of the results, and the effects of averaging successive frames to reduce the effects of noise are discussed.
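The model inversion can be sketched for a simple assumed gain/offset/gamma response; both the model form and the parameter values below are illustrative, not the paper's measured characterization:

```python
def gray_from_luminance(L, gain, offset, gamma):
    """Forward model: an assumed gain/offset/gamma camera response."""
    return gain * L ** gamma + offset

def luminance_from_gray(g, gain, offset, gamma):
    """Inverse model: recover scene luminance from frame-grabber values."""
    return ((g - offset) / gain) ** (1.0 / gamma)

# Round trip with illustrative parameters (not values from the paper)
g = gray_from_luminance(0.5, gain=200.0, offset=10.0, gamma=0.45)
L = luminance_from_gray(g, gain=200.0, offset=10.0, gamma=0.45)
```

In practice the inversion is limited by quantization and noise near the black level, where (g - offset) is small, which is one reason the paper discusses frame averaging.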
Instrument Noise And The Signal-To-Noise Power Spectrum Of Laser Diode-Based Optical Processors
Donald J. Svetkoff, Donald B. Kilgus
The signal-to-noise power spectrum, evaluated through point-spread-function profiling, is of general utility for assessing the quality of an optical system, with implications for most machine vision processors. Recently, this classic test has been applied to laser diode-based processors. In this paper we present empirical findings of the instrument noise spectra of such systems, and illustrate improvements in the instrument function given by various optimization techniques. The resulting low-noise system is used to evaluate the quality of components used in optical processors.
Scale Invariant Processing Using Multiple Wavelengths
Jerome Knopp
A method for optical correlation is discussed that can use a liquid crystal television in the filter plane. Conventional binary phase-only filtering is compared with a binary amplitude approach that uses the sum of the intensities from multiple binary amplitude correlations. One parallel processing implementation is presented that uses three binary amplitude filters, each filter using a different wavelength of light. A computer simulation using synthetic filters shows that a binary amplitude multiple wavelength filter (BANE) that uses ternary phase correction works as well as a conventional binary phase-only filter when used as a scale invariant filter and as a synthetic estimation filter.
Incoherent Optical Correlators
David Casasent, Neil Carender, James Connelly
Optical correlators utilizing spatially incoherent light are examined and compared with coherent correlators. Frequency and image domain optical correlator architectures are discussed. The effect of speckle noise and input noise is examined. Simplified linear systems and Fourier transform descriptions of incoherent processors are provided. Simulation results are presented to quantitatively compare the performance of coherent and incoherent systems processing inputs containing additive noise. We consider pattern recognition correlator applications, rather than imaging systems in which the SNR of incoherent systems is better. For pattern recognition applications, SNR, PSR, and discrimination effects are very different as we show.
Light Source Models For Machine Vision
Kevin G. Harding
As the analysis and processing sophistication has increased in machine vision applications, the type of application has extended to the more difficult tasks of inspection. One such area is high speed inspection requiring high light levels. Occasionally, high speed inspection can be addressed through the use of strobes, but otherwise large amounts of light may be necessary. A variety of high efficiency lights have emerged for use in applications beyond machine vision, which may now be adapted to use with vision applications. These sources include high pressure gases, stable arcs, and solid state sources. This paper presents experimental data characterizing some of these new sources, and explores the experimental match between these sources and the spectral response available from the latest solid state video cameras. In order to better understand the restrictions on the use of these sources, and how they may be overcome, this paper suggests the use of graphical models of the light source performance.
Design And Testing Of A Microscopic Reflectometer
Jay McClellan, Norman Wittels, Allison Gotkin, et al.
A reflectometer was designed to measure reflectivities of sample areas ranging from 10 microns to 1 millimeter. It is capable of illuminating the sample from any angle between 0 and 45 degrees relative to the surface normal, with the observation angle always normal to the surface. The instrument was calibrated and tested using reflectivity standards. Plots of reflectivity versus illumination angle are presented for some common materials.
The Effect Of Optical Fiber Transmission Properties On The Operation Of Optical Fiber Colour Vision Systems In Robotics.
Elzbieta Marszalec
The paper presents a model of the influence of optical fiber transmission properties on the operation of optical fiber colour vision systems, and results of a computer analysis of the effect of these properties on colour discrimination by colour recognition systems. The performed analysis provided a basis for the formulation of some general conclusions and recommendations for designing different types of colour recognition systems employing optical fibers.
A Comparison Between Square and Hexagonal Sampling Methods for Pipeline Image Processing
Richard C. Staunton, Neil Storey
The majority of machine vision systems derive their input data by digitising an image to produce a square grid of sampled points. However, other sampling techniques can represent equal picture information in a smaller number of samples, with a consequent reduction in data rate. Several workers have looked at regular hexagonal sampling of images, which produces optimum data rates for a given information content. Previous work on hexagonal sampling by the authors and others has shown that image processing operators are computationally more efficient than, and as accurate as, their square counterparts. Historically, one factor which has led to the predominance of square sampling in vision systems is that it produces images which are more visually pleasing to human observers. This paper describes an investigation of machine vision systems performing industrial inspection tasks, which suggests that in such applications, hexagonal systems out-perform square systems. In particular, hexagonal operators can follow tight curves more accurately, allowing better surface defect detection. A surprising observation of this work was that with such images, hexagonal sampling also gave images which were more visually pleasing to human operators. The paper presents a study of sampling point geometry and operator design. Details are given of an implementation of a set of hexagonal, grey-scale operators for use in pipeline or other image processing systems, and a comparison of square and hexagonal techniques has been made. Results of operations on real and simulated surface defect images are given for both sampling systems, and the requirement for a defect detection figure of merit is identified.
Obtaining Centroids Of Digitized Regions Using Square And Hexagonal Tilings For Photosensitive Elements
Samir Chettri, Michael Keefe, John Zimmerman
In computer vision and graphics, the square or rectangular tessellation is most commonly used. The hexagonal lattice has not been studied as frequently. In this paper we project squares and circles on each of these grids, digitize these figures and obtain the error in locating the centroid. We adopt a Monte Carlo approach, with the centroid of the actual figures being chosen randomly. In the case of the circle, we project various sizes onto the respective grids and study the error in obtaining the centroid. In the case of the square we combine different sizes with angles of rotation that vary from 0 to 90 degrees. Theoretical formulae are developed for the circle on square tile case. These symbolic representations are compared to the results from the Monte Carlo simulation and they are found to be quite close. Finally, comparisons are drawn between the two grids. In the case of the circle, there is a definite advantage in using the hexagonal grid. For the square there is no inherent advantage to either. These results are of use if it is decided to build a camera with hexagonal picture elements.
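The circle-on-square-grid experiment can be sketched as a small Monte Carlo simulation; the trial count and radii below are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def centroid_error_square(radius, trials=200):
    """Mean centroid-location error for a circle of the given radius
    digitized on a unit square grid, estimated by Monte Carlo over
    random sub-pixel placements of the true center."""
    errs = []
    for _ in range(trials):
        cx, cy = rng.uniform(0.0, 1.0, 2)          # random sub-pixel center
        r = int(np.ceil(radius)) + 1
        xs, ys = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1))
        inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
        # centroid of the "on" pixel centers vs the true center
        errs.append(np.hypot(xs[inside].mean() - cx,
                             ys[inside].mean() - cy))
    return float(np.mean(errs))

err_small = centroid_error_square(3.0)
err_large = centroid_error_square(20.0)   # larger circles localize better
```

A hexagonal version replaces the meshgrid with hexagonal lattice sites; the paper's comparison is between exactly these two error curves as a function of figure size.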
A Prediction Scheme For A Verification Vision System
Albert G. Mpe, Christian Melin
The need for intelligent interaction of a robot with its environment requires rapid sensing methods. One way to achieve this goal with vision is to consider that the robot can help the sensing system in its recognition and location tasks. We propose a system that uses information about object manipulation to predict features in an image and to update the environment with a verification of these features. The approach is to use calibration parameters to compute "where and how" the features may appear in the image. A vision verification method based on shape superposition is used to confirm the prediction. This system can also be used in a three dimensional interpretation of single 2D images based on the prediction/verification strategy.
An Accurate Calibration Technique for 3-D Laser Stripe Sensors
Marianne Chiarella, Kenneth A. Pietrzak
Laser stripe sensors are being used in a variety of industrial applications including part inspection, process control, and robot guidance. Certain characteristics of the sensor such as standoff and field of view are likely to change from one application to the next. The paper describes an approach to laser stripe sensor calibration when the sensor is composed of off-the-shelf equipment. Sensor dimensions such as the position of the laser in relation to the camera and particular lens characteristics such as focal length and distortion are treated as unknown parameters in the calibration procedure presented here. The paper shows that high accuracy is attainable in range measurements acquired with off-the-shelf components and proper calibration.
Triangulation-Based Camera Calibration For Machine Vision Systems
R. A. Bachnak, M. Celenk
This paper describes a camera calibration procedure for stereo-based machine vision systems. The method is based on geometric triangulation using only a single image of three distinctive points. Both the intrinsic and extrinsic parameters of the system are determined. The procedure is performed only once at the initial set-up using a simple camera model. The effective focal length is extended in such a way that a linear transformation exists between the camera image plane and the output digital image. Only three world points are needed to find the extended focal length and the transformation matrix elements that relate the camera position and orientation to a real world coordinate system. The parameters of the system are computed by solving a set of linear equations. Experimental results show that the method, when used in a stereo system developed in this research, produces reasonably accurate 3-D measurements.
3-D Gradient and Curvature Measurement Using Local Image Information
Harry S. Gallarda, Leonard H. Bieman, Kevin G. Harding
This paper describes an image processing method that measures 3-D gradient and curvature information directly from local surface information using structured moire light. The method relies on the use of a sinusoidal grating to produce the moire patterns. It is shown that the gradient can be estimated by ratioing the third and first spatial derivative of the gray-scale image, but this simple solution does not work well in practice. We derive an alternate solution that uses finite differences and computes a Least Square Estimate of the ratio for small regions of the surface. We describe the results of this method implemented on a PC-based image processing system. Initial results indicate that the method worked well, could be applied to many simple 3-D problems, and implemented on an inexpensive computer system.
Depth from Defocus of Structured Light
Bernd Girod, Stephen Scherock
We propose a new range sensing technique that uses defocus of structured light to measure depth. The technique is an extension of the original passive depth-from-defocus idea to an active, structured-light system. Depth from defocus of structured light has similar properties to structured light triangulation, but it avoids the "missing parts problem" and the "correspondence problem" by eliminating the parallax between the structured light source and the camera. We propose refinements to the technique, using either an anisotropic aperture or astigmatic optics for the light source. Both refinements use an isotropic structured light pattern and compare blur in two orthogonal directions. We point out different ways to remove the ambiguity between objects behind and in front of the plane of best focus.
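A thin-lens geometric blur model makes both the depth measurement and the front/behind ambiguity concrete; the following sketch assumes a simple blur-circle model and illustrative parameters, not the authors' specific optics:

```python
def blur_diameter(u, f, D, v):
    """Blur-circle diameter on a sensor at distance v for a point at
    depth u, for a thin lens of focal length f and aperture diameter D."""
    v_focus = 1.0 / (1.0 / f - 1.0 / u)   # where the point actually focuses
    return D * abs(v - v_focus) / v_focus

def depth_from_blur(d, f, D, v):
    """Invert the blur model. Two candidate depths come back, one behind
    and one in front of the plane of best focus -- the ambiguity that the
    abstract's refinements are designed to resolve."""
    candidates = []
    for v_focus in (D * v / (D + d), D * v / (D - d)):
        candidates.append(1.0 / (1.0 / f - 1.0 / v_focus))
    return candidates

# Round trip: focus the lens at 1 m, observe a point at 2 m
f, D = 0.05, 0.02
v = 1.0 / (1.0 / f - 1.0 / 1.0)   # sensor position for best focus at 1 m
d = blur_diameter(2.0, f, D, v)
depths = depth_from_blur(d, f, D, v)
```

Because the structured pattern is projected along the camera's own axis, the blur of the pattern can be measured everywhere it is visible, with no occluded "missing parts".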
Range Sensing By Projecting Multiple Slits With Random Cuts
Minoru Maruyama, Shigeru Abe
In this paper, we describe a range sensing method by projecting a single pattern of multiple slits. To obtain 3D data by projecting a single pattern, certain codes for identifying each slit must be contained in the pattern. In our method, random dots are used to identify each slit. The random dots are given as randomly distributed cuts on each slit. Thus, each slit is divided into many small line segments and, using these segments as features, stereo matching is carried out to obtain 3D data. Using adjacent relations among slit-segments, the false matches are reduced and segment pairs, whose adjacent segments also correspond with each other, are extracted and considered to be correct matches. Then, from the resultant matches, the correspondence is propagated by utilizing the adjacency relationships to get an entire range image.
The Use of Linear Arrays in Electronic Speckle Pattern Interferometry
Michael Short
Electronic Speckle Pattern Interferometry (ESPI) offers high resolution displacement information of objects by analysis of the relative phase between a reference beam of light and an object beam of light at two or more instants in time. The sampling of the data is usually performed with two-dimensional solid-state arrays. Use of a one-dimensional array, however, offers increased displacement resolution or larger field of view, depending upon the optical setup, and offers higher line rates over two-dimensional cameras. The disadvantages imposed by the one-dimensional vs two-dimensional sampling, including object motion restrictions and automatic fringe extraction are discussed.
New 3D-vision Sensor for Shape Measurement Applications
I. Moring, R. Myllyla, E. Honkanen, et al.
In this paper we describe a new 3D-vision sensor developed in cooperation with the Technical Research Centre of Finland, the University of Oulu, and Prometrics Oy Co. The sensor is especially intended for the non-contact measurement of the shapes and dimensions of large industrial objects. It consists of a pulsed time-of-flight laser rangefinder, a target point detection system, a mechanical scanner, and a PC-based computer system. Our 3D-sensor has two operational modes: one for range image acquisition and the other for the search and measurement of single coordinate points. In the range image mode a scene is scanned and a 3D-image of the desired size is obtained. In the single point mode the sensor automatically searches for cooperative target points on the surface of an object and measures their 3D-coordinates. This mode can be used, e.g. for checking the dimensions of objects and for calibration. The results of preliminary performance tests are presented in the paper.
Position Sensitive Detection Techniques for Manufacturing Accuracy Control
Anssi J. Makynen, Juha T. Kostamovaara, Risto A. Myllyla
A sensor system is proposed which can be used to detect a stationary reflective point on a target object. It includes a servo-controlled measuring head and small optical reflectors attached to the target points of the object. The measuring head consists of the target point detector, rangefinder and angle encoders. Target detection is accomplished by illuminating the cooperative marks of the target with an infra-red emitting diode (IRED) and focusing the reflected light on the surface of the position sensor. The latter acts as a null detector and its signals are used to drive the servomotors of the measuring head. The proposed sensor is used in a 3D-vision system designed for checking the manufacturing accuracy of large objects in engineering shops. Preliminary results for the target point detection electronics show that a lateral tracking resolution of about 0.003 mm (1σ value) and an accuracy of about +/- 0.5 mm are achievable when the distance and angle of the target point reflector vary in the ranges 2 - 5 m and +/- 45°, respectively.
A High Resolution, High Speed 3-D Laser Line Scan Camera For Inspection And Measurement
Donald J. Svetkoff
Many techniques for 3-D data acquisition exist and have been reviewed in the literature. Figures of merit have been published based upon key parameters. This paper first reviews the performance of high speed imagers. Then the performance of a high speed, high resolution 3-D laser scanning sensor developed at SVS will be examined. Results will be shown which illustrate the applicability of this imaging sensor for inspection of electronic components, solder paste, machined parts, etc.
Wide-Area, High Dynamic Range 3-D Imager
Yoshikazu Kakinoki, Tetsuo Koezuka, Shinji Hashinami, et al.
This paper describes a 3-D laser scanning imager for visual inspection of mounted devices on printed circuit boards (PCB). A 3-D imager for this application must satisfy the following requirements: (1) It must be fast enough to sense a 250 by 330 mm area in 14 seconds; (2) It must have a measurement resolution of at least 125 μm; (3) It must be capable of measuring height and light intensity simultaneously; and (4) It must have an optical dynamic range of at least 10^4. We developed a wide-area telecentric scanning optical system which meets these requirements. It uses retroreflective triangulation optics and digital signal processing hardware. Our system scans a laser beam over a 256 mm length with a resolution of 125 μm, without scanning distortion. The retroreflection triangulation optics collect light reflected from objects on a printed circuit board and focus the image on a position-sensitive detector (PSD). This system measures the profile of objects with a vertical resolution of 30 μm, within a range of 7.6 mm. The digital signal processing hardware has a dynamic range of 10^4 and obtains range data from the output signals of the PSD. Its processing speed is 1M pixels/s. This hardware enables profile measurement of objects having a wide range of light reflectance (about 3000 times), from black devices to glossy metal, with an accuracy of 0.1 mm. This 3-D imager was used in an automated inspection system for PC board-mounted devices. This system detects missing, misplaced, and incorrectly installed devices with an inspection speed of 0.1 s/device.
Integration Of Stereo Camera Geometries
Nicolas Alvertos
One disadvantage of the lateral stereo model is that there always exists a set of image elements, in both images, for which there is no correspondence. Similarly, one disadvantage of the axial motion stereo model is that there always exists a set of image elements around the center of either image for which the absolute error in determining depth increases drastically as the distance between image element and image center decreases. However, the portion of the image which results in erroneous correspondences when utilizing one stereo geometry can be processed with no similar errors if the other stereo geometry is incorporated. Therefore, a stereoscopic system with no such disadvantages can be developed by integrating the two models. It is shown that this can be achieved without reducing the contents of the depicted scene. The zoom-based stereo camera model is also proposed as a possible replacement of the axial motion arrangement.
Surface Orientation From Two Camera Stereo With Polarizers
Lawrence B. Wolff
We present a simple method for determining the 3-D orientation of a flat surface from the specular reflection of light, exploiting the polarizing properties of materials. Existing methods which compute surface orientation from specular reflection are purely intensity based and need to rely upon precise knowledge of how specularly reflecting light rays are initially incident upon the material surface. These methods require elaborate structured lighting environments which involve much preliminary calibration. The method presented here computes surface orientation independent of any a priori knowledge of the geometry of specular reflection, as long as specular reflection occurs into the camera sensor from points of interest at which orientation is to be constrained. This obviates the need for any structured lighting. By observing how the transmitted intensity of specularly reflected light through a polarizing filter is varied by rotating the filter, one can determine the specular plane of incidence in which the path of specular reflection must lie. Under the assumption that the surface normal is contained in the specular plane of incidence, a determination of two nonparallel specular planes of incidence from a stereo pair of cameras yields a computation of the surface normal from their planar intersection. For flat surfaces the correspondence of points between the stereo pair of cameras need not be precise, only that the intersecting specular planes of incidence correspond to two points that lie on the same flat surface, or on two flat surfaces with equivalent orientation. This is a much weaker correspondence requirement than for conventional parallax stereo.
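The planar-intersection step can be sketched directly: each camera's polarizer rotation yields the specular plane of incidence, and the surface normal lies along the intersection of the two planes, i.e. the cross product of their plane normals. A minimal sketch with hypothetical plane normals:

```python
import numpy as np

def surface_normal(plane1_normal, plane2_normal):
    """The surface normal is contained in both specular planes of
    incidence, so it points along the line in which they intersect."""
    n = np.cross(plane1_normal, plane2_normal)
    return n / np.linalg.norm(n)

# Two hypothetical planes of incidence, each containing the z-axis;
# the true surface normal here is [0, 0, 1]
n_est = surface_normal([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
# The cross product leaves a sign ambiguity, resolved by requiring
# the normal to face toward the cameras.
```

The computation degrades as the two planes become nearly parallel, which is why the method needs two cameras with genuinely different viewpoints.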
Stereoscopic Vision - An Application Oriented Overview
Rolf-Jurgen Ahlers, Jianzhong Lu
Machine vision and it's application in the manufacturing industry and other field has become one of the most exciting activities in computer vision. Many vision systems have been developed for the tasks of inspection, measurement and assembly. Some of them which are able to solve the problems with 1-D, 2-D or simple 3-D informations have already been applied in the production procedures. The vision systems, which are capable for the tasks where general and complicated 3-D problems are to be sowed without loss of flexibility and effeciency, are drawing more and more attention from the field of development and application. This survey paper focuses on the main principles and techniques used by three dimensional vision systems for the inspection, measure-ment and assembly procedures in industry. They are organized into two groups: active-vision and passive-vision, according to the difference in the illumination procedures. To get a quick glance of them tables with figures and brief explanations are presented. Index terms - stereo vision, active vision, passive vision, industrial application, overview Content: I Introduction II Active-Vision HI Passive-Vision IV Conclosion V References