Proceedings Volume 0850

Optics, Illumination, and Image Sensing for Machine Vision II

Donald J. Svetkoff
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 12 February 1988
Contents: 1 Session, 28 Papers, 0 Presentations
Conference: Advances in Intelligent Robotics Systems 1987
Volume Number: 0850

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
All Papers
Quantitative Fidelity Criterion For Image Processing Applications
Joseph W. Carl
A psychometrically derived, mean square weighted error criterion is defined that mathematically quantifies the difference between an original gray scale image and some reconstructed version of it. The criterion is mathematically tractable in the same sense as the usual mean square error criterion, but it also (1) accounts analytically for known differences in human spatial frequency contrast sensitivities over six decades of changing luminance, (2) relates to an alternative explanation of certain visual phenomena (such as brightness constancy and the Weber-Fechner law), (3) leads to a relationship between spatial frequency contrast thresholds in human vision and what rate distortion theory tells us about optimal coding for Gaussian channels, and (4) helps provide a means of assessing human performance at tasks that involve imagery. Results of an experiment to relate the new fidelity criterion to the prediction of human performance at a target recognition task are presented. Two signal-to-noise ratios and two luminance levels were tested, and a prediction of the effects of those changes was compared to the measured effects. Note that the mean square criterion predicts no change for changing luminance, but changes occur. The new fidelity criterion predicts the direction human performance took as changes were made, but people did worse than predicted.
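The flavor of such a criterion can be sketched numerically (a minimal illustration, not the paper's formulation; the luminance-dependent contrast-sensitivity weighting is reduced to an assumed caller-supplied function `csf_weight`):

```python
import numpy as np

def weighted_mse(original, reconstructed, csf_weight):
    # squared error accumulated in the frequency domain, with each
    # spatial-frequency component weighted by a contrast-sensitivity term
    F = np.fft.fft2(original.astype(float) - reconstructed.astype(float))
    h, w = original.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    W = csf_weight(np.hypot(fx, fy))   # assumed: weight as a function of radial frequency
    # with W identically 1 this reduces to the ordinary MSE (Parseval)
    return np.sum(W * np.abs(F) ** 2) / (h * w) ** 2
```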
Camera Edge Response
Stanley H. Zisk, Norman Wittels
Edge location is an important machine vision task. Machine vision systems perform mathematical operations on rectangular arrays of numbers that are intended to faithfully represent the spatial distribution of scene luminance. The numbers are produced by periodic sampling and quantization of the camera's video output. This sequence can cause artifacts to appear in the data with a noise spectrum that is high in power at high spatial frequencies. This is a problem because most edge detection algorithms are preferentially sensitive to the high-frequency content in an image. Solid state cameras can introduce errors because of the spatial periodicity of their sensor elements. This can result in problems when image edges are aligned with camera pixel boundaries: (a) some cameras introduce transients into the video signal while switching between sensor elements; (b) most cameras use analog low-pass filters to minimize sampling artifacts and these introduce video phase delays that shift the locations of edges. The problems compound when the vision system samples asynchronously with the camera's pixel rate. Moire patterns (analogous to beat frequencies) can result. In this paper, we examine and model quantization effects in a machine vision system with particular emphasis on edge detection performance. We also compare our models with experimental measurements.
Performance Characteristics Of The Gated Second Generation Image Intensifier
Jerry D. Bond
Gated Image Intensifiers, used in conjunction with solid state image sensors for low-light-level and high speed imaging, have greatly enhanced the performance characteristics of solid state cameras, especially in the areas of luminous gain and improved spectral response. This paper reviews the operating parameters of the Second Generation Proximity Focused Image Intensifier used in this application. Spectral response characteristics of different multialkali (Na2KSb:Cs) photocathodes will be discussed. Responses are available from the UV (200 nm) to the near IR (850 nm). The tradeoffs between efficiency and persistence of various output phosphors, and the spectral matching to the solid state image sensor array, are considered. The role of the microchannel plate (MCP) as the primary gain mechanism of the image intensifier and its effect on resolution and noise is also discussed. The gating of this device, in the areas of design and effects on performance, will be reviewed.
Uniform Illuminators For Inspection
Gordon T. Uber
Glinting and shadowing, problems in lighting three-dimensional objects, can be reduced with uniform illumination. Spherical, hemispherical and ring illuminators are discussed in terms of uniformity, efficiency and practicality. For diffuse surfaces, ring illuminators are satisfactory. For specular surfaces, constant luminance reduces glint and makes the surface visible. High luminance uniformity requires external sources. A high-reflectance integrating enclosure increases both uniformity and efficiency.
Autofocus Camera System For FA
Hiroharu Yamamoto, Toshio Hara, Kunihiko Edamatsu
A high performance solid state TV camera with automatic focusing has been developed for assembly robot vision, automatic visual inspection and other FA (factory automation) uses. Focus is detected by a through-the-lens contrast method, with the contrast signal derived from the video signal. This contrast detecting method has the following features: (1) various lenses can be mounted; (2) the focus detecting area can be arranged freely; (3) the design is very simple, requiring a minimum of external optical components. Because the system has a serial interface, the automatic focusing system can be built into a flexible vision system suited to several kinds of objects.
Cooperative Target Attitude Measurement
Francis G. Bretaudeau, Sylvie J. Lagarde, Christine G. Enault
This paper presents a cooperative target attitude measurement system using properties of plane, specular, circularly symmetrical pattern features: when the light source is on the axis of such a transparent pattern M, or when the light source and the detector share a common location D for a reflecting one, the shining points of M define a straight line segment passing through the projection J of D. Point J can be located using either one camera and several coplanar patterns, or one pattern and several cameras. In the latter arrangement, the coordinates of point J can be computed using the transformation matrix between the camera coordinate systems. A camera calibration methodology, built upon the use of these patterns, has been evolved. Two experimental attitude measurement systems have been set up and the associated pattern recognition methodologies developed. The first was developed for space docking (accuracy: 1°, field of view: 40°, distance: from 0.1 m up to 1 m) and the other has been applied to aircraft attitude computation during landing on an aircraft carrier deck (accuracy: 0.1°, field of view: 5°, distance: 800 m). For the latter experiment, atmospheric turbulence perturbations can be reduced by averaging the points J obtained from uncorrelated measurements. The accuracy of these measurements has then been compared to that of another attitude sensor, whose development is based on the ability to recognize and 3-D reconstruct a known polygonal shape from its image.
Lighting For Color Vision
James A. Worthey
Some results concerning lighting for human color vision can be generalized to robot color vision. These results depend mainly on the spectral sensitivities of the color channels, and their interaction with the spectral power distribution of the light. In humans, the spectral sensitivities of the R and G receptors show a large overlap, while that of the B receptors overlaps little with the other two. A color vision model that proves useful for lighting work---and which also models many features of human vision---is one in which the "opponent color" signals are T = R - G, and D = B - R. That is, a "red minus green" signal comes from the receptors with greatest spectral overlap, while a "blue minus yellow" signal comes from the two with the least overlap. Using this model, we find that many common light sources attenuate red-green contrasts, relative to daylight, while special lights can enhance red-green contrast slightly. When lighting changes cannot be avoided, the eye has some ability to compensate for them. In most models of "color constancy," only the light's color guides the eye's adjustment, so a lighting-induced loss of color contrast is not counteracted. Also, no constancy mechanism can overcome metamerism---the effect of unseen spectral differences between objects. However, we can calculate the extent to which a particular lighting change will reveal metamerism. I am not necessarily arguing for opponent processing within robots, but only presenting results based on opponent calculations.
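The opponent signals the model defines are simple to compute from linear receptor responses (a minimal sketch; the spectrum and sensitivity arrays are assumed to share a common wavelength grid, and the names are illustrative):

```python
import numpy as np

def opponent_signals(spectrum, S_R, S_G, S_B):
    # linear receptor responses: the spectrum weighted by each spectral sensitivity
    R = np.sum(spectrum * S_R)
    G = np.sum(spectrum * S_G)
    B = np.sum(spectrum * S_B)
    T = R - G   # "red minus green": from the two most-overlapping receptors
    D = B - R   # "blue minus yellow": from the two least-overlapping receptors
    return T, D
```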
Combining X-Ray Imaging And Machine Vision
Gary G. Wagner
The use of x-rays as a light source for medical diagnosis has been around since the early 20th century. The use of x-rays as a tool in nondestructive testing (for industrial use) has been around almost as long. Photographic film has been the medium that converts the x-ray energy it encounters into areas of light and dark, depending on the amount of energy absorbed. Referring to industrial use of x-rays only, many systems have been manufactured that replaced the photographic film with a combination of a fluorescent screen and a television camera. This type of system was mainly used as a nondestructive testing tool to inspect various products (such as tires) for internal flaws, and always used an operator to make the decisions. In general, the images produced by such systems were poor, owing to lack of contrast and to noise. Improvements in digital image processing and complex algorithms have made it possible to combine machine vision and x-rays to address a whole new spectrum of applications that require automatic analysis for flaw inspection. The objective of this presentation is to familiarize the audience with some of the techniques used to solve automatic real-time x-ray problems. References will be made to real applications in the aerospace, pharmaceutical, food, and automotive industries.
Image Quality Evaluation Of Machine Vision Sensors
Donald J. Svetkoff
Imaging sensors for machine vision systems include area sensors found in tube and solid state video cameras, linear arrays, and point detectors used in flying spot scanners. With any such sensor image quality is determined by several factors: spatial resolution, dynamic range, spectral sensitivity, etc. In addition, imager defects like structured pattern noise, smear, variations in sensitivity, or poor spatial sampling preclude detection of small changes in reflectivity or depth and limit the performance of pattern recognition algorithms. This paper evaluates several types of imaging sensors and critically reviews their performance for demanding machine vision inspection tasks which require discrimination of several colors, shades of grey, or levels of depth.
An Optical Processor For Product Inspection
David P. Casasent, Jeffrey Richards
Coherent optical processors are described that optically generate feature spaces in parallel at high speed. A wedge ring detector sampled Fourier feature space is described, since it provides dimensionality reduction and features that are invariant to translation and in-plane rotation of the input object. Thus, the object being inspected can be positioned and oriented anywhere within the input field of view. An optical Hough transform processor is also described. This Hough feature space is attractive for many mensuration functions required in inspection. New techniques for using Hough space features are presented and demonstrated. These new techniques provide in-plane distortion-invariance, and determination of the orientation and location of the object in the field of view. We also discuss how the Fourier coefficients, the wedge ring detector sampled Fourier coefficients, and the Hough transform features can all be produced on a single optical processor. To provide a cost-effective real time system, an inexpensive liquid crystal television is used as the input transducer, with its electrical input obtained from a CCD camera viewing the product to be inspected. Several different inspection problems and quantitative optical laboratory test data are included to demonstrate and quantify the performance and use of this processor for product inspection.
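The wedge ring sampling can be mimicked digitally (a sketch only; the optical processor computes the Fourier plane in parallel, whereas this uses an FFT, and the bin counts are arbitrary choices):

```python
import numpy as np

def wedge_ring_features(image, n_rings=16, n_wedges=16):
    # digital analogue of a wedge-ring detector sampling the Fourier power spectrum
    F = np.fft.fftshift(np.abs(np.fft.fft2(image)) ** 2)
    h, w = image.shape
    y, x = np.indices((h, w))
    y = y - h / 2
    x = x - w / 2
    r = np.hypot(x, y) / np.hypot(h / 2, w / 2)   # normalized radius
    theta = np.mod(np.arctan2(y, x), np.pi)       # power spectrum is symmetric
    # ring sums are translation- and rotation-invariant;
    # wedge sums shift cyclically with in-plane rotation
    rings = [F[(r >= i / n_rings) & (r < (i + 1) / n_rings)].sum()
             for i in range(n_rings)]
    wedges = [F[(theta >= j * np.pi / n_wedges) & (theta < (j + 1) * np.pi / n_wedges)].sum()
              for j in range(n_wedges)]
    return np.array(rings), np.array(wedges)
```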
How To Design Inspection Systems For Electronic Components
Norman Wittels, Ross A. Beller, Anthony P. Erwin
Image contrast is caused by variations in object reflectivity and lighting. In automated electronic component inspection systems, component reflectivity can vary greatly and lighting is often sub-optimal because electronic assembly systems are crowded. Therefore, images typically have large contrast ranges. Conversely, most machine vision cameras have small contrast ranges, requiring that images be carefully designed to match the camera's transfer function. We discuss how to design these images. During the design process it is necessary to know the reflectivities of all materials being inspected. We describe an instrument for measuring the reflectivities of 1 mm to 25 mm diameter regions on objects, using lighting and imaging optics similar to those of typical industrial machine vision applications. This instrument measures both specular and diffuse components of reflectivity with a repeatability of ±1.5%. We present measurements of the reflectivities of typical electronic components and discuss how to use these data in designing images for component inspection systems.
Structured Light Technique Applied To Solder Paste Height Measurement
Catherine A. Keely, Charles C. Morehouse
The technique of structured light has been used to measure the height of solder paste "bumps" on surface mount printed wiring boards. The issue of solder paste height and volume is important in the characterization of the surface mount production process, because solder paste shape and volume (among other things) can have a large effect on the final solder joint quality. Inspection of solder paste takes place after the paste is screened or stenciled on the board, and before components are placed. The structured light system was developed as an off-line process monitoring or process development station. The paper discusses some of the technical challenges overcome on the lighting and imaging side of this application. The challenges include applying the system to boards of various colors and translucencies, imaging the laser stripe reflected off different kinds of solder mask, dealing with large variations of reflectance within the same image, and finally analyzing the image in such a way as to obtain very accurate height measurements. In addition, a method for automating the image analysis was incorporated to significantly improve the usability of the system. Parameters investigated while developing the system include the laser color, polarization of the beam, configuration of the structured light components, and laser speckle effects. The accuracy and repeatability of the system were quantified by a study of the errors contributing to the final measurement.
Automatic SMT Inspection With X-Ray Vision
Robert A. Kuntz, Peter D. Steinmetz
X-ray is used in many different ways and in a broad variety of applications in today's world. One of the most obvious uses is in medical applications. Although less obvious, x-ray is used within industry as well; inspection of metal castings, pipeline welds, equipment structures and personal security are just a few. Historically, both medical and industrial x-ray have been dependent on film exposure, development and reading to capture and present the projected image. This process, however, is labor intensive, time consuming and costly. Correct exposure time and proper view orientation are in question until the film is developed and examined. In many cases, this trial and error causes retakes, with the accompanying expense and delays. Recently, due to advances in x-ray tube technology, tubes with microfocus construction have become available. These tubes operate at a high enough flux density that, when combined with x-ray to visible light converters, real-time imaging is possible.
White Light Seam Tracking System For Arc-Welding Robot
T. C. Wang
This paper describes a visual seam tracking system used for path and torch height correction of an arc-welding robot. The system consists of a white light source, a solid state camera to acquire images, a 16-bit personal computer as the image processor, and an ITRI-W welding robot developed by the Mechanical Industry Research Laboratories, ITRI. The light source and camera are mounted on the robot hand together with the torch.
Surface Orientation From Polarization Images
Lawrence B. Wolff
It is demonstrated that measurement of local surface orientation for a wide variety of isotropically rough material surfaces can be achieved from knowledge of the polarization states of both incident and reflected light radiation upon and from the surface respectively. The reflection model used is the Torrance-Sparrow model assuming combined specular and diffuse reflection. The specular and diffuse reflection components have distinct polarization states which makes it possible to resolve intersecting specular and diffuse equireflection curves in gradient space thereby measuring surface orientation. The light source incident orientation and the viewer orientation are assumed to be known along with the complex index of refraction of the material surface and the root mean square slope of planar microfacets characterizing surface roughness. The theoretical development is very comprehensive with respect to the nature of the incident and reflected light radiation which is assumed to be quasi-monochromatic having arbitrary degree of polarization. Thus determination of surface orientation is feasible using incident incoherent natural sunlight which is completely unpolarized, using incident partially polarized light such as specularly reflected natural sunlight, or using completely polarized incident light such as light which is elliptically polarized including circular and linear polarizations.
Determining Surface Orientations Of Specular Surfaces By Intensity Encoded Illumination
Shree K. Nayar, Arthur C. Sanderson
This paper discusses an intensity encoded line source illumination approach to estimating the surface orientation of specular surfaces. The normalized brightness difference function (NBDF) is introduced as a real time invertible relationship between image irradiances and surface orientation, and provides the basis for estimation of the surface gradient using relative brightness values rather than calibrated photometric measurements. Two complementary line source intensity patterns are generated for each measurement, and a series of radial lines are scanned to span the surface gradient space. Experiments have estimated the accuracy of the orientation measurement to be within 2-3%. Sensitivity to variations in specularity, and the feasibility of the encoding technique are described. Careful attention to the illumination geometry, distant source approximation and computation speed will be required for the purpose of practical implementation. The use of line sources is most suited to applications where the relevant features can be extracted from a finite set of cross sections of the object. Inspection of solder and machined metal are examples of the application of intensity encoding.
3-D Shape Measurement Using Three Camera Stereopsis
Chi Chong Cheung, William A. Brown
Traditional stereopsis techniques involve using two cameras. When matching parts are identified in both images, the range of the matching parts is determined by triangulation. However, occlusions and missing parts in the two images have raised problems in determining correspondence. The use of three camera positions promises to reduce the occlusion and missing parts problems as well as to reduce the probability of incorrect matches. A relative calibration technique which determines the external parameters of the three cameras used is presented. With the cameras calibrated, feature points from images are selected so that matching parts can be easily identified. The edges in the three images are used as feature points. A trinocular matching technique using epipolar line constraints and line coherence constraints is used to find matching edges in the three cameras. With correspondences established in the images, the shape of objects can be obtained since camera parameters are known.
A Computational Model For Sensing Depth From A Single 2-D Image
M. R. Sayeh, M. Daneshdoost, F. Pourboghrat
It is generally a difficult task to obtain a complete geometrical model of a scene given limited storage space, or to construct 3-D scene geometry from one or more 2-D images. In this paper we focus on extracting information about 3-D scene geometry from a 2-D image. The degree of blurriness (or sharpness) allows the depth of an object to be computed. A method of shape from sharpness, based on this concept, is introduced. Given the image of a point source, its distance from a lens is obtained. This can be applied to an arbitrary scene consisting of a superposition of many point sources.
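The idea admits a worked thin-lens relation (a sketch under assumed parameter names; a real lens would need a calibrated blur model):

```python
def depth_candidates(blur_diameter, focal_length, aperture, sensor_dist):
    # Thin-lens sketch: a point at distance u focuses at v = u*f/(u - f);
    # on a sensor at distance s it blurs to a circle of diameter c = A*|s - v|/v.
    # Inverting gives two candidate image distances (focus behind / in front of sensor).
    c, f, A, s = blur_diameter, focal_length, aperture, sensor_dist
    candidates = []
    for v in (s / (1 + c / A), s / (1 - c / A)):   # assumes c < A
        if v > f:                                   # real object in front of the lens
            candidates.append(v * f / (v - f))      # object distance u
    return candidates
```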
Local Surface Structure From Disparity Measurements
Michael R. M. Jenkin, Allan D. Jepson, John K. Tsotsos
Current theories of stereopsis involve three distinct stages. First, the two images of a stereo pair are processed separately to extract monocular features. One common choice of feature is the presence of a zero-crossing in a bandpassed version of the image. Second, the monocular features in one image are matched with corresponding features found in the other image. In practice this second stage cannot be expected to produce only the correct matches, and a third stage must be considered in order to remove the incorrect matches ("false targets"). There are therefore three main issues in the design of such a traditional algorithm for stereopsis, namely (i) the choice of image features; (ii) the choice of matching criteria; and (iii) the way false targets are avoided or eliminated. In this paper we introduce a different approach. We propose that symbolic features should not be extracted from the monocular images in the first stage of processing. Rather, we examine a technique for measuring the local phase difference between the two images. We show how local phase difference in a bandpassed version of the image can be interpreted as disparity. This essentially combines the first two stages of the traditional approach. These disparity measurements may contain "false targets" which must be eliminated. Building upon the results of these disparity detectors, we show that a simple surface model based on object cohesiveness and local surface planarity across a range of spatial-frequency tuned channels can be used to reduce false matches. The resulting local planar surface support can be used to segment the image into planar regions in depth. Due to the independent nature of both the disparity detection and the local planar support mechanism, this method is capable of dealing with both opaque and transparent stimuli.
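The phase-to-disparity step can be sketched on a single scanline with a complex Gabor filter as the bandpass channel (an illustration only; the wavelength and sigma values are arbitrary tuning choices):

```python
import numpy as np

def gabor_phase_disparity(left_row, right_row, wavelength=8.0, sigma=6.0):
    # complex Gabor kernel tuned to `wavelength` pixels
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    k = 2 * np.pi / wavelength
    gabor = np.exp(-(x ** 2) / (2 * sigma ** 2)) * np.exp(1j * k * x)
    L = np.convolve(left_row, gabor, mode='same')
    R = np.convolve(right_row, gabor, mode='same')
    # local phase difference, wrapped to (-pi, pi]
    dphi = np.angle(L * np.conj(R))
    return dphi / k   # disparity in pixels, valid within about +-wavelength/2
```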
Automatic Inspection Of 3D Objects Using Stereo
Dave Hutber
A system is described which acquires a depth map of an object using the stereo principle. The system consists of a matched pair of cameras, which are able to move, zoom etc. under computer control, enabling an entire inspection sequence to be performed automatically. The correspondence problem is overcome by projecting a controlled stripe of laser light onto the object, and then moving the stripe to obtain a series of images that are processed to give a dense depth map. The accuracy of this depth map and its limiting factors are discussed, with particular reference to the calibration procedure used and the components of the system.
Spline-Based Algorithms For Shape From Shading
Rui J.P. de Figueiredo, Vishal Markandey
Algorithms to recover the shape of objects from the shading caused by illumination are reported. This constitutes an approach different from the variational method using occluding boundaries found in the literature. The latter method determines a surface that satisfies the image irradiance equation as a constrained optimization problem, where the occluding boundaries of the object's silhouette provide initial conditions, and the image irradiance equation and smoothness constraints determine an objective function to be minimized. Our technique splits the problem into two subproblems. The first subproblem consists of determining the surface function values on a two-dimensional grid of points from the corresponding measured image irradiance values. These points are determined by minimizing the square error between the irradiance values and the surface reflectance at the corresponding points determined from the reflectance map. The second subproblem consists of fitting a two-dimensional spline surface to the surface points computed from the above grid, using smoothing criteria based on appropriate partial differential operators. Depending on the criterion chosen, different types of spline fits for the surface may be obtained.
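The first subproblem can be sketched by brute force under a Lambertian reflectance map (an illustration; the light direction and search range are assumed, and the per-pixel fit is ambiguous, since many gradients share one reflectance value, which is why the spline-smoothing second stage is needed):

```python
import numpy as np

def lambertian_R(p, q, ps=0.2, qs=0.4):
    # reflectance map for a Lambertian surface lit from gradient direction (ps, qs)
    num = 1 + p * ps + q * qs
    den = np.sqrt(1 + p ** 2 + q ** 2) * np.sqrt(1 + ps ** 2 + qs ** 2)
    return np.clip(num / den, 0.0, None)

def gradients_from_irradiance(E, span=2.0, steps=41):
    # brute-force the first subproblem: per pixel, pick the gradient (p, q)
    # minimizing (E - R(p, q))**2; suitable for small images only
    cand = np.linspace(-span, span, steps)
    P, Q = np.meshgrid(cand, cand)
    Rvals = lambertian_R(P, Q).ravel()
    idx = np.argmin((E.reshape(-1, 1) - Rvals) ** 2, axis=1)
    return P.ravel()[idx].reshape(E.shape), Q.ravel()[idx].reshape(E.shape)
```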
Shape From A Single View: A Comparative Study
Y. M. Enab, J. Y. S. Luh
In this paper a proposed method for estimating the surface orientation of an object is introduced. The concept of using a calibration object to estimate the shape of an unknown object is followed. In the proposed approach only one image each of the calibration object and the unknown object is used. This differs from the known photometric stereo method, in which three images of each object are needed. The key idea in the proposed method is that matching between a point in the image of the unknown object and a point in the image of the calibration object is done using a similarity measure between two feature vectors constructed from the image irradiance values around the studied points according to certain masks. A comparison between the proposed method and the photometric stereo method is discussed. The performance of the proposed method is tested using synthetic images, and the experimental results of computer simulation are also discussed.
Small Angle Moire Contouring
Kevin G. Harding, Mark Michniewicz, Albert Boehnlein
Moire contouring techniques have proven useful for a variety of applications in which continuous curved surfaces have been of interest. However, traditional moire contouring has limitations when dealing with surfaces which contain step changes in height. In particular, the fringe data provided by the moire method is ambiguous when there is a step change of more than the contour interval of the fringe, that is, when the step change is greater than one fringe. In this case, the fringe order (which fringe we are on) can be lost, such that the absolute change in distance is unknown. Steps also cause a problem with shadows, since the surface is illuminated at other than the viewing angle in order to obtain the triangulation effect used in moire contouring. This paper suggests some possible solutions to this problem, and describes a specific experiment aimed at overcoming these difficulties.
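The ambiguity can be stated compactly: moire yields height only modulo the contour interval, so the integer fringe order is undetermined across a step larger than one fringe (a sketch with illustrative names):

```python
import numpy as np

def candidate_heights(wrapped_phase, contour_interval, max_order=3):
    # moire gives height only modulo the contour interval C:
    # h = (phase/2pi + N) * C for an unknown integer fringe order N,
    # so a step larger than C cannot be resolved from the fringes alone
    frac = wrapped_phase / (2 * np.pi)
    return [(frac + N) * contour_interval for N in range(-max_order, max_order + 1)]
```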
Acquisition And Processing Of Range Data Using A Laser Scanner-Based 3-D Vision System
I. Moring, H. Ailisto, T. Heikkinen, et al.
In this paper we describe a 3-D vision system designed and constructed at the Technical Research Centre of Finland in co-operation with the University of Oulu. The main application fields for which our 3-D vision system was developed are geometric measurement of large objects and manipulator and robot control tasks. It also appears promising for automatic vehicle guidance applications. The system has now been operative for about one year and its performance has been extensively tested. Recently we have started a field test phase to evaluate its performance in real industrial tasks and environments. The system consists of three main units: the range finder, the scanner and the computer. The range finder is based on direct measurement of the time-of-flight of a laser pulse. The time interval between the transmitted and the received light pulses is converted into a continuous analog voltage, which is amplified, filtered and offset-corrected to produce the range information. The scanner consists of two mirrors driven by moving-iron galvanometers controlled by servo amplifiers. The computer unit controls the scanner, transforms the measured coordinates into a Cartesian coordinate system, and serves as a user interface and postprocessing environment. Methods for segmenting the range image into a higher level description have been developed. The description consists of planar and curved surfaces and their features and relations. Parametric surface representations based on the Ferguson surface patch are also studied.
High Speed Rangefinder
Kazuo Araki, Yukio Sato, Srinivasan Parthasarathy
We present a new type of high speed range finder system based on the principle of triangulation range-finding. One of the unique elements of this system is a novel custom range sensor. This sensor consists of a 2D array of discrete photo-detectors, each attached to an individual memory element. A slit-ray is used to illuminate the object, which is then imaged by the sensor. The slit-ray is scanned at a constant angular velocity, so elapsed time is a direct function of the direction of the slit source. This elapsed time is latched into each individual memory element when the corresponding detector is triggered. The system can thus acquire the basic data required for range computation without repeatedly scanning the sensor. The slit-ray scans the entire object once at high speed; the resulting reflected energy strip sweeps across the sensor, triggering the photo-detectors in succession. The expected time to acquire the data is approximately 1 millisecond for 100x100 pixels of range data. The sensor is scanned only once, at the end of data acquisition, to transfer the stored data to a host processing computer. The range information for each pixel is obtained from the location of the pixel and the value of time (direction of the slit source) stored in the attached memory element. We have implemented this system in an abbreviated manner to verify the method. The implementation uses a 47 x 47 array of photo-transistors. Because of the practical difficulty of hooking up the entire array to individual memories, and the magnitude of the hardware involved, the implementation uses only 47 memories, corresponding to one row at a time. The sensor is energized a row at a time and the laser scanned; this yields one row of data at a time, as described above. In order to obtain the whole image, we repeat this procedure as many times as there are rows, i.e., 47 times. This is not due to any inherent limitation of the method, but to implementational difficulties in the experimental system, and can be rectified when the sensor is committed to custom VLSI hardware. The time to obtain a complete frame of data (47 x 47) is approximately 80 milliseconds. The depth measurement error is less than 1.0%.
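The per-pixel range computation can be sketched as planar triangulation (an illustration under assumed names; the slit angle is recovered from the latched time via the constant scan velocity, and the camera-ray angle alpha would come from the pixel position):

```python
import numpy as np

def range_from_trigger_time(t, alpha, omega, theta0, baseline):
    # slit direction recovered from the latched trigger time
    theta = theta0 + omega * t
    # planar triangulation: projector at x = 0, camera at x = baseline,
    # both ray angles measured from the baseline
    return baseline * np.tan(theta) * np.tan(alpha) / (np.tan(theta) + np.tan(alpha))
```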
Signal Processing Requirements For A Video Rate Laser Range Finder Based Upon The Synchronized Scanner Approach
J. A. Beraldin, F. Blais, M. Rioux, et al.
This article presents the results of an analysis and a discussion of the signal processing requirements for a video rate laser range finder. The lateral effect photodiode is chosen as the position detector for its speed, ease of use, and cost trade-offs relative to other sensors. Two models are presented for this sensor: a transmission-line model and a lumped-element model. Both are investigated for possible use in circuit analysis. The latter model is selected; although it is not as accurate as the first, it can yield valuable results that show the dependences between system parameters. The results of a linear circuit analysis give some trade-offs for optimum current-to-voltage conversion for fast response lateral effect photodiodes. Also presented is a discussion of the merits of techniques for position calculation that increase the dynamic range of the system and reduce both cost and circuit complexity.
3-D Shape Measurement By Active Triangulation Using An Array Of Coded Light Stripes
Brian F. Alexander, Kim Chew Ng
A 3-D shape measurement system using multiple light stripe active triangulation is described. The system employs a liquid crystal light valve mounted in a conventional projector to code, or label, an array of 64 stripes of light projected onto the scene to be measured. The scene is viewed by a camera displaced from the projector. First, the stripes are located to an accuracy of approximately 0.1 pixel in an image digitized with all the stripes turned on. The stripes are then coded by the projection of a sequence of six patterns of stripes using the light valve. In each pattern the intensity of each stripe, on or off, indicates one bit of a six-bit code assigned to the stripe. Each located stripe can therefore be identified by determining the six-bit number defined by its intensity in the images of the coding patterns. Once the stripes have been identified, triangulation can be performed.
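The decoding step is a straightforward bit-assembly over the six pattern images (a sketch; the sampling positions and threshold handling are simplified, and in practice the threshold would likely be set per pixel against the all-on image):

```python
def decode_stripe_ids(pattern_images, stripe_samples, threshold):
    # pattern_images: the six captured coding-pattern images, one per bit
    # stripe_samples: (row, col) positions on previously located stripe centres
    ids = []
    for r, c in stripe_samples:
        bits = [int(img[r, c] > threshold) for img in pattern_images]
        ids.append(sum(b << i for i, b in enumerate(bits)))  # 6-bit stripe label
    return ids
```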
Phase Detected Triangulation: A New Twist On An Old Technology
Leonard H. Bieman, Kevin G. Harding, Mark Michniewicz, et al.
A new approach to measuring range using triangulation is presented. In this approach, the triangulation angle is determined by spatially modulating the intensity of light in front of a single channel photodetector and then measuring the phase of the detector output signal. Using a single detector instead of the commonly used linear array or lateral effect photodiode allows an increased signal-to-noise ratio and the possibility of faster measurement rates. Two possible application areas discussed are long range measurement (greater than one meter) and measurement of mirror-like surfaces.
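The phase measurement itself can be sketched as quadrature demodulation of the detector output at the known modulation frequency (an illustration with assumed parameter names):

```python
import numpy as np

def detector_phase(samples, f_mod, fs):
    # quadrature demodulation: correlate the detector output with sine and
    # cosine references at the modulation frequency f_mod (sample rate fs)
    t = np.arange(len(samples)) / fs
    I = np.sum(samples * np.cos(2 * np.pi * f_mod * t))
    Q = np.sum(samples * np.sin(2 * np.pi * f_mod * t))
    return np.arctan2(-Q, I)   # phase of A*cos(2*pi*f_mod*t + phase)
```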