
Proceedings Paper

High Speed Rangefinder
Author(s): Kazuo Araki; Yukio Sato; Srinivasan Parthasarathy

Paper Abstract

We present a new type of high-speed range finder system based on the principle of triangulation range-finding. One of the unique elements of this system is a novel custom range sensor, which consists of a 2D array of discrete photo-detectors, each attached to an individual memory element. A slit-ray illuminates the object, which is then imaged by the sensor. The slit-ray is scanned at a constant angular velocity, so elapsed time is a direct function of the direction of the slit source. This elapsed time is latched into each individual memory element when the corresponding detector is triggered. The system can therefore acquire the basic data required for range computation without scanning the sensor repeatedly. The slit-ray scans the entire object once at high speed; the resulting strip of reflected energy sweeps across the sensor, triggering the photo-detectors in succession. The expected time to acquire the data is approximately 1 millisecond for 100 x 100 pixels of range data. The sensor is read out only once, at the end of data acquisition, to transfer the stored data to a host processing computer. The range information for each pixel is obtained from the location of the pixel and the value of time (the direction of the slit source) stored in the attached memory element.

We have implemented this system in an abbreviated manner to verify the method. The implementation uses a 47 x 47 array of photo-transistors. Because of the practical difficulty of wiring the entire array to individual memories, and the magnitude of the hardware involved, the implementation uses only 47 memories, corresponding to one row at a time. The sensor is energized one row at a time and the laser is scanned, yielding one row of data per scan as described above. To obtain the whole image, we repeat this procedure once per row, i.e., 47 times. This is not due to any inherent limitation of the method, but to implementational difficulties in the experimental system; it can be rectified when the sensor is committed to custom VLSI hardware. The time to acquire a complete frame of data (47 x 47) is approximately 80 milliseconds. The depth measurement error is less than 1.0%.
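The range computation described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the geometry (camera at the origin, slit projector offset by a baseline `b` along the x-axis, both optical axes along +z), and the focal length, pixel pitch, and principal-point parameters are all assumptions introduced for the example.

```python
import math

def slit_angle(t, theta0, omega):
    """Direction of the slit source at elapsed time t.
    Because the slit-ray is scanned at constant angular velocity omega,
    the latched time t maps linearly to the projection angle."""
    return theta0 + omega * t

def range_from_pixel(px, t, f, pitch, cx, b, theta0, omega):
    """Triangulate depth z for one pixel (hypothetical geometry).
    px     : pixel column on the sensor
    t      : elapsed time latched in that pixel's memory element
    f      : camera focal length (m); pitch: pixel pitch (m)
    cx     : principal-point column; b: camera-projector baseline (m)
    Camera ray:      x = z * tan(alpha), from the pixel location
    Slit-ray plane:  x = b - z * tan(theta), from the latched time
    Intersecting the two gives z = b / (tan(alpha) + tan(theta))."""
    tan_alpha = (px - cx) * pitch / f
    tan_theta = math.tan(slit_angle(t, theta0, omega))
    return b / (tan_alpha + tan_theta)

# Example with made-up calibration values: a pixel 10 columns off-axis
# and a slit angle of 45 degrees at readout time t = 1.
z = range_from_pixel(px=60, t=1.0, f=0.01, pitch=1e-4, cx=50,
                     b=0.1, theta0=0.0, omega=math.pi / 4)  # about 0.091 m
```

Each pixel's depth is computed independently from its own latched time, which is why a single sweep of the slit-ray suffices to gather the data for the whole frame.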

Paper Details

Date Published: 12 March 1988
PDF: 5 pages
Proc. SPIE 0850, Optics, Illumination, and Image Sensing for Machine Vision II, (12 March 1988); doi: 10.1117/12.942876
Author Affiliations:
Kazuo Araki, Nagoya Institute of Technology (Japan)
Yukio Sato, University of California (United States)
Srinivasan Parthasarathy, University of California (United States)

Published in SPIE Proceedings Vol. 0850:
Optics, Illumination, and Image Sensing for Machine Vision II
Donald J. Svetkoff, Editor(s)

© SPIE.