
Proceedings Paper

VLSI sensor/processor circuitry for autonomous robots
Author(s): Daniel J. Friedman; James J. Clark

Paper Abstract

A CMOS circuit has been developed which integrates sensors and processing circuitry aimed at implementing autonomous robot perception and control functions. The sensors are an array of photodetectors, and the processing circuitry analyzes the array data to extract a basic set of sensory primitives. In addition, the processing circuitry provides a low-resolution determination of the location of any brightness edges which cross the array. Ultimately, this sensor-processor circuitry will be used as part of an overall integrated sensorimotor system for autonomous robots. In the complete system, individual sensorimotor units will produce motion requests for the robot as a whole, and an operating system, serving in part as a motion request handler, will arbitrate among suggested motions. The nature of the motion requests will depend both on sensor input and on the current goals of the robot. Ideally, the entire set of sensors, the processing circuitry, and the operating system will reside on a single VLSI chip. The current chip achieves many of the objectives of the complete integrated sensorimotor system: it acquires sensory information, manipulates that data, and ultimately provides a digital output signal set which could serve as a motor signal set. Much of the on-chip processing is done by sensory primitive modules which calculate spatial convolutions of the sensor array data. The convolution kernels implemented were chosen primarily for their usefulness in solving low-level vision problems. Specific kernels on the current chip include discrete approximations to the x-direction first derivative operator, the y-direction first derivative operator, and the Laplacian operator. The spatial convolution function is achieved using current-mode analog signal processing techniques.
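For readers unfamiliar with these operators, the kernels named in the abstract can be illustrated with the standard 3x3 discrete approximations. This is only a generic software sketch of the mathematics; the chip itself evaluates these convolutions with current-mode analog circuitry, and the exact kernel coefficients used on-chip are not stated in the abstract.

```python
# Common 3x3 discrete approximations to the operators named in the abstract:
# central-difference x- and y-derivatives, and the 4-neighbor Laplacian.
# (Illustrative choices; the paper's on-chip coefficients may differ.)
KX = [[ 0, 0, 0],
      [-1, 0, 1],
      [ 0, 0, 0]]
KY = [[ 0, -1, 0],
      [ 0,  0, 0],
      [ 0,  1, 0]]
LAPLACIAN = [[0,  1, 0],
             [1, -4, 1],
             [0,  1, 0]]

def convolve(image, kernel):
    """Valid-mode 2D correlation of an image (list of lists) with a 3x3 kernel."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            s = sum(kernel[u][v] * image[i + u][j + v]
                    for u in range(3) for v in range(3))
            row.append(s)
        out.append(row)
    return out

# A vertical brightness step produces a strong x-derivative response and
# no y-derivative response, which is what an edge-detecting module exploits.
step = [[0, 0, 10, 10]] * 4
gx = convolve(step, KX)   # large values along the edge
gy = convolve(step, KY)   # all zeros: no vertical brightness change
```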
The output of the spatial convolution modules is piped into a higher level module which generates an estimate of the location of brightness edges which cross the array. This location estimate, which takes the form of a set of digital signals, can be readily translated into a (motor system dependent) motion request format, if indeed it cannot be used directly for this purpose. Location estimation, although it is the only higher level function implemented on the current chip, is just one example of a useful sensory primitive-based function. Additional higher level modules could be used to implement alternate functions which estimate other important environmental properties.
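The higher-level location-estimation step described above might work along the following lines: combine the derivative-module responses into a gradient magnitude, pick the coarse cell where it peaks, and encode that position as a few binary signals. This is a hypothetical sketch of the idea, not the paper's circuit; the function name, the L1 magnitude, and the bit width are all illustrative assumptions.

```python
# Hypothetical sketch of a higher-level edge-location module: report the
# coarse (row, col) of the strongest brightness edge as short bit vectors,
# i.e. the kind of digital output the abstract says could be translated
# into a motor-system-dependent motion request.

def edge_location(gx, gy, bits=2):
    """Return (row_bits, col_bits) for the peak-gradient cell."""
    best, loc = -1, (0, 0)
    for i, (rx, ry) in enumerate(zip(gx, gy)):
        for j, (x, y) in enumerate(zip(rx, ry)):
            mag = abs(x) + abs(y)  # cheap L1 gradient magnitude (assumption)
            if mag > best:
                best, loc = mag, (i, j)
    to_bits = lambda n: [(n >> b) & 1 for b in reversed(range(bits))]
    return to_bits(loc[0]), to_bits(loc[1])
```

Because the output is already a small set of digital signals, a downstream arbiter (the "motion request handler" of the complete system) could consume it directly or remap it per motor system.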

Paper Details

Date Published: 1 March 1992
PDF: 12 pages
Proc. SPIE 1614, Optics, Illumination, and Image Sensing for Machine Vision VI, (1 March 1992); doi: 10.1117/12.57972
Author Affiliations:
Daniel J. Friedman, Harvard Univ. (United States)
James J. Clark, Harvard Univ. (United States)

Published in SPIE Proceedings Vol. 1614:
Optics, Illumination, and Image Sensing for Machine Vision VI
Donald J. Svetkoff, Editor(s)

© SPIE.