
Proceedings Paper

Self-organizing neural network for motion detection by a moving observer
Author(s): George K. Knopf

Paper Abstract

The ability to rapidly detect moving objects while dynamically exploring a work environment is an essential characteristic of any active vision system. However, many of the proposed computer vision paradigms are unable to efficiently deal with the complexities of real-world situations because they employ algorithms that attempt to accurately reconstruct structure-from-motion. An alternative view is to employ algorithms that compute only the minimal amount of information necessary to solve the task at hand. One method of qualitatively detecting independently moving objects from a moving camera (or observer) is based on the notion that the projected velocity of any point on a spherical image is constrained to lie on a one-dimensional locus in a local 2-D velocity space. The velocities along this locus, called a constraint ray, correspond to the rotational and translational motion of the observer. If the observer motion is known a priori, then any object moving independently through the rigid 3-D environment will exhibit a projected velocity that does not fall on this locus. As a result, the independently moving object can be detected using a clustering algorithm. In this paper, a hybrid neural network architecture is proposed for discriminating between flow velocities that are caused by camera movement and those caused by object motion. The computing architecture is essentially a two-stage process. In the first stage, a self-organizing neural network is used to learn the constraint parameters associated with typical observer movements by moving the camera apparatus through a stationary environment. Once the observer movements have been adequately learned by the self-organizing neural network, the corresponding synaptic weight values are used to program a modified radial basis function (RBF) network. During the second stage, the RBF network architecture acts as a constraint region classifier by employing clustering strategies to label incomplete motion field information (i.e., the velocity component that is parallel to the spatial gradient).
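The two-stage idea in the abstract can be sketched in a few lines of Python. This is not the paper's implementation: the constraint ray is hard-coded as a fixed direction `d` rather than derived from observer rotation and translation, a simple winner-take-all competitive update stands in for the self-organizing network, and a nearest-centre Gaussian activation stands in for the modified RBF classifier. All names and parameter values (`K`, `sigma`, `thresh`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed for the sketch: flow induced by the known observer motion lies on a
# 1-D locus (the "constraint ray") v = t * d in local 2-D velocity space.
# The paper derives this locus from the observer's rotation and translation;
# here d is just a fixed unit direction.
d = np.array([0.8, 0.6])                          # direction of the ray
train = np.outer(rng.uniform(0.0, 2.0, 200), d)   # stage-1 training flow
train += rng.normal(0.0, 0.02, train.shape)       # small measurement noise

# Stage 1 (stand-in for the self-organizing network): learn K prototype
# velocities along the ray with a winner-take-all competitive update.
K, lr = 8, 0.1
proto = train[rng.choice(len(train), K, replace=False)].copy()
for v in train:
    j = np.argmin(np.linalg.norm(proto - v, axis=1))   # winning prototype
    proto[j] += lr * (v - proto[j])                    # move winner toward v

# Stage 2 (stand-in for the modified RBF network): the learned prototypes
# become RBF centres; a velocity consistent with observer motion excites at
# least one centre strongly, while an independently moving object does not.
sigma, thresh = 0.3, 0.1
def independent_motion(v):
    act = np.exp(-np.sum((proto - v) ** 2, axis=1) / (2 * sigma ** 2))
    return bool(act.max() < thresh)

print(independent_motion(1.0 * d))                # on the constraint ray
print(independent_motion(np.array([-0.5, 1.0])))  # well off the ray: flagged
```

A velocity on (or near) the learned ray excites a nearby RBF centre and is attributed to observer motion; a velocity far from every centre is labelled as independent object motion, mirroring the clustering-based discrimination described above.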

Paper Details

Date Published: 10 October 1994
PDF: 13 pages
Proc. SPIE 2353, Intelligent Robots and Computer Vision XIII: Algorithms and Computer Vision, (10 October 1994); doi: 10.1117/12.188907
George K. Knopf, Univ. of Western Ontario (Canada)


Published in SPIE Proceedings Vol. 2353:
Intelligent Robots and Computer Vision XIII: Algorithms and Computer Vision
David P. Casasent, Editor(s)

© SPIE.