
Proceedings Paper

A Computational Model for Dynamic Vision
Author(s): Saied Moezzi; Terry E. Weymouth

Paper Abstract

This paper describes a novel computational model for dynamic vision that promises to be both powerful and robust. Furthermore, the paradigm is ideal for an active vision system in which camera vergence changes dynamically. Its basis is a retinotopically indexed, object-centered encoding of early visual information. Specifically, we use the relative distances of objects to a set of referents and encode this information in image-registered maps. To illustrate the efficacy of the method, we apply it to the problem of dynamic stereo vision. Integrating depth information over multiple frames obtained by a moving robot generally requires precise knowledge of the relative camera position from frame to frame; usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.
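The referent-based encoding described in the abstract can be illustrated with a minimal sketch (not the authors' implementation; the function name, the ratio-based encoding, and the per-pixel depth input are illustrative assumptions). Each pixel's depth is stored relative to a small set of referent points, yielding an image-registered map that is invariant to a global scale ambiguity in the depth estimates, so maps from successive frames can be compared without recovering the camera motion:

```python
import numpy as np

def relative_depth_map(depth, referents):
    """Encode each pixel's depth relative to a set of referent points.

    depth: (H, W) array of per-pixel depth estimates (illustrative input;
           the paper encodes relative distances of objects to referents).
    referents: list of (row, col) pixel coordinates of referent objects.

    Returns an (H, W, len(referents)) image-registered map of depth
    ratios. Because a global rescaling of the depth estimates cancels
    in each ratio, the encoding is stable across frames even when the
    absolute scale (e.g. from an uncalibrated baseline) is unknown.
    """
    ref_depths = np.array([depth[r, c] for r, c in referents])
    # Broadcast: each pixel's depth divided by every referent's depth.
    return depth[..., None] / ref_depths[None, None, :]

# A global scale change leaves the relative encoding untouched.
frame1 = np.array([[1.0, 2.0], [4.0, 8.0]])
frame2 = 2.0 * frame1  # same scene, depth known only up to scale
refs = [(0, 0), (1, 1)]
map1 = relative_depth_map(frame1, refs)
map2 = relative_depth_map(frame2, refs)
```

Here `map1` and `map2` are identical even though the raw depths differ by a factor of two, which is the property that lets depth information be accumulated over frames without an explicit camera-motion estimate.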

Paper Details

Date Published: 1 March 1990
PDF: 12 pages
Proc. SPIE 1198, Sensor Fusion II: Human and Machine Strategies, (1 March 1990); doi: 10.1117/12.969964
Author Affiliations
Saied Moezzi, University of Michigan (United States)
Terry E. Weymouth, University of Michigan (United States)

Published in SPIE Proceedings Vol. 1198:
Sensor Fusion II: Human and Machine Strategies
Paul S. Schenker, Editor(s)

© SPIE.