
Proceedings Paper

Synchronizing real and predicted synthetic video imagery for localization of a robot to a 3D environment
Author(s): Damian M. Lyons; Sirhan Chaudhry; D. Paul Benjamin

Paper Abstract

A mobile robot moving in an environment containing other moving objects and active agents, some of which may represent threats and some of which may represent collaborators, needs to be able to reason about the potential future behaviors of those objects and agents. In previous work, we presented an approach to tracking targets with complex behavior, leveraging a 3D simulation engine to generate predicted imagery and comparing that against real imagery. We introduced an approach to comparing real and simulated imagery using an affine image transformation that maps the real scene to the synthetic scene in a robust fashion. In this paper, we present an approach to continually synchronizing the real and synthetic video by mapping the affine transformation yielded by the real/synthetic image comparison to a new pose for the synthetic camera. We present results for pairs of real and synthetic scenes containing objects, including both similar and differing scenes.
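The core step described above, estimating an affine transformation that maps the real scene to the synthetic scene, can be sketched from matched feature correspondences. This is not the authors' implementation; it is a minimal least-squares illustration that assumes feature matching between the real and synthetic frames has already been performed, and that the matched points are supplied as coordinate arrays.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.

    src, dst: (N, 2) arrays of matched feature coordinates, N >= 3.
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1]^T.
    """
    n = src.shape[0]
    # Homogeneous design matrix: each row is [x, y, 1].
    X = np.hstack([src, np.ones((n, 1))])          # (N, 3)
    # Solve X @ A^T = dst in the least-squares sense.
    A_T, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2)
    return A_T.T                                   # (2, 3)

# Example: recover a known rotation-plus-translation from noise-free matches.
theta = np.deg2rad(5.0)
A_true = np.array([[np.cos(theta), -np.sin(theta),  2.0],
                   [np.sin(theta),  np.cos(theta), -1.0]])
pts = np.random.default_rng(0).uniform(0, 100, (20, 2))
mapped = np.hstack([pts, np.ones((20, 1))]) @ A_true.T
A_est = estimate_affine(pts, mapped)
print(np.allclose(A_est, A_true))
```

In the paper's pipeline, the recovered transformation is then mapped to an updated pose for the synthetic camera; that pose-update step depends on the camera model and is not shown here.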

Paper Details

Date Published: 18 January 2010
PDF: 8 pages
Proc. SPIE 7539, Intelligent Robots and Computer Vision XXVII: Algorithms and Techniques, 75390S (18 January 2010); doi: 10.1117/12.838642
Author Affiliations:
Damian M. Lyons, Fordham Univ. (United States)
Sirhan Chaudhry, Fordham Univ. (United States)
D. Paul Benjamin, Pace Univ. (United States)

Published in SPIE Proceedings Vol. 7539:
Intelligent Robots and Computer Vision XXVII: Algorithms and Techniques
David P. Casasent; Ernest L. Hall; Juha Röning, Editor(s)

© SPIE.