
Proceedings Paper

Grounding language in perception
Author(s): Jeffrey Mark Siskind

Paper Abstract

We describe an implemented computer program that recognizes the occurrence of simple spatial motion events in simulated video input. The program receives an animated line-drawing as input and produces as output a semantic representation of the events occurring in that movie. We suggest that the notions of support, contact, and attachment are crucial to specifying many simple spatial motion event types and present a logical notation for describing classes of events that incorporates such notions as primitives. We then suggest that the truth values of such primitives can be recovered from perceptual input by a process of counterfactual simulation, predicting the effect of hypothetical changes to the world on the immediate future. Finally, we suggest that such counterfactual simulation is performed using knowledge of naive physical constraints such as substantiality, continuity, gravity, and ground plane. We describe the algorithms that incorporate these ideas in the program and illustrate the operation of the program on sample input.
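To make the abstract's central mechanism concrete, the sketch below illustrates how a primitive such as support could be recovered by counterfactual simulation under naive-physics constraints (gravity, ground plane, substantiality): hypothetically remove one object, simulate the immediate future, and check whether another object falls. This is not the paper's implementation, which operates on animated line drawings and also handles contact and attachment; the Body, simulate, and supports names and the one-dimensional gravity model are illustrative assumptions only.

```python
# Minimal sketch (assumed names, not the paper's code) of counterfactual
# simulation for the "support" primitive under naive physics.

from dataclasses import dataclass, replace
from typing import List

GRAVITY = -1.0   # naive-physics constant: unsupported things fall
GROUND_Y = 0.0   # ground plane: nothing passes below this height

@dataclass(frozen=True)
class Body:
    name: str
    x: float
    y: float          # height of the body's lower edge
    width: float
    height: float

def resting_height(body: Body, others: List[Body]) -> float:
    """Lowest height the body can reach without violating substantiality."""
    floor = GROUND_Y
    for o in others:
        overlaps_x = abs(body.x - o.x) < (body.width + o.width) / 2
        if overlaps_x and o.y + o.height <= body.y:
            floor = max(floor, o.y + o.height)
    return floor

def simulate(bodies: List[Body], steps: int = 10) -> List[Body]:
    """Let every body fall under gravity until it rests on something."""
    current = list(bodies)
    for _ in range(steps):
        nxt = []
        for b in current:
            others = [o for o in current if o.name != b.name]
            target = resting_height(b, others)
            new_y = max(target, b.y + GRAVITY)   # fall, but never interpenetrate
            nxt.append(replace(b, y=new_y))
        current = nxt
    return current

def supports(a_name: str, b_name: str, bodies: List[Body]) -> bool:
    """Counterfactual test: does hypothetically removing A cause B to fall?"""
    factual = {b.name: b for b in simulate(bodies)}
    counterfactual_world = [b for b in bodies if b.name != a_name]
    counterfactual = {b.name: b for b in simulate(counterfactual_world)}
    return counterfactual[b_name].y < factual[b_name].y

# Example: a block resting on a table is supported by the table, not vice versa.
table = Body("table", x=0.0, y=0.0, width=4.0, height=1.0)
block = Body("block", x=0.0, y=1.0, width=1.0, height=1.0)
print(supports("table", "block", [table, block]))   # True
print(supports("block", "table", [table, block]))   # False
```

In this toy model, removing the table lets the block fall to the ground plane in the simulated future, so the truth value of support(table, block) is recovered without ever observing the objects being separated, which is the role counterfactual simulation plays in the abstract.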

Paper Details

Date Published: 20 August 1993
PDF: 14 pages
Proc. SPIE 2055, Intelligent Robots and Computer Vision XII: Algorithms and Techniques, (20 August 1993); doi: 10.1117/12.150137
Author Affiliations:
Jeffrey Mark Siskind, Univ. of Pennsylvania (United States)


Published in SPIE Proceedings Vol. 2055:
Intelligent Robots and Computer Vision XII: Algorithms and Techniques
David P. Casasent, Editor(s)

© SPIE