
Proceedings Paper

Scene understanding based on network-symbolic models
Author(s): Gary Kuvich

Paper Abstract

New generations of smart weapons and unmanned vehicles must have reliable perceptual systems comparable to human vision. Instead of computing precise 3-dimensional models, a network-symbolic system converts image information into an “understandable” Network-Symbolic format, which is similar to relational knowledge models. The logic of visual scenes can be captured in Network-Symbolic models and used to disambiguate visual information. Geometric operations are ill-suited to processing natural images; instead, the brain builds a relational network-symbolic structure of the visual scene, using various cues to establish the relational order of surfaces and objects. In biologically inspired Network-Symbolic systems, feature, symbol, and predicate are equivalent. A linking mechanism binds these features/symbols into coherent structures, and the image is converted from a “raster” into a “vector” representation that higher-level knowledge structures can interpret more readily. View-based object recognition is a hard problem for traditional algorithms that directly match a primary view of an object to a model. In Network-Symbolic Models, the derived structure, not the primary view, is the subject of recognition. Such recognition is unaffected by local changes and by the object's appearance as seen from a set of similar views.
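The raster-to-vector conversion described in the abstract can be illustrated with a minimal sketch. The data structures and function names below are hypothetical illustrations of the general idea, not the paper's implementation: detected surfaces become symbolic nodes, spatial cues become relational edges, and recognition compares derived relational structures rather than raw views.

```python
# Toy sketch of a network-symbolic scene model (illustrative only).

def build_scene_graph(surfaces, relations):
    """Convert "raster"-level detections into a "vector" relational model.

    surfaces:  iterable of (id, feature) pairs, e.g. ("s1", "sky")
    relations: iterable of (subject, predicate, object) triples,
               e.g. ("s2", "above", "s3")
    """
    return {"nodes": dict(surfaces), "edges": set(relations)}

def derived_structure(graph):
    """Reduce a scene graph to its view-invariant relational skeleton:
    the set of (feature, predicate, feature) triples, abstracting away
    which particular surface carried each feature."""
    nodes = graph["nodes"]
    return frozenset((nodes[s], p, nodes[o]) for (s, p, o) in graph["edges"])

def matches(scene, model):
    """Recognize the model if its relational skeleton is contained in
    the scene's skeleton, so local appearance changes do not matter."""
    return derived_structure(model) <= derived_structure(scene)

# Two views of the same arrangement: surface ids differ, relations agree.
view_a = build_scene_graph(
    [("s1", "sky"), ("s2", "roof"), ("s3", "wall")],
    [("s1", "above", "s2"), ("s2", "above", "s3")],
)
view_b = build_scene_graph(
    [("k0", "sky"), ("r9", "roof"), ("w4", "wall")],
    [("k0", "above", "r9"), ("r9", "above", "w4")],
)
print(matches(view_a, view_b))  # True: the derived structures coincide
```

A direct view-to-model pixel match would treat these two inputs as different; comparing the derived relational structure, as the abstract argues, makes them identical.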

Paper Details

Date Published: 25 May 2005
PDF: 13 pages
Proc. SPIE 5817, Visual Information Processing XIV, (25 May 2005); doi: 10.1117/12.603023
Author Affiliations:
Gary Kuvich, Smart Computer Vision Systems (United States)


Published in SPIE Proceedings Vol. 5817:
Visual Information Processing XIV
Zia-ur Rahman; Robert A. Schowengerdt; Stephen E. Reichenbach, Editor(s)

© SPIE