
Proceedings Paper

Challenges of vision theory: self-organization of neural mechanisms for stable steering of object-grouping data in visual motion perception
Author(s): Jonathan A. Marshall

Paper Abstract

Psychophysical studies on motion perception suggest that human visual systems perform certain nonlocal operations. In some cases, data about one part of an image can influence the processing or perception of data about another part of the image, across a long spatial range. In others, data about nearby parts of an image can fail to influence one another strongly, despite their proximity. Several types of nonlocal interaction may underlie cortical processing for accurate, stable perception of visual motion, depth, and form: (1) trajectory-specific propagation of computed moving stimulus information to successive image locations where a stimulus is predicted to appear; (2) grouping operations (establishing linkages among perceptually related data); (3) scission operations (breaking linkages between unrelated data); and (4) steering operations, whereby visible portions of a visual group or object can control the representations of invisible or occluded portions of the same group. Nonlocal interactions like these could be mediated by long-range excitatory horizontal intrinsic connections (LEHICs), discovered in visual cortex of several animal species. LEHICs often span great distances across cortical image space. Typically, they have been found to interconnect regions of like specificity with regard to certain receptive field attributes, e.g., stimulus orientation. It has recently been shown that several visual processing mechanisms can self-organize in model recurrent neural networks using unsupervised "EXIN" (excitatory + inhibitory) learning rules. Because the same rules are used in each case, EXIN networks provide a means to unify explanations of how different visual processing modules acquire their structure and function. EXIN networks learn to multiplex (or represent simultaneously) multiple spatially overlapping components of complex scenes, in a context-sensitive fashion. Modeled LEHICs have been used together with the EXIN learning rules to show how visual experience can shape neural mechanisms for nonlocal, context-sensitive processing of visual motion data.
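The combination of excitatory and inhibitory unsupervised learning described above can be sketched in a toy recurrent network. This is a minimal illustration, not the paper's model: the instar-style excitatory rule, the anti-Hebbian inhibitory rule, the settling dynamics, the input patterns, and all parameter values are assumptions chosen only to show the general "excitatory + inhibitory" learning idea.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4

W = rng.uniform(0.0, 0.1, (n_out, n_in))  # feedforward excitatory weights
L = np.zeros((n_out, n_out))              # lateral inhibitory weights (diagonal fixed at 0)

def settle(x, W, L, steps=30, dt=0.2):
    """Iterate the recurrent dynamics toward a steady state:
    each unit receives feedforward excitation minus lateral inhibition."""
    y = np.zeros(n_out)
    for _ in range(steps):
        y += dt * (-y + np.maximum(W @ x - L @ y, 0.0))
    return y

def exin_step(x, W, L, lr_e=0.1, lr_i=0.1):
    """One unsupervised update in the EXIN spirit (illustrative form):
    instar-like excitatory learning plus anti-Hebbian growth of
    inhibition between coactive units, which decorrelates their responses."""
    y = settle(x, W, L)
    W += lr_e * y[:, None] * (x[None, :] - W)  # excitatory: pull weights toward the input
    L += lr_i * np.outer(y, y)                 # inhibitory: strengthen links between coactive units
    np.fill_diagonal(L, 0.0)                   # no self-inhibition
    return y

# Train on two spatially overlapping input patterns, so the network must
# learn to keep their representations distinct despite shared inputs.
p1 = np.array([1, 1, 1, 1, 0, 0, 0, 0], float)
p2 = np.array([0, 0, 0, 1, 1, 1, 1, 0], float)
for _ in range(100):
    for p in (p1, p2):
        exin_step(p, W, L)

print("winner for p1:", np.argmax(settle(p1, W, L)))
print("winner for p2:", np.argmax(settle(p2, W, L)))
```

In this sketch the excitatory rule is self-bounding (each weight stays between its old value and the input), while the growing lateral inhibition discourages two units from responding to the same pattern, a simple stand-in for the context-sensitive multiplexing the abstract attributes to EXIN networks.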

Paper Details

Date Published: 1 October 1991
PDF: 16 pages
Proc. SPIE 1569, Stochastic and Neural Methods in Signal Processing, Image Processing, and Computer Vision, (1 October 1991); doi: 10.1117/12.48379
Jonathan A. Marshall, Univ. of North Carolina/Chapel Hill (United States)


Published in SPIE Proceedings Vol. 1569:
Stochastic and Neural Methods in Signal Processing, Image Processing, and Computer Vision
Su-Shing Chen, Editor

© SPIE.