
Proceedings Paper

Neural Network Approach To Sensory Fusion
Author(s): John C. Pearson; Jack J. Gelfand; W. E. Sullivan; Richard M. Peterson; Clay D. Spence

Paper Abstract

We present a neural network model for sensory fusion based on the design of the visual/acoustic target localization system of the barn owl. This system adaptively fuses its separate visual and acoustic representations of object position into a single joint representation used for head orientation. The building block in this system, as in much of the brain, is the neuronal map. Neuronal maps are large arrays of locally interconnected neurons that represent information in a map-like form; that is, parameter values are systematically encoded by the position of neural activation in the array. The computational load is distributed across a hierarchy of maps, and the computation is performed in stages by transforming the representation from map to map via the geometry of the projections between the maps and the local interactions within the maps. For example, azimuthal position is computed from the frequency and binaural phase information encoded in the signals of the acoustic sensors, while elevation is computed in a separate stream using binaural intensity information. These separate streams are merged in their joint projection onto the external nucleus of the inferior colliculus, a two-dimensional array of cells that contains a map of acoustic space. This acoustic map, and the visual map of the retina, jointly project onto the optic tectum, creating a fused visual/acoustic representation of position in space that is used for object localization. In this paper we describe our mathematical model of the stage of visual/acoustic fusion in the optic tectum. The model assumes that the acoustic projection from the external nucleus onto the tectum is roughly topographic and one-to-many, while the visual projection from the retina onto the tectum is topographic and one-to-one. A simple process of self-organization alters the strengths of the acoustic connections, effectively forming a focused beam of strong acoustic connections whose inputs are coincident with the visual inputs. Computer simulations demonstrate how this mechanism can account for the existing experimental data on adaptive fusion and show that it makes sharp predictions for experimental tests.
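The following is a minimal numerical sketch, not the authors' code, of the coincidence-based self-organization the abstract describes. All parameters (map size, learning rate, tuning widths, iteration count) are illustrative assumptions: one-dimensional maps stand in for azimuth, the visual projection from the retina is modeled as topographic, one-to-one, and sharply tuned, and the acoustic projection from the external nucleus (ICx) as roughly topographic, one-to-many, and broadly tuned.

import numpy as np

n = 40                      # number of map positions (assumed)
rng = np.random.default_rng(0)
j = np.arange(n)

# Acoustic projection (ICx -> tectum): roughly topographic, one-to-many --
# a broad band of weak connections around the diagonal, plus noise.
W = np.exp(-((j[:, None] - j[None, :]) ** 2) / (2 * 6.0 ** 2))
W += 0.1 * rng.random((n, n))
W /= W.sum(axis=1, keepdims=True)

lr = 0.05                   # Hebbian learning rate (assumed)
for _ in range(2000):
    p = rng.integers(n)     # true object position
    visual = np.exp(-((j - p) ** 2) / (2 * 1.0 ** 2))    # sharp, one-to-one
    acoustic = np.exp(-((j - p) ** 2) / (2 * 4.0 ** 2))  # broad ICx activity
    # Strengthen acoustic synapses whose presynaptic activity coincides
    # with the visually driven postsynaptic activity on the tectum.
    W += lr * np.outer(visual, acoustic)
    # Normalization makes synapses compete, focusing a narrow "beam"
    # of strong acoustic connections coincident with the visual inputs.
    W /= W.sum(axis=1, keepdims=True)

# Each tectal cell's strongest acoustic input should now come from the
# ICx position matching its visual receptive field (mean error ~ 0).
print(np.abs(W.argmax(axis=1) - j).mean())

Because each tectal cell's acoustic weights are renormalized after every update, connections that repeatedly fire in coincidence with visually driven activity come to dominate, focusing the broad initial projection into a narrow beam aligned with the visual map. Under these assumptions, shifting the visual map (as in prism-rearing experiments on owls) would shift the beam correspondingly, which is the kind of adaptive fusion behavior the model is meant to capture.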

Paper Details

Date Published: 9 August 1988
PDF: 6 pages
Proc. SPIE 0931, Sensor Fusion, (9 August 1988); doi: 10.1117/12.946654
Author Affiliations:
John C. Pearson, SRI International (United States)
Jack J. Gelfand, SRI International (United States)
W. E. Sullivan, Princeton University (United States)
Richard M. Peterson, Princeton University (United States)
Clay D. Spence, SRI International (United States)


Published in SPIE Proceedings Vol. 0931:
Sensor Fusion
Charles B. Weaver, Editor
