
Proceedings Paper

Learning spatially coherent properties of the visual world in connectionist networks
Author(s): Suzanna Becker; Geoffrey E. Hinton

Paper Abstract

In the unsupervised learning paradigm, a network of neuron-like units is presented with an ensemble of input patterns from a structured environment, such as the visual world, and learns to represent the regularities in that input. The major goal in developing unsupervised learning algorithms is to find objective functions that characterize the quality of the network's representation without explicitly specifying the desired outputs of any of the units. The objective functions considered here cause a unit to become tuned to spatially coherent features of visual images (such as texture, depth, shading, and surface orientation) by learning to predict the outputs of other units that have spatially adjacent receptive fields. Simulations show that using an information-theoretic algorithm called IMAX, a network can be trained to represent depth by observing random dot stereograms of surfaces with continuously varying disparities. Once a layer of depth-tuned units has developed, subsequent layers are trained to interpolate curved surfaces by learning to predict the depth of one image region from depth measurements in surrounding regions. An extension of the basic model allows a population of competing neurons to learn a distributed code for disparity, which naturally gives rise to a representation of discontinuities.
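As a rough illustration of the kind of objective the abstract describes, the sketch below (not the authors' implementation) uses the mutual-information measure 0.5·log(Var(a+b)/Var(a−b)) that Becker and Hinton use for real-valued unit outputs under Gaussian assumptions. Two linear units view adjacent synthetic "patches" that share a common underlying signal standing in for disparity; units tuned to the shared signal score higher on this measure than arbitrarily weighted ones. The toy data and all weight vectors here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spatially coherent" data: two adjacent image patches share a
# common underlying property (a stand-in for depth/disparity), each
# corrupted by independent noise.
n = 5000
common = rng.normal(size=n)  # shared signal across the two patches
patch_a = np.outer(common, [1.0, 0.5]) + 0.3 * rng.normal(size=(n, 2))
patch_b = np.outer(common, [0.8, -0.4]) + 0.3 * rng.normal(size=(n, 2))

def imax_objective(a, b):
    """Information measure for two scalar unit outputs under Gaussian
    assumptions: 0.5 * log(Var(a+b) / Var(a-b)). It is large when the two
    units agree on the common signal but carry independent noise."""
    return 0.5 * np.log(np.var(a + b) / np.var(a - b))

# Units "tuned" to the shared signal: weights aligned with the mixing vectors.
a_tuned = patch_a @ np.array([1.0, 0.5])
b_tuned = patch_b @ np.array([0.8, -0.4])

# Untuned units with arbitrary weights.
a_rand = patch_a @ np.array([0.5, -1.0])
b_rand = patch_b @ np.array([-0.3, 0.9])

i_tuned = imax_objective(a_tuned, b_tuned)
i_rand = imax_objective(a_rand, b_rand)
print("tuned:", i_tuned, "untuned:", i_rand)
```

In a learning network, this objective would be maximized by gradient ascent on each unit's weights, driving adjacent units to extract the common (spatially coherent) property rather than their independent noise.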

Paper Details

Date Published: 1 October 1991
PDF: 9 pages
Proc. SPIE 1569, Stochastic and Neural Methods in Signal Processing, Image Processing, and Computer Vision, (1 October 1991); doi: 10.1117/12.48380
Author Affiliations:
Suzanna Becker, Univ. of Toronto (Canada)
Geoffrey E. Hinton, Univ. of Toronto (Canada)

Published in SPIE Proceedings Vol. 1569:
Stochastic and Neural Methods in Signal Processing, Image Processing, and Computer Vision
Su-Shing Chen, Editor(s)

© SPIE