
Proceedings Paper

Translation invariance in a network of oscillatory units
Author(s): A. Ravishankar Rao; Guillermo A. Cecchi; Charles C. Peck; James R. Kozloski

Paper Abstract

One of the important features of the human visual system is its ability to recognize objects in a scale- and translation-invariant manner. However, achieving this behavior with biologically realistic networks is a challenge. The synchronization of neuronal firing patterns has been suggested as a possible solution to the binding problem (where a biological mechanism is sought to explain how features representing an object can be scattered across a network and yet be unified). This observation has led to neurons being modeled as oscillatory dynamical units. Under the right conditions, a network of such dynamical units can exhibit synchronized oscillations. These network models have been applied to signal deconvolution and blind source separation problems. However, the use of the same networks to achieve properties the visual system exhibits, such as scale and translation invariance, has not been fully explored. Some approaches investigated in the literature (Wallis, 1996) use non-oscillatory elements arranged in a hierarchy of layers. The presented objects are allowed to move, and the network uses a trace learning rule, in which a time-averaged output value is used to perform Hebbian learning with respect to the input value. This is a modification of the standard Hebbian learning rule, which typically uses instantaneous values of the input and output. In this paper we present a network of oscillatory amplitude-phase units connected in two layers, with feedforward, feedback and lateral connections, that can exhibit synchronized oscillations. We have previously shown that such a network can segment the components of each input object that most contribute to its classification. Learning is unsupervised and based on a Hebbian update, and the architecture is very simple.
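The trace learning rule mentioned above (Wallis, 1996) can be sketched as follows. This is a minimal illustrative implementation, not the code used in the paper; the decay rate `eta` and learning rate `alpha` are assumed parameter names.

```python
import numpy as np

def trace_hebbian_update(w, x, y, y_trace, eta=0.2, alpha=0.01):
    """One step of a trace learning rule: the Hebbian update uses a
    time-averaged (traced) output instead of the instantaneous one, so
    inputs seen in temporal succession become bound to the same output
    unit. Illustrative sketch only."""
    y_trace = (1.0 - eta) * y_trace + eta * y   # running average of the output
    w = w + alpha * np.outer(y_trace, x)        # Hebbian update with the trace
    return w, y_trace

# The standard Hebbian rule, by contrast, uses y directly:
#   w += alpha * np.outer(y, x)
```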
We extend this network to address the problem of translation invariance. We show that, by adopting a specific treatment of the phase values in the output layer, the network exhibits a translation-invariant object representation. The training scheme is as follows. The network is presented with an input object, which then moves. During the motion, the amplitude and phase of the upper-layer units are not reset; they continue from their values prior to the object appearing in its new position. Only the input layer is updated instantaneously to reflect the moving object. The network then categorizes the translated objects with the same label as the stationary object, establishing an invariant categorization with respect to translation. This is a promising result, as it uses the same framework of oscillatory units that achieves synchrony, and introduces motion to achieve translation invariance.
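The training scheme described above — input layer reset instantly at each object position, upper-layer state carried over — can be sketched as follows. The oscillator dynamics here are a generic Stuart-Landau-style amplitude-phase model chosen for illustration; the paper's actual unit equations, connection structure, and parameters are not given in the abstract, so everything below is an assumption.

```python
import numpy as np

def step_network(z_in, z_out, W_ff, omega=1.0, dt=0.01):
    """One Euler step of a minimal two-layer amplitude-phase network
    (illustrative, not the authors' equations). Each unit is a complex
    state z = a * exp(i*theta); the |z|-dependent term keeps the
    amplitude bounded while feedforward input drives the phase."""
    drive = W_ff @ z_in                                   # feedforward coupling only
    dz = z_out * (1.0 - np.abs(z_out) ** 2) + 1j * omega * z_out + drive
    return z_out + dt * dz

def present_moving_object(frames, W_ff, steps_per_frame=200):
    """Sketch of the paper's training scheme: as the object moves, only
    the input layer is reset to each new frame; the output layer's
    amplitude and phase carry over from the previous position."""
    z_out = np.full(W_ff.shape[0], 0.1 + 0.0j)            # initial output state
    for frame in frames:                                   # each frame = new position
        z_in = frame.astype(complex)                       # input reset instantaneously
        for _ in range(steps_per_frame):                   # output state NOT reset
            z_out = step_network(z_in, z_out, W_ff)
    return z_out
```

Because `z_out` is never reinitialized between frames, the translated views relax toward the same output-layer attractor, which is the mechanism the abstract credits for the invariant labeling.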

Paper Details

Date Published: 16 February 2006
PDF: 9 pages
Proc. SPIE 6064, Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning, 60641I (16 February 2006); doi: 10.1117/12.648229
Author Affiliations:
A. Ravishankar Rao, IBM Thomas J. Watson Research Ctr. (United States)
Guillermo A. Cecchi, IBM Thomas J. Watson Research Ctr. (United States)
Charles C. Peck, IBM Thomas J. Watson Research Ctr. (United States)
James R. Kozloski, IBM Thomas J. Watson Research Ctr. (United States)

Published in SPIE Proceedings Vol. 6064:
Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning
Nasser M. Nasrabadi; Edward R. Dougherty; Jaakko T. Astola; Syed A. Rizvi; Karen O. Egiazarian, Editor(s)

© SPIE