
Proceedings Paper
Connectionist model for object recognition
Paper Abstract
An application of neural networks is the recognition of objects under translation, rotation, and scale change. Most existing networks for invariant object recognition require a huge number of connections and/or processing units. In this paper, we propose a new connectionist model of reasonable network size for invariant recognition of objects in binary images. The network consists of five stages. The first stage shifts the object so that its centroid coincides with the center of the image plane. The second stage is a variation of the polar-coordinate transformation, used to obtain two N-dimensional representations of the input object. In this stage, the θ axis is represented by the positions of the output units; therefore, any rotation of the original object becomes a cyclic shift of the output values of this stage. The third stage is a variation of the rapid transform, which provides representations invariant to cyclic shifts of its input. The next stage normalizes the outputs of the rapid transform to obtain scale invariance. The final stage is a nearest-neighbor classifier. We tested the performance of the network on character recognition, and good results were obtained with only one training pattern per class.
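As a concrete picture of the pipeline the abstract outlines, the sketch below (Python with NumPy, not taken from the paper; the angular sampling scheme, the choice of N, the norm-based rescaling, and all function names are assumptions made here for illustration) chains the five stages: a centroid shift, a polar-style angular signature whose θ axis turns rotation into a cyclic shift, the rapid transform for cyclic-shift invariance, a normalization step for scale invariance, and a nearest-neighbor match against one stored prototype per class.

```python
# Hedged sketch of the five-stage pipeline described in the abstract.
# Not the authors' code: the polar sampling scheme, N, and all names are assumptions.
import numpy as np

def centroid_shift(img):
    """Stage 1: translate a binary image so its centroid sits at the image center."""
    rows, cols = np.nonzero(img)
    dr = img.shape[0] // 2 - int(round(rows.mean()))
    dc = img.shape[1] // 2 - int(round(cols.mean()))
    return np.roll(np.roll(img, dr, axis=0), dc, axis=1)

def polar_signature(img, n_angles=64):
    """Stage 2 (one simple variant): count object pixels in each of n_angles
    angular sectors around the image center.  A rotation of the object then
    becomes a cyclic shift of this length-N vector."""
    r0, c0 = img.shape[0] / 2.0, img.shape[1] / 2.0
    rows, cols = np.nonzero(img)
    theta = np.arctan2(rows - r0, cols - c0)              # angle of each object pixel
    bins = ((theta + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    return np.bincount(bins, minlength=n_angles).astype(float)

def rapid_transform(x):
    """Stage 3: rapid transform (Reitboeck-Brody form).  Same butterfly structure
    as the fast Walsh-Hadamard transform, but the difference branch takes an
    absolute value, which makes the output invariant to cyclic shifts of the input."""
    y = np.asarray(x, dtype=float).copy()
    n = len(y)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = n // 2
    while h >= 1:
        for start in range(0, n, 2 * h):
            for j in range(start, start + h):
                a, b = y[j], y[j + h]
                y[j], y[j + h] = a + b, abs(a - b)
        h //= 2
    return y

def feature_vector(img, n_angles=64):
    """Stages 1-4: centroid shift, polar signature, rapid transform, normalization."""
    sig = polar_signature(centroid_shift(img), n_angles)
    rt = rapid_transform(sig)
    return rt / (np.linalg.norm(rt) + 1e-12)              # stage 4: scale invariance

def classify(img, prototypes):
    """Stage 5: nearest-neighbor classifier with one stored prototype per class.
    `prototypes` maps class label -> feature vector of its single training pattern."""
    f = feature_vector(img)
    return min(prototypes, key=lambda label: np.linalg.norm(f - prototypes[label]))
```

The rapid_transform step above follows the usual textbook description of the Reitboeck-Brody transform (Walsh-Hadamard butterflies with the difference replaced by its absolute value); the "variation" used in the paper, like its polar-coordinate stage, may differ in detail.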
Paper Details
Date Published: 16 September 1992
PDF: 8 pages
Proc. SPIE 1709, Applications of Artificial Neural Networks III, (16 September 1992); doi: 10.1117/12.139998
Published in SPIE Proceedings Vol. 1709:
Applications of Artificial Neural Networks III
Steven K. Rogers, Editor(s)
Author Affiliations:
Shingchern D. You, Univ. of California/Davis (United States)
Gary E. Ford, Univ. of California/Davis (United States)
© SPIE.
