
Proceedings Paper

Uniformly sparse neural networks
Author(s): Siamack Haghighi

Paper Abstract

Application of neural networks to problems with a large number of sensory inputs is severely limited when the processing elements (PEs) need to be fully connected. This paper presents a new network model in which a trade-off can be made between the number of connections to a node and the number of processing layers. This trade-off is an important issue in the VLSI implementation of neural networks. The performance and capability of a hierarchical pyramidal network architecture of limited fan-in PE layers are analyzed. Analysis of this architecture requires the development of a new learning rule, since each PE has access to only limited information about the entire network input. A spatially local unsupervised training rule is developed in which each PE optimizes the fraction of its output variance contributed by input correlations, so that PEs behave as adaptive local correlation detectors. It is also shown that the output of a PE optimally represents the mutual information among the inputs to that PE. Applications of the developed model to image compression and motion detection are presented.
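The abstract does not spell out the paper's exact update equations, so the following is only an illustrative sketch of the general idea: a pyramid of limited-fan-in linear PEs, each trained with a spatially local unsupervised rule. Oja's rule is used here as a stand-in local, correlation-seeking update; the function names (`train_layer`, `pyramid_layer`), the disjoint-block input partition, and all parameter values are hypothetical choices for this sketch, not the author's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pyramid_layer(n_inputs, fan_in):
    """Number of PEs when each PE sees a disjoint block of `fan_in` inputs."""
    assert n_inputs % fan_in == 0
    return n_inputs // fan_in

def train_layer(X, fan_in, lr=0.01, epochs=20):
    """Train one limited-fan-in layer with Oja's rule (a stand-in local rule).

    X: (n_samples, n_inputs). Each PE sees only its own block of `fan_in`
    inputs and adapts using purely local information, acting as a local
    correlation detector for that block.
    Returns (weights, layer outputs); outputs feed the next pyramid layer.
    """
    n_samples, n_inputs = X.shape
    n_pes = pyramid_layer(n_inputs, fan_in)
    W = rng.normal(scale=0.1, size=(n_pes, fan_in))
    blocks = X.reshape(n_samples, n_pes, fan_in)
    for _ in range(epochs):
        for x in blocks:                       # x: (n_pes, fan_in), one sample
            y = np.sum(W * x, axis=1)          # one linear output per PE
            # Oja update: Hebbian term minus a decay term that keeps ||w||
            # bounded, so each PE converges toward the leading eigenvector
            # of its block's covariance (its dominant input correlation).
            W += lr * (y[:, None] * x - (y ** 2)[:, None] * W)
    Y = np.einsum('spf,pf->sp', blocks, W)     # final layer outputs
    return W, Y

# Two-layer pyramid: 16 inputs -> 4 PEs -> 1 PE (fan-in of 4 per PE).
X = rng.normal(size=(500, 16))
X[:, ::2] += X[:, 1::2]          # inject pairwise correlations for PEs to find
W1, H1 = train_layer(X, fan_in=4)
W2, H2 = train_layer(H1, fan_in=4)
print(W1.shape, H1.shape, H2.shape)   # (4, 4) (500, 4) (500, 1)
```

Stacking such layers narrows the network by a factor of the fan-in at each level, which is the pyramidal trade-off between connections per node and number of processing layers described above.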

Paper Details

Date Published: 1 July 1992
PDF: 10 pages
Proc. SPIE 1710, Science of Artificial Neural Networks, (1 July 1992); doi: 10.1117/12.140147
Author Affiliations:
Siamack Haghighi, Intel Corp. (United States)


Published in SPIE Proceedings Vol. 1710:
Science of Artificial Neural Networks
Dennis W. Ruck, Editor

© SPIE.