Proceedings Paper

Capacity of feedforward networks with shared weights
Author(s): Martin A. Kraaijveld; Robert P. W. Duin

Paper Abstract

It is well known in pattern recognition that the number of free parameters of a classification function should not be too large, since the parameters have to be estimated from a finite learning set. For multi-layer feedforward network classifiers, this implies that the number of weights and units should be limited. However, a fundamentally different approach to decreasing the number of free parameters in such networks, suggested by Rumelhart and applied by le Cun, is to share the same weights among multiple units. The technique was originally motivated by the translation invariance it provides. In this paper, we discuss how this weight-sharing technique influences the capacity, or Vapnik-Chervonenkis dimension, of the network. First, an upper bound is derived for the number of dichotomies that can be induced by a layer of units with shared weights. We then apply this result to bound the capacity of a simple class of weight-sharing networks. The results show that the capacity of a network with shared weights is still linear in the number of free parameters. A further remarkable outcome is that either weight sharing is a very effective way of decreasing the capacity of a network, or the existing bounds for the capacity of multi-layer feedforward networks considerably overestimate it.
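To make the weight-sharing idea concrete, here is a minimal sketch (illustrative only, not code from the paper; the sizes n_in, n_units, and kernel are hypothetical). It counts the free parameters of a fully connected threshold layer against those of a layer in which all units share a single receptive-field kernel, and shows that the shared-weight layer amounts to a sliding-window application of that kernel:

```python
import numpy as np

n_in, n_units = 64, 56   # hypothetical input size and number of units
kernel = 9               # receptive-field size shared by all units

# Fully connected layer: every unit has its own weight vector and bias.
params_full = n_units * (n_in + 1)

# Weight-sharing layer: one kernel and one bias are reused by all units,
# each unit applying them to a different window of the input.
params_shared = kernel + 1

print(params_full, params_shared)  # 3640 vs. 10 free parameters

# The shared-weight layer is a sliding-window (convolution-like)
# threshold operation, which is what yields translation invariance:
# a shifted input pattern excites a correspondingly shifted unit.
x = np.random.randn(n_in)
w = np.random.randn(kernel)
b = 0.0
outputs = np.array([np.heaviside(w @ x[i:i + kernel] + b, 0.0)
                    for i in range(n_units)])  # n_units = n_in - kernel + 1
```

If the abstract's conclusion holds, the capacity of such a layer scales with params_shared rather than params_full, which is what makes weight sharing attractive when the learning set is finite.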

Paper Details

Date Published: 1 July 1992
PDF: 9 pages
Proc. SPIE 1710, Science of Artificial Neural Networks, (1 July 1992); doi: 10.1117/12.140107
Author Affiliations:
Martin A. Kraaijveld, Delft Univ. of Technology (Netherlands)
Robert P. W. Duin, Delft Univ. of Technology (Netherlands)


Published in SPIE Proceedings Vol. 1710:
Science of Artificial Neural Networks
Dennis W. Ruck, Editor
