
Proceedings Paper

Improving learning speed in multilayer perceptrons through principal component analysis
Author(s): Francesco Masulli; Massimo Penna

Paper Abstract

This paper describes an application of Principal Component Analysis (PCA) to speeding up the learning of a Multi-Layer Perceptron (MLP). A training algorithm, called the Incremental Input Dimensionality (IID) method, is presented. It consists of a sequence of training steps, in each of which the dimension of the principal-component subspace is increased. In each training step, a number of training epochs (presentations of the training set) of the Back-Propagation algorithm are performed in order to reduce the mean square error on the test set. In this way, the last training step is performed on the subspace corresponding to the assigned reconstruction error. The performance of the MLP using IID on handwritten digit classification is reported. For our database, choosing a reconstruction error rate of 5% in the IID algorithm implies a maximum principal-component subspace dimension of 37. In the experiments reported in this paper, Back-Propagation using IID turned out to be faster than standard Back-Propagation, with a speed-up of about 73%. Moreover, since the IID method concerns only data representation, it can be combined with other speed-up techniques for MLP learning and can be used with other classifiers.
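The training schedule the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy data, the hidden-layer size, the learning rate, the number of epochs per step, and the choice of adding two components per step are all assumptions made for the example; only the overall structure (project onto a growing principal-component subspace, run a few back-propagation epochs per step, stop when the reconstruction error falls below the assigned threshold, here 5%) follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the digit data: 100 samples, 20 features, binary labels.
X = rng.normal(size=(100, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# PCA via SVD of the centred data; components are ordered by variance.
Xc = X - X.mean(axis=0)
_, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var = S ** 2

def recon_err(k):
    # Fraction of total variance NOT captured by the first k components.
    return 1.0 - var[:k].sum() / var.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hidden, lr, target = 8, 0.5, 0.05      # illustrative choices
W1 = np.zeros((0, hidden))             # input weights grow with k
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, 1))
b2 = np.zeros(1)

k = 0
while recon_err(k) > target:           # IID outer loop
    k += 2                             # enlarge the PC subspace
    grow = rng.normal(scale=0.1, size=(k - W1.shape[0], hidden))
    W1 = np.vstack([W1, grow])         # weights for the newly added inputs
    Z = Xc @ Vt[:k].T                  # project onto the first k components
    for _ in range(50):                # a few back-propagation epochs
        H = sigmoid(Z @ W1 + b1)
        O = sigmoid(H @ W2 + b2)
        dO = (O - y) * O * (1 - O)     # MSE gradient at the output
        dH = (dO @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dO / len(X)
        b2 -= lr * dO.mean(axis=0)
        W1 -= lr * Z.T @ dH / len(X)
        b1 -= lr * dH.mean(axis=0)

print("final subspace dimension:", k)
```

Early steps are cheap because the input layer (and hence each back-propagation pass) is small; only the final steps pay the full cost of the target subspace dimension, which is where the reported speed-up over standard Back-Propagation comes from.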

Paper Details

Date Published: 22 March 1996
PDF: 11 pages
Proc. SPIE 2760, Applications and Science of Artificial Neural Networks II, (22 March 1996); doi: 10.1117/12.235905
Author Affiliations:
Francesco Masulli, Univ. di Genova (Italy)
Massimo Penna, Univ. di Genova (Italy)


Published in SPIE Proceedings Vol. 2760:
Applications and Science of Artificial Neural Networks II
Steven K. Rogers; Dennis W. Ruck, Editor(s)

© SPIE