
Proceedings Paper

Principal component training of multilayer perceptron neural networks
Author(s): Gwong Chain Sun; Darrel L. Chenoweth

Paper Abstract

This paper addresses the problem of training a multilayer perceptron neural network for use in statistical pattern recognition applications. In particular, it suggests a training method that significantly reduces the number of iterations usually required by the back propagation learning algorithm. The use of principal component analysis is proposed, and an example is given that demonstrates significant improvements in convergence speed, as well as a reduction in the number of hidden layer neurons needed, while maintaining accuracy comparable to that of a conventional perceptron network trained using back propagation. The accuracy obtained by the principal component trained network is also compared to that of a Bayes classifier, used as a reference for evaluating accuracies. In addition, a cursory examination of the network performance with uniformly distributed feature classes is included. This work is still of a preliminary nature, but the initial examples we have considered suggest the method has promise for statistical classification applications.
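The full text is not reproduced here, so the following is only a minimal sketch of the general idea the abstract describes: use principal component analysis to reduce the input dimensionality before training a perceptron on a statistical pattern recognition task (here, two synthetic Gaussian feature classes). The data, the number of components, and the single-layer logistic unit are all illustrative assumptions, not the paper's actual method or experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic Gaussian feature classes (illustrative, not from the paper).
n = 200
class0 = rng.normal(loc=-1.0, scale=1.0, size=(n, 10))
class1 = rng.normal(loc=+1.0, scale=1.0, size=(n, 10))
X = np.vstack([class0, class1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Principal component analysis: project onto the top-k eigenvectors of the
# feature covariance matrix, shrinking the input the network must learn from.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
k = 2  # number of retained components (a choice made for this sketch)
components = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
Z = Xc @ components  # reduced features

# Train a single logistic unit on the reduced features with plain gradient
# descent; fewer input dimensions means fewer weights to fit.
w = np.zeros(k)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
    w -= lr * (Z.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(Z @ w + b))) >= 0.5).astype(float)
accuracy = float(np.mean(pred == y))
print(f"accuracy on reduced features: {accuracy:.2f}")
```

Because the class means differ along a direction that dominates the pooled covariance, the first principal component captures most of the discriminative variance, so even this two-input classifier separates the classes well.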

Paper Details

Date Published: 1 July 1992
PDF: 6 pages
Proc. SPIE 1710, Science of Artificial Neural Networks, (1 July 1992); doi: 10.1117/12.140104
Author Affiliations:
Gwong Chain Sun, Univ. of Louisville (United States)
Darrel L. Chenoweth, Univ. of Louisville (United States)

Published in SPIE Proceedings Vol. 1710:
Science of Artificial Neural Networks
Dennis W. Ruck, Editor(s)

© SPIE.