Proceedings Paper

Multiple optimal learning factors for feed-forward networks
Author(s): Sanjeev S. Malalur; Michael T. Manry

Paper Abstract

A batch training algorithm for feed-forward networks is proposed which uses Newton's method to estimate a vector of optimal learning factors, one for each hidden unit. Backpropagation, using this learning factor vector, is used to modify the hidden units' input weights. Linear equations are then solved for the network's output weights. Elements of the new method's Gauss-Newton Hessian matrix are shown to be weighted sums of elements from the total network's Hessian. In several examples, the new method performs better than backpropagation and conjugate gradient, with similar numbers of required multiplies. The method performs as well as or better than Levenberg-Marquardt, with several orders of magnitude fewer multiplies due to the small size of its Hessian.
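To make the abstract's description concrete, the following is a minimal sketch of one training epoch of the multiple-optimal-learning-factor idea, assuming a single hidden layer of sigmoid units and a squared-error objective. It is not the paper's implementation: all names (molf_epoch, W_in, W_out, and so on) are illustrative, and the paper's derivation of the Hessian weighting may differ in detail.

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def molf_epoch(X, T, W_in):
    """One MOLF-style training epoch (illustrative sketch).

    X    : (N_p, N_in + 1) input patterns, augmented with a bias column of ones
    T    : (N_p, N_out) desired outputs
    W_in : (N_h, N_in + 1) hidden-unit input weights, updated in place
    Returns the re-solved output weight matrix W_out, shape (N_h + 1, N_out).
    """
    N_p = X.shape[0]
    net = X @ W_in.T                         # hidden-unit net functions, (N_p, N_h)
    h = sigmoid(net)                         # hidden activations
    Ha = np.hstack([h, np.ones((N_p, 1))])   # augment with a bias unit

    # Solve linear equations for the output weights (linear least squares).
    W_out, *_ = np.linalg.lstsq(Ha, T, rcond=None)
    Y = Ha @ W_out                           # network outputs, (N_p, N_out)

    # Backpropagation: descent directions G for the input weights.
    delta = (T - Y) @ W_out[:-1, :].T * h * (1.0 - h)   # (N_p, N_h)
    G = delta.T @ X                                     # (N_h, N_in + 1)

    # Newton step for the learning-factor vector z, one factor per hidden unit.
    # With the trial update W_in(z) = W_in + diag(z) @ G, the outputs depend on
    # z_k only through hidden unit k, so the Gauss-Newton Hessian of E(z) is
    # only N_h x N_h.
    dnet = X @ G.T                           # d(net_k)/dz_k per pattern, (N_p, N_h)
    dh = h * (1.0 - h) * dnet                # d(h_k)/dz_k per pattern
    g_z = np.einsum('pm,pk,km->k', T - Y, dh, W_out[:-1, :])    # -(1/2) dE/dz
    H_z = np.einsum('pj,jm,pk,km->jk', dh, W_out[:-1, :], dh, W_out[:-1, :])
    z = np.linalg.solve(H_z + 1e-8 * np.eye(H_z.shape[0]), g_z)  # small ridge for safety

    W_in += z[:, None] * G                   # scale each unit's direction by its own factor
    return W_out

The cost argument in the abstract follows from the sizes involved: the learning-factor Hessian H_z is only N_h x N_h, so the Newton solve costs on the order of N_h cubed multiplies, whereas a Levenberg-Marquardt step factors a Hessian over all N_w network weights, which is typically orders of magnitude larger.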

Paper Details

Date Published: 12 April 2010
PDF: 12 pages
Proc. SPIE 7703, Independent Component Analyses, Wavelets, Neural Networks, Biosystems, and Nanoengineering VIII, 77030F (12 April 2010); doi: 10.1117/12.850873
Author Affiliations:
Sanjeev S. Malalur, The Univ. of Texas at Arlington (United States)
Michael T. Manry, The Univ. of Texas at Arlington (United States)


Published in SPIE Proceedings Vol. 7703:
Independent Component Analyses, Wavelets, Neural Networks, Biosystems, and Nanoengineering VIII
Harold H. Szu; F. Jack Agee, Editor(s)

© SPIE.