
Journal of Electronic Imaging

Neurodynamics of learning and network performance
Author(s): Charles L. Wilson; James L. Blue; Omid M. Omidvar

Paper Abstract

A simple dynamic model of a neural network is presented. Using this model, we improve the performance of a three-layer multilayer perceptron (MLP). The dynamic model of the MLP is used to make four fundamental changes in the network optimization strategy: neuron activation functions are used that reduce the probability of singular Jacobians; successive regularization is used to constrain the volume of the weight space being minimized; Boltzmann pruning is used to constrain the dimension of the weight space; and prior class probabilities are used to normalize all error calculations, so that statistically significant samples of rare but important classes can be included without distorting the error surface. All four changes are made in the inner loop of a conjugate gradient optimization iteration and are intended to simplify the training dynamics of the optimization. On handprinted digit and fingerprint classification problems, these modifications improve error-reject performance by factors of 2 to 4 and reduce network size by 40 to 60%.
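The fourth change, normalizing error calculations by prior class probabilities, can be illustrated with a minimal sketch. The abstract does not give the exact formula, so the weighting scheme below (inverse-prior sample weights, renormalized to sum to one) is an assumption about the general idea, not the paper's method; the function name and signature are hypothetical.

```python
import numpy as np

def prior_normalized_mse(errors, labels, class_priors):
    """Mean squared error with each sample weighted by the inverse of its
    class prior, so rare but important classes contribute meaningfully
    instead of being swamped by common classes.

    This is an illustrative sketch of prior-probability normalization,
    not the exact scheme used in the paper.
    """
    # Up-weight samples from rare classes: weight ~ 1 / P(class).
    weights = np.array([1.0 / class_priors[y] for y in labels])
    # Renormalize so the weights sum to one (keeps the error scale stable).
    weights /= weights.sum()
    return float(np.sum(weights * np.asarray(errors) ** 2))
```

Compared with a plain mean over samples, this weighting prevents a large, common class from dominating the error surface, which is the stated motivation in the abstract.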

Paper Details

Date Published: 1 July 1997
PDF: 7 pages
J. Electron. Imag. 6(3) doi: 10.1117/12.272656
Published in: Journal of Electronic Imaging Volume 6, Issue 3
Author Affiliations
Charles L. Wilson, National Institute of Standards and Technology (United States)
James L. Blue, National Institute of Standards and Technology (United States)
Omid M. Omidvar, Univ. of the District of Columbia (United States)

© SPIE.