
Proceedings Paper

Empirical estimation of generalization ability of neural networks
Author(s): Dilip Sarkar

Paper Abstract

This work presents a novel method for empirically estimating the generalization ability of neural networks. Given a set of training (and testing) data, one can choose a network architecture (number of layers, number of neurons in each layer, etc.), an initialization method, and a learning algorithm to obtain a network. One measure of performance of the trained network is how closely its actual output approximates the desired output for an input it has never seen before. Current methods provide a single 'number' that estimates the generalization ability of the network. However, this number offers no further insight into the contributing factors when generalization ability is poor. The proposed method instead uses several parameters to define generalization ability: a set of values of these parameters provides an estimate of generalization ability, and the value of each parameter indicates the contribution of factors such as the network architecture, the initialization method, and the training data set. Furthermore, a method has been developed to verify the validity of the estimated parameter values.
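The abstract does not specify the paper's parameters, but the conventional single-number estimate it contrasts with is simply the error measured on data the network never saw during training. The following is a minimal, hypothetical sketch of that baseline (toy task, one-hidden-layer network, plain gradient descent; none of these choices come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) from noisy samples
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

# Held-out split: the test portion is never seen during training
X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# One-hidden-layer tanh network (illustrative architecture choice)
H = 16
W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(2000):
    h = np.tanh(X_train @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y_train
    # Backpropagate mean-squared-error gradients
    gW2 = h.T @ err / len(X_train); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X_train.T @ dh / len(X_train); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# The conventional single 'number': mean squared error on unseen inputs
test_pred = np.tanh(X_test @ W1 + b1) @ W2 + b2
gen_estimate = float(np.mean((test_pred - y_test) ** 2))
print(gen_estimate)
```

A low held-out error suggests good generalization, but, as the abstract notes, this one number cannot tell us whether a poor result is due to the architecture, the initialization, or the training set; the paper's contribution is to decompose the estimate into such factors.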

Paper Details

Date Published: 22 March 1996
PDF: 7 pages
Proc. SPIE 2760, Applications and Science of Artificial Neural Networks II, (22 March 1996); doi: 10.1117/12.235979
Author Affiliations:
Dilip Sarkar, Univ. of Miami (United States)

Published in SPIE Proceedings Vol. 2760:
Applications and Science of Artificial Neural Networks II
Steven K. Rogers; Dennis W. Ruck, Editor(s)

© SPIE.