
Proceedings Paper

Minimum number of hidden neurons does not necessarily provide the best generalization
Author(s): Jason M. Kinser

Paper Abstract

The quality of a feedforward neural network that allows it to associate data not used in training is called generalization. A common method of creating the desired network is for the user to select the network architecture and allow a training algorithm to evolve the synaptic weights between the neurons. A popular belief is that the network with the fewest hidden neurons that correctly learns a sufficient training set is the network with the best generalization. This paper contradicts that belief. Optimizing generalization requires that the network not assume information that does not exist in the training data. Unfortunately, a network with the minimum number of hidden neurons may be forced to make such assumptions. The network then skews the surface that maps the input space to the output space in order to accommodate the minimum architecture, which sacrifices generalization.
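
The claim is straightforward to probe empirically. The sketch below is not from the paper; it assumes scikit-learn and a synthetic one-dimensional regression task, and simply trains feedforward networks of increasing hidden-layer width, then compares training and held-out error. Under this kind of setup the narrowest network that fits the training data is not necessarily the one with the lowest held-out error.

    # Hypothetical illustration: compare held-out error of feedforward networks
    # with different numbers of hidden neurons (not the paper's experiment).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)

    # Synthetic 1-D mapping with mild noise: y = sin(3x) + noise.
    X = rng.uniform(-1.0, 1.0, size=(200, 1))
    y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(200)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0)

    for n_hidden in (2, 4, 8, 16, 32):
        net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="tanh",
                           max_iter=5000, random_state=0)
        net.fit(X_train, y_train)
        train_err = mean_squared_error(y_train, net.predict(X_train))
        test_err = mean_squared_error(y_test, net.predict(X_test))
        print(f"hidden={n_hidden:2d}  train MSE={train_err:.4f}  test MSE={test_err:.4f}")
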

Paper Details

Date Published: 30 March 2000
PDF: 7 pages
Proc. SPIE 4055, Applications and Science of Computational Intelligence III, (30 March 2000); doi: 10.1117/12.380567
Author Affiliations:
Jason M. Kinser, George Mason Univ. (United States)


Published in SPIE Proceedings Vol. 4055:
Applications and Science of Computational Intelligence III
Kevin L. Priddy; Paul E. Keller; David B. Fogel, Editor(s)

© SPIE.