
Proceedings Paper

Collapsing multiple hidden layers in feedforward neural networks to a single hidden layer
Author(s): Jeffrey L. Blue; Lawrence O. Hall

Paper Abstract

Feedforward neural networks are often configured with multiple hidden layers. The relative simplicity of a single hidden layer may allow the use of a wider range of algorithms than is possible with multiple hidden layers. Hardware mapping may also be simplified when all networks are implemented with a single hidden layer. An algorithm is presented that collapses a network to a single hidden layer. The algorithm replaces links between hidden layers with new units whose links bypass the hidden layers. The link weights of these new units are calculated so that each new unit approximates the replaced link's contribution to the network. Trials with a variety of configurations and data sets demonstrate the validity and effectiveness of the approach.
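The abstract's idea of replacing inter-hidden-layer links with units that bypass a hidden layer can be illustrated in the special case of linear activations, where the collapse is exact rather than approximate. The sketch below is not the paper's algorithm (which handles nonlinear units by approximating each replaced link's contribution); the layer sizes and weight names are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small two-hidden-layer network with linear activations.
n_in, n_h1, n_h2, n_out = 4, 5, 3, 2
W1 = rng.normal(size=(n_h1, n_in))    # input -> hidden layer 1
W2 = rng.normal(size=(n_h2, n_h1))    # hidden layer 1 -> hidden layer 2
W3 = rng.normal(size=(n_out, n_h2))   # hidden layer 2 -> output

def deep_forward(x):
    """Original network: two hidden layers."""
    return W3 @ (W2 @ (W1 @ x))

# Collapse: fold W2 into W1, producing a single hidden layer whose
# units connect directly to the inputs, bypassing hidden layer 1.
# With linear units the composition of linear maps is itself linear,
# so the collapsed network is exactly equivalent.
W_hidden = W2 @ W1                    # input -> collapsed hidden layer

def collapsed_forward(x):
    """Collapsed network: single hidden layer of n_h2 units."""
    return W3 @ (W_hidden @ x)

x = rng.normal(size=n_in)
assert np.allclose(deep_forward(x), collapsed_forward(x))
```

With sigmoid or other nonlinear units the composition no longer factors exactly, which is why the paper's new units must be constructed so that each one only approximates the contribution of the link it replaces.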

Paper Details

Date Published: 22 March 1996
PDF: 9 pages
Proc. SPIE 2760, Applications and Science of Artificial Neural Networks II, (22 March 1996); doi: 10.1117/12.235964
Author Affiliations:
Jeffrey L. Blue, Univ. of South Florida (United States)
Lawrence O. Hall, Univ. of South Florida (United States)


Published in SPIE Proceedings Vol. 2760:
Applications and Science of Artificial Neural Networks II
Steven K. Rogers; Dennis W. Ruck, Editor(s)

© SPIE.