
Proceedings Paper

Design techniques for the control of errors in backpropagation neural networks
Author(s): Daniel C. St. Clair; Gerald E. Peterson; Stephen Aylward; William E. Bond

Paper Abstract

A significant problem in the design and construction of an artificial neural network for function approximation is limiting the magnitude and variance of errors when the network is used in the field. Network errors can occur when the training data does not faithfully represent the required function due to noise or low sampling rates, when the network's flexibility does not match the variability of the data, or when the input data to the resultant network is noisy. This paper reports on several experiments whose purpose was to rank the relative significance of these error sources and thereby find neural network design principles for limiting the magnitude and variance of network errors.
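The abstract's first error source, training data corrupted by noise, can be illustrated with a toy experiment. The sketch below is not from the paper: it is a minimal, assumed setup in which a small one-hidden-layer backpropagation network approximates sin(x), trained once on clean targets and once on noise-corrupted targets, and the mean and variance of its errors on noise-free "field" inputs are then compared. All names (`make_data`, `TinyMLP`, `error_stats`) and parameter choices are hypothetical.

```python
import math
import random

def make_data(n, noise_sd, seed):
    # Sample inputs on [-pi, pi]; targets are sin(x) plus optional
    # Gaussian noise (noise_sd = 0.0 gives clean data).
    rng = random.Random(seed)
    xs = [rng.uniform(-math.pi, math.pi) for _ in range(n)]
    ys = [math.sin(x) + rng.gauss(0.0, noise_sd) for x in xs]
    return xs, ys

class TinyMLP:
    """1 input -> `hidden` tanh units -> 1 linear output, trained by
    plain online backpropagation (stochastic gradient descent)."""
    def __init__(self, hidden=8, seed=0):
        rng = random.Random(seed)
        self.w1 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
        self.b2 = 0.0

    def forward(self, x):
        h = [math.tanh(w * x + b) for w, b in zip(self.w1, self.b1)]
        y = sum(w * hj for w, hj in zip(self.w2, h)) + self.b2
        return h, y

    def train(self, xs, ys, epochs=500, lr=0.02):
        for _ in range(epochs):
            for x, t in zip(xs, ys):
                h, y = self.forward(x)
                e = y - t  # signed output error
                for j in range(len(self.w2)):
                    # hidden delta uses tanh'(a) = 1 - tanh(a)^2
                    g = e * self.w2[j] * (1.0 - h[j] ** 2)
                    self.w2[j] -= lr * e * h[j]
                    self.w1[j] -= lr * g * x
                    self.b1[j] -= lr * g
                self.b2 -= lr * e

def error_stats(net, n=200):
    # "Field" inputs drawn from the same range, with noise-free targets,
    # so the statistics reflect the network's error, not input noise.
    xs, ys = make_data(n, 0.0, seed=99)
    errs = [net.forward(x)[1] - y for x, y in zip(xs, ys)]
    m = sum(errs) / len(errs)
    v = sum((err - m) ** 2 for err in errs) / len(errs)
    return m, v

clean_xs, clean_ys = make_data(40, 0.0, seed=1)
noisy_xs, noisy_ys = make_data(40, 0.3, seed=1)

net_clean = TinyMLP(seed=0)
net_clean.train(clean_xs, clean_ys)
net_noisy = TinyMLP(seed=0)
net_noisy.train(noisy_xs, noisy_ys)

m_c, v_c = error_stats(net_clean)
m_n, v_n = error_stats(net_noisy)
print(f"clean-trained : mean={m_c:+.3f} var={v_c:.3f}")
print(f"noisy-trained : mean={m_n:+.3f} var={v_n:.3f}")
```

With identical initial weights and training inputs, any difference in the reported error statistics is attributable to the target noise alone, which is the kind of controlled comparison the abstract's ranking experiments imply.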

Paper Details

Date Published: 19 August 1993
PDF: 12 pages
Proc. SPIE 1966, Science of Artificial Neural Networks II, (19 August 1993); doi: 10.1117/12.152636
Author Affiliations:
Daniel C. St. Clair, Univ. of Missouri/Rolla (United States)
Gerald E. Peterson, McDonnell Douglas Aerospace (United States)
Stephen Aylward, McDonnell Douglas Aerospace (United States)
William E. Bond, McDonnell Douglas Aerospace (United States)

Published in SPIE Proceedings Vol. 1966:
Science of Artificial Neural Networks II
Dennis W. Ruck, Editor

© SPIE.