
Proceedings Paper

Interdependencies in data preprocessing, training methods, and neural network topology generation
Author(s): Stephan Rudolph; Steffen Brueckner

Paper Abstract

Artificial neural networks are adaptive methods that can be trained to approximate a functional relationship implicitly encoded in training data. The large variety of neural network types (e.g. linear versus non-linear) raises fundamental questions about the appropriateness of data pre-processing techniques and training methodologies, about the resulting neural network topology, and about possible interdependencies among them. The a posteriori interpretation of the numerical results suggests guidelines for the use of neural networks in engineering applications. Data pre-processing techniques are a powerful means of pre-structuring a function-approximation problem before adaptive training. In particular, carefully selected integral transforms may change the nature of the training problem significantly without loss of generality, and they offer an excellent opportunity to incorporate additional knowledge about the underlying process, improving both the training and the interpretation of the results. Several numerical examples from engineering domains illustrate the theoretical arguments in a practical setting.
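The abstract's point about integral transforms can be sketched with a minimal, hypothetical example (not taken from the paper itself): using the discrete Fourier transform as a pre-processing step. Taking the magnitude spectrum yields a representation that is invariant to circular shifts of the input, which can simplify the function-approximation task handed to a neural network when such invariance reflects prior knowledge about the process.

```python
import numpy as np

def preprocess(signal):
    """Map a raw signal to its magnitude spectrum via the DFT.

    The magnitude spectrum is invariant under circular shifts of the
    input, so a network trained on these features need not learn that
    invariance from data.
    """
    return np.abs(np.fft.fft(signal))

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
x_shifted = np.roll(x, 17)  # circularly shifted copy of the same signal

# The raw inputs differ, but the pre-processed features coincide:
assert not np.allclose(x, x_shifted)
assert np.allclose(preprocess(x), preprocess(x_shifted))
```

Which transform is appropriate depends on the application; the shift-invariance shown here is just one instance of how a transform can encode domain knowledge before training begins.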

Paper Details

Date Published: 11 March 2002
PDF: 10 pages
Proc. SPIE 4739, Applications and Science of Computational Intelligence V, (11 March 2002); doi: 10.1117/12.458702
Author Affiliations:
Stephan Rudolph, Univ. Stuttgart (Germany)
Steffen Brueckner, Univ. Stuttgart (Germany)

Published in SPIE Proceedings Vol. 4739:
Applications and Science of Computational Intelligence V
Kevin L. Priddy; Paul E. Keller; Peter J. Angeline, Editor(s)

© SPIE.