
Proceedings Paper

Neural network correspondencies of engineering principles
Author(s): Georg Schneider; Detlef Korte; Stephan Rudolph

Paper Abstract

Applications of neural networks have been widely reported in recent years, but research on reliable guidelines for designing neural networks is still in its infancy. This work provides some ideas on how to find useful predefined network structures for at least certain parts of a neural net. By breaking up, to a certain extent, the so-called black-box character of the neural net, the performance of the networks can be improved, and at the same time the solutions of the nets become more transparent and understandable. Additionally, the ability of the neural nets to generalize from training patterns to unlearned data regions is improved substantially. In this work, two commonly used engineering principles, dimensional analysis and the Laplace transformation, are used to identify suitable topologies for neural networks. The integration of dimensional analysis in the context of feed-forward neural networks is presented. In the second part of this work, the use of the Laplace transformation in neural networks is demonstrated. Even though the application of this technique has so far only been shown for a linear time-invariant process, a future use of the method for nonlinear systems is considered.
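As a rough sketch of the first idea (not the authors' implementation), dimensional analysis in the sense of the Buckingham Pi theorem can be wired in as a fixed preprocessing layer: the raw physical inputs are mapped to dimensionless groups, and only these groups are fed to a feed-forward net. The pipe-flow quantities, the dimension matrix, and the tiny randomly initialized network below are illustrative assumptions.

    # Minimal sketch: dimensional analysis as a fixed preprocessing layer
    # in front of a feed-forward network (illustrative assumptions only).
    import numpy as np

    # Dimension matrix for an assumed pipe-flow example: rows are the base
    # dimensions (M, L, T), columns are the physical inputs
    # (density rho, velocity v, length d, viscosity mu).
    D = np.array([
        [1,  0, 0,  1],   # mass exponent
        [-3, 1, 1, -1],   # length exponent
        [0, -1, 0, -1],   # time exponent
    ], dtype=float)

    def pi_group_exponents(dim_matrix, tol=1e-10):
        """Exponent vectors of the dimensionless Pi-groups = null space of D."""
        _, s, vt = np.linalg.svd(dim_matrix)
        rank = int(np.sum(s > tol))
        return vt[rank:].T                      # shape: (n_vars, n_vars - rank)

    def dimensionless_layer(x, exponents):
        """Map raw physical inputs x (n_samples, n_vars) to Pi-groups."""
        return np.exp(np.log(x) @ exponents)    # products of powers, in log space

    # Tiny fixed-weight feed-forward net operating on the Pi-groups only;
    # in practice the weights would be trained, this just shows the wiring.
    rng = np.random.default_rng(0)
    E = pi_group_exponents(D)                   # here: one Reynolds-like group
    W1, b1 = rng.normal(size=(E.shape[1], 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    x_raw = np.array([[1000.0, 2.0, 0.05, 1e-3]])   # rho, v, d, mu (SI units)
    pi = dimensionless_layer(x_raw, E)
    y = np.tanh(pi @ W1 + b1) @ W2 + b2             # net sees only dimensionless inputs
    print(pi, y)

Because the network input is restricted to the dimensionless groups, any rescaling of the raw variables that leaves the groups unchanged yields the same output, which is one way the black-box character of the net is reduced.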

Paper Details

Date Published: 30 March 2000
PDF: 10 pages
Proc. SPIE 4055, Applications and Science of Computational Intelligence III, (30 March 2000); doi: 10.1117/12.380581
Author Affiliations:
Georg Schneider, Univ. Stuttgart (Germany)
Detlef Korte, Univ. Stuttgart (Germany)
Stephan Rudolph, Univ. Stuttgart (Germany)


Published in SPIE Proceedings Vol. 4055:
Applications and Science of Computational Intelligence III
Kevin L. Priddy; Paul E. Keller; David B. Fogel, Editor(s)

© SPIE.