
Proceedings Paper

Theory of networks for learning
Author(s): Barbara Moore

Paper Abstract

Many neural networks are constructed to learn an input-output mapping from examples. This problem is related to classical approximation techniques, including regularization theory. Regularization is equivalent to a class of three-layer networks which we call regularization networks or Hyper Basis Functions. The strong theoretical foundation of regularization networks provides us with a better understanding of why they work and how best to choose a specific network and parameters for a given problem. Classical regularization theory can be extended in order to improve the quality of learning performed by Hyper Basis Functions. For example, the centers of the basis functions and the norm weights can be optimized. Many Radial Basis Functions often used for function interpolation are provably Hyper Basis Functions.
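The classical interpolation scheme the abstract refers to, with one radial basis function centered at each training example, can be sketched as follows. This is a minimal illustration of the general idea, not the paper's method; the Gaussian basis, the width parameter `sigma`, and the function names are illustrative assumptions.

```python
import numpy as np

def gaussian_rbf(r, sigma=0.2):
    # Gaussian radial basis function of the distance r (width sigma is
    # an assumed hyperparameter, not taken from the paper).
    return np.exp(-(r ** 2) / (2 * sigma ** 2))

def fit_rbf(X, y, sigma=0.2):
    # Classical RBF interpolation: one basis function centered at each
    # training point; solve the linear system G w = y for the weights.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    G = gaussian_rbf(dists, sigma)
    return np.linalg.solve(G, y)

def predict_rbf(X_train, w, X_new, sigma=0.2):
    # Evaluate the network: weighted sum of basis functions at X_new.
    dists = np.linalg.norm(X_new[:, None, :] - X_train[None, :, :], axis=-1)
    return gaussian_rbf(dists, sigma) @ w

# Usage: interpolate a 1-D function from a few samples.
X = np.linspace(0, 1, 8).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
w = fit_rbf(X, y)
y_hat = predict_rbf(X, w, X)  # reproduces the training values exactly
```

In the Hyper Basis Function generalization described in the abstract, the centers need not coincide with the training points and the distance metric (norm weights) is itself adjustable, so both become additional parameters to optimize rather than fixed choices as above.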

Paper Details

Date Published: 1 August 1990
PDF: 9 pages
Proc. SPIE 1294, Applications of Artificial Neural Networks, (1 August 1990); doi: 10.1117/12.21153
Author Affiliations:
Barbara Moore, Massachusetts Institute of Technology (United States)

Published in SPIE Proceedings Vol. 1294:
Applications of Artificial Neural Networks
Steven K. Rogers, Editor(s)

© SPIE