
Proceedings Paper

Optimal robustness in noniterative learning
Author(s): Chia-Lun John Hu

Paper Abstract

If the M given training patterns are not extremely similar, the analog N-vectors representing them are generally separable in N-space. A one-layered binary perceptron containing P neurons (P ≥ log₂M) is then generally sufficient to perform the pattern recognition task. The connection matrix between the (linear) input layer and the neuron layer can be calculated in a noniterative manner. Real-time pattern recognition experiments implementing this theoretical result were reported at this and other national conferences last year. These experiments demonstrated that the noniterative training is very fast (it can be done in real time) and that recognition of untrained patterns is very robust and accurate. The present paper concentrates on the theoretical foundation of this noniteratively trained perceptron. The theory starts from an N-dimensional Euclidean-geometry approach, from which an optimally robust learning scheme is derived. The robustness and speed of this optimal learning scheme are then compared with those of conventional iterative learning schemes.
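To illustrate the general idea of the abstract, the sketch below gives one possible noniterative construction: each of M patterns is assigned a unique P-bit code with P = ⌈log₂M⌉, and a connection matrix is obtained in a single linear-algebra step via a least-squares (pseudoinverse) solve. This is a hypothetical toy illustration, not the paper's exact scheme; the data, the ±1 code convention, and the pseudoinverse solution are all assumptions for demonstration.

```python
import numpy as np

# Hypothetical toy data: M training patterns as analog N-vectors.
M, N = 4, 8
rng = np.random.default_rng(0)
X = rng.normal(size=(M, N))          # M x N matrix of training patterns

# P = ceil(log2 M) binary output neurons suffice to give each of the
# M patterns a distinct P-bit code.
P = int(np.ceil(np.log2(M)))
# Target codes: row m is the P-bit binary code of m, mapped to {-1, +1}.
T = np.array([[1 if (m >> p) & 1 else -1 for p in range(P)]
              for m in range(M)])

# Noniterative "training": solve W @ X.T ≈ T.T in the least-squares
# sense with the Moore-Penrose pseudoinverse -- one closed-form step,
# no iterative weight updates.
W = T.T @ np.linalg.pinv(X.T)        # P x N connection matrix

# Recognition: hard-limit (sign) the neuron outputs to recover codes.
codes = np.sign(W @ X.T).T           # M x P recovered codes
assert np.array_equal(codes, T)
```

Because M < N and the random patterns are in general position here, the least-squares solve reproduces the target codes exactly on the training set; the paper's contribution concerns choosing among such solutions for optimal robustness on untrained patterns.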

Paper Details

Date Published: 6 April 1995
PDF: 6 pages
Proc. SPIE 2492, Applications and Science of Artificial Neural Networks, (6 April 1995); doi: 10.1117/12.205178
Chia-Lun John Hu, Southern Illinois Univ./Carbondale (United States)


Published in SPIE Proceedings Vol. 2492:
Applications and Science of Artificial Neural Networks
Editors: Steven K. Rogers; Dennis W. Ruck

© SPIE.