
Proceedings Paper

Deterministic learning theory and a parallel cascaded one-step learning machine
Author(s): Chia-Lun John Hu

Paper Abstract

For a one-layered, hard-limited perceptron, it is well known that if the given training set is not linearly separable in the state space, the machine cannot learn it no matter what learning method is used. This separability property is generally studied from a geometrical point of view. This paper reports the derivation of an algebraic criterion for the separability of a given mapping set. A one-step learning method is then derived that either finds the required weight matrix in a single non-iterative step, or informs the teacher that the given mapping set is inseparable, and therefore not learnable under any learning rule. Finally, a parallel cascaded two-layered perceptron is derived that may overcome these learning difficulties.
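
The abstract does not reproduce the paper's algebraic criterion or its one-step solver. As an illustration of the underlying separability question only (not the paper's method), the Python sketch below tests whether a labeled mapping set is linearly separable by posing it as a linear-programming feasibility problem; the function name and the margin-of-1 formulation are assumptions made for this example.

    import numpy as np
    from scipy.optimize import linprog

    def is_linearly_separable(X, y):
        # X: (n, d) array of inputs; y: (n,) array of labels in {-1, +1}.
        # Seek (w, b) with y_i * (w . x_i + b) >= 1 for all i; such a pair
        # exists iff the finite set is linearly separable (the margin of 1
        # is a harmless normalization obtained by rescaling w and b).
        n, d = X.shape
        A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])  # -y_i*(x_i, 1)
        b_ub = -np.ones(n)                # encodes y_i*(w . x_i + b) >= 1
        c = np.zeros(d + 1)               # zero objective: pure feasibility
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * (d + 1))  # w, b unrestricted
        if res.success:
            return True, res.x[:d], res.x[d]
        return False, None, None

    # XOR is the classic mapping a one-layered perceptron cannot learn.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    print(is_linearly_separable(X, np.array([-1, -1, -1, 1]))[0])  # AND: True
    print(is_linearly_separable(X, np.array([-1, 1, 1, -1]))[0])   # XOR: False

Like the one-step method described above, this check is non-iterative from the learner's point of view: it either produces a valid weight vector and bias or certifies that no separating hyperplane exists.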

Paper Details

Date Published: 1 July 1992
PDF: 4 pages
Proc. SPIE 1710, Science of Artificial Neural Networks, (1 July 1992); doi: 10.1117/12.140127
Chia-Lun John Hu, Southern Illinois Univ./Carbondale (United States)


Published in SPIE Proceedings Vol. 1710:
Science of Artificial Neural Networks
Dennis W. Ruck, Editor(s)
