
Proceedings Paper

Deterministic learning theory and a parallel cascaded one-step learning machine
Author(s): Chia-Lun John Hu

Paper Abstract

For a one-layered hard-limited perceptron, it is well known that if the given training set is not linearly separable in the state space, the machine cannot learn the mapping no matter what learning method is used. This separability property is generally studied from a geometrical point of view. This paper reports the derivation of an algebraic criterion for the separability of a given mapping set. A one-step learning method is then derived that either finds the required weight matrix in a single non-iterative step or informs the teacher that the given mapping set is inseparable, and hence not learnable under any learning rule. A parallel-cascaded two-layered perceptron is then derived that may overcome these learning difficulties.
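The abstract does not give the paper's actual algebraic criterion, but the flavor of a non-iterative weight solve can be sketched as follows. This minimal example (an assumption, not Hu's method) computes perceptron weights in one step by solving the least-squares normal equations on &plusmn;1 targets, then checks whether the hard-limited outputs reproduce the training set. If they do, the set is certainly linearly separable; note that a failed check does not prove inseparability, unlike the decisive criterion the paper claims.

```python
# Minimal sketch (NOT the paper's criterion): one non-iterative weight
# solve via least squares, followed by a hard-limit verification pass.

def solve(A, b):
    """Solve the square system A w = b by Gaussian elimination with
    partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))
        w[r] = s / M[r][r]
    return w

def one_step_weights(inputs, targets):
    """Append a bias input of 1, then solve the normal equations
    (X^T X) w = X^T t in a single non-iterative step."""
    X = [list(x) + [1.0] for x in inputs]
    n = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    Xtt = [sum(r[i] * t for r, t in zip(X, targets)) for i in range(n)]
    return solve(XtX, Xtt)

def hard_limit(w, x):
    """Hard-limited perceptron output: sign of the weighted sum."""
    s = sum(wi * xi for wi, xi in zip(w, list(x) + [1.0]))
    return 1 if s > 0 else -1

# AND gate with +/-1 targets: linearly separable, so the one-step
# solution reproduces the mapping exactly (weights come out as
# w = [1, 1, -1.5]).
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [-1, -1, -1, 1]
w = one_step_weights(inputs, targets)
separable_here = all(hard_limit(w, x) == t for x, t in zip(inputs, targets))
print(w, separable_here)
```

An inseparable set such as XOR would fail the verification pass here; the paper's two-layered parallel-cascaded architecture is proposed precisely to handle such cases.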

Paper Details

Date Published: 1 July 1992
PDF: 4 pages
Proc. SPIE 1710, Science of Artificial Neural Networks (1 July 1992).
Author Affiliations:
Chia-Lun John Hu, Southern Illinois Univ./Carbondale (United States)

Published in SPIE Proceedings Vol. 1710:
Science of Artificial Neural Networks
Dennis W. Ruck, Editor(s)

© SPIE.