Proceedings Paper

Nonseparable data models for a single-layer perceptron
Author(s): John J. Shynk; Neil J. Bershad

Paper Abstract

This paper describes two nonseparable data models that can be used to study the convergence properties of perceptron learning algorithms. A system identification formulation generates the training signal, with an input that is a zero-mean Gaussian random vector. One model is based on a two-layer perceptron configuration, while the second model has only one layer but with a multiplicative output node. The analysis in this paper focuses on Rosenblatt's training procedure, although the approach can be applied to other learning algorithms. Some examples of the performance surfaces are presented to illustrate possible convergence points of the algorithm for both nonseparable data models.
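The setup the abstract describes (a reference system generating the training signal from a zero-mean Gaussian input, with the perceptron weights adapted by Rosenblatt's rule) can be sketched in a few lines. This is not code from the paper: the teacher below is a simple linear reference system (the separable baseline), whereas the paper's nonseparable models replace it with a two-layer perceptron or a multiplicative output node. All names (`run_demo`, `mu`, `w_ref`) are illustrative.

```python
import random

def sign(x):
    # Hard-limiting activation used by the single-layer perceptron
    return 1.0 if x >= 0.0 else -1.0

def run_demo(n=8, steps=3000, mu=0.05, seed=1):
    """System-identification formulation: the desired response d comes
    from a fixed reference weight vector w_ref (hypothetical teacher).
    The paper's nonseparable models would replace this teacher with a
    two-layer perceptron or a multiplicative output node."""
    rng = random.Random(seed)
    w_ref = [rng.gauss(0.0, 1.0) for _ in range(n)]
    w = [0.0] * n
    for _ in range(steps):
        # Zero-mean Gaussian random input vector, as in the abstract
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        d = sign(sum(wr * xi for wr, xi in zip(w_ref, x)))
        y = sign(sum(wi * xi for wi, xi in zip(w, x)))
        # Rosenblatt's rule: w <- w + mu * (d - y) * x
        # (d - y) is in {-2, 0, +2}, so weights change only on errors
        err = d - y
        if err:
            w = [wi + mu * err * xi for wi, xi in zip(w, x)]
    return w_ref, w

def agreement(w_ref, w, trials=2000, seed=2):
    # Fraction of fresh Gaussian inputs on which teacher and
    # learned perceptron produce the same output
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        x = [rng.gauss(0.0, 1.0) for _ in range(len(w))]
        ok += sign(sum(a * b for a, b in zip(w_ref, x))) == \
              sign(sum(a * b for a, b in zip(w, x)))
    return ok / trials
```

In this separable baseline the algorithm can match the teacher closely; the point of the paper's nonseparable models is that no single-layer weight vector can reproduce the teacher exactly, so the interesting question becomes where on the performance surface the algorithm converges.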

Paper Details

Date Published: 1 July 1992
PDF: 10 pages
Proc. SPIE 1710, Science of Artificial Neural Networks, (1 July 1992); doi: 10.1117/12.140095
Author Affiliations:
John J. Shynk, Univ. of California/Santa Barbara (United States)
Neil J. Bershad, Univ. of California/Irvine (United States)

Published in SPIE Proceedings Vol. 1710:
Science of Artificial Neural Networks
Dennis W. Ruck, Editor(s)

© SPIE.