
Proceedings Paper

Simplified learning algorithms for two-layer neural networks
Author(s): Eduard Avedyan; Andrey Kerbelev; Ilya Levin; Yakov Tsypkin

Paper Abstract

Multilayer neural networks are widely applied in pattern recognition, speech processing, optimization, non-linear identification, non-linear adaptive control, and other fields. They are usually trained by the error back-propagation algorithm, whose main computational burden is the evaluation of the goal-function gradient, carried out successively backward from the output layer. Two-layer neural networks can solve the problem of approximating a complicated non-linear function of many variables, and they can be applied effectively to automatic control problems, in particular to the identification of non-linear dynamic objects. For two-layer networks the goal-function gradient can be computed directly, omitting the error back-propagation procedure, although a large number of calculations per step still remains. A training procedure that is simplified from the computational point of view and aimed at hardware implementation is proposed for two-layer neural networks.
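
The abstract's central idea, that for a two-layer network the goal-function gradient can be written in closed form per weight matrix with no backward layer-by-layer pass, can be illustrated with a minimal sketch. This is not the authors' algorithm: the sigmoid hidden layer, linear output layer, squared-error goal function, and all names and shapes below are assumptions for illustration only.

    import numpy as np

    def sigmoid(s):
        return 1.0 / (1.0 + np.exp(-s))

    def direct_gradients(x, d, W1, W2):
        """Closed-form gradients of E = ||y - d||^2 / 2 for a two-layer net.

        x  : input vector, shape (n,)
        d  : desired output, shape (m,)
        W1 : hidden-layer weights, shape (h, n)
        W2 : output-layer weights, shape (m, h)
        """
        z = sigmoid(W1 @ x)      # hidden-layer outputs
        y = W2 @ z               # linear output layer
        e = y - d                # output error

        # Output layer: dE/dW2 = e z^T
        g2 = np.outer(e, z)

        # Hidden layer, written directly (using sigmoid'(s) = z (1 - z)):
        # dE/dW1 = ((W2^T e) * z * (1 - z)) x^T
        g1 = np.outer((W2.T @ e) * z * (1.0 - z), x)
        return g1, g2

    # One steepest-descent step with hypothetical sizes and learning rate:
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(5, 3))
    W2 = rng.normal(scale=0.1, size=(2, 5))
    g1, g2 = direct_gradients(rng.normal(size=3), np.array([1.0, 0.0]), W1, W2)
    W1 -= 0.1 * g1
    W2 -= 0.1 * g2

Because there are only two layers, both gradient expressions follow from the chain rule in one step; the simplified procedure the abstract promises presumably reduces this per-step arithmetic further for hardware, which the sketch does not attempt.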

Paper Details

Date Published: 7 December 1994
PDF: 8 pages
Proc. SPIE 2430, Optical Memory & Neural Networks '94: Optical Neural Networks (7 December 1994); doi: 10.1117/12.195589
Author Affiliations:
Eduard Avedyan, Institute for Control Sciences (Russia)
Andrey Kerbelev, Institute for Control Sciences (Russia)
Ilya Levin, Institute for Control Sciences (Russia)
Yakov Tsypkin, Institute for Control Sciences (Russia)


Published in SPIE Proceedings Vol. 2430:
Optical Memory & Neural Networks '94: Optical Neural Networks
Andrei L. Mikaelian, Editor(s)

© SPIE.