
Proceedings Paper

Recurrent back-propagation and Newton algorithms for training recurrent neural networks
Author(s): Chung-Ming Kuan; Kurt Hornik; Tung Liu

Paper Abstract

This paper discusses the recurrent back-propagation and Newton algorithms for an important class of recurrent networks, together with their convergence properties. To ensure proper convergence behavior, the recurrent connections must be suitably constrained during learning. Simulation results demonstrate that the algorithms perform substantially better under the suggested constraint.
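The abstract does not spell out the form of the constraint. One common way to keep a recurrent state map contractive (and hence keep training dynamics well behaved) is to project the recurrent weight matrix back into a norm ball after each update. A minimal illustrative sketch of such a projection, assuming a spectral-norm bound (the function name and the `max_norm` parameter are hypothetical, not from the paper):

```python
import numpy as np

def constrain_recurrent(W, max_norm=0.95):
    """Rescale the recurrent weight matrix so its spectral norm
    (largest singular value) stays at or below max_norm.

    A spectral norm below 1 makes the linearized recurrent map a
    contraction -- one standard sufficient condition for stable
    recurrent dynamics. The paper's exact constraint may differ.
    """
    sigma = np.linalg.norm(W, 2)  # spectral norm = largest singular value
    if sigma > max_norm:
        W = W * (max_norm / sigma)
    return W

# After each gradient step, project the recurrent weights back:
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # unconstrained recurrent weights
W = constrain_recurrent(W)    # now has spectral norm <= 0.95
```

In a training loop, the projection would be applied after every weight update, so the recurrent part of the network never leaves the constrained region.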

Paper Details

Date Published: 1 February 1994
PDF: 10 pages
Proc. SPIE 2093, Substance Identification Analytics, (1 February 1994); doi: 10.1117/12.172502
Author Affiliations:
Chung-Ming Kuan, Univ. of Illinois (United States)
Kurt Hornik, Technische Univ. Wien (Austria)
Tung Liu, Ball State Univ. (United States)

Published in SPIE Proceedings Vol. 2093:
Substance Identification Analytics
James L. Flanagan; Richard J. Mammone; Albert E. Brandenstein; Edward Roy Pike M.D.; Stelios C. A. Thomopoulos; Marie-Paule Boyer; H. K. Huang; Osman M. Ratib, Editor(s)

© SPIE.