
Proceedings Paper

Recurrent network training with the decoupled-extended-Kalman-filter algorithm
Author(s): Gintaras V. Puskorius; Lee A. Feldkamp

Paper Abstract

In this paper we describe the extension of our decoupled extended Kalman filter (DEKF) training algorithm to networks with internal recurrent (or feedback) connections; we call the resulting algorithm dynamic DEKF (or DDEKF for short). Analysis of DDEKF's computational complexity and empirical evidence suggest significant computational and performance advantages in comparison to training algorithms based exclusively upon gradient descent. We demonstrate DDEKF's effectiveness by training networks with recurrent connections for four different classes of problems. First, DDEKF is used to train a recurrent network that produces as its output a delayed copy of its input. Second, recurrent networks are trained by DDEKF to recognize sequences of events with arbitrarily long time delays between the events. Third, DDEKF is applied to the training of identification networks to act as models of the input-output behavior for nonlinear dynamical systems. We conclude the paper with a brief discussion of the extension of DDEKF to the training of neural controllers with internal feedback connections.
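The abstract summarizes the decoupled extended Kalman filter approach: the network weights are partitioned into groups, and each group receives a Kalman-style update driven by the derivatives of the network outputs with respect to that group's weights. The sketch below is an illustrative NumPy rendering of that general DEKF recursion as it appears in the authors' related publications, not code from this paper; the variable names, the scalar learning rate eta, and the process-noise constant q are assumptions for illustration, and the recurrent-derivative computation that makes the algorithm "dynamic" (propagating output derivatives through the feedback connections) is assumed to have produced the Jacobians H already.

    # Illustrative sketch of one decoupled EKF (DEKF) weight update.
    # Names (w, P, H, eta, q) are assumptions for illustration, not the
    # authors' implementation.
    import numpy as np

    def dekf_update(w, P, H, error, eta=1.0, q=1e-4):
        """One DEKF step over decoupled weight groups.

        w     : list of weight vectors, one per group, shape (n_i,)
        P     : list of error-covariance matrices, one per group, (n_i, n_i)
        H     : list of Jacobians of the network outputs with respect to
                each group's weights, shape (n_i, n_out); for recurrent
                networks these derivatives would carry the feedback
                dependence (the "dynamic" part, not shown here)
        error : target minus network output, shape (n_out,)
        """
        n_out = error.shape[0]

        # Global scaling matrix shared by all weight groups.
        A = np.eye(n_out) / eta
        for P_i, H_i in zip(P, H):
            A += H_i.T @ P_i @ H_i
        A = np.linalg.inv(A)

        new_w, new_P = [], []
        for w_i, P_i, H_i in zip(w, P, H):
            K_i = P_i @ H_i @ A                       # Kalman gain for group i
            new_w.append(w_i + K_i @ error)           # weight update
            new_P.append(P_i - K_i @ H_i.T @ P_i      # covariance update with
                         + q * np.eye(P_i.shape[0]))  # artificial process noise
        return new_w, new_P

Because the scaling matrix A is only of output dimension while each covariance P_i is restricted to its own group, the per-step cost grows with the sizes of the groups rather than with the full weight count, which is the computational advantage over a global EKF that the abstract alludes to.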

Paper Details

Date Published: 1 July 1992
PDF: 13 pages
Proc. SPIE 1710, Science of Artificial Neural Networks, (1 July 1992); doi: 10.1117/12.140113
Author Affiliations:
Gintaras V. Puskorius, Ford Motor Co. (United States)
Lee A. Feldkamp, Ford Motor Co. (United States)


Published in SPIE Proceedings Vol. 1710:
Science of Artificial Neural Networks
Dennis W. Ruck, Editor
