
Proceedings Paper

Sequence learning with recurrent networks: analysis of internal representations
Author(s): Joydeep Ghosh; Vijay Karamcheti

Paper Abstract

The recognition and learning of temporal sequences is fundamental to cognitive processing. Several recurrent networks attempt to encode past history through feedback connections from 'context units.' However, the internal representations formed by these networks are not well understood. In this paper, we use cluster analysis to interpret the hidden unit encodings formed when a network with context units is trained to recognize strings from a finite state machine. If the number of hidden units is small, the network forms fuzzy representations of the underlying machine states. With more hidden units, different representations may evolve for alternative paths to the same state. Thus, the appropriate network size is indicated by the complexity of the underlying finite state machine. The analysis of internal representations can be used to model an unknown system based on observation of its output sequences.
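
As a rough illustration of the analysis pipeline the abstract describes, the sketch below runs an Elman-style network (hidden state fed back through context units) over strings from a small finite state machine and applies hierarchical cluster analysis to the hidden-unit activations. The toy FSM, the layer sizes, and the random (untrained) weights are illustrative assumptions, not the paper's actual setup; in the paper's setting the network would first be trained to recognize the machine's strings, after which the clusters should align with, or refine, the machine's states.

    # Illustrative sketch: cluster hidden-state activations of an Elman-style
    # recurrent network run over strings from a small finite state machine.
    # The weights here are random for brevity; the paper's analysis applies
    # to a network trained to recognize the FSM's strings.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)

    # A toy 3-state FSM over the alphabet {a, b} (hypothetical example,
    # not the machine studied in the paper).
    TRANS = {(0, 'a'): 1, (0, 'b'): 0,
             (1, 'a'): 2, (1, 'b'): 0,
             (2, 'a'): 2, (2, 'b'): 1}
    SYMS = {'a': np.array([1.0, 0.0]), 'b': np.array([0.0, 1.0])}

    def random_string(length):
        return ''.join(rng.choice(['a', 'b']) for _ in range(length))

    # Elman network: the hidden state feeds back through "context units".
    n_in, n_hid = 2, 5
    W_in  = rng.normal(0, 1, (n_hid, n_in))
    W_ctx = rng.normal(0, 1, (n_hid, n_hid))
    b     = rng.normal(0, 1, n_hid)

    def run(string):
        """Return (hidden activation, true FSM state) at every time step."""
        h, s, records = np.zeros(n_hid), 0, []
        for ch in string:
            h = np.tanh(W_in @ SYMS[ch] + W_ctx @ h + b)  # context feedback
            s = TRANS[(s, ch)]
            records.append((h.copy(), s))
        return records

    # Collect hidden activations over many input strings.
    records = [r for _ in range(200) for r in run(random_string(10))]
    H = np.array([h for h, _ in records])
    states = np.array([s for _, s in records])

    # Hierarchical cluster analysis of the hidden-unit encodings.
    Z = linkage(H, method='ward')
    clusters = fcluster(Z, t=3, criterion='maxclust')

    # Compare each cluster against the true FSM states; for a trained
    # network, clusters should correspond to (or split) machine states.
    for c in np.unique(clusters):
        counts = np.bincount(states[clusters == c], minlength=3)
        print(f"cluster {c}: FSM-state counts {counts}")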

Paper Details

Date Published: 1 July 1992
PDF: 12 pages
Proc. SPIE 1710, Science of Artificial Neural Networks, (1 July 1992); doi: 10.1117/12.140112
Author Affiliations:
Joydeep Ghosh, Univ. of Texas/Austin (United States)
Vijay Karamcheti, Univ. of Illinois/Urbana-Champaign (United States)

Published in SPIE Proceedings Vol. 1710:
Science of Artificial Neural Networks
Dennis W. Ruck, Editor

© SPIE.