
Proceedings Paper

Hybrid training procedure applied to recurrent neural networks
Author(s): Xavier Loiseau; Jan Sendler

Paper Abstract

The analysis of stochastic time series has proven to be a problem of major importance in recent years, especially in the fields of speech and handwriting recognition. Two approaches in particular have been proposed for this task: one based on hidden Markov modeling and the other on recurrent neural networks (RNNs). An internal class representation can easily be found using a hidden Markov model (HMM) trained with an expectation-maximization algorithm, which is a major advantage of the HMM approach. However, since only the likelihood is maximized (ML criterion), the classifier has limited discriminant power. In contrast, an RNN can be trained on counterexamples as well, which strengthens the influence of features that discriminate between competing classes. A gradient-descent algorithm is applied to minimize the mean square error (MMSE criterion). However, the initial lack of an internal class representation leads to serious convergence problems as the sequences lengthen. A new hybrid approach to training RNNs is investigated that combines the advantages of the two methods. Samples of handwritten letters are used to adapt the RNN in order to examine the convergence and the discriminating ability of the new algorithm.
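To make the MMSE-based discriminative training concrete, the following is a minimal sketch (not the authors' implementation) of an Elman-style RNN trained by gradient descent on the squared output error, where counterexamples are simply sequences whose target output is 0 instead of 1. The network sizes, learning rate, and toy data are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid = 4, 8                       # assumed feature and hidden sizes

    # Elman RNN: h_t = tanh(Wxh x_t + Whh h_{t-1} + bh); output y = Why h_T + by
    Wxh = rng.normal(0.0, 0.1, (n_hid, n_in))
    Whh = rng.normal(0.0, 0.1, (n_hid, n_hid))
    bh  = np.zeros(n_hid)
    Why = rng.normal(0.0, 0.1, (1, n_hid))   # scalar class output at the last step
    by  = np.zeros(1)

    def forward(xs):
        # Run the RNN over a sequence xs of shape (T, n_in); keep states for BPTT.
        hs = [np.zeros(n_hid)]
        for x in xs:
            hs.append(np.tanh(Wxh @ x + Whh @ hs[-1] + bh))
        return Why @ hs[-1] + by, hs

    def train_step(xs, target, lr=0.05):
        # One gradient-descent step on the squared output error (MMSE criterion).
        y, hs = forward(xs)
        dy = 2.0 * (y - target)              # derivative of the squared error
        dWhy, dby = np.outer(dy, hs[-1]), dy
        dWxh = np.zeros_like(Wxh); dWhh = np.zeros_like(Whh); dbh = np.zeros_like(bh)
        dh = Why.T @ dy                      # backpropagation through time
        for t in range(len(xs) - 1, -1, -1):
            dpre = dh * (1.0 - hs[t + 1] ** 2)   # tanh derivative
            dWxh += np.outer(dpre, xs[t])
            dWhh += np.outer(dpre, hs[t])
            dbh  += dpre
            dh = Whh.T @ dpre
        for p, g in ((Wxh, dWxh), (Whh, dWhh), (bh, dbh), (Why, dWhy), (by, dby)):
            p -= lr * g                      # in-place parameter update
        return ((y - target) ** 2).item()

    # Toy stand-in for letter sequences: target 1.0 for the modeled class,
    # 0.0 for counterexamples drawn from a competing class.
    positives = [rng.normal( 1.0, 0.3, (10, n_in)) for _ in range(20)]
    negatives = [rng.normal(-1.0, 0.3, (10, n_in)) for _ in range(20)]
    for epoch in range(30):
        err = sum(train_step(x, 1.0) for x in positives) + \
              sum(train_step(x, 0.0) for x in negatives)
    print("summed squared error after training:", err)

In the hybrid scheme the abstract describes, an HMM trained by expectation-maximization would first supply the internal class representation, easing the convergence problems that plain gradient descent exhibits on long sequences; the sketch above covers only the discriminative MMSE step.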

Paper Details

Date Published: 22 March 1996
PDF: 8 pages
Proc. SPIE 2760, Applications and Science of Artificial Neural Networks II, (22 March 1996); doi: 10.1117/12.235936
Author Affiliations:
Xavier Loiseau, Technical Univ. of Darmstadt (Germany)
Jan Sendler, Technical Univ. of Darmstadt (Germany)


Published in SPIE Proceedings Vol. 2760:
Applications and Science of Artificial Neural Networks II
Steven K. Rogers; Dennis W. Ruck, Editors

© SPIE.