
Proceedings Paper

Continuous Chinese sign language recognition with CNN-LSTM
Author(s): Su Yang; Qing Zhu

Paper Abstract

The goal of sign language recognition (SLR) is to translate sign language into text and thereby provide a convenient tool for communication between deaf and hearing people. In this paper, we formulate a model based on a convolutional neural network (CNN) combined with a Long Short-Term Memory (LSTM) network to accomplish continuous recognition. The CNN learns the visual information in frames captured from Chinese sign language (CSL) videos and transforms each frame into a feature vector. Since a video can be regarded as an ordered sequence of frames, an LSTM is connected to the fully connected layer of the CNN. As a recurrent neural network (RNN), the LSTM is well suited to sequence learning tasks and can recognize patterns separated by long temporal distances; compared with a traditional RNN, it stores and accesses information more effectively. We evaluate this method on a self-built dataset of 40 everyday vocabulary items. The experimental results show that the CNN-LSTM method achieves a high recognition rate with small training sets, meeting the needs of a real-time SLR system.
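The abstract does not give implementation details, but the architecture it describes (a per-frame CNN whose fully connected output feeds an LSTM over the frame sequence, followed by a classifier over the 40-word vocabulary) follows a common pattern. The following is a minimal PyTorch sketch of that pattern only; all layer sizes, module names, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Sketch (not the authors' implementation): a CNN encodes each video frame
# into a feature vector, an LSTM models the frame sequence, and a classifier
# predicts one of 40 CSL vocabulary items. All sizes are assumed.
import torch
import torch.nn as nn

class CNNLSTMRecognizer(nn.Module):
    def __init__(self, num_classes=40, feature_dim=256, hidden_dim=256):
        super().__init__()
        # Small per-frame CNN; the paper's exact architecture is not specified.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # Fully connected layer whose output feeds the LSTM, as in the abstract.
        self.fc = nn.Linear(64 * 4 * 4, feature_dim)
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        x = self.cnn(frames.view(b * t, c, h, w))   # per-frame features
        x = self.fc(x.flatten(1)).view(b, t, -1)    # (batch, time, feature_dim)
        out, _ = self.lstm(x)                       # temporal modeling over frames
        return self.classifier(out[:, -1])          # logits over the 40 classes

# Example: a batch of 2 clips, 16 frames each, 64x64 RGB.
logits = CNNLSTMRecognizer()(torch.randn(2, 16, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 40])
```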

Paper Details

Date Published: 21 July 2017
PDF: 7 pages
Proc. SPIE 10420, Ninth International Conference on Digital Image Processing (ICDIP 2017), 104200F (21 July 2017); doi: 10.1117/12.2281671
Author Affiliations:
Su Yang, Beijing Univ. of Technology (China)
Qing Zhu, Beijing Univ. of Technology (China)


Published in SPIE Proceedings Vol. 10420:
Ninth International Conference on Digital Image Processing (ICDIP 2017)
Charles M. Falco; Xudong Jiang, Editor(s)

© SPIE.