
Proceedings Paper

Neural networks for sign language translation
Author(s): Beth J. Wilson; Gretel Anspach

Paper Abstract

A neural network is used to extract relevant features of sign language from video images of a person communicating in American Sign Language or Signed English. The key features are hand motion, hand location with respect to the body, and handshape. A modular hybrid design is under way to apply various techniques, including neural networks, to the development of a translation system that will facilitate communication between deaf and hearing people. One of the neural networks described here classifies video images of handshapes into their linguistic counterparts in American Sign Language. Each video image is preprocessed to yield Fourier descriptors that encode the shape of the hand silhouette. These descriptors then serve as inputs to a neural network that classifies the handshapes. The network is trained with examples from several signers and tested with new images from new signers. The results show that, for coarse handshape classes, the network is invariant to the type of camera used to film the signers and to the segmentation technique.
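The pipeline the abstract describes can be sketched briefly: trace the boundary of the segmented hand silhouette, compute Fourier descriptors of that boundary, and feed the descriptors to a feed-forward classifier. The sketch below is illustrative only, not the authors' code; the descriptor count, layer sizes, the normalization choices, and the five coarse handshape classes are all assumptions.

# Minimal sketch (assumed details, not the paper's implementation):
# Fourier descriptors of a hand-silhouette contour, normalized for
# translation, scale, and rotation, scored by a small feed-forward net.
import numpy as np

def fourier_descriptors(contour_xy, n_coeffs=16):
    # contour_xy: (N, 2) ordered boundary points of the hand silhouette.
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]  # boundary as a complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                               # drop DC term: translation invariance
    coeffs = coeffs / (np.abs(coeffs[1]) + 1e-12) # normalize: scale invariance
    # Keeping only magnitudes discards phase: rotation/start-point invariance.
    return np.abs(coeffs[1:n_coeffs + 1])

def mlp_forward(x, W1, b1, W2, b2):
    # One sigmoid hidden layer, softmax output over handshape classes.
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy usage: a circular "silhouette" stands in for a segmented hand image.
theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
contour = np.stack([np.cos(theta), np.sin(theta)], axis=1)
x = fourier_descriptors(contour)

rng = np.random.default_rng(0)
n_classes = 5                                     # assumed number of coarse classes
W1, b1 = rng.normal(0, 0.1, (16, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.1, (32, n_classes)), np.zeros(n_classes)
print(mlp_forward(x, W1, b1, W2, b2))             # class probabilities (untrained weights)

In practice the weights would be trained on labeled handshape examples from multiple signers; the magnitude-only descriptors are one standard way to obtain the camera- and segmentation-robust features the abstract reports.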

Paper Details

Date Published: 2 September 1993
PDF: 11 pages
Proc. SPIE 1965, Applications of Artificial Neural Networks IV, (2 September 1993); doi: 10.1117/12.152560
Author Affiliations:
Beth J. Wilson, Raytheon Co. (United States)
Gretel Anspach, Raytheon Co. (United States)


Published in SPIE Proceedings Vol. 1965:
Applications of Artificial Neural Networks IV
Steven K. Rogers, Editor

© SPIE.