
Proceedings Paper

Multi-perspective gesture recognition based on convolutional neural network
Author(s): Dongdong Li; Limin Zhang; Xiangyang Deng

Paper Abstract

Gesture recognition is widely used in daily life, but recognition from a single viewing angle has certain limitations. In this paper, we train on the Multi-Perspective Static Gesture Database, which contains 24 letter gestures from the international sign language alphabet (except the j and z gestures). A self-designed convolutional network is trained on each picture. The feature map is multiplied by the trained weights to obtain an information amount for each picture, and this information amount is combined with the prediction probability of each picture for each gesture to yield a combined prediction probability; the gesture with the largest combined probability is taken as the prediction. Compared with prediction from a single image, the accuracy obtained by combining the four viewing angles is higher. The paper also extends the comparison, combining two angles and three angles against a single angle; in both cases the prediction accuracy again exceeds the single-angle accuracy, which shows that the method is effective and that combining multi-angle gesture pictures can improve prediction accuracy.
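The abstract describes weighting each view's class probabilities by a per-picture "information amount" and then selecting the gesture with the largest combined probability. The sketch below illustrates one plausible reading of that fusion step; the function name fuse_multi_view_predictions, the normalization of the information amounts, and the toy inputs are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def fuse_multi_view_predictions(probs, info_amounts):
    """Combine per-view class probabilities into a single gesture prediction.

    probs:        (n_views, n_classes) softmax outputs, one row per camera angle.
    info_amounts: (n_views,) scalar "information amount" scores, assumed here to
                  come from each view's weighted feature map.
    Returns the index of the predicted gesture class.
    """
    info_amounts = np.asarray(info_amounts, dtype=float)
    weights = info_amounts / info_amounts.sum()          # normalize scores to sum to 1
    combined = weights @ np.asarray(probs, dtype=float)  # weighted sum of probabilities over views
    return int(np.argmax(combined))                      # largest combined probability wins

# Toy example: 4 viewing angles, 24 letter classes (a-y, excluding j and z)
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(24), size=4)   # stand-in softmax outputs for 4 angles
info = np.array([0.9, 1.2, 0.7, 1.1])        # stand-in per-view information amounts
print(fuse_multi_view_predictions(probs, info))
```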

Paper Details

Date Published: 14 August 2019
PDF: 10 pages
Proc. SPIE 11179, Eleventh International Conference on Digital Image Processing (ICDIP 2019), 111791I (14 August 2019); doi: 10.1117/12.2539941
Author Affiliations:
Dongdong Li, Naval Aviation Univ. (China)
Limin Zhang, Naval Aviation Univ. (China)
Xiangyang Deng, Naval Aviation Univ. (China)


Published in SPIE Proceedings Vol. 11179:
Eleventh International Conference on Digital Image Processing (ICDIP 2019)
Jenq-Neng Hwang; Xudong Jiang, Editor(s)

© SPIE.