
Proceedings Paper
Deep learning based hand gesture recognition in complex scenes

Format | Member Price | Non-Member Price |
---|---|---|
 | $17.00 | $21.00 |
Paper Abstract
Recently, region-based convolutional neural networks (R-CNNs) have achieved significant success in object detection, but their accuracy remains limited on small and visually similar objects, such as hand gestures. To address this problem, we present an online hard example testing (OHET) technique that evaluates the confidence of the R-CNN outputs and treats low-confidence outputs as hard examples. In this paper, we propose a cascaded network to recognize gestures. First, we use a region-based fully convolutional network (R-FCN), which is well suited to detecting small objects, to detect the gestures, and then apply OHET to select the hard examples. To improve recognition accuracy, we re-classify the hard examples with a VGG-19 classification network to obtain the final output of the gesture recognition system. Comparative experiments with other methods show that the cascaded network combined with OHET achieves state-of-the-art results of 99.3% mAP on small and similar gestures in complex scenes.
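The abstract describes a three-stage cascade: an R-FCN detector proposes gesture detections, OHET flags low-confidence outputs as hard examples, and a VGG-19 classifier re-labels only those hard examples. The sketch below is a minimal illustration of that control flow under stated assumptions, not the authors' implementation: `rfcn_detect`, `vgg19_classify`, and the 0.5 confidence threshold are hypothetical stand-ins, since the paper's interfaces and threshold are not given here.

```python
# Hypothetical sketch of the cascaded pipeline described in the abstract.
# rfcn_detect and vgg19_classify are assumed callables standing in for the
# paper's R-FCN detector and VGG-19 classifier; they are not a real API.

from dataclasses import dataclass
from typing import Callable, List, Tuple

CONFIDENCE_THRESHOLD = 0.5  # assumed OHET cut-off; the paper's value is not stated here


@dataclass
class Detection:
    box: Tuple[int, int, int, int]  # (x1, y1, x2, y2) in image coordinates
    label: str                      # gesture class predicted so far
    score: float                    # confidence of that prediction


def ohet_split(detections: List[Detection],
               threshold: float = CONFIDENCE_THRESHOLD):
    """Online hard example testing: low-confidence outputs are hard examples."""
    easy = [d for d in detections if d.score >= threshold]
    hard = [d for d in detections if d.score < threshold]
    return easy, hard


def recognize_gestures(image,
                       rfcn_detect: Callable[[object], List[Detection]],
                       vgg19_classify: Callable[[object], Tuple[str, float]]
                       ) -> List[Detection]:
    """Cascade: detect with R-FCN, then re-classify hard examples with VGG-19."""
    detections = rfcn_detect(image)        # stage 1: R-FCN gesture detection
    easy, hard = ohet_split(detections)    # stage 2: OHET confidence test
    for d in hard:                         # stage 3: re-classify hard examples
        x1, y1, x2, y2 = d.box
        crop = image[y1:y2, x1:x2]         # crop the detected gesture region
        d.label, d.score = vgg19_classify(crop)
    return easy + hard
```

Keeping the easy detections untouched and routing only the hard examples through the heavier classifier is what makes the cascade cheap: the second network runs on a small fraction of the outputs.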
Paper Details
Date Published: 8 March 2018
PDF: 7 pages
Proc. SPIE 10609, MIPPR 2017: Pattern Recognition and Computer Vision, 106090V (8 March 2018); doi: 10.1117/12.2284977
Published in SPIE Proceedings Vol. 10609:
MIPPR 2017: Pattern Recognition and Computer Vision
Zhiguo Cao; Yuehuang Wang; Chao Cai, Editor(s)
© SPIE.
