Proceedings Paper

High reality image generation for DNN learning based on varying pixel intensity value model depending on each camera: the last 1% accuracy improvement
Author(s): Yusuke Kamiya; Nobuyuki Shinohara; Manabu Hashimoto

Paper Abstract

In object recognition using deep neural networks (DNNs) in industry, the recognition accuracy decreases because of differences in characteristics between the camera used for learning and the camera used for recognition. In this research, we solve this problem by statistically modeling the varying pixel intensity values of each recognition camera on the basis of actually acquired learning images. Here, the characteristics of the generated images must be similar to those of images captured by the recognition camera. By using the statistical model, already-captured learning image sets can be converted into virtual images that closely resemble images captured by the recognition camera. Through experiments using real images, we confirmed that the recognition accuracy of our method is at least 1.0% higher than that of a conventional method based on Gaussian noise.
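The abstract describes converting already-captured learning images into virtual images by statistically modeling the pixel-intensity variation of the recognition camera. The sketch below illustrates one plausible reading of that idea, not the authors' actual method: a per-intensity noise model is estimated from repeated recognition-camera captures of a static scene, and learning images are then perturbed with noise drawn from that model. The function names, the per-intensity Gaussian assumption, and the grayscale uint8 format are all illustrative assumptions.

```python
# Illustrative sketch only; names and the noise model are assumptions, not the paper's implementation.
import numpy as np

def estimate_intensity_noise_model(captures: np.ndarray, n_bins: int = 256) -> np.ndarray:
    """Estimate per-intensity noise standard deviation from repeated captures.

    captures: array of shape (n_frames, H, W), uint8 grayscale frames of the
    same static scene taken by the *recognition* camera.
    Returns sigma of length n_bins, where sigma[i] is the average pixel-value
    standard deviation observed at mean intensity i.
    """
    mean_img = captures.mean(axis=0)           # per-pixel mean intensity
    std_img = captures.std(axis=0)             # per-pixel intensity variation
    bins = np.clip(mean_img.round().astype(int), 0, n_bins - 1)
    sigma = np.zeros(n_bins)
    for i in range(n_bins):
        mask = bins == i
        if mask.any():
            sigma[i] = std_img[mask].mean()    # average variation at this intensity level
    return sigma

def convert_to_virtual_image(learning_img: np.ndarray, sigma: np.ndarray,
                             rng=None) -> np.ndarray:
    """Perturb a learning image so its pixel variation mimics the recognition camera."""
    rng = rng or np.random.default_rng()
    noise_std = sigma[np.clip(learning_img.astype(int), 0, len(sigma) - 1)]
    noisy = learning_img.astype(float) + rng.normal(0.0, noise_std)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

In practice, each learning image would be expanded into several such virtual copies before DNN training; the exact statistical model and conversion procedure used by the authors are given in the full paper.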

Paper Details

Date Published: 22 March 2019
PDF: 6 pages
Proc. SPIE 11049, International Workshop on Advanced Image Technology (IWAIT) 2019, 1104945 (22 March 2019); doi: 10.1117/12.2521615
Author Affiliations:
Yusuke Kamiya, Chukyo Univ. (Japan)
Nobuyuki Shinohara, Chukyo Univ. (Japan)
Manabu Hashimoto, Chukyo Univ. (Japan)


Published in SPIE Proceedings Vol. 11049:
International Workshop on Advanced Image Technology (IWAIT) 2019
Qian Kemao; Kazuya Hayase; Phooi Yee Lau; Wen-Nung Lie; Yung-Lyul Lee; Sanun Srisuk; Lu Yu, Editor(s)
