
Proceedings Paper

An investigation of multiplication error tolerances in CNN and SIFT

Paper Abstract

Computer vision computations require a large number of multiplications, which creates a performance bottleneck. According to the work of Zhenhong Liu, the multiplications in these algorithms do not always require the high precision provided by the processor. As a result, we can reduce computational redundancy by means of multiplication approximation. Following this approach, in this paper we investigate two major algorithms, the convolutional neural network (CNN) and the scale-invariant feature transform (SIFT), to find their error tolerances under multiplication approximation. A multiplication is approximated by injecting a random error into each precise product. The INRIA and OXFORD datasets were used in the SIFT analysis, while the CIFAR-10 and MNIST datasets were applied in the CNN experiments. The results showed that SIFT can withstand only a small percentage of approximated multiplications, while CNN can tolerate over 30%.
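The abstract does not specify the exact error model, so the following is only a minimal sketch of one plausible interpretation: a fraction `error_rate` of the exact products is perturbed by a bounded random relative error. The function name `approx_multiply` and the parameters `error_rate` and `rel_error` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def approx_multiply(a, b, error_rate=0.3, rel_error=0.05):
    """Element-wise multiplication where a random subset of products
    (fraction ``error_rate``) is perturbed by a uniform relative error
    in [-rel_error, +rel_error]. Illustrative model only.
    """
    exact = a * b
    perturb = rng.random(exact.shape) < error_rate      # which products get an injected error
    noise = rng.uniform(-rel_error, rel_error, exact.shape)
    return np.where(perturb, exact * (1.0 + noise), exact)
```

Under such a model, "over 30% of multiplication approximation" would correspond to running a CNN's convolutions with `error_rate > 0.3` while retaining acceptable accuracy.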

Paper Details

Date Published: 22 March 2019
PDF: 5 pages
Proc. SPIE 11049, International Workshop on Advanced Image Technology (IWAIT) 2019, 110493L (22 March 2019); doi: 10.1117/12.2521564
Author Affiliations:
Chanon Khongprasongsiri, King Mongkut's Univ. of Technology Thonburi (Thailand)
Watcharapan Suwansantisuk, King Mongkut's Univ. of Technology Thonburi (Thailand)
Pinit Kumhom, King Mongkut's Univ. of Technology Thonburi (Thailand)

Published in SPIE Proceedings Vol. 11049:
International Workshop on Advanced Image Technology (IWAIT) 2019
Qian Kemao; Kazuya Hayase; Phooi Yee Lau; Wen-Nung Lie; Yung-Lyul Lee; Sanun Srisuk; Lu Yu, Editor(s)
