
Proceedings Paper

Quantization of deep convolutional networks
Author(s): Yea-Shuan Huang; Charles Djimy Slot; Chang Wu Yu

Paper Abstract

In recent years, increasingly complex architectures for deep convolutional networks (DCNs) have been proposed to boost performance on image recognition tasks. However, these performance gains have come at the cost of a substantial increase in computation and model storage resources. Quantized DCNs have the potential to alleviate some of these costs and to facilitate deployment on embedded hardware. In this paper, we experiment with three different quantizers for the implementation of DCNs, which we denote the min-max quantizer (MMQ), the average quantizer (AQ), and the histogram average quantizer (HAQ). We quantized each DCN's weights at eight different bit-widths (i.e., one, two, …, eight bits) to run our experiments. Experimental results show that HAQ outperforms both MMQ and AQ because it does not destroy the original weight distribution.
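For context, the min-max quantizer (MMQ) mentioned above is commonly understood as uniform quantization over the range spanned by the smallest and largest weights. The paper's exact formulation is not reproduced on this page, so the following is only a generic sketch of that scheme; the function name and interface are illustrative, not taken from the paper.

```python
def minmax_quantize(weights, bits):
    """Sketch of min-max (MMQ-style) weight quantization.

    The weight range [min, max] is divided into 2**bits - 1 equal steps;
    each weight is snapped to the nearest step and then dequantized back
    to a float, so the output has at most 2**bits distinct values.
    """
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1
    if levels == 0 or hi == lo:
        return [lo for _ in weights]  # degenerate range: a single level
    scale = (hi - lo) / levels
    # Map each weight to its nearest level index, then back to float.
    return [round((w - lo) / scale) * scale + lo for w in weights]
```

At one bit every weight collapses to either the minimum or the maximum value, which illustrates why very low bit-widths can distort the weight distribution; distribution-aware schemes such as the paper's HAQ aim to avoid exactly that kind of damage.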

Paper Details

Date Published: 27 November 2019
PDF: 6 pages
Proc. SPIE 11321, 2019 International Conference on Image and Video Processing, and Artificial Intelligence, 113212R (27 November 2019); doi: 10.1117/12.2549445
Author Affiliations:
Yea-Shuan Huang, Chung Hua Univ. (Taiwan)
Charles Djimy Slot, Chung Hua Univ. (Taiwan)
Chang Wu Yu, Chung Hua Univ. (Taiwan)

Published in SPIE Proceedings Vol. 11321:
2019 International Conference on Image and Video Processing, and Artificial Intelligence
Ruidan Su, Editor(s)

© SPIE