
Proceedings Paper

Group binary weight networks
Author(s): Kailing Guo; Yicai Yang; Xiaofen Xing; Xiangmin Xu

Paper Abstract

In recent years, quantizing the weights of deep neural networks has attracted increasing attention in the area of network compression. An efficient and popular way to quantize the weight parameters is to replace a filter with the product of binary values and a real-valued scaling factor. However, the quantization error of such a binarization method rises as the number of parameters in a filter increases. To reduce the quantization error of existing network binarization methods, we propose group binary weight networks (GBWN), which divide the channels of each filter into groups so that every channel in the same group shares the same scaling factor. We binarize the popular network architectures VGG, ResNet and DenseNet, and verify the performance on the CIFAR10, CIFAR100, Fashion-MNIST, SVHN and ImageNet datasets. Experimental results show that GBWN achieves a considerable accuracy improvement over recent network binarization methods, including BinaryConnect, Binary Weight Networks and Stochastic Quantization Binary Weight Networks.
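The sketch below illustrates the group-wise binarization idea described in the abstract: a filter's input channels are split into groups, and each group is approximated by its sign pattern times one shared real-valued scaling factor. This is a minimal NumPy illustration; the function and variable names are assumptions, not taken from the paper, and the scaling factor is assumed to follow the Binary Weight Networks convention (mean absolute value) computed per group.

```python
# Minimal sketch of group-wise weight binarization (illustrative, not the paper's code).
import numpy as np

def binarize_filter_groupwise(W, num_groups):
    """Approximate a conv filter W of shape (in_channels, kH, kW) by sign(W)
    scaled with one real-valued factor per group of input channels."""
    in_channels = W.shape[0]
    assert in_channels % num_groups == 0, "channels must divide evenly into groups"
    group_size = in_channels // num_groups

    W_hat = np.empty_like(W)
    for g in range(num_groups):
        sl = slice(g * group_size, (g + 1) * group_size)
        group = W[sl]
        # Per-group scaling factor: mean absolute value, computed over the
        # channel group instead of over the whole filter.
        alpha = np.mean(np.abs(group))
        W_hat[sl] = alpha * np.sign(group)
    return W_hat

# Example: a filter with 8 input channels and 3x3 kernels, split into 4 groups.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3, 3))
W_approx = binarize_filter_groupwise(W, num_groups=4)
print("quantization error:", np.linalg.norm(W - W_approx))
```

With more groups, each scaling factor covers fewer parameters, which is the mechanism the abstract credits for reducing quantization error relative to a single per-filter scaling factor.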

Paper Details

Date Published: 31 July 2019
PDF: 6 pages
Proc. SPIE 11198, Fourth International Workshop on Pattern Recognition, 1119812 (31 July 2019); doi: 10.1117/12.2540888
Author Affiliations:
Kailing Guo, South China Univ. of Technology (China)
Yicai Yang, South China Univ. of Technology (China)
Xiaofen Xing, South China Univ. of Technology (China)
Xiangmin Xu, South China Univ. of Technology (China)


Published in SPIE Proceedings Vol. 11198:
Fourth International Workshop on Pattern Recognition
Xudong Jiang; Zhenxiang Chen; Guojian Chen, Editor(s)

© SPIE.