
Proceedings Paper

Deep convolutional neural networks compression method based on linear representation of kernels
Author(s): Ruobing Chen; Yefei Chen; Jianbo Su

Paper Abstract

Convolutional Neural Networks (CNNs) are getting larger and deeper, making them increasingly hard to deploy on systems with limited resources. Although convolutional filters benefit from the concept of a receptive field, storing the parameters of a model's many filters still consumes substantial memory. This paper therefore introduces a method for compressing pre-trained CNN models using a "linear representation" of convolutional kernels. First, a codebook of template kernels Kt is generated by unsupervised clustering over all convolutional kernels, with the Pearson correlation coefficient used as the distance measure. Each convolutional kernel is then represented by its closest template via the linear fitting function a · Kt + b, so that only two parameters and a codebook index are needed to represent a kernel. Finally, the model is retrained with the template kernels fixed, fine-tuning only the two parameters associated with each kernel. Experiments show that the convolutional kernels of a large CNN model can be represented with only a small number of templates, so the method reaches a compression rate of nearly 4× on the convolutional layers, with negligible loss of precision after retraining. Moreover, the proposed method can be combined with other compression approaches to achieve even higher compression rates.
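The representation step described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it assumes the template codebook has already been produced by clustering, uses 1 − Pearson correlation as the distance, and fits a and b for each kernel by ordinary least squares. The function names (`pearson_dist`, `fit_linear`, `compress`, `decompress`) are hypothetical.

```python
import numpy as np

def pearson_dist(a, b):
    # Distance = 1 - Pearson correlation between flattened kernels.
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 1.0 - float(a @ b) / denom if denom else 1.0

def fit_linear(kernel, template):
    # Least-squares fit of kernel ~= a * template + b.
    x, y = template.ravel(), kernel.ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(a), float(b)

def compress(kernels, templates):
    # Represent each kernel by (codebook index, a, b).
    codes = []
    for k in kernels:
        idx = min(range(len(templates)),
                  key=lambda i: pearson_dist(k, templates[i]))
        a, b = fit_linear(k, templates[idx])
        codes.append((idx, a, b))
    return codes

def decompress(codes, templates):
    # Reconstruct each kernel as a * Kt + b.
    return [a * templates[i] + b for (i, a, b) in codes]
```

For a 3×3 kernel this replaces 9 stored weights with 2 scalars plus a small codebook index, which is roughly where the reported ~4× compression of convolutional layers comes from (ignoring the shared codebook's own storage).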

Paper Details

Date Published: 15 March 2019
PDF: 8 pages
Proc. SPIE 11041, Eleventh International Conference on Machine Vision (ICMV 2018), 110412N (15 March 2019); doi: 10.1117/12.2522992
Author Affiliations:
Ruobing Chen, Shanghai Jiao Tong Univ. (China)
Yefei Chen, Shanghai Jiao Tong Univ. (China)
Jianbo Su, Shanghai Jiao Tong Univ. (China)

Published in SPIE Proceedings Vol. 11041:
Eleventh International Conference on Machine Vision (ICMV 2018)
Antanas Verikas; Dmitry P. Nikolaev; Petia Radeva; Jianhong Zhou, Editor(s)

© SPIE.