
Proceedings Paper
An efficient accelerator unit for sparse convolutional neural network

Format | Member Price | Non-Member Price
---|---|---
PDF | $17.00 | $21.00
Paper Abstract
Convolutional neural networks are widely used in image recognition, but the associated models are computationally demanding, and several solutions have been proposed to accelerate their computation. Sparsifying a neural network is an effective way to reduce its computational complexity; however, most current acceleration schemes do not fully exploit this sparsity. In this paper, we design an accelerator unit on an FPGA hardware platform. The unit achieves parallel acceleration through multiple CU modules and eliminates unnecessary operations with a Match module to improve efficiency. Experimental results show that at 90% sparsity, performance improves by a factor of 3.2.
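The core idea in the abstract, skipping multiply-accumulate (MAC) operations whose result is known to be zero, can be sketched in software. This is an illustrative sketch only: the function names are invented here, and the assumption that zeros are filtered on the weight side is not a detail taken from the paper.

```python
def dense_dot(weights, activations):
    """Baseline: every weight/activation pair costs one MAC."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a
    return acc, len(weights)  # (result, MAC count)

def sparse_dot(weights, activations):
    """Zero-skipping: pairs with a zero weight are never issued.

    In hardware, a matching stage would filter these pairs out before
    they reach the multipliers; here an `if` stands in for that filter.
    """
    acc = 0
    macs = 0
    for w, a in zip(weights, activations):
        if w != 0:
            acc += w * a
            macs += 1
    return acc, macs

# At 90% sparsity, 9 of every 10 MACs can be skipped with no change
# to the numerical result.
weights = [0, 0, 0, 3, 0, 0, 0, 0, 0, 2]
acts = list(range(10))
dense_res, dense_macs = dense_dot(weights, acts)
sparse_res, sparse_macs = sparse_dot(weights, acts)
assert dense_res == sparse_res   # same answer
assert dense_macs == 10 and sparse_macs == 2  # far fewer operations
```

The measured 3.2x speedup at 90% sparsity falls short of the 10x reduction in MACs, which is typical: irregular memory access and load imbalance across parallel units absorb part of the theoretical gain.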
Paper Details
Date Published: 9 August 2018
PDF: 5 pages
Proc. SPIE 10806, Tenth International Conference on Digital Image Processing (ICDIP 2018), 108061Z (9 August 2018); doi: 10.1117/12.2503042
Published in SPIE Proceedings Vol. 10806:
Tenth International Conference on Digital Image Processing (ICDIP 2018)
Xudong Jiang; Jenq-Neng Hwang, Editor(s)
Author Affiliations
Yulin Zhao, Institute of Acoustics (China); Univ. of Chinese Academy of Sciences (China)
Donghui Wang, Institute of Acoustics (China); Univ. of Chinese Academy of Sciences (China)
Leiou Wang, Institute of Acoustics (China); Univ. of Chinese Academy of Sciences (China)
© SPIE. Terms of Use
