
Proceedings Paper

CPU architecture for a fast and energy-saving calculation of convolution neural networks
Author(s): Florian J. Knoll; Michael Grelcke; Vitali Czymmek; Tim Holtorf; Stephan Hussmann

Paper Abstract

One of the most difficult problems in the use of artificial neural networks is the required computational capacity. While large search-engine companies own specially developed hardware to provide the necessary computing power, the conventional user is left with the state-of-the-art method, which is the use of a graphics processing unit (GPU) as the computational basis. Although these processors are well suited for large matrix computations, they consume massive amounts of energy. Therefore a new processor based on a field programmable gate array (FPGA) has been developed and optimized for the application of deep learning. This processor is presented in this paper. It can be adapted to a particular application (in this paper, an organic farming application). Its power consumption is only a fraction of that of a GPU implementation, and it should therefore be well suited for energy-saving applications.
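The paper's FPGA design itself is not reproduced on this page, but the workload it accelerates can be illustrated. A minimal sketch (not the authors' implementation) of the multiply-accumulate-heavy operation at the heart of a convolutional neural network, a valid-mode 2D convolution, might look like:

```python
def conv2d(image, kernel):
    """Naive valid-mode 2D convolution (cross-correlation): the
    multiply-accumulate loop a CNN accelerator must execute many
    millions of times per inference."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1          # output height (valid mode)
    ow = len(image[0]) - kw + 1       # output width
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            acc = 0.0
            # Inner multiply-accumulate loop: this is what dedicated
            # hardware (GPU or FPGA) parallelizes.
            for u in range(kh):
                for v in range(kw):
                    acc += image[i + u][j + v] * kernel[u][v]
            out[i][j] = acc
    return out


# e.g. a 3x3 "image" with a 2x2 diagonal kernel:
out = conv2d([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[1, 0], [0, 1]])
# out == [[6.0, 8.0], [12.0, 14.0]]
```

On an FPGA, the inner multiply-accumulate loop is typically unrolled into parallel hardware multipliers rather than executed sequentially, which is one route to the energy savings the abstract claims.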

Paper Details

Date Published: 26 June 2017
PDF: 9 pages
Proc. SPIE 10335, Digital Optical Technologies 2017, 103351M (26 June 2017); doi: 10.1117/12.2270282
Author Affiliations:
Florian J. Knoll, West Coast Univ. of Applied Sciences (Germany)
Michael Grelcke, West Coast Univ. of Applied Sciences (Germany)
Vitali Czymmek, West Coast Univ. of Applied Sciences (Germany)
Tim Holtorf, West Coast Univ. of Applied Sciences (Germany)
Stephan Hussmann, West Coast Univ. of Applied Sciences (Germany)

Published in SPIE Proceedings Vol. 10335:
Digital Optical Technologies 2017
Bernard C. Kress; Peter Schelkens, Editor(s)

© SPIE.