
Proceedings Paper

Implementation of neural network hardware based on a floating point operation in an FPGA
Author(s): Jeong-Seob Kim; Seul Jung

Paper Abstract

This paper presents a hardware design and implementation of a radial basis function (RBF) neural network (NN) in a hardware description language. Because of its nonlinear characteristics, the network is difficult to implement on a system with integer-only arithmetic. Realizing nonlinear functions such as sigmoid and exponential functions requires floating-point operations. The exponential function is designed around the 32-bit single-precision floating-point format. In addition, the back-propagation algorithm is implemented in hardware to update the network weights. Most operations are performed in a floating-point arithmetic unit and executed sequentially according to an instruction sequence stored in ROM. The NN is implemented and tested on an Altera Cyclone II FPGA (EP2C70F672C8) for nonlinear classification tasks.
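For context, here is a minimal C sketch of the kind of single-precision exponential the abstract alludes to: range reduction x = k*ln(2) + r followed by a short polynomial for exp(r), with the final 2^k scaling applied directly to the 8-bit exponent field of the 32-bit word. That last step is what requires the floating-point format rather than integer arithmetic. This is an illustration under stated assumptions, not the authors' HDL design; the names exp32 and sigmoid are the sketch's own.

    /* Illustrative software model of a single-precision exponential unit. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static float exp32(float x)
    {
        const float ln2     = 0.69314718f;
        const float inv_ln2 = 1.44269504f;

        /* Range reduction: k = round(x / ln2), r = x - k*ln2, |r| <= ln2/2 */
        int k   = (int)(x * inv_ln2 + (x >= 0.0f ? 0.5f : -0.5f));
        float r = x - (float)k * ln2;

        /* Degree-4 Taylor polynomial for exp(r), adequate over |r| <= ln2/2 */
        float p = 1.0f + r * (1.0f + r * (0.5f +
                  r * (1.0f / 6.0f + r * (1.0f / 24.0f))));

        /* Scale by 2^k: add k to the biased exponent (bits 23..30) of the
           32-bit word. Overflow/underflow of the exponent is not handled
           in this sketch. */
        uint32_t bits;
        memcpy(&bits, &p, sizeof bits);
        bits += (uint32_t)k << 23;
        memcpy(&p, &bits, sizeof p);
        return p;
    }

    /* Sigmoid activation built on the exponential, as mentioned in the abstract */
    static float sigmoid(float x)
    {
        return 1.0f / (1.0f + exp32(-x));
    }

    int main(void)
    {
        printf("exp32(1.0)   = %f (expect ~2.718282)\n", exp32(1.0f));
        printf("sigmoid(0.0) = %f (expect 0.5)\n", sigmoid(0.0f));
        return 0;
    }

A hardware realization would pipeline the same three stages (reduce, approximate, rescale); the exponent-field addition in the last stage is a plain integer add, which is why the design centers on the 32-bit format itself.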

Paper Details

Date Published: 9 January 2008
PDF: 6 pages
Proc. SPIE 6794, ICMIT 2007: Mechatronics, MEMS, and Smart Materials, 679451 (9 January 2008); doi: 10.1117/12.784122
Author Affiliations:
Jeong-Seob Kim, Chungnam National Univ. (South Korea)
Seul Jung, Chungnam National Univ. (South Korea)


Published in SPIE Proceedings Vol. 6794:
ICMIT 2007: Mechatronics, MEMS, and Smart Materials
Minoru Sasaki; Gi Sang Choi; Zushu Li; Ryojun Ikeura; Hyungki Kim; Fangzheng Xue, Editor(s)

© SPIE.