
Proceedings Paper

High-speed VMEbus-based analog neurocomputing architecture for image classification
Author(s): Mua D. Tran; Tuan A. Duong; Raoul Tawel; Taher Daud; Anilkumar P. Thakoor

Paper Abstract

To fully exploit the real-time computational capabilities of neural networks (NN) as applied to image processing applications, a high-performance VMEbus-based analog neurocomputing architecture (VMENA) is developed. The inherent parallelism of an analog VLSI NN embodiment enables a fully parallel, and hence high-speed, high-throughput, hardware implementation of NN architectures. The VMEbus interface is specifically chosen to overcome the limited bandwidth of the PC host computer's Industry Standard Architecture (ISA) bus. The NN board is built around cascadable VLSI NN chips (32 × 32 synapse chips and 32 × 32 neuron/synapse composite chips) for a total of 64 neurons and over 8K synapses. Under software control, the system architecture can be flexibly reconfigured from feedback to feedforward and vice versa; once a configuration is selected, the NN topology (i.e., the number of neurons per input, hidden, and output layer, and the number of layers) can be carved out from the set of neuron and synapse resources. An efficient hardware-in-the-loop cascade backpropagation (CBP) learning algorithm is implemented on the hardware. This supervised learning algorithm allows the network architecture to dynamically evolve by adding hidden neurons while modulating their synaptic weights using standard gradient-descent backpropagation. As a demonstration, the NN hardware system is applied to a computationally intensive map-data classification problem. Training sets ranging in size from 50 to 2500 pixels are used to train the network, and the best result for the hardware-in-the-loop learning is found to be comparable to the best result of the software NN simulation. Once trained, the VMENA subsystem is capable of processing approximately 75,000 feedforward passes per second, over a twofold computational throughput improvement relative to the ISA-bus-based neural network architecture.
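The cascade backpropagation idea described above can be sketched in software: start with a minimal network, and whenever more capacity is needed, add a hidden neuron and continue gradient-descent training. The sketch below is a simplified illustration only, not the authors' CBP algorithm or their hardware-in-the-loop procedure; the XOR task, the single-hidden-layer growth scheme, and all names (`GrowingNet`, `add_neuron`, `train_epoch`) are assumptions made for demonstration.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy XOR training set (a stand-in for the paper's map-data classification task)
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

class GrowingNet:
    """Single-hidden-layer net grown one neuron at a time (cascade-style sketch)."""
    def __init__(self, n_in):
        self.n_in = n_in
        self.hidden = []                          # per-neuron weights: [bias, w1..wn]
        self.w_out = [random.uniform(-1, 1)]      # output bias only, until neurons exist

    def add_neuron(self):
        # Grow the architecture: new hidden unit with random input weights,
        # plus a new output weight for its activation.
        self.hidden.append([random.uniform(-1, 1) for _ in range(self.n_in + 1)])
        self.w_out.append(random.uniform(-1, 1))

    def forward(self, x):
        h = [sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
             for w in self.hidden]
        y = sigmoid(self.w_out[0] + sum(wo, ) if False else
                    sigmoid(self.w_out[0] + sum(wo * hi for wo, hi in zip(self.w_out[1:], h))))
        return h, y

    def train_epoch(self, lr=0.5):
        """One pass of standard gradient-descent backpropagation; returns MSE."""
        err = 0.0
        for x, t in DATA:
            h, y = self.forward(x)
            err += (y - t) ** 2
            d_y = (y - t) * y * (1 - y)                       # output delta
            # Hidden deltas computed with the *old* output weights
            d_h = [d_y * self.w_out[j + 1] * hj * (1 - hj) for j, hj in enumerate(h)]
            # Update output weights
            self.w_out[0] -= lr * d_y
            for j, hj in enumerate(h):
                self.w_out[j + 1] -= lr * d_y * hj
            # Update hidden weights
            for j, dj in enumerate(d_h):
                self.hidden[j][0] -= lr * dj
                for i, xi in enumerate(x):
                    self.hidden[j][i + 1] -= lr * dj * xi
        return err / len(DATA)

net = GrowingNet(n_in=2)
mse0 = net.train_epoch()
# Cascade-style growth: add hidden units one at a time, training after each addition
for _ in range(3):
    net.add_neuron()
    for _ in range(2000):
        mse = net.train_epoch()
print(round(mse, 4))
```

In the paper's system the forward pass runs on the analog VLSI chips while the host computes the weight updates; this pure-software sketch only mirrors the learning loop's structure, growing the network until the error stops improving.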

Paper Details

Date Published: 6 April 1995
PDF: 13 pages
Proc. SPIE 2492, Applications and Science of Artificial Neural Networks, (6 April 1995); doi: 10.1117/12.205135
Author Affiliations:
Mua D. Tran, Jet Propulsion Lab. (United States)
Tuan A. Duong, Jet Propulsion Lab. (United States)
Raoul Tawel, Jet Propulsion Lab. (United States)
Taher Daud, Jet Propulsion Lab. (United States)
Anilkumar P. Thakoor, Jet Propulsion Lab. (United States)


Published in SPIE Proceedings Vol. 2492:
Applications and Science of Artificial Neural Networks
Steven K. Rogers; Dennis W. Ruck, Editors

© SPIE.