
Proceedings Paper

Performance aspects of mapping neural networks onto a massively parallel SIMD computer
Author(s): Andreas Zell; Michael C. Vogt; Niels Mache; Markus Huttel

Paper Abstract

In this paper we present and compare three different massively parallel implementations of multilayer feedforward neural networks on a MasPar MP-1216, a parallel SIMD computer with 16,384 processors. For multilayer feedforward networks we have obtained sustained rates of up to 348 MCPS (million connections per second) and 129 MCUPS (million connection updates per second) with backpropagation, a high mark for general-purpose SIMD computers. After a brief introduction to SNNS, the paper first focuses on the problems of mapping neural networks to parallel hardware. Different aspects of parallelism are presented. Two combinations of unit and training-pattern parallelism were implemented, as well as link and training-pattern parallelism. We describe the implementation problems encountered in obtaining high propagation rates on a SIMD machine, and problems with the resulting learning algorithms in general.
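The training-pattern parallelism mentioned in the abstract can be illustrated with a short sketch (not taken from the paper; the function and array names are hypothetical): each training pattern is assigned to one processing element, and on a SIMD machine all patterns step through the same instruction stream in lockstep. A batched matrix product models this behavior, propagating every pattern through a layer simultaneously.

```python
import numpy as np

def forward_layer(X, W, b):
    """Propagate a batch of patterns through one fully connected layer.

    Each row of X is one training pattern; the single matrix product
    X @ W evaluates all patterns at once, mimicking SIMD lockstep
    execution across processing elements. Logistic activation is used
    here purely as an example.
    """
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 8))   # 16 patterns processed "in parallel"
W = rng.standard_normal((8, 4))    # weights of an 8-input, 4-unit layer
b = np.zeros(4)

Y = forward_layer(X, W, b)
print(Y.shape)  # one output row per training pattern: (16, 4)
```

In unit or link parallelism, by contrast, the processors would be distributed over the neurons or weights of a single pattern's forward pass rather than over the training set.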

Paper Details

Date Published: 2 September 1993
PDF: 11 pages
Proc. SPIE 1965, Applications of Artificial Neural Networks IV, (2 September 1993); doi: 10.1117/12.152537
Author Affiliations:
Andreas Zell, Univ. Stuttgart (Germany)
Michael C. Vogt, Univ. Stuttgart (Germany)
Niels Mache, Univ. Stuttgart (Germany)
Markus Huttel, Univ. Stuttgart (Germany)


Published in SPIE Proceedings Vol. 1965:
Applications of Artificial Neural Networks IV
Steven K. Rogers, Editor(s)

© SPIE. Terms of Use