
Proceedings Paper

Improving neural network performance by adapting node nonlinearities
Author(s): Frans M. Coetzee; Virginia L. Stonick

Paper Abstract

It is known that, with an infinite number of hidden-layer nodes, feedforward neural networks can approximate any continuous function with compact support arbitrarily well using very simple node nonlinearities. We investigate whether network architectures can be found that use more complicated node nonlinearities to achieve better approximation with a restricted number of nodes. Two methods are proposed: one based on modifying standard backpropagation networks, and one based on Kolmogorov's theorem. The feasibility of these networks is evaluated by considering their performance when predicting chaotic time series and memorizing the XOR mapping.
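To make the first idea concrete, the following is a minimal sketch (not the paper's own algorithm or code) of a feedforward network trained by backpropagation in which each hidden node's tanh nonlinearity carries adaptive slope and offset parameters that are learned alongside the weights. The XOR mapping from the abstract is used as the test problem; the network size, parameterization g(x) = tanh(a·x + b), learning rate, and initialization are illustrative assumptions only.

```python
# Illustrative sketch only: a tiny feedforward network whose hidden-node
# nonlinearity tanh(a*s + b) has per-node shape parameters (a, b) trained
# by backpropagation together with the weights.  XOR is the test problem.
import numpy as np

rng = np.random.default_rng(0)

# XOR data: 4 patterns, 2 inputs, 1 target each.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

n_hidden = 2
W1 = rng.normal(0, 1, (2, n_hidden))   # input -> hidden weights
W2 = rng.normal(0, 1, (n_hidden, 1))   # hidden -> output weights
b2 = np.zeros(1)                       # output bias
a = np.ones(n_hidden)                  # per-node slope of the nonlinearity
b = np.zeros(n_hidden)                 # per-node offset of the nonlinearity

lr = 0.5
for epoch in range(10000):
    # forward pass
    s = X @ W1                         # pre-activations, shape (4, n_hidden)
    h = np.tanh(a * s + b)             # adaptive node nonlinearity
    y = h @ W2 + b2                    # linear output node
    e = y - T                          # error, shape (4, 1)

    # backward pass for mean squared error
    dy = e / len(X)
    dW2 = h.T @ dy
    db2 = np.sum(dy, axis=0)
    dh = dy @ W2.T
    dz = dh * (1.0 - h ** 2)           # derivative of tanh at a*s + b
    dW1 = X.T @ (dz * a)               # chain rule through the slope a
    da = np.sum(dz * s, axis=0)        # gradient w.r.t. the node slopes
    db = np.sum(dz, axis=0)            # gradient w.r.t. the node offsets

    for p, g in ((W1, dW1), (W2, dW2), (b2, db2), (a, da), (b, db)):
        p -= lr * g                    # in-place gradient-descent step

print(np.round(y, 3))                  # should approach [[0], [1], [1], [0]]
```

In this toy setting the slope and offset parameters let each node reshape its nonlinearity during training, which is the general spirit of adapting node nonlinearities; the paper's actual parameterizations, including the Kolmogorov-theorem-based construction, differ and are described in the full text.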

Paper Details

Date Published: 16 December 1992
PDF: 12 pages
Proc. SPIE 1766, Neural and Stochastic Methods in Image and Signal Processing, (16 December 1992); doi: 10.1117/12.130830
Author Affiliations:
Frans M. Coetzee, Carnegie Mellon Univ. (United States)
Virginia L. Stonick, Carnegie Mellon Univ. (United States)

Published in SPIE Proceedings Vol. 1766:
Neural and Stochastic Methods in Image and Signal Processing
Su-Shing Chen, Editor(s)
