
Proceedings Paper

Feedforward neural nets and one-dimensional representation
Author(s): Laurence C. W. Dixon; David Mills

Paper Abstract

Feedforward nets can be trained to represent any continuous function, and training is equivalent to solving a nonlinear optimization problem. Unfortunately, training frequently leads to an error function whose Hessian matrix is effectively singular at the solution. Traditional quadratic-based optimization algorithms do not converge superlinearly on functions with a singular Hessian, but results on univariate functions show that, even so, they are more efficient and reliable than backpropagation. A feedforward net is used to represent a superposition of its own sigmoid activation function. The results identify some conditions under which the Hessian of the error function is effectively singular.
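
The singularity the abstract refers to can be illustrated with a small numerical sketch (not the paper's code; the network size, target function, and parameter values below are illustrative assumptions): a one-hidden-layer sigmoid net is fitted to a single sigmoid target, and the Hessian of the sum-of-squares error is examined at an exact solution. Because two hidden units over-parameterise a one-sigmoid target, directions in weight space leave the output unchanged and the Hessian is rank-deficient.

```python
# Sketch only: a two-hidden-unit sigmoid net representing a single sigmoid.
# At the exact solution below, the second unit's output weight is zero, so
# its slope and bias do not affect the output and the Hessian of the error
# is effectively singular (near-zero eigenvalues).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training inputs and a target that is itself a single sigmoid.
x = np.linspace(-3.0, 3.0, 41)
t = sigmoid(2.0 * x + 1.0)

def net(w, x):
    # Two hidden sigmoid units: w = [v1, a1, b1, v2, a2, b2].
    v1, a1, b1, v2, a2, b2 = w
    return v1 * sigmoid(a1 * x + b1) + v2 * sigmoid(a2 * x + b2)

def error(w):
    r = net(w, x) - t
    return 0.5 * np.dot(r, r)

# An exact solution: unit 1 reproduces the target, unit 2 is switched off.
w_star = np.array([1.0, 2.0, 1.0, 0.0, 0.5, 0.0])

def hessian(f, w, h=1e-4):
    # Second derivatives by central finite differences.
    n = len(w)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(w + ei + ej) - f(w + ei - ej)
                       - f(w - ei + ej) + f(w - ei - ej)) / (4.0 * h * h)
    return H

eigvals = np.linalg.eigvalsh(hessian(error, w_star))
print("Hessian eigenvalues at the solution:", eigvals)
# Several eigenvalues are numerically zero: with v2 = 0, the parameters
# a2 and b2 have no effect on the output, so the error surface is flat
# along those directions and quadratic models of it are degenerate.
```

This flatness is the mechanism behind the abstract's claim: Newton-type methods lose their superlinear convergence guarantee when the Hessian at the minimizer is singular, although the paper's univariate results indicate they still outperform backpropagation.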

Paper Details

Date Published: 1 July 1992
PDF: 10 pages
Proc. SPIE 1710, Science of Artificial Neural Networks, (1 July 1992); doi: 10.1117/12.140100
Author Affiliations:
Laurence C. W. Dixon, Hatfield Polytechnic (United Kingdom)
David Mills, Hatfield Polytechnic (United Kingdom)


Published in SPIE Proceedings Vol. 1710:
Science of Artificial Neural Networks
Dennis W. Ruck, Editor(s)

© SPIE.