
Proceedings Paper
Feedforward neural nets and one-dimensional representation
Paper Abstract
Feedforward nets can be trained to represent any continuous function, and training is equivalent to solving a nonlinear optimization problem. Unfortunately, it frequently leads to an error function whose Hessian matrix is effectively singular at the solution. Traditional quadratic-based optimization algorithms do not converge superlinearly on functions with a singular Hessian, but results on univariate functions show that, even so, they are more efficient and reliable than backpropagation. A feedforward net is used to represent a superposition of its own sigmoid activation function. The results identify some conditions under which the Hessian of the error function is effectively singular.
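One such condition can be reproduced in a few lines. The sketch below (an illustration with assumed details, not the authors' experiment: the target function, network size, parameter values, and the finite-difference Hessian are all choices made here) builds a one-dimensional net y(x) = Σᵢ vᵢ σ(wᵢx + bᵢ), places it at an exact solution in which one hidden unit is redundant (its output weight is zero), and inspects the eigenvalues of the Hessian of the sum-of-squares error at that point.

```python
# Minimal sketch (not the authors' code): a one-hidden-layer sigmoid net
# fitted to a superposition of its own activation function, with one
# redundant hidden unit, produces an effectively singular error Hessian.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def net(params, x, h):
    """One-dimensional net: y(x) = sum_i v_i * sigmoid(w_i * x + b_i)."""
    w, b, v = params[:h], params[h:2*h], params[2*h:]
    return sigmoid(np.outer(x, w) + b) @ v

# Target: a single sigmoid, exactly representable by a net with h = 2 units.
x = np.linspace(-3.0, 3.0, 50)
target = 1.5 * sigmoid(2.0 * x - 1.0)

h = 2
def error(params):
    r = net(params, x, h) - target  # residuals
    return 0.5 * r @ r              # sum-of-squares error

# Exact solution: unit 1 matches the target, unit 2 is redundant (v2 = 0).
# Parameter layout: [w1, w2, b1, b2, v1, v2].
sol = np.array([2.0, 0.7, -1.0, 0.3, 1.5, 0.0])

def hessian(f, p, eps=1e-4):
    """Central-difference approximation to the Hessian of f at p."""
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pp = p.copy(); pp[i] += eps; pp[j] += eps
            pm = p.copy(); pm[i] += eps; pm[j] -= eps
            mp = p.copy(); mp[i] -= eps; mp[j] += eps
            mm = p.copy(); mm[i] -= eps; mm[j] -= eps
            H[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4 * eps**2)
    return H

eigs = np.linalg.eigvalsh(hessian(error, sol))
print("Hessian eigenvalues at the solution:", eigs)
# The directions belonging to w2 and b2 carry zero curvature: with v2 = 0
# and zero residuals, the Hessian reduces to J^T J, whose w2 and b2 columns
# vanish, so two eigenvalues are (numerically) zero.
```

At such a point a quadratic model of the error carries no curvature information along the redundant unit's parameters, which is why Newton-type methods lose their superlinear rate there.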
Paper Details
Date Published: 1 July 1992
PDF: 10 pages
Proc. SPIE 1710, Science of Artificial Neural Networks, (1 July 1992); doi: 10.1117/12.140100
Published in SPIE Proceedings Vol. 1710:
Science of Artificial Neural Networks
Dennis W. Ruck, Editor(s)
Author Affiliations:
Laurence C. W. Dixon, Hatfield Polytechnic (United Kingdom)
David Mills, Hatfield Polytechnic (United Kingdom)
© SPIE.
