Proceedings Paper

Jeffreys' prior for layered neural networks
Author(s): Yoichi Motomura

Paper Abstract

In this paper, Jeffreys' prior for neural networks is discussed in the framework of Bayesian statistics. To obtain good generalization performance, regularization methods that minimize a cost function together with a regularization term are commonly used. In Bayesian statistics, the regularization term can be derived naturally from the prior distribution over the parameters. Jeffreys' prior is a typical non-informative, objective prior. In the case of neural networks, however, it is not easy to express Jeffreys' prior as a simple function of the parameters. In this paper, a numerical analysis of Jeffreys' prior for neural networks is given. An approximation of Jeffreys' prior is obtained from a parameter transformation chosen so that Jeffreys' prior becomes a simple function of the parameters. Some learning techniques are also discussed as applications of these results.
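As context for the abstract, here is a minimal sketch of the standard definitions involved; these are textbook formulas, not notation drawn from the paper itself. Jeffreys' prior is built from the Fisher information matrix of the model p(x | \theta):

\pi(\theta) \;\propto\; \sqrt{\det I(\theta)}, \qquad
I_{ij}(\theta) \;=\; \mathbb{E}_{x \sim p(x \mid \theta)}\!\left[\frac{\partial \log p(x \mid \theta)}{\partial \theta_i}\,\frac{\partial \log p(x \mid \theta)}{\partial \theta_j}\right].

The link between priors and regularization terms comes from maximum a posteriori (MAP) estimation: maximizing the log-posterior is the same as minimizing a cost plus a penalty,

\hat{\theta}_{\mathrm{MAP}} \;=\; \arg\max_{\theta}\,\bigl[\log p(D \mid \theta) + \log \pi(\theta)\bigr] \;=\; \arg\min_{\theta}\,\bigl[E(\theta) + R(\theta)\bigr],

where E(\theta) = -\log p(D \mid \theta) is the cost function and R(\theta) = -\log \pi(\theta) is the regularization term. For a multilayer network, \det I(\theta) depends on the network's Jacobians and generally has no simple closed form, which is the difficulty the paper addresses with a parameter transformation.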

Paper Details

Date Published: 6 April 1995
PDF: 10 pages
Proc. SPIE 2492, Applications and Science of Artificial Neural Networks, (6 April 1995); doi: 10.1117/12.205194
Yoichi Motomura, Electrotechnical Lab. (Japan)


Published in SPIE Proceedings Vol. 2492:
Applications and Science of Artificial Neural Networks
Steven K. Rogers; Dennis W. Ruck, Editors
