Proceedings Paper

Application of simulated annealing to the backpropagation model improves convergence
Author(s): Charles B. Owen; Adel M. Abunawass

Paper Abstract

The Backpropagation technique for supervised learning of internal representations in multi-layer artificial neural networks is an effective gradient descent approach. However, because it is primarily deterministic, it takes the best path to the nearest minimum, whether global or local. If a local minimum is reached, the network fails to learn or learns a poor approximation of the solution. This paper describes a novel approach to the Backpropagation model based on Simulated Annealing. This modified learning model is designed to provide an effective means of escape from local minima. The system is shown to converge more reliably and much faster than traditional noise-insertion techniques. Due to the characteristics of the cooling schedule, the system also demonstrates a more consistent training profile.
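
For a concrete picture of the general idea, the sketch below adds temperature-scaled Gaussian noise to each backpropagation weight update under a geometric cooling schedule. This is a minimal illustration in the spirit of simulated annealing, not the paper's exact algorithm; the network size, learning rate, noise scale, and cooling parameters are all assumed values chosen for a small XOR example.

# A minimal sketch (not the authors' exact algorithm): plain backpropagation on
# XOR with temperature-scaled Gaussian noise added to every weight update and a
# geometric cooling schedule, in the spirit of simulated annealing. The network
# size, learning rate, noise scale, and cooling parameters are assumed values.
import numpy as np

rng = np.random.default_rng(0)

# XOR training set: 2 inputs -> 1 output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))  # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))  # hidden -> output weights
b2 = np.zeros(1)

lr = 1.0       # learning rate
T = 1.0        # initial "temperature"
alpha = 0.999  # geometric cooling factor: T decays toward 0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(10000):
    # Forward pass through one hidden layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass for squared-error loss with sigmoid activations.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)

    # Gradient step plus annealed noise: noise magnitude is proportional to T,
    # so early updates can jump out of local minima while late updates settle.
    W2 -= lr * (h.T @ d_out) + T * rng.normal(scale=0.01, size=W2.shape)
    b2 -= lr * d_out.sum(axis=0) + T * rng.normal(scale=0.01, size=b2.shape)
    W1 -= lr * (X.T @ d_h) + T * rng.normal(scale=0.01, size=W1.shape)
    b1 -= lr * d_h.sum(axis=0) + T * rng.normal(scale=0.01, size=b1.shape)

    T *= alpha  # cool once per epoch

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]

Unlike traditional noise insertion with a fixed magnitude, the perturbation here decays with the temperature, which is the property the abstract credits for faster convergence and a more consistent training profile.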

Paper Details

Date Published: 19 August 1993
PDF: 8 pages
Proc. SPIE 1966, Science of Artificial Neural Networks II, (19 August 1993); doi: 10.1117/12.152626
Author Affiliations:
Charles B. Owen, Western Illinois Univ. (United States)
Adel M. Abunawass, Western Illinois Univ. (United States)

Published in SPIE Proceedings Vol. 1966:
Science of Artificial Neural Networks II
Dennis W. Ruck, Editor
