Proceedings Paper

Neural networks: different problems require different learning rate adaptive methods

Paper Abstract

In a previous study, a new adaptive method (AM) was developed to adjust the learning rate in artificial neural networks: the generalized no-decrease adaptive method (GNDAM). The GNDAM differs fundamentally from traditional AMs. Instead of using the derivative sign of a given weight to adjust its learning rate, it relies on a trial-and-error heuristic in which global learning rates are adjusted according to the error rates produced by two identical networks trained with different learning rates. The GNDAM was developed to solve a particular task: detecting the orientation of an image defined by texture (the texture task). This task also differs fundamentally from traditional benchmarks in that its data set is infinite: each pattern is a template used to generate stimuli that the network learns to classify. In the previous study, the GNDAM outperformed standard backpropagation on this task. The present study compares this new AM to other traditional AMs on the texture task and on other benchmark tasks. The results show that some AMs work well on some tasks while others work better on others; however, no single method achieved good performance across all tasks.
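The trial-and-error heuristic summarized above lends itself to a short illustration. The following Python sketch is not the paper's GNDAM; it is a hypothetical toy version under assumptions of our own (the network architecture, the starting rate lr = 0.1, the trial ratio factor = 1.5, and the keep-the-better-copy rule are all illustrative). Two identical copies of a network are trained for one epoch with different global learning rates, and the copy, together with its rate, that produces the lower error is kept.

import numpy as np

# Hypothetical sketch of a trial-and-error global learning-rate heuristic:
# two identical networks are trained in parallel with different global
# learning rates, and the rate that yields the lower error is kept.
# This does not reproduce the paper's GNDAM; all rules below are assumptions.

rng = np.random.default_rng(0)

# Toy regression task: y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X)

def init_net():
    """One-hidden-layer MLP with tanh units."""
    return {
        "W1": rng.normal(0, 0.5, (1, 16)), "b1": np.zeros(16),
        "W2": rng.normal(0, 0.5, (16, 1)), "b2": np.zeros(1),
    }

def forward(net, X):
    h = np.tanh(X @ net["W1"] + net["b1"])
    return h, h @ net["W2"] + net["b2"]

def epoch(net, lr):
    """One full-batch gradient step in place; returns post-step MSE."""
    h, out = forward(net, X)
    err = out - y
    # Backpropagation through the two layers.
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ net["W2"].T) * (1 - h**2)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    for k, g in zip(("W1", "b1", "W2", "b2"), (dW1, db1, dW2, db2)):
        net[k] -= lr * g
    _, out = forward(net, X)
    return float(np.mean((out - y) ** 2))

net = init_net()
lr, factor = 0.1, 1.5   # assumed starting rate and trial ratio

for step in range(50):
    # Train two identical copies with different global learning rates.
    a = {k: v.copy() for k, v in net.items()}
    b = {k: v.copy() for k, v in net.items()}
    err_a = epoch(a, lr)
    err_b = epoch(b, lr * factor)
    # Keep the copy (and the rate) that produced the lower error.
    if err_b < err_a:
        net, lr = b, lr * factor
    else:
        net = a
    if step % 10 == 0:
        print(f"step {step:3d}  lr {lr:.4f}  mse {min(err_a, err_b):.5f}")

In this sketch the global rate can only grow or stay fixed, a loose nod to the "no-decrease" name; the paper's actual update rule is not reproduced here.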

Paper Details

Date Published: 28 May 2004
PDF: 12 pages
Proc. SPIE 5298, Image Processing: Algorithms and Systems III, (28 May 2004); doi: 10.1117/12.527094
Remy Allard, Univ. de Montreal (Canada)
Jocelyn Faubert, Univ. de Montreal (Canada)


Published in SPIE Proceedings Vol. 5298:
Image Processing: Algorithms and Systems III
Edward R. Dougherty; Jaakko T. Astola; Karen O. Egiazarian, Editor(s)
