
Proceedings Paper

Adaptive learning rate method based on Nesterov accelerated gradient
Author(s): Zhenxing Xu; Ping Yang; Bing Xu; Heping Li

Paper Abstract

A stochastic gradient descent method with an adaptive learning rate is proposed in this paper, based on the Nesterov accelerated gradient (NAG) optimization algorithm. First, a second-derivative approximation of the cost function is computed; the final update direction is then corrected by the adaptive learning rate, and the convergence of the method is analyzed theoretically. The method requires no manual tuning of the learning rate, is robust to noisy gradient information and to the choice of hyper-parameters, and features high computational efficiency and small memory overhead. Finally, the method is compared with other stochastic gradient descent methods on the MNIST digit classification task; the experimental results show that the proposed method (Adan) converges faster and outperforms the other stochastic gradient descent optimization methods.
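
The abstract does not give the paper's exact update rule, so the following is only a minimal illustrative sketch in Python, assuming a combination of the NAG look-ahead gradient with an Adam/RMSProp-style squared-gradient average that adapts the learning rate per parameter; the names (adaptive_nag_step, mu, beta2) are hypothetical and not from the paper.

import numpy as np

def adaptive_nag_step(params, grad_fn, velocity, sq_avg, t,
                      lr=1e-2, mu=0.9, beta2=0.999, eps=1e-8):
    # Evaluate the gradient at the look-ahead point, the defining idea of NAG.
    grad = grad_fn(params + mu * velocity)
    # Exponential average of squared gradients: the adaptive per-parameter
    # learning-rate scale, removing the need for manual learning-rate tuning.
    sq_avg = beta2 * sq_avg + (1.0 - beta2) * grad ** 2
    sq_hat = sq_avg / (1.0 - beta2 ** t)  # bias correction for early steps
    # Nesterov momentum step with the adaptively scaled gradient.
    velocity = mu * velocity - lr * grad / (np.sqrt(sq_hat) + eps)
    return params + velocity, velocity, sq_avg

# Example: minimize f(x) = ||x||^2, whose gradient is 2x.
x = np.array([3.0, -4.0])
vel, sq = np.zeros_like(x), np.zeros_like(x)
for t in range(1, 1001):
    x, vel, sq = adaptive_nag_step(x, lambda p: 2.0 * p, vel, sq, t)
print(x)  # approaches the optimum [0, 0]

Note that the only state kept per parameter is one velocity and one squared-gradient average, consistent with the abstract's claim of small memory overhead.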

Paper Details

Date Published: 24 October 2017
PDF: 6 pages
Proc. SPIE 10462, AOPC 2017: Optical Sensing and Imaging Technology and Applications, 104622R (24 October 2017); doi: 10.1117/12.2284928
Author Affiliations
Zhenxing Xu, Institute of Optics and Electronics (China)
Univ. of Electronic Science and Technology of China (China)
Univ. of Chinese Academy of Sciences (China)
Ping Yang, Institute of Optics and Electronics (China)
Univ. of Chinese Academy of Sciences (China)
Bing Xu, Institute of Optics and Electronics (China)
Univ. of Chinese Academy of Sciences (China)
Heping Li, Univ. of Electronic Science and Technology of China (China)


Published in SPIE Proceedings Vol. 10462:
AOPC 2017: Optical Sensing and Imaging Technology and Applications
Yadong Jiang; Haimei Gong; Weibiao Chen; Jin Li, Editor(s)
