
Proceedings Paper

Convergence rates of finite difference stochastic approximation algorithms part I: general sampling
Author(s): Liyi Dai

Paper Abstract

Stochastic optimization is a fundamental problem that finds applications in many areas, including the biological and cognitive sciences. The classical stochastic approximation algorithm for iterative stochastic optimization requires gradient information about the sample objective function, which is typically difficult to obtain in practice. Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence of the Kiefer-Wolfowitz algorithm and the mirror descent algorithm under various updating schemes that use finite differences as gradient approximations. The analysis is carried out under a general framework covering a wide range of updating scenarios. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences.
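To illustrate the kind of scheme the abstract refers to, the following is a minimal one-dimensional sketch of the classical Kiefer-Wolfowitz iteration, in which the gradient is replaced by a central finite difference of noisy function evaluations. The gain sequences (`a / n` and `c / n**0.25`), the test function, and the noise model are illustrative assumptions, not taken from the paper.

```python
import random

def kiefer_wolfowitz(noisy_f, x0, a=1.0, c=1.0, n_iter=2000):
    """Minimize E[noisy_f(x)] using central finite differences (1-D sketch).

    a_n = a/n is the step-size sequence; c_n = c/n**0.25 is the
    difference-interval sequence. These are classical illustrative
    choices, not the schedules analyzed in the paper.
    """
    x = x0
    for n in range(1, n_iter + 1):
        a_n = a / n              # step sizes: sum a_n diverges, a_n -> 0
        c_n = c / n ** 0.25      # difference intervals: c_n -> 0
        # Two-sided finite-difference estimate of the gradient
        grad = (noisy_f(x + c_n) - noisy_f(x - c_n)) / (2 * c_n)
        x -= a_n * grad
    return x

# Hypothetical example: minimize E[(x - 3)^2 + noise]; the minimizer is x = 3.
random.seed(0)
est = kiefer_wolfowitz(lambda x: (x - 3) ** 2 + random.gauss(0, 0.1), x0=0.0)
```

The paper's point is that how these finite differences are implemented (e.g. how the evaluation points and the sequence `c_n` are chosen) affects the achievable convergence rate; this sketch shows only the baseline recursion.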

Paper Details

Date Published: 19 May 2016
PDF: 11 pages
Proc. SPIE 9871, Sensing and Analysis Technologies for Biomedical and Cognitive Applications 2016, 98710L (19 May 2016); doi: 10.1117/12.2228250
Author Affiliations:
Liyi Dai, U.S. Army Research Office (United States)

Published in SPIE Proceedings Vol. 9871:
Sensing and Analysis Technologies for Biomedical and Cognitive Applications 2016
Liyi Dai; Yufeng Zheng; Henry Chu; Anke D. Meyer-Bäse, Editor(s)

© SPIE.