Exponential Concentration in Stochastic Approximation

Law, Kody J. H.; Walton, Neil; Yang, Shangda

Authors

Kody J. H. Law

Neil Walton

Shangda Yang



Abstract

We analyze the behavior of stochastic approximation algorithms where iterates, in expectation, progress towards an objective at each step. When the expected progress is proportional to the step size of the algorithm, we prove exponential concentration bounds. These tail bounds contrast with the asymptotic normality results more frequently associated with stochastic approximation. The methods we develop rely on a proof of geometric ergodicity, extending a result on Markov chains due to Hajek (1982) to stochastic approximation algorithms. We apply our results to several stochastic approximation algorithms, specifically the Projected Stochastic Gradient Descent, Kiefer-Wolfowitz, and Stochastic Frank-Wolfe algorithms. When applicable, our results prove faster O(1/t) and linear convergence rates for Projected Stochastic Gradient Descent with a non-vanishing gradient.
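
To make the setting concrete, the following is a minimal sketch of Projected Stochastic Gradient Descent, one of the algorithms the abstract names. It is not the authors' code: the box constraint set, the 1/t step-size schedule, the synthetic quadratic objective, and the function names (project_box, projected_sgd, noisy_grad) are illustrative assumptions for the example only.

import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^d (illustrative constraint set).
    return np.clip(x, lo, hi)

def projected_sgd(grad_estimate, x0, step=0.1, n_steps=1000, rng=None):
    # Projected SGD: x_{t+1} = Proj( x_t - a_t * g_t ), with noisy gradients g_t
    # and step sizes a_t = step / t (an illustrative choice, not the paper's).
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    for t in range(1, n_steps + 1):
        g = grad_estimate(x, rng)              # stochastic gradient estimate
        x = project_box(x - (step / t) * g)    # gradient step, then projection
    return x

# Example: minimize E[ 0.5 * ||x - Z||^2 ] with Z ~ N(mu, I); the minimizer is mu.
mu = np.array([0.5, -0.3])
noisy_grad = lambda x, rng: x - (mu + rng.standard_normal(2))
print(projected_sgd(noisy_grad, x0=np.zeros(2)))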

Citation

Law, K. J. H., Walton, N., & Yang, S. (online). Exponential Concentration in Stochastic Approximation. Operations Research, https://doi.org/10.1287/opre.2023.0425

Journal Article Type: Article
Acceptance Date: Feb 10, 2025
Online Publication Date: Apr 22, 2025
Deposit Date: May 1, 2025
Publicly Available Date: May 7, 2025
Journal: Operations Research
Print ISSN: 0030-364X
Electronic ISSN: 1526-5463
Publisher: Institute for Operations Research and the Management Sciences
Peer Reviewed: Peer Reviewed
DOI: https://doi.org/10.1287/opre.2023.0425
Keywords: stochastic approximation, projected stochastic gradient descent, concentration bounds
Public URL: https://durham-repository.worktribe.com/output/3805743
