Convergence of Stochastic Proximal Gradient Algorithm
Author(s)
Rosasco, Lorenzo; Villa, Silvia; Vũ, Bằng C
Download
245_2019_9617_ReferencePDF.pdf (492.8 KB)
Publisher Policy
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
We study the extension of the proximal gradient algorithm where only a stochastic gradient estimate is available and a relaxation step is allowed. We establish convergence rates for function values in the convex case, as well as almost sure convergence and convergence rates for the iterates under further convexity assumptions. Our analysis avoids averaging the iterates and error summability assumptions, which might not be satisfied in applications, e.g., in machine learning. Our proof technique extends classical ideas from the analysis of deterministic proximal gradient algorithms.
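To make the iteration concrete, the sketch below is a minimal illustration, not the paper's reference implementation: it applies the stochastic proximal gradient step with relaxation to an ℓ1-regularized least-squares problem, using soft-thresholding as the proximity operator of the ℓ1 norm. The step-size schedule, relaxation parameter, and problem setup are assumptions made for the example.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_proximal_gradient(A, b, reg=0.1, n_iters=5000, seed=0):
    """Relaxed stochastic proximal gradient for
        min_x (1/2) E_i[(a_i^T x - b_i)^2] + reg * ||x||_1,
    with i drawn uniformly from the rows of A.
    Step sizes and relaxation parameter are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for k in range(1, n_iters + 1):
        gamma = 1.0 / np.sqrt(k)   # diminishing step size (assumed schedule)
        lam = 0.5                  # relaxation parameter in (0, 1]
        i = rng.integers(n)        # one sampled row gives an unbiased gradient estimate
        grad_est = (A[i] @ x - b[i]) * A[i]
        y = soft_threshold(x - gamma * grad_est, gamma * reg)  # proximal step
        x = x + lam * (y - x)      # relaxation step
    return x
```

With full relaxation (lam = 1) this reduces to the plain stochastic proximal gradient iteration; the relaxed update averages the current iterate with the proximal point, which is the extra degree of freedom the abstract refers to.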
Date issued
2019-10-15
Department
Center for Brains, Minds, and Machines
Publisher
Springer US