
dc.contributor.author	Rosasco, Lorenzo
dc.contributor.author	Villa, Silvia
dc.contributor.author	Vũ, Bằng C
dc.date.accessioned	2021-09-20T17:30:33Z
dc.date.available	2021-09-20T17:30:33Z
dc.date.issued	2019-10-15
dc.identifier.uri	https://hdl.handle.net/1721.1/131842
dc.description.abstract	We study the extension of the proximal gradient algorithm where only a stochastic gradient estimate is available and a relaxation step is allowed. We establish convergence rates for function values in the convex case, as well as almost sure convergence and convergence rates for the iterates under further convexity assumptions. Our analysis avoids averaging the iterates and summability assumptions on the errors, which might not be satisfied in applications, e.g., in machine learning. Our proof technique extends classical ideas from the analysis of deterministic proximal gradient algorithms.	en_US
dc.publisher	Springer US	en_US
dc.relation.isversionof	https://doi.org/10.1007/s00245-019-09617-7	en_US
dc.rights	Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.	en_US
dc.source	Springer US	en_US
dc.title	Convergence of Stochastic Proximal Gradient Algorithm	en_US
dc.type	Article	en_US
dc.contributor.department	Center for Brains, Minds, and Machines
dc.eprint.version	Author's final manuscript	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dc.date.updated	2020-10-28T04:27:59Z
dc.language.rfc3066	en
dc.rights.holder	Springer Science+Business Media, LLC, part of Springer Nature
dspace.embargo.terms	Y
dspace.date.submission	2020-10-28T04:27:59Z
mit.license	PUBLISHER_POLICY
mit.metadata.status	Authority Work and Publication Information Needed
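
The abstract above describes a relaxed stochastic proximal gradient iteration: a proximal gradient step taken with a stochastic gradient estimate, followed by a relaxation (convex combination with the current iterate). The following is a minimal Python sketch of that iteration applied to a lasso problem. The objective, the diminishing step size gamma_k = 1/sqrt(k), and the relaxation parameter lambda = 0.5 are illustrative assumptions, not values from the paper; soft-thresholding is the standard proximal operator of the l1 norm.

```python
# Sketch of the relaxed stochastic proximal gradient iteration on a lasso problem:
#   min_x  (1/2n) ||A x - b||^2 + mu * ||x||_1
# The gradient of the smooth term is estimated from one random row of A.
# Step sizes, relaxation parameter, and problem data are illustrative assumptions.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_grad(A, b, mu, n_iters=10000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for k in range(1, n_iters + 1):
        gamma = 1.0 / np.sqrt(k)              # diminishing step size (assumption)
        lam = 0.5                             # relaxation parameter in (0, 1] (assumption)
        i = rng.integers(n)                   # sample one data point uniformly
        grad_est = (A[i] @ x - b[i]) * A[i]   # unbiased estimate of grad f(x)
        y = soft_threshold(x - gamma * grad_est, gamma * mu)  # proximal step
        x = x + lam * (y - x)                 # relaxed update
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    x_true = np.zeros(50)
    x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = stochastic_prox_grad(A, b, mu=0.1)
    print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```

Note that the relaxed update x + lam * (y - x) interpolates between the current iterate (lam = 0) and the plain stochastic proximal gradient step (lam = 1); the paper's analysis covers iterations of this form without averaging the iterates.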

