dc.contributor.author: Chen, Annie I.
dc.contributor.author: Ozdaglar, Asuman E.
dc.date.accessioned: 2014-09-30T18:19:17Z
dc.date.available: 2014-09-30T18:19:17Z
dc.date.issued: 2012-10
dc.identifier.isbn: 978-1-4673-4539-2
dc.identifier.isbn: 978-1-4673-4537-8
dc.identifier.isbn: 978-1-4673-4538-5
dc.identifier.uri: http://hdl.handle.net/1721.1/90490
dc.description.abstract: We present a distributed proximal-gradient method for optimizing the average of convex functions, each of which is the private local objective of an agent in a network with time-varying topology. The local objectives have distinct differentiable components, but they share a common nondifferentiable component, which has a favorable structure suitable for effective computation of the proximal operator. In our method, each agent iteratively updates its estimate of the global minimum by optimizing its local objective function and exchanging estimates with others via communication in the network. Using Nesterov-type acceleration techniques and multiple communication steps per iteration, we show that this method converges at the rate 1/k (where k is the number of communication rounds between the agents), which is faster than the convergence rate of the existing distributed methods for solving this problem. The superior convergence rate of our method is also verified by numerical experiments.
dc.description.sponsorship: National Science Foundation (U.S.) (Career Grant DMI-0545910)
dc.description.sponsorship: United States. Air Force Office of Scientific Research. Multidisciplinary University Research Initiative (FA9550-09-1-0538)
dc.language.iso: en_US
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.isversionof: http://dx.doi.org/10.1109/Allerton.2012.6483273
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: MIT web domain
dc.title: A fast distributed proximal-gradient method
dc.type: Article
dc.identifier.citation: Chen, Annie I., and Asuman Ozdaglar. “A Fast Distributed Proximal-Gradient Method.” 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton) (October 2012).
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.department: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
dc.contributor.mitauthor: Chen, Annie I.
dc.contributor.mitauthor: Ozdaglar, Asuman E.
dc.relation.journal: Proceedings of the 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dspace.orderedauthors: Chen, Annie I.; Ozdaglar, Asuman
dc.identifier.orcid: https://orcid.org/0000-0002-1827-1285
dc.identifier.orcid: https://orcid.org/0000-0001-8415-8953
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete
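
The abstract above describes an iteration in which each agent takes a gradient step on its own differentiable component, applies the proximal operator of the shared nondifferentiable component, performs multiple communication (consensus) rounds per iteration, and uses Nesterov-type extrapolation. The Python sketch below is only an illustrative serial simulation of such a scheme under assumed choices (an l1-regularized least-squares problem, a fixed ring-topology mixing matrix, and an arbitrary step size); it is not the authors' implementation or their time-varying-network setting.

```python
# Illustrative (assumed) sketch of a distributed proximal-gradient iteration with
# Nesterov-type acceleration, simulated serially for N agents.
# Example problem: minimize (1/N) * sum_i [ 0.5*||A_i x - b_i||^2 ] + lam*||x||_1
# The l1 term plays the role of the shared nondifferentiable component; its prox is soft-thresholding.
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 10                       # number of agents, dimension of the decision variable
A = [rng.standard_normal((20, d)) for _ in range(N)]
b = [rng.standard_normal(20) for _ in range(N)]
lam = 0.1                          # weight of the shared l1 term (assumed)
alpha = 0.01                       # step size (assumed; should respect the local gradients' Lipschitz constants)
consensus_rounds = 3               # multiple communication steps per iteration

# Doubly stochastic mixing matrix for a ring topology (a stand-in for the network).
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros((N, d))               # each agent's current estimate of the global minimizer
y = x.copy()                       # extrapolated (accelerated) sequence

for k in range(1, 201):
    # 1) Multiple consensus rounds: agents exchange and average their estimates.
    z = y.copy()
    for _ in range(consensus_rounds):
        z = W @ z
    # 2) Local gradient step on the differentiable component, then the shared prox.
    x_new = np.empty_like(x)
    for i in range(N):
        grad = A[i].T @ (A[i] @ z[i] - b[i])
        x_new[i] = soft_threshold(z[i] - alpha * grad, alpha * lam)
    # 3) Nesterov-type extrapolation.
    beta = (k - 1) / (k + 2)
    y = x_new + beta * (x_new - x)
    x = x_new

print("disagreement across agents:", np.linalg.norm(x - x.mean(axis=0)))
```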

