
dc.contributor.author: Jadbabaie, Ali
dc.contributor.author: Yun, Chulhee
dc.contributor.author: Sra, Suvrit
dc.date.accessioned: 2021-11-03T12:57:28Z
dc.date.available: 2021-11-03T12:57:28Z
dc.date.issued: 2019
dc.identifier.uri: https://hdl.handle.net/1721.1/137171
dc.description.abstract: © 7th International Conference on Learning Representations, ICLR 2019. All Rights Reserved. We provide a theoretical algorithm for checking local optimality and escaping saddles at nondifferentiable points of empirical risks of two-layer ReLU networks. Our algorithm receives any parameter value and returns: local minimum, second-order stationary point, or a strict descent direction. The presence of M data points on the nondifferentiability of the ReLU divides the parameter space into at most 2^M regions, which makes analysis difficult. By exploiting polyhedral geometry, we reduce the total computation down to one convex quadratic program (QP) for each hidden node, O(M) (in)equality tests, and one (or a few) nonconvex QP. For the last QP, we show that our specific problem can be solved efficiently, in spite of nonconvexity. In the benign case, we solve one equality-constrained QP, and we prove that projected gradient descent solves it exponentially fast. In the bad case, we have to solve a few more inequality-constrained QPs, but we prove that the time complexity is exponential only in the number of inequality constraints. Our experiments show that either the benign case or the bad case with very few inequality constraints occurs, implying that our algorithm is efficient in most cases.
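The benign case described in the abstract amounts to running projected gradient descent on an equality-constrained QP. The sketch below is an illustrative stand-alone example of that generic technique, not the paper's exact algorithm; the problem data (`A`, `b`, `C`, `d`) are made-up, and a convex instance is used so the standard convergence behavior applies.

```python
import numpy as np

def project_affine(x, C, d):
    """Euclidean projection onto the affine set {x : C x = d}."""
    # x - C^T (C C^T)^{-1} (C x - d)
    return x - C.T @ np.linalg.solve(C @ C.T, C @ x - d)

def projected_gradient_descent(A, b, C, d, steps=500, lr=0.1):
    """Minimize (1/2) x^T A x + b^T x subject to C x = d via PGD."""
    x = project_affine(np.zeros(len(b)), C, d)  # feasible starting point
    for _ in range(steps):
        grad = A @ x + b                        # gradient of the quadratic
        x = project_affine(x - lr * grad, C, d) # gradient step, then project
    return x

# Small hypothetical convex instance for demonstration.
A = np.array([[2.0, 0.0], [0.0, 4.0]])
b = np.array([-1.0, -2.0])
C = np.array([[1.0, 1.0]])   # single constraint: x1 + x2 = 1
d = np.array([1.0])

x_star = projected_gradient_descent(A, b, C, d)
print(x_star)  # iterate stays feasible throughout
```

Because every iterate is re-projected onto the constraint set, feasibility is maintained exactly; on this convex instance the iterates contract linearly toward the constrained minimizer, mirroring the exponentially fast convergence the abstract claims for the benign case.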
dc.language.iso: en
dc.relation.isversionof: https://openreview.net/forum?id=HylTXn0qYX
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: arXiv
dc.title: Efficiently testing local optimality and escaping saddles for ReLU networks
dc.type: Article
dc.identifier.citation: Jadbabaie, Ali, Yun, Chulhee and Sra, Suvrit. 2019. "Efficiently testing local optimality and escaping saddles for ReLU networks." 7th International Conference on Learning Representations, ICLR 2019.
dc.contributor.department: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
dc.contributor.department: Massachusetts Institute of Technology. Institute for Data, Systems, and Society
dc.relation.journal: 7th International Conference on Learning Representations, ICLR 2019
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2021-04-15T18:15:59Z
dspace.orderedauthors: Yun, C; Sra, S; Jadbabaie, A
dspace.date.submission: 2021-04-15T18:16:00Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Authority Work and Publication Information Needed

