Show simple item record

dc.contributor.author: Yun, Chulhee
dc.contributor.author: Sra, Suvrit
dc.contributor.author: Jadbabaie-Moghadam, Ali
dc.date.accessioned: 2022-01-06T20:15:51Z
dc.date.available: 2021-11-04T15:57:17Z
dc.date.available: 2022-01-06T20:15:51Z
dc.date.issued: 2019-12
dc.identifier.uri: https://hdl.handle.net/1721.1/137354.2
dc.description.abstract: © 2019 Neural Information Processing Systems Foundation. All rights reserved. Recent results in the literature indicate that a residual network (ResNet) composed of a single residual block outperforms linear predictors, in the sense that all local minima in its optimization landscape are at least as good as the best linear predictor. However, these results are limited to a single residual block (i.e., shallow ResNets), instead of the deep ResNets composed of multiple residual blocks. We take a step towards extending this result to deep ResNets. We start with two motivating examples. First, we show that there exist datasets for which all local minima of a fully-connected ReLU network are no better than the best linear predictor, whereas a ResNet has strictly better local minima. Second, we show that even at the global minimum, the representation obtained from the residual block outputs of a 2-block ResNet does not necessarily improve monotonically over subsequent blocks, which highlights a fundamental difficulty in analyzing deep ResNets. Our main theorem on deep ResNets shows that, under simple geometric conditions, any critical point in the optimization landscape is either (i) at least as good as the best linear predictor; or (ii) the Hessian at this critical point has a strictly negative eigenvalue. Notably, our theorem shows that a chain of multiple skip-connections can improve the optimization landscape, whereas existing results study direct skip-connections to the last hidden layer or output layer. Finally, we complement our results by showing benign properties of the "near-identity regions" of deep ResNets, showing depth-independent upper bounds for the risk attained at critical points as well as the Rademacher complexity.
dc.language.iso: en
dc.relation.isversionof: https://papers.nips.cc/paper/2019/hash/661c1c090ff5831a647202397c61d73c-Abstract.html
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Neural Information Processing Systems (NIPS)
dc.title: Are deep ResNets provably better than linear predictors?
dc.type: Article
dc.identifier.citation: 2019. "Are deep ResNets provably better than linear predictors?." Advances in Neural Information Processing Systems, 32.
dc.contributor.department: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
dc.contributor.department: Massachusetts Institute of Technology. Department of Civil and Environmental Engineering
dc.relation.journal: Advances in Neural Information Processing Systems
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2021-03-26T14:19:09Z
dspace.orderedauthors: Yun, C; Sra, S; Jadbabaie, A
dspace.date.submission: 2021-03-26T14:19:10Z
mit.journal.volume: 32
mit.license: PUBLISHER_POLICY
mit.metadata.status: Ready for Final Review

