
Small ReLU networks are powerful memorizers: A tight analysis of memorization capacity

Author(s)
Yun, Chulhee; Sra, Suvrit; Jadbabaie, Ali
Terms of use
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
© 2019 Neural Information Processing Systems Foundation. All rights reserved. We study finite sample expressivity, i.e., the memorization power of ReLU networks. Recent results require N hidden nodes to memorize/interpolate arbitrary N data points. In contrast, by exploiting depth, we show that 3-layer ReLU networks with Ω(√N) hidden nodes can perfectly memorize most datasets with N points. We also prove that width Θ(√N) is necessary and sufficient for memorizing N data points, proving tight bounds on memorization capacity. The sufficiency result can be extended to deeper networks; we show that an L-layer network with W parameters in the hidden layers can memorize N data points if W = Ω(N). Combined with a recent upper bound O(WL log W) on VC dimension, our construction is nearly tight for any fixed L. Subsequently, we analyze the memorization capacity of residual networks under a general position assumption; we prove results that substantially reduce the known requirement of N hidden nodes. Finally, we study the dynamics of stochastic gradient descent (SGD), and show that when initialized near a memorizing global minimum of the empirical risk, SGD quickly finds a nearby point with much smaller empirical risk.
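
The headline claim is concrete enough to check empirically. Below is a minimal sketch (assuming PyTorch, which the paper does not prescribe) that trains a 3-layer ReLU network, i.e., two hidden layers, with width scaling like √N on N randomly labeled points. The paper proves that memorizing weights exist at this width; plain gradient training is only a heuristic way to search for them, so the final loss should be small but need not be exactly zero.

# Hypothetical illustration, not the paper's explicit construction:
# fit N arbitrary labels with a 3-layer ReLU net of width ~ sqrt(N).
import math
import torch

torch.manual_seed(0)
N, d = 400, 10                        # N data points in d dimensions
width = int(math.sqrt(N))             # Theta(sqrt(N)) hidden nodes per layer

X = torch.randn(N, d)                 # generic random inputs ("most datasets")
y = torch.randn(N, 1)                 # arbitrary real labels to memorize

model = torch.nn.Sequential(          # 3-layer ReLU network (2 hidden layers)
    torch.nn.Linear(d, width), torch.nn.ReLU(),
    torch.nn.Linear(width, width), torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(20000):
    loss = torch.nn.functional.mse_loss(model(X), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final training MSE: {loss.item():.3e}")  # near zero => memorized

Note that the network here has on the order of width² ≈ N hidden-layer parameters, matching the W = Ω(N) sufficiency condition stated in the abstract.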
Date issued
2019
URI
https://hdl.handle.net/1721.1/137480
Department
Massachusetts Institute of Technology. Laboratory for Information and Decision Systems; Massachusetts Institute of Technology. Institute for Data, Systems, and Society
Journal
Advances in Neural Information Processing Systems
Citation
Yun, Chulhee, Sra, Suvrit and Jadbabaie, Ali. 2019. "Small ReLU networks are powerful memorizers: A tight analysis of memorization capacity." Advances in Neural Information Processing Systems, 32.
Version: Final published version

Collections
  • MIT Open Access Articles
