Overparameterized neural networks implement associative memory
Author(s)
Radhakrishnan, Adityanarayanan; Belkin, Mikhail; Uhler, Caroline
Terms of use
Publisher Policy: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
Identifying computational mechanisms for memorization and retrieval of data is a long-standing problem at the intersection of machine learning and neuroscience. Our main finding is that standard overparameterized deep neural networks trained using standard optimization methods implement such a mechanism for real-valued data. We provide empirical evidence that 1) overparameterized autoencoders store training samples as attractors, and thus iterating the learned map leads to sample recovery, and that 2) the same mechanism allows for encoding sequences of examples and serves as an even more efficient mechanism for memory than autoencoding. Theoretically, we prove that when trained on a single example, autoencoders store the example as an attractor. Lastly, by treating a sequence encoder as a composition of maps, we prove that sequence encoding provides a more efficient mechanism for memory than autoencoding.
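The retrieval mechanism described above can be illustrated with a minimal sketch (assuming PyTorch; the data dimension, network width, and training budget below are illustrative choices, not the paper's experimental setup): an overparameterized autoencoder is trained to interpolate a single example, and the learned map is then iterated from a corrupted input, which should converge toward the stored example if it acts as an attractor.

```python
# Illustrative sketch (not the authors' code): train an overparameterized
# autoencoder on one example, then iterate the learned map from a perturbed
# input to observe attractor-like retrieval of the stored example.
import torch
import torch.nn as nn

torch.manual_seed(0)

d = 16                        # data dimension (hypothetical)
x = torch.rand(1, d)          # the single training example

# Wide network: far more parameters than constraints (overparameterized).
net = nn.Sequential(
    nn.Linear(d, 512), nn.Tanh(),
    nn.Linear(512, 512), nn.Tanh(),
    nn.Linear(512, d),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(5000):         # train to (near) interpolation: net(x) ≈ x
    opt.zero_grad()
    loss = ((net(x) - x) ** 2).mean()
    loss.backward()
    opt.step()

# Retrieval: start from a corrupted version of x and iterate the learned map.
with torch.no_grad():
    z = x + 0.3 * torch.randn_like(x)
    for _ in range(50):
        z = net(z)
    print("distance to stored example:", (z - x).norm().item())
```

If the example is stored as an attractor, the printed distance should be small relative to the initial perturbation; sequence encoding, as described in the abstract, composes such maps so that iterating the composition steps through a stored sequence of examples.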
Date issued
2020
Department
Massachusetts Institute of Technology. Laboratory for Information and Decision Systems; Massachusetts Institute of Technology. Institute for Data, Systems, and Society
Journal
Proceedings of the National Academy of Sciences of the United States of America
Publisher
Proceedings of the National Academy of Sciences