Difference between memory and prediction in linear recurrent networks
Author(s)
Marzen, Sarah E.
Download
PhysRevE.96.032308.pdf (244.7 KB)
Publisher Policy
Terms of use
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
Recurrent networks are trained to memorize their input better, often in the hopes that such training will increase the ability of the network to predict. We show that networks designed to memorize input can be arbitrarily bad at prediction. We also find, for several types of inputs, that one-node networks optimized for prediction are nearly at upper bounds on predictive capacity given by Wiener filters and are roughly equivalent in performance to randomly generated five-node networks. Our results suggest that maximizing memory capacity leads to very different networks than maximizing predictive capacity and that optimizing recurrent weights can decrease reservoir size by half an order of magnitude.
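To make the memory/prediction distinction concrete, here is a minimal Python sketch (not the paper's code): it drives a small random linear reservoir with an AR(1) input and compares total memory capacity (linearly reconstructing past inputs from the network state) against predictive capacity (reconstructing future inputs). All parameter values (reservoir size, spectral radius, input statistics, delay range) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): a 5-node linear reservoir
N, T, washout = 5, 20_000, 200

# Input: an AR(1) process, so future values are partially predictable
phi = 0.9
u = np.zeros(T)
for t in range(1, T):
    u[t] = phi * u[t - 1] + rng.normal()

# Random linear reservoir, rescaled to spectral radius < 1 for stability
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
w_in = rng.normal(size=N)

# Drive the reservoir: x[t] = W x[t-1] + w_in u[t-1]
X = np.zeros((T, N))
for t in range(1, T):
    X[t] = W @ X[t - 1] + w_in * u[t - 1]
X, u = X[washout:], u[washout:]  # discard transient

def capacity(delay):
    """Squared correlation between u[t + delay] and the best linear
    readout of x[t]; delay < 0 probes memory, delay > 0 prediction."""
    if delay >= 0:
        target, states = u[delay:], X[: len(u) - delay]
    else:
        target, states = u[:delay], X[-delay:]
    w, *_ = np.linalg.lstsq(states, target, rcond=None)
    return np.corrcoef(states @ w, target)[0, 1] ** 2

memory_capacity = sum(capacity(-k) for k in range(1, 21))
predictive_capacity = sum(capacity(k) for k in range(1, 21))
print(f"memory capacity     ~ {memory_capacity:.2f}")
print(f"predictive capacity ~ {predictive_capacity:.2f}")
```

With a strongly autocorrelated input like this AR(1) process, even a small reservoir reconstructs recent past inputs almost perfectly, while predictive capacity is limited by how predictable the input itself is; this gap between the two capacities is the distinction the abstract draws.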
Date issued
2017-09
Department
Massachusetts Institute of Technology. Department of Physics
Journal
Physical Review E
Publisher
American Physical Society
Citation
Marzen, Sarah E. "Difference between memory and prediction in linear recurrent networks." Physical Review E 96, 3 (September 2017): 032308. © 2017 American Physical Society.
Version: Final published version
ISSN
2470-0045 (print)
2470-0053 (online)