Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex
Author(s)
Liao, Qianli; Poggio, Tomaso
Terms of use
Metadata
Abstract
We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs) and the primate visual cortex. We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with weight sharing among the layers. A direct implementation of such an RNN, although having orders of magnitude fewer parameters, achieves performance similar to that of the corresponding ResNet. We propose 1) a generalization of both RNN and ResNet architectures and 2) the conjecture that a class of moderately deep RNNs is a biologically plausible model of the ventral stream in visual cortex. We demonstrate the effectiveness of the architectures by testing them on the CIFAR-10 dataset.
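The equivalence stated in the abstract can be illustrated numerically: a deep ResNet whose residual blocks all share one set of weights computes exactly the same function as a shallow RNN whose state update is "identity plus the shared transform", unrolled for as many time steps as the ResNet has blocks. The sketch below (an illustrative construction, not code from the paper; the single-ReLU residual branch and the dimensions are assumptions) checks this for a 10-block / 10-step case.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 8))  # one weight matrix shared by every block / step
T = 10                                  # depth of the ResNet = unrolling length of the RNN

def f(x):
    # shared residual branch: a single ReLU layer (illustrative choice)
    return np.maximum(W @ x, 0.0)

x0 = rng.normal(size=8)  # input sample

# Very deep ResNet with weight sharing: T identical residual blocks
# applied in sequence, h <- h + f(h).
h_resnet = x0.copy()
for _ in range(T):
    h_resnet = h_resnet + f(h_resnet)

# Shallow RNN: one cell with transition h_{t+1} = h_t + f(h_t),
# unrolled for T time steps from the same initial state.
h_rnn = x0.copy()
for t in range(T):
    h_rnn = h_rnn + f(h_rnn)

# Block t of the ResNet and time step t of the RNN apply the identical map,
# so the final states coincide exactly.
assert np.allclose(h_resnet, h_rnn)
```

Because the recurrence reuses one weight matrix, the RNN needs only a single block's worth of parameters regardless of T, which is the "orders of magnitude fewer parameters" point made in the abstract.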
Date issued
2016-04-12
Publisher
Center for Brains, Minds and Machines (CBMM), arXiv
Citation
arXiv:1604.03640v1
Series/Report no.
CBMM Memo Series;047
Keywords
Residual Networks (ResNet), Recurrent Neural Networks (RNNs), primate visual cortex, CIFAR-10
Collections