dc.contributor.author | Liao, Qianli | |
dc.contributor.author | Poggio, Tomaso | |
dc.date.accessioned | 2016-04-14T16:44:39Z | |
dc.date.available | 2016-04-14T16:44:39Z | |
dc.date.issued | 2016-04-12 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/102238 | |
dc.description.abstract | We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs), and the primate visual cortex. We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with weight sharing among the layers. A direct implementation of such an RNN, although having orders of magnitude fewer parameters, leads to performance similar to that of the corresponding ResNet. We propose 1) a generalization of both RNN and ResNet architectures and 2) the conjecture that a class of moderately deep RNNs is a biologically plausible model of the ventral stream in visual cortex. We demonstrate the effectiveness of the architectures by testing them on the CIFAR-10 dataset. | en_US
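A minimal sketch of the equivalence stated in the abstract, namely that unrolling one residual block with shared weights for T steps behaves like a very deep ResNet with T tied layers. The layer sizes, the single-convolution block, and the class name are assumptions for illustration, not the exact architecture from the memo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedResNet(nn.Module):
    """Deep residual stack whose blocks all share one set of weights.

    Reusing the same transformation f at every step gives the update
    h_{t+1} = h_t + f(h_t), i.e. a shallow RNN unrolled in time, which
    is the weight-shared deep ResNet described in the abstract.
    (Channel count, depth, and single conv per block are illustrative
    assumptions, not the memo's exact configuration.)
    """
    def __init__(self, channels=16, depth=10):
        super().__init__()
        self.depth = depth
        # One residual transformation, shared across all layers / time steps.
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        h = x
        for _ in range(self.depth):         # unrolled recurrence over "time"
            h = h + F.relu(self.conv(h))    # identity shortcut + shared weights
        return h

# Usage: parameter count does not grow with depth, unlike an ordinary ResNet.
net = SharedResNet(channels=16, depth=10)
out = net(torch.randn(1, 16, 32, 32))       # e.g. a CIFAR-10-sized feature map
print(out.shape, sum(p.numel() for p in net.parameters()))
```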
dc.description.sponsorship | This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. | en_US
dc.language.iso | en_US | en_US |
dc.publisher | Center for Brains, Minds and Machines (CBMM), arXiv | en_US |
dc.relation.ispartofseries | CBMM Memo Series;047 | |
dc.rights | Attribution-NonCommercial-ShareAlike 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/3.0/us/ | * |
dc.subject | Residual Networks (ResNet) | en_US |
dc.subject | Recurrent Neural Networks (RNNs) | en_US |
dc.subject | primate visual cortex | en_US |
dc.subject | CIFAR-10 | en_US |
dc.title | Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex | en_US |
dc.type | Technical Report | en_US |
dc.type | Working Paper | en_US |
dc.type | Other | en_US |
dc.identifier.citation | arXiv:1604.03640v1 | en_US |