Show simple item record

dc.contributor.author: Liao, Qianli
dc.contributor.author: Poggio, Tomaso
dc.date.accessioned: 2016-04-14T16:44:39Z
dc.date.available: 2016-04-14T16:44:39Z
dc.date.issued: 2016-04-12
dc.identifier.uri: http://hdl.handle.net/1721.1/102238
dc.description.abstract: We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs) and the primate visual cortex. We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with weight sharing among the layers. A direct implementation of such an RNN, although having orders of magnitude fewer parameters, achieves performance similar to the corresponding ResNet. We propose 1) a generalization of both RNN and ResNet architectures and 2) the conjecture that a class of moderately deep RNNs is a biologically-plausible model of the ventral stream in visual cortex. We demonstrate the effectiveness of the architectures by testing them on the CIFAR-10 dataset.
dc.description.sponsorship: This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
dc.language.iso: en_US
dc.publisher: Center for Brains, Minds and Machines (CBMM), arXiv
dc.relation.ispartofseries: CBMM Memo Series;047
dc.rights: Attribution-NonCommercial-ShareAlike 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/us/
dc.subject: Residual Networks (ResNet)
dc.subject: Recurrent Neural Networks (RNNs)
dc.subject: primate visual cortex
dc.subject: CIFAR-10
dc.title: Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex
dc.type: Technical Report
dc.type: Working Paper
dc.type: Other
dc.identifier.citation: arXiv:1604.03640v1
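
The abstract's central observation — that a very deep ResNet whose layers share weights computes the same function as a shallow RNN unrolled in time — can be sketched as follows. This is a minimal illustration, not the authors' code; the block `f`, the weight matrix `W`, and the depth are arbitrary assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) * 0.1  # hypothetical shared weights

def f(h, W):
    # One residual/recurrent block (an assumed toy transformation).
    return np.tanh(h @ W)

def deep_resnet_shared(x, W, depth):
    # "depth" stacked residual blocks, all reusing the same W.
    h = x
    for _ in range(depth):
        h = h + f(h, W)
    return h

def shallow_rnn(x, W, steps):
    # The identical update h_{t+1} = h_t + f(h_t; W), now read as a
    # recurrence over time steps rather than a stack of layers.
    h = x
    for _ in range(steps):
        h = h + f(h, W)
    return h

x = rng.standard_normal(8)
# Unrolling the RNN for k steps reproduces the depth-k shared-weight ResNet.
assert np.allclose(deep_resnet_shared(x, W, 5), shallow_rnn(x, W, 5))
```

Note the parameter saving the abstract refers to: the shared-weight network stores one `W` regardless of depth, whereas an unshared ResNet of depth k would store k such matrices.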

