
dc.contributor.author: Cheney, Nicholas
dc.contributor.author: Schrimpf, Martin
dc.contributor.author: Kreiman, Gabriel
dc.date.accessioned: 2017-04-07T15:25:52Z
dc.date.available: 2017-04-07T15:25:52Z
dc.date.issued: 2017-04-03
dc.identifier.uri: http://hdl.handle.net/1721.1/107935
dc.description.abstract: Deep convolutional neural networks are generally regarded as robust function approximators. So far, this intuition is based on perturbations to external stimuli such as the images to be classified. Here we explore the robustness of convolutional neural networks to perturbations to the internal weights and architecture of the network itself. We show that convolutional networks are surprisingly robust to a number of internal perturbations in the higher convolutional layers, but the bottom convolutional layers are much more fragile. For instance, AlexNet shows less than a 30% decrease in classification performance when randomly removing over 70% of weight connections in the top convolutional or dense layers, but performance is almost at chance with the same perturbation in the first convolutional layer. Finally, we suggest further investigations that could continue to inform the robustness of convolutional networks to internal perturbations.
dc.description.sponsorship: This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
dc.language.iso: en_US
dc.publisher: Center for Brains, Minds and Machines (CBMM), arXiv
dc.relation.ispartofseries: CBMM Memo Series;065
dc.rights: Attribution-NonCommercial-ShareAlike 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/us/
dc.title: On the Robustness of Convolutional Neural Networks to Internal Architecture and Weight Perturbations
dc.type: Technical Report
dc.type: Working Paper
dc.type: Other
dc.identifier.citation: arXiv:1703.08245
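
The abstract above describes measuring classification accuracy after randomly removing weight connections from individual layers of AlexNet. A minimal sketch of that kind of perturbation is given below, assuming PyTorch and the torchvision AlexNet; the layer choice, ablation fraction, and API details are illustrative assumptions, not the memo's exact protocol.

# Minimal sketch (assumed setup): randomly zeroing a fraction of the weight
# connections in one layer of AlexNet, in the spirit of the perturbation
# described in the abstract. Details are illustrative, not the authors' code.
import torch
from torchvision.models import alexnet

def ablate_weights(layer: torch.nn.Module, fraction: float = 0.7) -> None:
    """Zero out a random subset of a layer's weights in place."""
    with torch.no_grad():
        # Keep each connection with probability (1 - fraction).
        mask = (torch.rand_like(layer.weight) > fraction).float()
        layer.weight.mul_(mask)

# Pretrained AlexNet from torchvision (weights argument available in
# torchvision >= 0.13).
model = alexnet(weights="DEFAULT")
model.eval()

# Example: remove ~70% of connections in the first convolutional layer,
# which the memo reports to be far more fragile than the higher layers.
ablate_weights(model.features[0], fraction=0.7)

Evaluating the perturbed model on a held-out image set before and after the call to ablate_weights would then give the kind of layer-by-layer robustness comparison the abstract summarizes.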