Show simple item record

dc.contributor.author: Poggio, Tomaso
dc.contributor.author: Mhaskar, Hrushikesh
dc.contributor.author: Rosasco, Lorenzo
dc.contributor.author: Miranda, Brando
dc.contributor.author: Liao, Qianli
dc.date.accessioned: 2016-11-28T17:38:30Z
dc.date.available: 2016-11-28T17:38:30Z
dc.date.issued: 2016-11-23
dc.identifier.uri: http://hdl.handle.net/1721.1/105443
dc.description.abstract: [formerly titled "Why and When Can Deep – but Not Shallow – Networks Avoid the Curse of Dimensionality: a Review"] The paper reviews and extends an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represents an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems and conjectures.
dc.description.sponsorship: This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
dc.language.iso: en_US
dc.publisher: Center for Brains, Minds and Machines (CBMM), arXiv
dc.relation.ispartofseries: CBMM Memo Series; 058
dc.rights: Attribution-NonCommercial-ShareAlike 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/us/
dc.subject: Deep Learning
dc.subject: deep convolutional networks
dc.title: Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality?
dc.type: Technical Report
dc.type: Working Paper
dc.type: Other
dc.identifier.citation: arXiv:1611.00740v5

