Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality?
Author(s)
Poggio, Tomaso; Mhaskar, Hrushikesh; Rosasco, Lorenzo; Miranda, Brando; Liao, Qianli
Abstract
[formerly titled "Why and When Can Deep – but Not Shallow – Networks Avoid the Curse of Dimensionality: a Review"]
The paper reviews and extends an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represents an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems, and conjectures.
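The exponential gap the abstract refers to is easiest to see in the memo's running example of binary-tree compositional functions. The sketch below is a minimal illustration, not code from the paper: `shallow_units` and `deep_units` are hypothetical helpers that evaluate the paper's O(eps^(-n/m)) and O((n-1)·eps^(-2/m)) unit-count bounds (for functions of n variables with smoothness m, approximated to accuracy eps), and `tree_compose` evaluates an assumed binary-tree composition of a single 2-variable constituent function.

```python
import math

# A minimal sketch (not code from the paper) of the memo's headline
# complexity bounds for approximating a function of n variables with
# smoothness m to accuracy eps.

def shallow_units(n: int, m: int, eps: float) -> float:
    """Order of units needed by a shallow (one-hidden-layer) network:
    O(eps**(-n/m)) -- exponential in the dimension n."""
    return eps ** (-n / m)

def deep_units(n: int, m: int, eps: float) -> float:
    """Order of units needed by a deep network matching a binary-tree
    compositional structure whose constituent functions each take 2
    inputs: O((n - 1) * eps**(-2/m)) -- linear in n."""
    return (n - 1) * eps ** (-2 / m)

def tree_compose(h, xs):
    """Evaluate a binary-tree compositional function: pair up values
    and apply the 2-variable constituent function h level by level.
    len(xs) is assumed to be a power of 2."""
    while len(xs) > 1:
        xs = [h(xs[i], xs[i + 1]) for i in range(0, len(xs), 2)]
    return xs[0]

if __name__ == "__main__":
    n, m, eps = 8, 2, 0.1
    print(f"shallow: ~{shallow_units(n, m, eps):,.0f} units")  # ~10,000
    print(f"deep:    ~{deep_units(n, m, eps):,.0f} units")     # ~70

    # Example compositional target: f(x1..x8) built from one
    # (hypothetical) 2-variable constituent function h(a, b).
    f = tree_compose(lambda a, b: math.tanh(a + b), [0.1 * i for i in range(8)])
    print(f"tree value: {f:.4f}")
```

For n = 8, m = 2, eps = 0.1 the bounds give roughly 10,000 units for the shallow network versus roughly 70 for the deep one, which is the exponential-versus-linear contrast in the dimension n that the abstract summarizes.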
Date issued
2016-11-23
Publisher
Center for Brains, Minds and Machines (CBMM), arXiv
Citation
arXiv:1611.00740v5
Series/Report no.
CBMM Memo Series;058
Keywords
deep learning, deep convolutional networks