Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review
Author(s)
Mhaskar, Hrushikesh; Rosasco, Lorenzo; Miranda, Brando; Liao, Qianli; Poggio, Tomaso A
Terms of use
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
The paper reviews and extends an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represents an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems, and conjectures.
Date issued
2017-03
Department
Center for Brains, Minds and Machines at MIT; Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences; McGovern Institute for Brain Research at MIT
Journal
International Journal of Automation and Computing
Publisher
Institute of Automation, Chinese Academy of Sciences
Citation
Poggio, Tomaso, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, and Qianli Liao. “Why and When Can Deep-but Not Shallow-Networks Avoid the Curse of Dimensionality: A Review.” International Journal of Automation and Computing (March 14, 2017).
Version: Author's final manuscript
ISSN
1476-8186
1751-8520