Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review
Author(s): Mhaskar, Hrushikesh; Rosasco, Lorenzo; Miranda, Brando; Liao, Qianli; Poggio, Tomaso A.
Abstract: The paper reviews and extends an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represents an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems and conjectures.
Department: Center for Brains, Minds and Machines at MIT; Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences; McGovern Institute for Brain Research at MIT
Journal: International Journal of Automation and Computing
Publisher: Institute of Automation, Chinese Academy of Sciences
Citation: Poggio, Tomaso, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, and Qianli Liao. "Why and When Can Deep-but Not Shallow-Networks Avoid the Curse of Dimensionality: A Review." International Journal of Automation and Computing (March 14, 2017).
Version: Author's final manuscript