Deep vs. shallow networks: An approximation theory perspective
Author(s)
Mhaskar, HN; Poggio, T
Abstract
© 2016 World Scientific Publishing Company. The paper briefly reviews several recent results on hierarchical architectures for learning from examples that may formally explain the conditions under which deep convolutional neural networks perform much better in function approximation problems than shallow, one-hidden-layer architectures. The paper announces new results for a non-smooth activation function, the ReLU function, used in present-day neural networks, as well as for Gaussian networks. We propose a new definition of relative dimension to encapsulate different notions of sparsity of a function class that can be exploited by deep networks, but not by shallow ones, to drastically reduce the complexity required for approximation and learning.
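As a purely illustrative sketch (not code from the paper), the snippet below contrasts the two kinds of architecture the abstract compares: a shallow one-hidden-layer ReLU network that sees all inputs at once, and a deep network whose connectivity mirrors a binary-tree compositional target f(x1,...,x4) = h3(h1(x1,x2), h2(x3,x4)). All function names, widths, and the target function are assumptions chosen for demonstration.

```python
# Minimal sketch, assuming a 4-variable binary-tree compositional target.
# Illustrative only; weights are random and untrained.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def shallow_net(x, W, b, c):
    """One hidden ReLU layer over all 4 inputs -> scalar output."""
    return relu(x @ W + b) @ c

def deep_tree_net(x, params):
    """Two-level binary-tree network: each node sees only 2 inputs."""
    (W1, b1, c1), (W2, b2, c2), (W3, b3, c3) = params
    u = relu(x[:, :2] @ W1 + b1) @ c1   # h1(x1, x2)
    v = relu(x[:, 2:] @ W2 + b2) @ c2   # h2(x3, x4)
    uv = np.stack([u, v], axis=1)
    return relu(uv @ W3 + b3) @ c3      # h3(u, v)

def target(x):
    """Assumed compositional target h3(h1(x1,x2), h2(x3,x4))."""
    u = np.sin(x[:, 0] * x[:, 1])
    v = np.cos(x[:, 2] + x[:, 3])
    return u * v

n_hidden = 32
x = rng.uniform(-1, 1, size=(5, 4))

# Shallow: one hidden layer connected to all 4 inputs.
W = rng.standard_normal((4, n_hidden))
b = rng.standard_normal(n_hidden)
c = rng.standard_normal(n_hidden)

# Deep: three small 2-input nodes arranged as a binary tree.
def node(in_dim, width):
    return (rng.standard_normal((in_dim, width)),
            rng.standard_normal(width),
            rng.standard_normal(width))

params = (node(2, n_hidden), node(2, n_hidden), node(2, n_hidden))

print("target     :", target(x))
print("shallow net:", shallow_net(x, W, b, c))
print("deep net   :", deep_tree_net(x, params))
```

The structural point this sketch is meant to suggest is the one the abstract makes: when the target is compositional, a deep network can match the target's graph with small, low-dimensional constituent nodes, whereas the shallow network must approximate the full high-dimensional map in a single layer.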
Date issued
2016
Department
Center for Brains, Minds, and Machines; McGovern Institute for Brain Research at MIT
Journal
Analysis and Applications
Publisher
World Scientific Pub Co Pte Lt
Citation
Mhaskar, H. N., and T. Poggio. "Deep vs. Shallow Networks: An Approximation Theory Perspective." Analysis and Applications 14, no. 6 (2016): 829–848.
Version: Author's final manuscript