An analysis of training and generalization errors in shallow and deep networks
Author(s)
Mhaskar, Hrushikesh; Poggio, Tomaso
Abstract
An open problem around deep networks is the apparent absence of over-fitting despite large over-parametrization, which allows perfect fitting of the training data. In this paper, we explain this phenomenon in the case where each unit evaluates a trigonometric polynomial. It is well understood in approximation theory that approximation by trigonometric polynomials is a “role model” for many other approximation processes, and it has inspired theoretical constructions in the context of approximation by neural and RBF networks as well. We argue that the maximum loss functional is necessary to measure the generalization error. We give estimates of exactly how many parameters ensure both zero training error and good generalization error, and of how much error to expect at which test data. An interesting feature of our new method is that the variance in the training data is no longer an insurmountable lower bound on the generalization error.
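A minimal illustrative sketch (not the paper's construction; the degree, sample count, and target function below are arbitrary choices): an over-parametrized trigonometric polynomial fit exactly interpolates a small training set, giving zero training error, while the generalization error is then measured with the maximum (sup-norm) loss over a dense test grid.

import numpy as np

def trig_features(x, degree):
    # Design matrix with columns [1, cos(kx), sin(kx)] for k = 1..degree
    cols = [np.ones_like(x)]
    for k in range(1, degree + 1):
        cols.append(np.cos(k * x))
        cols.append(np.sin(k * x))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
target = lambda x: np.exp(np.sin(x))            # hypothetical smooth target
x_train = rng.uniform(0.0, 2 * np.pi, size=20)  # 20 training points
y_train = target(x_train)

degree = 40                                      # 81 coefficients >> 20 samples
A = trig_features(x_train, degree)
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)   # minimum-norm interpolant

x_test = np.linspace(0.0, 2 * np.pi, 2000)
train_err = np.max(np.abs(A @ coef - y_train))        # essentially zero: perfect fit
test_err = np.max(np.abs(trig_features(x_test, degree) @ coef - target(x_test)))
print(f"max training error: {train_err:.2e}   max test error: {test_err:.2e}")

The maximum of the pointwise error over the test grid plays the role of the maximum loss functional discussed in the abstract; a mean-squared test loss would hide the worst-case behavior that the paper's estimates address.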
Date issued
2018-02-20
Publisher
Center for Brains, Minds and Machines (CBMM), arXiv.org
Citation
arXiv:1802.06266
Series/Report no.
CBMM Memo Series;076
Keywords
Deep learning, generalization error, interpolatory approximation