
dc.contributor.author       Mhaskar, H.N.
dc.contributor.author       Poggio, Tomaso
dc.date.accessioned         2019-05-31T15:32:43Z
dc.date.available           2019-05-31T15:32:43Z
dc.date.issued              2019-05-30
dc.identifier.uri           https://hdl.handle.net/1721.1/121183
dc.description.abstract     This paper is motivated by an open problem concerning deep networks, namely, the apparent absence of overfitting despite large over-parametrization, which allows perfect fitting of the training data. We analyze this phenomenon for regression problems in which each unit evaluates a periodic activation function. We argue that the minimal expected value of the square loss is an inappropriate measure of the generalization error in the approximation of compositional functions if one is to take full advantage of the compositional structure. Instead, we measure the generalization error in the sense of maximum loss and, in some cases, as a pointwise error. We give estimates of exactly how many parameters ensure both zero training error and a good generalization error. We prove that a solution of a regularization problem is guaranteed to yield a good training error as well as a good generalization error, and we estimate how much error to expect at which test data.  en_US
dc.description.sponsorship  This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.  en_US
dc.publisher                Center for Brains, Minds and Machines (CBMM), arXiv.org  en_US
dc.relation.ispartofseries  CBMM Memo Series;098
dc.title                    An analysis of training and generalization errors in shallow and deep networks  en_US
dc.type                     Technical Report  en_US
dc.type                     Working Paper  en_US
dc.type                     Other  en_US
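
The abstract above contrasts the usual expected square loss with a maximum (uniform) notion of generalization error, and mentions a pointwise error at individual test points. The following is only an illustrative sketch of that distinction in generic notation; the symbols (a target function f^*, a trained approximant f, an input domain X with distribution \mu, and a test point x_0) are placeholders and not the memo's exact definitions:

% Illustrative sketch only: generic notation, not taken from the memo.
% f^* = target function, f = trained network, X = input domain, mu = input distribution, x_0 = a fixed test point.
\[
  \underbrace{\mathbb{E}_{x \sim \mu}\!\big[(f(x) - f^*(x))^2\big]}_{\text{expected square loss}}
  \qquad\text{vs.}\qquad
  \underbrace{\sup_{x \in X} \big|f(x) - f^*(x)\big|}_{\text{maximum (uniform) loss}}
  \qquad\text{vs.}\qquad
  \underbrace{\big|f(x_0) - f^*(x_0)\big|}_{\text{pointwise error at } x_0}
\]

Under this reading, a small expected square loss can coexist with large errors on low-probability regions of X, whereas a bound in the uniform or pointwise sense controls the error at every (or a specific) test input.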

