Classical generalization bounds are surprisingly tight for Deep Networks
(Center for Brains, Minds and Machines (CBMM), 2018-07-11)
Deep networks are usually trained and tested in a regime in which the training classification error is not a good predictor of the test error. Thus the consensus has been that generalization, defined as convergence of the ...
Predicting Actions Before They Occur
(Center for Brains, Minds and Machines (CBMM), 2015-10-26)
Humans are experts at reading others’ actions in social contexts. They efficiently process others’ movements in real-time to predict intended goals. Here we designed a two-person reaching task to investigate real-time body ...
Theory of Deep Learning III: explaining the non-overfitting puzzle
THIS MEMO IS REPLACED BY CBMM MEMO 90. A main puzzle of deep networks revolves around the absence of overfitting despite overparametrization and despite the large capacity demonstrated by zero training error on randomly ...