Recent Submissions

  • Implicit dynamic regularization in deep networks 

    Poggio, Tomaso; Liao, Qianli (Center for Brains, Minds and Machines (CBMM), 2020-08-17)
    Square loss has been observed to perform well in classification tasks, at least as well as cross-entropy. However, a theoretical justification is lacking. Here we develop a theoretical analysis for the square loss that also ...
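
    The comparison the abstract refers to can be sketched numerically. A minimal illustration (the logits and labels below are made up for the example, not taken from the memo): evaluate both the square loss and the cross-entropy loss on the same softmax output against a one-hot classification target.

    ```python
    import numpy as np

    # Hypothetical 3-class example: raw network outputs and a one-hot label.
    logits = np.array([2.0, 0.5, -1.0])
    target = np.array([1.0, 0.0, 0.0])  # true class is class 0

    probs = np.exp(logits) / np.exp(logits).sum()  # softmax probabilities

    # Square loss: sum of squared differences between probabilities and target.
    square_loss = float(np.sum((probs - target) ** 2))

    # Cross-entropy: negative log-probability of the true class.
    cross_entropy = float(-np.sum(target * np.log(probs)))

    print(square_loss, cross_entropy)
    ```

    Both losses shrink as the correct-class logit grows, which is why either can drive a classifier toward the right label; the memo's contribution is the theoretical analysis of why square loss works so well.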
  • On the Capability of Neural Networks to Generalize to Unseen Category-Pose Combinations 

    Madan, Spandan; Henry, Timothy; Dozier, Jamell; Ho, Helen; Bhandari, Nishchal; et al. (Center for Brains, Minds and Machines (CBMM), 2020-07-17)
    Recognizing an object’s category and pose lies at the heart of visual understanding. Recent works suggest that deep neural networks (DNNs) often fail to generalize to category-pose combinations not seen during training. ...
  • Loss landscape: SGD can have a better view than GD 

    Poggio, Tomaso; Cooper, Yaim (Center for Brains, Minds and Machines (CBMM), 2020-07-01)
    Consider a loss function L = Σᵢ₌₁ⁿ lᵢ² with lᵢ = f(xᵢ) − yᵢ, where f(x) is a deep feedforward network with R layers, no bias terms, and scalar output. Assume the network is overparametrized, that is, d ≫ n, where d is the ...
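
    The setup in the abstract can be sketched directly. A minimal sketch under assumed sizes (the widths, depth, and activation below are illustrative choices, not taken from the memo): a bias-free feedforward net f with R layers and scalar output, a square loss L = Σᵢ lᵢ² with lᵢ = f(xᵢ) − yᵢ, and parameter count d much larger than the sample count n.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed sizes for illustration: n samples, R layers, hidden width 20.
    n, input_dim, width, R = 5, 3, 20, 3
    X = rng.standard_normal((n, input_dim))
    y = rng.standard_normal(n)

    # Weight matrices only -- no bias terms, matching the memo's setup.
    shapes = [(input_dim, width)] + [(width, width)] * (R - 2) + [(width, 1)]
    Ws = [rng.standard_normal(s) / np.sqrt(s[0]) for s in shapes]

    def f(x):
        """Deep feedforward net: ReLU hidden layers, scalar output."""
        h = x
        for W in Ws[:-1]:
            h = np.maximum(h @ W, 0.0)
        return (h @ Ws[-1]).squeeze(-1)

    residuals = f(X) - y               # l_i = f(x_i) - y_i
    L = float(np.sum(residuals ** 2))  # L = sum_i l_i^2

    d = sum(W.size for W in Ws)        # total parameter count
    print(L, d, n)                     # overparametrized: d >> n
    ```

    With these sizes d = 480 parameters against n = 5 samples, so the d ≫ n regime the abstract assumes holds by construction.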
