From Associative Memories to Deep Networks
(Center for Brains, Minds and Machines (CBMM), 2021-01-12)
About fifty years ago, holography was proposed as a model of associative memory. Associative memories with similar properties were soon after implemented as simple networks of threshold neurons by Willshaw and Longuet-Higgins. ...
Dreaming with ARC
(Center for Brains, Minds and Machines (CBMM), 2020-11-23)
Current machine learning algorithms are highly specialized to the task they are meant to perform, e.g. playing chess, picking up objects, or recognizing objects. How can we extend this to a system that could solve a ...
Implicit dynamic regularization in deep networks
(Center for Brains, Minds and Machines (CBMM), 2020-08-17)
The square loss has been observed to perform well in classification tasks, at least as well as cross-entropy. However, a theoretical justification is lacking. Here we develop a theoretical analysis for the square loss that also ...
Cross-validation Stability of Deep Networks
(Center for Brains, Minds and Machines (CBMM), 2021-02-09)
Recent theoretical results show that gradient descent on deep neural networks under exponential loss functions locally maximizes classification margin, which is equivalent to minimizing the norm of the weight matrices under ...
The Effects of Image Distribution and Task on Adversarial Robustness
(Center for Brains, Minds and Machines (CBMM), 2021-02-18)
In this paper, we propose an adaptation to the area under the curve (AUC) metric to measure the adversarial robustness of a model over a particular ε-interval [ε_0, ε_1] (interval of adversarial perturbation strengths) ...