Single units in a deep neural network functionally correspond with neurons in the brain: preliminary results
(Center for Brains, Minds and Machines (CBMM), 2018-11-02)
Deep neural networks have been shown to predict neural responses in higher visual cortex. The mapping from the model to a neuron in the brain occurs through a linear combination of many units in the model, leaving open the ...
Foveation-based Mechanisms Alleviate Adversarial Examples
(Center for Brains, Minds and Machines (CBMM), arXiv, 2016-01-19)
We show that adversarial examples, i.e., the visually imperceptible perturbations that cause Convolutional Neural Networks (CNNs) to fail, can be alleviated with a mechanism based on foveations---applying the CNN in ...
The Language of Fake News: Opening the Black-Box of Deep Learning Based Detectors
(Center for Brains, Minds and Machines (CBMM), 2018-11-01)
The digital information age has generated new outlets for content creators to publish so-called “fake news”, a new form of propaganda that is intentionally designed to mislead the reader. With the widespread effects of the ...
Theory of Deep Learning III: explaining the non-overfitting puzzle
THIS MEMO IS REPLACED BY CBMM MEMO 90. A main puzzle of deep networks revolves around the absence of overfitting despite overparametrization and despite the large capacity demonstrated by zero training error on randomly ...
Do Neural Networks for Segmentation Understand Insideness?
(Center for Brains, Minds and Machines (CBMM), 2020-04-04)
The insideness problem is an image segmentation modality that consists of determining which pixels are inside and which are outside a region. Deep Neural Networks (DNNs) excel in segmentation benchmarks, but it is unclear whether they ...
On the Capability of Neural Networks to Generalize to Unseen Category-Pose Combinations
(Center for Brains, Minds and Machines (CBMM), 2020-07-17)
Recognizing an object’s category and pose lies at the heart of visual understanding. Recent works suggest that deep neural networks (DNNs) often fail to generalize to category-pose combinations not seen during training. ...