Now showing items 1-10 of 17
Group Invariant Deep Representations for Image Instance Retrieval
(Center for Brains, Minds and Machines (CBMM), 2016-01-11)
Most image instance retrieval pipelines are based on comparing vectors, known as global image descriptors, between a query image and the database images. Due to their success in large-scale image classification, ...
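The comparison step described in this abstract can be illustrated as nearest-neighbor search over descriptor vectors. This is a toy sketch, not the paper's pipeline: the descriptors are random placeholders, and cosine similarity is one common (assumed) choice of comparison.

```python
import numpy as np

def retrieve(query_desc, db_descs, k=3):
    """Rank database images by cosine similarity of their
    global descriptors to the query descriptor."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q                    # cosine similarity to each database image
    return np.argsort(-sims)[:k]    # indices of the k best matches

# Toy example: 5 random "database" descriptors of dimension 4.
rng = np.random.default_rng(0)
db = rng.normal(size=(5, 4))
query = db[2] + 0.01 * rng.normal(size=4)   # near-duplicate of image 2
print(retrieve(query, db))                   # image 2 should rank first
```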
Foveation-based Mechanisms Alleviate Adversarial Examples
(Center for Brains, Minds and Machines (CBMM), arXiv, 2016-01-19)
We show that adversarial examples, i.e., the visually imperceptible perturbations that cause Convolutional Neural Networks (CNNs) to fail, can be alleviated with a foveation-based mechanism: applying the CNN in ...
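The idea of applying a classifier to image regions ("foveae") rather than the whole image can be sketched as follows. This is a minimal illustration under assumed details, not the paper's mechanism: the crop sampling, crop size, and the stand-in classifier are all hypothetical.

```python
import numpy as np

def foveated_predict(image, classifier, crop_size=24, n_crops=5):
    """Average a classifier's predictions over several random crops
    ("foveae") of the image instead of a single full-image pass.
    `classifier` is any callable mapping an image array to class scores."""
    h, w = image.shape[:2]
    rng = np.random.default_rng(0)
    scores = []
    for _ in range(n_crops):
        y = rng.integers(0, h - crop_size + 1)
        x = rng.integers(0, w - crop_size + 1)
        scores.append(classifier(image[y:y + crop_size, x:x + crop_size]))
    # Averaging over crops can dilute a perturbation tuned to one framing.
    return np.mean(scores, axis=0)

# Stand-in "CNN": mean brightness vs. its complement as two class scores.
fake_cnn = lambda img: np.array([img.mean(), 1.0 - img.mean()])
img = np.full((32, 32), 0.7)
print(foveated_predict(img, fake_cnn))
```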
Learning Real and Boolean Functions: When Is Deep Better Than Shallow
(Center for Brains, Minds and Machines (CBMM), arXiv, 2016-03-08)
We describe computational tasks - especially in vision - that correspond to compositional/hierarchical functions. While the universal approximation property holds both for hierarchical and shallow networks, we prove that ...
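A compositional/hierarchical function in the sense sketched above can be built as a binary tree of low-dimensional constituents. The particular constituent function below is an arbitrary illustrative choice, not one from the paper.

```python
# A compositional function of 8 variables built from 2-ary constituents:
# f(x1,...,x8) = h(h(h(x1,x2), h(x3,x4)), h(h(x5,x6), h(x7,x8)))
def h(a, b):
    return (a + b) ** 2 / 4          # any smooth 2-ary constituent works

def compositional_f(x):
    """Evaluate a binary-tree structured function: each node combines
    only two inputs, so every constituent is low-dimensional even
    though f itself depends on all 8 variables."""
    while len(x) > 1:
        x = [h(x[i], x[i + 1]) for i in range(0, len(x), 2)]
    return x[0]

print(compositional_f([1, 1, 1, 1, 1, 1, 1, 1]))  # h(1,1)=1 at every level
```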
Probing the compositionality of intuitive functions
(Center for Brains, Minds and Machines (CBMM), 2016-05-26)
How do people learn about complex functional structure? Taking inspiration from other areas of cognitive science, we propose that this is accomplished by harnessing compositionality: complex structure is decomposed into ...
Contrastive Analysis with Predictive Power: Typology Driven Estimation of Grammatical Error Distributions in ESL
(Center for Brains, Minds and Machines (CBMM), arXiv, 2016-06-05)
This work examines the impact of cross-linguistic transfer on grammatical errors in English as a Second Language (ESL) texts. Using a computational framework that formalizes the theory of Contrastive Analysis (CA), we demonstrate ...
Where do hypotheses come from?
(Center for Brains, Minds and Machines (CBMM), 2016-10-24)
Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? One notable instance of this discrepancy is that tasks where the candidate hypotheses are explicitly available ...
Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality?
(Center for Brains, Minds and Machines (CBMM), arXiv, 2016-11-23)
[formerly titled "Why and When Can Deep – but Not Shallow – Networks Avoid the Curse of Dimensionality: a Review"]
The paper reviews and extends an emerging body of theoretical results on deep learning including the ...
Fast, invariant representation for human action in the visual system
(Center for Brains, Minds and Machines (CBMM), arXiv, 2016-01-06)
The ability to recognize the actions of others from visual input is essential to humans' daily lives. The neural computations underlying action recognition, however, are still poorly understood. We use magnetoencephalography ...
Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex
(Center for Brains, Minds and Machines (CBMM), arXiv, 2016-04-12)
We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs) and the primate visual cortex. We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with ...
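The equivalence stated in this abstract can be checked numerically: unrolling a residual recurrent update for T steps computes exactly the same function as a depth-T ResNet whose blocks share one set of weights. The update rule below (tanh, a single weight matrix) is a toy assumption, not the paper's architecture.

```python
import numpy as np

def resnet_forward(x, W, depth):
    """Deep residual network whose blocks all share one weight matrix W:
    h <- h + tanh(W h), applied `depth` times."""
    h = x
    for _ in range(depth):
        h = h + np.tanh(W @ h)
    return h

def rnn_forward(x, W, steps):
    """Shallow RNN with a residual state update, run for `steps` time
    steps. The transition is identical to one weight-shared ResNet block."""
    h = x
    for _ in range(steps):
        h = h + np.tanh(W @ h)
    return h

rng = np.random.default_rng(1)
W = 0.1 * rng.normal(size=(4, 4))
x = rng.normal(size=4)
# Unrolling the RNN for T steps gives exactly a depth-T weight-shared ResNet.
print(np.allclose(resnet_forward(x, W, 10), rnn_forward(x, W, 10)))  # True
```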
View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation
(Center for Brains, Minds and Machines (CBMM), arXiv, 2016-06-03)
The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and relatively robust against identity-preserving ...