Browsing CBMM Memo Series by Title
Now showing items 97-113 of 113

Theoretical Issues in Deep Networks
(Center for Brains, Minds and Machines (CBMM), 2019-08-17) While deep learning is successful in a number of applications, it is not yet well understood theoretically. A theoretical characterization of deep learning should answer questions about their approximation power, the ... 
Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality?
(Center for Brains, Minds and Machines (CBMM), arXiv, 2016-11-23) [formerly titled "Why and When Can Deep – but Not Shallow – Networks Avoid the Curse of Dimensionality: a Review"] The paper reviews and extends an emerging body of theoretical results on deep learning including the ... 
Theory II: Landscape of the Empirical Risk in Deep Learning
(Center for Brains, Minds and Machines (CBMM), arXiv, 2017-03-30) Previous theoretical work on deep learning and neural network optimization tends to focus on avoiding saddle points and local minima. However, the practical observation is that, at least for the most successful Deep ... 
Theory IIIb: Generalization in Deep Networks
(Center for Brains, Minds and Machines (CBMM), arXiv.org, 2018-06-29) The general features of the optimization problem for the case of overparametrized nonlinear networks have been clear for a while: SGD selects with high probability global minima vs local minima. In the overparametrized ... 
Theory of Deep Learning IIb: Optimization Properties of SGD
(Center for Brains, Minds and Machines (CBMM), 2017-12-27) In Theory IIb we characterize with a mix of theory and experiments the optimization of deep convolutional networks by Stochastic Gradient Descent. The main new result in this paper is theoretical and experimental evidence ... 
Theory of Deep Learning III: explaining the non-overfitting puzzle
(arXiv, 2017-12-30) [THIS MEMO IS REPLACED BY CBMM MEMO 90] A main puzzle of deep networks revolves around the absence of overfitting despite overparametrization and despite the large capacity demonstrated by zero training error on randomly ... 
Theory of Intelligence with Forgetting: Mathematical Theorems Explaining Human Universal Forgetting using “Forgetting Neural Networks”
(Center for Brains, Minds and Machines (CBMM), 2017-12-05) In [42] we suggested that any memory stored in the human/animal brain is forgotten following the Ebbinghaus curve – in this follow-on paper, we define a novel algebraic structure, a Forgetting Neural Network, as a simple ... 
Towards a Programmer’s Apprentice (Again)
(Center for Brains, Minds and Machines (CBMM), 2015-04-03) Programmers are loath to interrupt their workflow to document their design rationale, leading to frequent errors when software is modified—often much later and by different programmers. A Programmer's Assistant could ... 
Universal Dependencies for Learner English
(Center for Brains, Minds and Machines (CBMM), arXiv, 2016-08-01) We introduce the Treebank of Learner English (TLE), the first publicly available syntactic treebank for English as a Second Language (ESL). The TLE provides manually annotated POS tags and Universal Dependency (UD) trees ... 
Unsupervised learning of clutter-resistant visual representations from natural videos
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-04-27) Populations of neurons in inferotemporal cortex (IT) maintain an explicit code for object identity that also tolerates transformations of object appearance, e.g., position, scale, viewing angle [1, 2, 3]. Though the learning ... 
Unsupervised learning of invariant representations with low sample complexity: the magic of sensory cortex or a new framework for machine learning?
(Center for Brains, Minds and Machines (CBMM), arXiv, 2014-03-12) The present phase of Machine Learning is characterized by supervised learning algorithms relying on large sets of labeled examples (n → ∞). The next phase is likely to focus on algorithms capable of learning from very few ... 
UNSUPERVISED LEARNING OF VISUAL STRUCTURE USING PREDICTIVE GENERATIVE NETWORKS
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-12-15) The ability to predict future states of the environment is a central pillar of intelligence. At its core, effective prediction requires an internal model of the world and an understanding of the rules by which the world ... 
View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation
(Center for Brains, Minds and Machines (CBMM), arXiv, 2016-06-03) The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and relatively robust against identity-preserving ... 
Visual concepts and compositional voting
(Center for Brains, Minds and Machines (CBMM), 2018-03-27) It is very attractive to formulate vision in terms of pattern theory [26], where patterns are defined hierarchically by compositions of elementary building blocks. But applying pattern theory to real world images is very ... 
What am I searching for?
(Center for Brains, Minds and Machines (CBMM), arXiv.org, 2018-07-31) Can we infer intentions and goals from a person's actions? As an example of this family of problems, we consider here whether it is possible to decipher what a person is searching for by decoding their eye movement behavior. ... 
When Computer Vision Gazes at Cognition
(Center for Brains, Minds and Machines (CBMM), arXiv, 2014-12-12) Joint attention is a core, early-developing form of social interaction. It is based on our ability to discriminate the third-party objects that other people are looking at. While it has been shown that people can accurately ... 
Where do hypotheses come from?
(Center for Brains, Minds and Machines (CBMM), 2016-10-24) Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? One notable instance of this discrepancy is that tasks where the candidate hypotheses are explicitly available ...