Now showing items 1-10 of 18
On Invariance and Selectivity in Representation Learning
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-03-23)
We discuss data representations which can be learned automatically from data, are invariant to transformations, and at the same time selective, in the sense that two points have the same representation only if they are one ...
Towards a Programmer’s Apprentice (Again)
(Center for Brains, Minds and Machines (CBMM), 2015-04-03)
Programmers are loath to interrupt their workflow to document their design rationale, leading to frequent errors when software is modified, often much later and by different programmers. A Programmer’s Assistant could ...
Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN)
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-05-07)
In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. ...
Unsupervised learning of clutter-resistant visual representations from natural videos
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-04-27)
Populations of neurons in inferotemporal cortex (IT) maintain an explicit code for object identity that also tolerates transformations of object appearance, e.g., position, scale, viewing angle [1, 2, 3]. Though the learning ...
The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex
(Center for Brains, Minds and Machines (CBMM), bioRxiv, 2015-04-26)
Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects, is only transferable to ...
Semantic Part Segmentation using Compositional Model combining Shape and Appearance
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-06-08)
In this paper, we study the problem of semantic part segmentation for animals. This is more challenging than standard object detection, object segmentation and pose estimation tasks because semantic parts of animals often ...
I-theory on depth vs width: hierarchical function composition
(Center for Brains, Minds and Machines (CBMM), 2015-12-29)
Deep learning networks with convolution, pooling and subsampling are a special case of hierarchical architectures, which can be represented by trees (such as binary trees). Hierarchical as well as shallow networks can ...
Unsupervised Learning of Visual Structure Using Predictive Generative Networks
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-12-15)
The ability to predict future states of the environment is a central pillar of intelligence. At its core, effective prediction requires an internal model of the world and an understanding of the rules by which the world ...
The infancy of the human brain
(Center for Brains, Minds and Machines (CBMM), Neuron, 2015-10-07)
The human infant brain is the only known machine able to master a natural language and develop explicit, symbolic, and communicable systems of knowledge that deliver rich representations of the external world. With the ...
A Review of Relational Machine Learning for Knowledge Graphs
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-03-23)
Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be “trained” on large knowledge graphs, ...