Unsupervised learning of clutter-resistant visual representations from natural videos
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-04-27)
Populations of neurons in inferotemporal cortex (IT) maintain an explicit code for object identity that also tolerates transformations of object appearance, e.g., position, scale, and viewing angle [1, 2, 3]. Though the learning ...
The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex
(Center for Brains, Minds and Machines (CBMM), bioRxiv, 2015-04-26)
Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects, is only transferable to ...
A Review of Relational Machine Learning for Knowledge Graphs
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-03-23)
Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be “trained” on large knowledge graphs, ...
Parsing Occluded People by Flexible Compositions
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-06-01)
This paper presents an approach to parsing humans when there is significant occlusion. We model humans using a graphical model with a tree structure, building on recent work [32, 6], and exploit the connectivity prior ...
Holographic Embeddings of Knowledge Graphs
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-11-16)
Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HolE) to learn ...
Deep Convolutional Networks are Hierarchical Kernel Machines
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-08-05)
We extend i-theory to incorporate not only pooling but also rectifying nonlinearities in an extended HW module (eHW) designed for supervised learning. The two operations roughly correspond to invariance and selectivity, ...