Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN)
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-05-07)
In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. ...
Unsupervised learning of clutter-resistant visual representations from natural videos
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-04-27)
Populations of neurons in inferotemporal cortex (IT) maintain an explicit code for object identity that also tolerates transformations of object appearance, e.g., position, scale, and viewing angle [1, 2, 3]. Though the learning ...
Semantic Part Segmentation using Compositional Model combining Shape and Appearance
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-06-08)
In this paper, we study the problem of semantic part segmentation for animals. This is more challenging than standard object detection, object segmentation and pose estimation tasks because semantic parts of animals often ...
Parsing Occluded People by Flexible Compositions
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-06-01)
This paper presents an approach to parsing humans when there is significant occlusion. We model humans using a graphical model which has a tree structure building on recent work [32, 6] and exploit the connectivity prior ...