DSpace@MIT

Research and Teaching Output of the MIT Community

Browsing Center for Brains, Minds & Machines by Title


  • Liao, Qianli; Poggio, Tomaso (2017-12-31)
    We provide a more detailed explanation of the ideas behind a recent paper on “Object-Oriented Deep Learning” [1] and extend it to handle 3D inputs/outputs. Similar to [1], every layer of the system takes in a list of ...
  • Amir, Nadav; Besold, Tarek R.; Camoriano, Raffaello; Erdogan, Goker; Flynn, Thomas; Gillary, Grant; Gomez, Jesse; Herbert-Voss, Ariel; Hotan, Gladia; Kadmon, Jonathan; Linderman, Scott W.; Liu, Tina T.; Marantan, Andrew; Olson, Joseph; Orchard, Garrick; Pal, Dipan K.; Pasquale, Giulia; Sanders, Honi; Silberer, Carina; Smith, Kevin A.; de Brito, Carlos Stein N.; Suchow, Jordan W.; Tessler, M. H.; Viejo, Guillaume; Walker, Drew; Wehbe, Leila (Center for Brains, Minds and Machines (CBMM), 2014-09-26)
    A compilation of abstracts from the student projects of the 2014 Brains, Minds, and Machines Summer School, held at Woods Hole Marine Biological Lab, May 29 - June 12, 2014.
  • Mhaskar, Hrushikesh; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2018-02-20)
    An open problem around deep networks is the apparent absence of over-fitting despite large over-parametrization, which allows perfect fitting of the training data. In this paper, we explain this phenomenon when each unit ...
  • Berzak, Yevgeni; Huang, Yan; Barbu, Andrei; Korhonen, Anna; Katz, Boris (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-09-21)
    Published in the Proceedings of EMNLP 2016. We present a study on two key characteristics of human syntactic annotations: anchoring and agreement. Anchoring is a well-known cognitive bias in human decision making, where ...
  • Liao, Qianli; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-04-12)
    We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs) and the primate visual cortex. We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with ... (a minimal sketch of this ResNet-RNN correspondence appears after this list).
  • Lake, Brenden M.; Ullman, Tomer D.; Tenenbaum, Joshua B.; Gershman, Samuel J. (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-04-01)
    Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object ...
  • Liao, Qianli; Leibo, Joel Z; Mroueh, Youssef; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-03-27)
    The standard approach to unconstrained face recognition in natural photographs is via a detection, alignment, recognition pipeline. While that approach has achieved impressive results, there are several reasons to be ...
  • Yuille, Alan L.; Mottaghi, Roozbeh (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-05-05)
    This paper performs a complexity analysis of a class of serial and parallel compositional models of multiple objects and shows that they enable efficient representation and rapid inference. Compositional models are generative ...
  • Barbu, Andrei; Narayanaswamy, Siddharth; Xiong, Caiming; Corso, Jason J.; Fellbaum, Christiane D.; Hanson, Catherine; Hanson, Stephen Jose; Helie, Sebastien; Malaia, Evguenia; Pearlmutter, Barak A.; Siskind, Jeffrey Mark; Talavage, Thomas Michael; Wilbur, Ronnie B. (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-07-14)
    How does the human brain represent simple compositions of constituents: actors, verbs, objects, directions, and locations? Subjects viewed videos during neuroimaging (fMRI) sessions from which sentential descriptions of ...
  • Poggio, Tomaso; Mutch, Jim; Isik, Leyla (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-06-06)
    We develop a sampling extension of M-theory focused on invariance to scale and translation. Quite surprisingly, the theory predicts an architecture of early vision with increasing receptive field sizes and a high resolution ...
  • Goodman, Noah D.; Tenenbaum, Joshua B.; Gerstenberg, Tobias (Center for Brains, Minds and Machines (CBMM), 2014-06-14)
    Knowledge organizes our understanding of the world, determining what we expect given what we have already seen. Our predictive representations have two key properties: they are productive, and they are graded. Productive ...
  • Adler, Amir; Wax, Mati (Center for Brains, Minds and Machines (CBMM), 2018-04-12)
    We present a novel convex-optimization-based approach to the solutions of a family of problems involving constant modulus signals. The family of problems includes the constant modulus and the constrained constant modulus, ...
  • Berzak, Yevgeni; Reichart, Roi; Katz, Boris (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-06-05)
    This work examines the impact of crosslinguistic transfer on grammatical errors in English as a Second Language (ESL) texts. Using a computational framework that formalizes the theory of Contrastive Analysis (CA), we demonstrate ...
  • Mao, Junhua; Xu, Wei; Yang, Yi; Wang, Jiang; Huang, Zhiheng; Yuille, Alan L. (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-05-07)
    In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. ...
  • Anselmi, Fabio; Rosasco, Lorenzo; Tan, Cheston; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-08-05)
    We extend i-theory to incorporate not only pooling but also rectifying nonlinearities in an extended HW module (eHW) designed for supervised learning. The two operations roughly correspond to invariance and selectivity, ...
  • Yuille, Alan L.; Liu, Chenxi (Center for Brains, Minds and Machines (CBMM), 2018-05-10)
    This is an opinion paper about the strengths and weaknesses of Deep Nets. They are at the center of recent progress on Artificial Intelligence and are of growing importance in Cognitive Science and Neuroscience since they ...
  • Lotter, William; Kreiman, Gabriel; Cox, David (Center for Brains, Minds and Machines (CBMM), arXiv, 2017-03-01)
    While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning—leveraging unlabeled examples to learn about the structure of a domain — remains ...
  • Shen, Wei; Guo, Yilu; Wang, Yan; Zhao, Kai; Wang, Bo; Yuille, Alan L. (Center for Brains, Minds and Machines (CBMM), 2018-06-01)
    Age estimation from facial images is typically cast as a nonlinear regression problem. The main challenge of this problem is that the facial feature space w.r.t. ages is inhomogeneous, due to the large variation in facial ...
  • Zhang, Chiyuan; Evangelopoulos, Georgios; Voinea, Stephen; Rosasco, Lorenzo; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-03-17)
    Representations in the auditory cortex might be based on mechanisms similar to the visual ventral stream: modules for building invariance to transformations and multiple layers for compositionality and selectivity. In this ...
  • Zhang, Zhishuai; Xie, Cihang; Wang, Jianyu; Xie, Lingxi; Yuille, Alan L. (Center for Brains, Minds and Machines (CBMM), 2018-06-19)
    In this paper, we study the task of detecting semantic parts of an object, e.g., a wheel of a car, under partial occlusion. We propose that all models should be trained without seeing occlusions while being able to transfer ...
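
The Liao and Poggio (2016) entry above relates Residual Networks to Recurrent Neural Networks. One concrete way to see the connection, sketched below, is that a deep ResNet whose blocks all share a single set of weights computes the same function as a shallow recurrent update unrolled over time, while untied per-block weights break the equivalence. This is a minimal illustration under assumed settings, not code from that memo; the single-linear-layer-plus-ReLU residual branch and the names hidden_dim and n_steps are illustrative choices.

    # Minimal sketch: a ResNet with tied (shared) block weights equals an unrolled shallow RNN.
    import numpy as np

    rng = np.random.default_rng(0)
    hidden_dim, n_steps = 8, 5
    W = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # one shared weight matrix

    def resnet(x, block_weights):
        # ResNet forward pass: one identity-skip block per weight matrix in block_weights.
        h = x
        for Wk in block_weights:
            h = h + np.maximum(Wk @ h, 0.0)  # residual branch: linear map followed by ReLU
        return h

    def shallow_rnn(x, W, n_steps):
        # Shallow RNN: the single update h <- h + relu(W h), applied recurrently n_steps times.
        h = x
        for _ in range(n_steps):
            h = h + np.maximum(W @ h, 0.0)
        return h

    x = rng.normal(size=hidden_dim)
    tied = [W] * n_steps  # every ResNet block reuses the same weights
    untied = [rng.normal(scale=0.1, size=(hidden_dim, hidden_dim)) for _ in range(n_steps)]

    assert np.allclose(resnet(x, tied), shallow_rnn(x, W, n_steps))        # identical outputs
    assert not np.allclose(resnet(x, untied), shallow_rnn(x, W, n_steps))  # differ in general

With tied weights the two loops perform the same computation step by step, which is the sense in which a very deep weight-shared ResNet behaves like a shallow RNN unrolled in time.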