Now showing items 1-20 of 104

    • 3D Object-Oriented Learning: An End-to-end Transformation-Disentangled 3D Representation 

      Liao, Qianli; Poggio, Tomaso (2017-12-31)
      We provide more detailed explanation of the ideas behind a recent paper on “Object-Oriented Deep Learning” [1] and extend it to handle 3D inputs/outputs. Similar to [1], every layer of the system takes in a list of ...
    • Abstracts of the 2014 Brains, Minds, and Machines Summer School 

      Amir, Nadav; Besold, Tarek R.; Camoriano, Raffaello; Erdogan, Goker; Flynn, Thomas; et al. (Center for Brains, Minds and Machines (CBMM), 2014-09-26)
      A compilation of abstracts from the student projects of the 2014 Brains, Minds, and Machines Summer School, held at Woods Hole Marine Biological Lab, May 29 - June 12, 2014.
    • An analysis of training and generalization errors in shallow and deep networks 

      Mhaskar, Hrushikesh; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv.org, 2018-02-20)
      An open problem around deep networks is the apparent absence of over-fitting despite large over-parametrization which allows perfect fitting of the training data. In this paper, we explain this phenomenon when each unit ...
    • An analysis of training and generalization errors in shallow and deep networks 

      Mhaskar, H.N.; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv.org, 2019-05-30)
      This paper is motivated by an open problem around deep networks, namely, the apparent absence of overfitting despite large over-parametrization which allows perfect fitting of the training data. In this paper, we analyze ...
    • Anchoring and Agreement in Syntactic Annotations 

      Berzak, Yevgeni; Huang, Yan; Barbu, Andrei; Korhonen, Anna; Katz, Boris (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-09-21)
      Published in the Proceedings of EMNLP 2016. We present a study on two key characteristics of human syntactic annotations: anchoring and agreement. Anchoring is a well-known cognitive bias in human decision making, where ...
    • Biologically-Plausible Learning Algorithms Can Scale to Large Datasets 

      Xiao, Will; Chen, Honglin; Liao, Qianli; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2018-09-27)
      The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address ...
    • Biologically-plausible learning algorithms can scale to large datasets 

      Xiao, Will; Chen, Honglin; Liao, Qianli; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv.org, 2018-11-08)
      The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address ...
    • Brain Signals Localization by Alternating Projections 

      Adler, Amir; Wax, Mati; Pantazis, Dimitrios (Center for Brains, Minds and Machines (CBMM), arXiv, 2019-08-29)
      We present a novel solution to the problem of localization of brain signals. The solution is sequential and iterative, and is based on minimizing the least-squares (LS) criterion by the alternating projection (AP) algorithm, ...
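The alternating-projection idea named in this abstract can be illustrated generically. The sketch below is a textbook AP iteration on two simple convex sets, not the paper's source-localization algorithm; the sets, vectors, and tolerances are invented for illustration. Projecting back and forth between the two sets converges to a point in their intersection.

```python
import numpy as np

# Two convex sets (illustrative only; not the MEG/EEG model from the paper):
#   S1 = {x : a @ x = b}  -- a hyperplane
#   S2 = {x : x >= 0}     -- the nonnegative orthant
a = np.array([1.0, 2.0, -1.0])
b = 3.0

def proj_hyperplane(x):
    # Orthogonal projection onto S1.
    return x - (a @ x - b) / (a @ a) * a

def proj_orthant(x):
    # Orthogonal projection onto S2 (clip negatives to zero).
    return np.maximum(x, 0.0)

x = np.array([-5.0, 4.0, 2.0])
for _ in range(200):  # alternate the two projections until convergence
    x = proj_orthant(proj_hyperplane(x))

# The iterate ends up (approximately) in both sets.
assert np.all(x >= 0.0)          # exactly in S2 after the last projection
assert abs(a @ x - b) < 1e-6     # approximately in S1
```

The last projection applied is onto the orthant, so feasibility in S2 is exact while the hyperplane constraint is satisfied to numerical tolerance; swapping the order swaps which constraint is exact.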
    • Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex 

      Liao, Qianli; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-04-12)
      We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs) and the primate visual cortex. We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with ...
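The equivalence stated in this abstract can be checked numerically: unrolling a recurrent update h ← h + f(h) for T steps is exactly the forward pass of a T-layer ResNet whose layers share weights. A minimal sketch, with an arbitrary weight matrix and ReLU nonlinearity standing in for the transform (these are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1  # shared residual weights (illustrative)

def residual_block(h):
    """One ResNet block: identity shortcut plus a ReLU transform."""
    return h + np.maximum(W @ h, 0.0)

x = rng.standard_normal(4)

# "Very deep" ResNet with weight sharing: 10 layers, all using W.
h_resnet = x
for _ in range(10):
    h_resnet = residual_block(h_resnet)

# Shallow RNN: the same update applied recurrently for 10 time steps.
h_rnn = x
for _ in range(10):
    h_rnn = h_rnn + np.maximum(W @ h_rnn, 0.0)

assert np.allclose(h_resnet, h_rnn)  # identical trajectories
```

The two loops compute the same sequence of states, which is the sense in which a shallow RNN "is" a deep ResNet with shared weights; untying the weights across iterations recovers an ordinary ResNet.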
    • Building machines that learn and think like people 

      Lake, Brenden M.; Ullman, Tomer D.; Tenenbaum, Joshua B.; Gershman, Samuel J. (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-04-01)
      Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object ...
    • Can a biologically-plausible hierarchy effectively replace face detection, alignment, and recognition pipelines? 

      Liao, Qianli; Leibo, Joel Z; Mroueh, Youssef; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-03-27)
      The standard approach to unconstrained face recognition in natural photographs is via a detection, alignment, recognition pipeline. While that approach has achieved impressive results, there are several reasons to be ...
    • Classical generalization bounds are surprisingly tight for Deep Networks 

      Liao, Qianli; Miranda, Brando; Hidary, Jack; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2018-07-11)
      Deep networks are usually trained and tested in a regime in which the training classification error is not a good predictor of the test error. Thus the consensus has been that generalization, defined as convergence of the ...
    • Complexity of Representation and Inference in Compositional Models with Part Sharing 

      Yuille, Alan L.; Mottaghi, Roozbeh (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-05-05)
      This paper performs a complexity analysis of a class of serial and parallel compositional models of multiple objects and shows that they enable efficient representation and rapid inference. Compositional models are generative ...
    • The Compositional Nature of Event Representations in the Human Brain 

      Barbu, Andrei; Narayanaswamy, Siddharth; Xiong, Caiming; Corso, Jason J.; Fellbaum, Christiane D.; et al. (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-07-14)
      How does the human brain represent simple compositions of constituents: actors, verbs, objects, directions, and locations? Subjects viewed videos during neuroimaging (fMRI) sessions from which sentential descriptions of ...
    • Computational role of eccentricity dependent cortical magnification 

      Poggio, Tomaso; Mutch, Jim; Isik, Leyla (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-06-06)
      We develop a sampling extension of M-theory focused on invariance to scale and translation. Quite surprisingly, the theory predicts an architecture of early vision with increasing receptive field sizes and a high resolution ...
    • Concepts in a Probabilistic Language of Thought 

      Goodman, Noah D.; Tenenbaum, Joshua B.; Gerstenberg, Tobias (Center for Brains, Minds and Machines (CBMM), 2014-06-14)
      Knowledge organizes our understanding of the world, determining what we expect given what we have already seen. Our predictive representations have two key properties: they are productive, and they are graded. Productive ...
    • Constant Modulus Algorithms via Low-Rank Approximation 

      Adler, Amir; Wax, Mati (Center for Brains, Minds and Machines (CBMM), 2018-04-12)
      We present a novel convex-optimization-based approach to the solutions of a family of problems involving constant modulus signals. The family of problems includes the constant modulus and the constrained constant modulus, ...
    • Contrastive Analysis with Predictive Power: Typology Driven Estimation of Grammatical Error Distributions in ESL 

      Berzak, Yevgeni; Reichart, Roi; Katz, Boris (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-06-05)
      This work examines the impact of cross-linguistic transfer on grammatical errors in English as a Second Language (ESL) texts. Using a computational framework that formalizes the theory of Contrastive Analysis (CA), we demonstrate ...
    • Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN) 

      Mao, Junhua; Xu, Wei; Yang, Yi; Wang, Jiang; Huang, Zhiheng; et al. (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-05-07)
      In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. ...
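The factorization this abstract describes — the probability of a caption as a product of per-word conditionals given the previous words and the image — can be sketched with a toy table of conditionals. The probabilities and vocabulary below are made up for illustration; an actual m-RNN would produce these conditionals from a recurrent network conditioned on image features.

```python
import math

# Toy conditional word probabilities P(w_t | w_<t), standing in for the
# image-conditioned distributions an m-RNN would output. Invented values.
cond_probs = {
    ("<s>",): {"a": 0.6, "the": 0.4},
    ("<s>", "a"): {"dog": 0.5, "cat": 0.5},
    ("<s>", "a", "dog"): {"</s>": 0.9, "runs": 0.1},
}

def caption_log_prob(words):
    """Chain rule: log P(caption) = sum_t log P(w_t | w_<t)."""
    lp = 0.0
    ctx = ("<s>",)
    for w in words:
        lp += math.log(cond_probs[ctx][w])
        ctx = ctx + (w,)
    return lp

lp = caption_log_prob(["a", "dog", "</s>"])
assert abs(math.exp(lp) - 0.6 * 0.5 * 0.9) < 1e-12
```

Summing log-probabilities rather than multiplying raw probabilities is the standard numerically stable way to score long captions under such a factorization.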
    • Deep Convolutional Networks are Hierarchical Kernel Machines 

      Anselmi, Fabio; Rosasco, Lorenzo; Tan, Cheston; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-08-05)
      We extend i-theory to incorporate not only pooling but also rectifying nonlinearities in an extended HW module (eHW) designed for supervised learning. The two operations roughly correspond to invariance and selectivity, ...