Now showing items 80-99 of 109

    • Scene Graph Parsing as Dependency Parsing 

      Wang, Yu-Siang; Liu, Chenxi; Zeng, Xiaohui; Yuille, Alan L. (Center for Brains, Minds and Machines (CBMM), 2018-05-10)
      In this paper, we study the problem of parsing structured knowledge graphs from textual descriptions. In particular, we consider the scene graph representation that considers objects together with their attributes and ...
    • The Secrets of Salient Object Segmentation 

      Li, Yin; Hou, Xiaodi; Koch, Christof; Rehg, James M.; Yuille, Alan L. (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-06-13)
      In this paper we provide an extensive evaluation of fixation prediction and salient object segmentation algorithms as well as statistics of major datasets. Our analysis identifies serious design flaws of existing salient ...
    • Seeing is Worse than Believing: Reading People’s Minds Better than Computer-Vision Methods Recognize Actions 

      Barbu, Andrei; Barrett, Daniel P.; Chen, Wei; Narayanaswamy, Siddharth; Xiong, Caiming; e.a. (2015-12-10)
      We had human subjects perform a one-out-of-six class action recognition task from video stimuli while undergoing functional magnetic resonance imaging (fMRI). Support-vector machines (SVMs) were trained on the recovered ...
    • Seeing What You’re Told: Sentence-Guided Activity Recognition In Video 

      Siddharth, Narayanaswamy; Barbu, Andrei; Siskind, Jeffrey Mark (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-05-29)
      We present a system that demonstrates how the compositional structure of events, in concert with the compositional structure of language, can interplay with the underlying focusing mechanisms in video action recognition, ...
    • Semantic Part Segmentation using Compositional Model combining Shape and Appearance 

      Wang, Jianyu; Yuille, Alan L. (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-06-08)
      In this paper, we study the problem of semantic part segmentation for animals. This is more challenging than standard object detection, object segmentation and pose estimation tasks because semantic parts of animals often ...
    • Sensitivity to Timing and Order in Human Visual Cortex. 

      Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.; Kreiman, Gabriel (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-04-25)
      Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual ...
    • Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy 

      Prevedel, Robert; Yoon, Young-Gyu; Hoffman, Maximilian; Pak, Nikita; Wetzstein, Gordon; e.a. (Center for Brains, Minds and Machines (CBMM), 2014-05-18)
      High-speed, large-scale three-dimensional (3D) imaging of neuronal activity poses a major challenge in neuroscience. Here we demonstrate simultaneous functional imaging of neuronal activity at single-neuron resolution in ...
    • Single units in a deep neural network functionally correspond with neurons in the brain: preliminary results 

      Arend, Luke; Han, Yena; Schrimpf, Martin; Bashivan, Pouya; Kar, Kohitij; e.a. (Center for Brains, Minds and Machines (CBMM), 2018-11-02)
      Deep neural networks have been shown to predict neural responses in higher visual cortex. The mapping from the model to a neuron in the brain occurs through a linear combination of many units in the model, leaving open the ...
    • Single-Shot Object Detection with Enriched Semantics 

      Zhang, Zhishuai; Qiao, Siyuan; Xie, Cihang; Shen, Wei; Wang, Bo; e.a. (Center for Brains, Minds and Machines (CBMM), 2018-06-19)
      We propose a novel single shot object detection network named Detection with Enriched Semantics (DES). Our motivation is to enrich the semantics of object detection features within a typical deep detector, by a semantic ...
    • Spatiotemporal interpretation features in the recognition of dynamic images 

      Ben-Yosef, Guy; Kreiman, Gabriel; Ullman, Shimon (Center for Brains, Minds and Machines (CBMM), 2018-11-21)
      Objects and their parts can be visually recognized and localized from purely spatial information in static images and also from purely temporal information as in the perception of biological motion. Cortical regions have ...
    • Stable Foundations for Learning: a foundational framework for learning theory in both the classical and modern regime. 

      Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2020-03-25)
      We consider here the class of supervised learning algorithms known as Empirical Risk Minimization (ERM). The classical theory by Vapnik and others characterizes universal consistency of ERM in the classical regime in which ...
    • Streaming Normalization: Towards Simpler and More Biologically-plausible Normalizations for Online and Recurrent Learning 

      Liao, Qianli; Kawaguchi, Kenji; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-10-19)
      We systematically explored a spectrum of normalization algorithms related to Batch Normalization (BN) and propose a generalized formulation that simultaneously solves two major limitations of BN: (1) online learning and ...
    • Symmetry Regularization 

      Anselmi, Fabio; Evangelopoulos, Georgios; Rosasco, Lorenzo; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2017-05-26)
      The properties of a representation, such as smoothness, adaptability, generality, equivariance/invariance, depend on restrictions imposed during learning. In this paper, we propose using data symmetries, in the sense of ...
    • Theoretical Issues in Deep Networks 

      Poggio, Tomaso; Banburski, Andrzej; Liao, Qianli (Center for Brains, Minds and Machines (CBMM), 2019-08-17)
      While deep learning is successful in a number of applications, it is not yet well understood theoretically. A theoretical characterization of deep learning should answer questions about their approximation power, the ...
    • Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality? 

      Poggio, Tomaso; Mhaskar, Hrushikesh; Rosasco, Lorenzo; Miranda, Brando; Liao, Qianli (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-11-23)
      [formerly titled "Why and When Can Deep – but Not Shallow – Networks Avoid the Curse of Dimensionality: a Review"] The paper reviews and extends an emerging body of theoretical results on deep learning including the ...
    • Theory II: Landscape of the Empirical Risk in Deep Learning 

      Poggio, Tomaso; Liao, Qianli (Center for Brains, Minds and Machines (CBMM), arXiv, 2017-03-30)
      Previous theoretical work on deep learning and neural network optimization tends to focus on avoiding saddle points and local minima. However, the practical observation is that, at least for the most successful Deep ...
    • Theory IIIb: Generalization in Deep Networks 

      Poggio, Tomaso; Liao, Qianli; Miranda, Brando; Banburski, Andrzej; Hidary, Jack (Center for Brains, Minds and Machines (CBMM), arXiv.org, 2018-06-29)
      The general features of the optimization problem for the case of overparametrized nonlinear networks have been clear for a while: SGD selects with high probability global minima vs local minima. In the overparametrized ...
    • Theory of Deep Learning IIb: Optimization Properties of SGD 

      Zhang, Chiyuan; Liao, Qianli; Rakhlin, Alexander; Miranda, Brando; Golowich, Noah; e.a. (Center for Brains, Minds and Machines (CBMM), 2017-12-27)
      In Theory IIb we characterize with a mix of theory and experiments the optimization of deep convolutional networks by Stochastic Gradient Descent. The main new result in this paper is theoretical and experimental evidence ...
    • Theory of Deep Learning III: explaining the non-overfitting puzzle 

      Poggio, Tomaso; Kawaguchi, Kenji; Liao, Qianli; Miranda, Brando; Rosasco, Lorenzo; e.a. (arXiv, 2017-12-30)
      THIS MEMO IS REPLACED BY CBMM MEMO 90. A main puzzle of deep networks revolves around the absence of overfitting despite overparametrization and despite the large capacity demonstrated by zero training error on randomly ...
    • Theory of Intelligence with Forgetting: Mathematical Theorems Explaining Human Universal Forgetting using “Forgetting Neural Networks” 

      Cano-Córdoba, Felipe; Sarma, Sanjay; Subirana, Brian (Center for Brains, Minds and Machines (CBMM), 2017-12-05)
      In [42] we suggested that any memory stored in the human/animal brain is forgotten following the Ebbinghaus curve – in this follow-on paper, we define a novel algebraic structure, a Forgetting Neural Network, as a simple ...