Now showing items 42-61 of 141

    • Dreaming with ARC 

      Banburski, Andrzej; Gandhi, Anshula; Alford, Simon; Dandekar, Sylee; Chin, Peter; e.a. (Center for Brains, Minds and Machines (CBMM), 2020-11-23)
      Current machine learning algorithms are highly specialized to whatever it is they are meant to do, e.g. playing chess, picking up objects, or object recognition. How can we extend this to a system that could solve a ...
    • The Effects of Image Distribution and Task on Adversarial Robustness 

      Kunhardt, Owen; Deza, Arturo; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2021-02-18)
      In this paper, we propose an adaptation to the area under the curve (AUC) metric to measure the adversarial robustness of a model over a particular ε-interval [ε_0, ε_1] (interval of adversarial perturbation strengths) ...
    • Encoding formulas as deep networks: Reinforcement learning for zero-shot execution of LTL formulas 

      Kuo, Yen-Ling; Katz, Boris; Barbu, Andrei (Center for Brains, Minds and Machines (CBMM), The Ninth International Conference on Learning Representations (ICLR), 2020-10-25)
      We demonstrate a reinforcement learning agent which uses a compositional recurrent neural network that takes as input an LTL formula and determines satisfying actions. The input LTL formulas have never been seen before, ...
    • Exact Equivariance, Disentanglement and Invariance of Transformations 

      Liao, Qianli; Poggio, Tomaso (2017-12-31)
      Invariance, equivariance and disentanglement of transformations are important topics in the field of representation learning. Previous models like Variational Autoencoder [1] and Generative Adversarial Networks [2] attempted ...
    • An Exit Strategy from the Covid-19 Lockdown based on Risk-sensitive Resource Allocation 

      Shalev-Shwartz, Shai; Shashua, Amnon (Center for Brains, Minds and Machines (CBMM), 2020-04-15)
      We propose an exit strategy from the COVID-19 lockdown, which is based on risk-sensitive levels of social distancing. At the heart of our approach is the realization that the most effective, yet limited in number, resources ...
    • Fast, invariant representation for human action in the visual system 

      Isik, Leyla; Tacchetti, Andrea; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-01-06)
      The ability to recognize the actions of others from visual input is essential to humans' daily lives. The neural computations underlying action recognition, however, are still poorly understood. We use magnetoencephalography ...
    • Feature learning in deep classifiers through Intermediate Neural Collapse 

      Rangamani, Akshay; Lindegaard, Marius; Galanti, Tomer; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2023-02-27)
      In this paper, we conduct an empirical study of the feature learning process in deep classifiers. Recent research has identified a training phenomenon called Neural Collapse (NC), in which the top-layer feature embeddings ...
    • For interpolating kernel machines, the minimum norm ERM solution is the most stable 

      Rangamani, Akshay; Rosasco, Lorenzo; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2020-06-22)
      We study the average CVloo stability of kernel ridge-less regression and derive corresponding risk bounds. We show that the interpolating solution with minimum norm has the best CVloo stability, which in turn is controlled ...
    • Foveation-based Mechanisms Alleviate Adversarial Examples 

      Luo, Yan; Boix, Xavier; Roig, Gemma; Poggio, Tomaso; Zhao, Qi (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-01-19)
      We show that adversarial examples, i.e., the visually imperceptible perturbations that cause Convolutional Neural Networks (CNNs) to fail, can be alleviated with a mechanism based on foveations: applying the CNN in ...
    • From Associative Memories to Deep Networks 

      Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2021-01-12)
      About fifty years ago, holography was proposed as a model of associative memory. Associative memories with similar properties were soon after implemented as simple networks of threshold neurons by Willshaw and Longuet-Higgins. ...
    • From Marr’s Vision to the Problem of Human Intelligence 

      Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2021-09-01)
    • Full interpretation of minimal images 

      Ben-Yosef, Guy; Assif, Liav; Ullman, Shimon (Center for Brains, Minds and Machines (CBMM), 2017-02-08)
      The goal in this work is to model the process of ‘full interpretation’ of object images, which is the ability to identify and localize all semantic features and parts that are recognized by human observers. The task is ...
    • The Genesis Story Understanding and Story Telling System: A 21st Century Step toward Artificial Intelligence 

      Winston, Patrick Henry (Center for Brains, Minds and Machines (CBMM), 2014-06-10)
      Story understanding is an important differentiator of human intelligence, perhaps the most important differentiator. The Genesis system was built to model and explore aspects of story understanding using simply expressed, ...
    • Group Invariant Deep Representations for Image Instance Retrieval 

      Morère, Olivier; Veillard, Antoine; Lin, Jie; Petta, Julie; Chandrasekhar, Vijay; e.a. (Center for Brains, Minds and Machines (CBMM), 2016-01-11)
      Most image instance retrieval pipelines are based on comparison of vectors known as global image descriptors between a query image and the database images. Due to their success in large scale image classification, ...
    • Hierarchically Local Tasks and Deep Convolutional Networks 

      Deza, Arturo; Liao, Qianli; Banburski, Andrzej; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2020-06-24)
      The main success stories of deep learning, starting with ImageNet, depend on convolutional networks, which on certain tasks perform significantly better than traditional shallow classifiers, such as support vector machines. ...
    • Hippocampal Remapping as Hidden State Inference 

      Sanders, Honi; Wilson, Matthew A.; Gershman, Samuel J. (Center for Brains, Minds and Machines (CBMM), bioRxiv, 2019-08-22)
      Cells in the hippocampus tuned to spatial location (place cells) typically change their tuning when an animal changes context, a phenomenon known as remapping. A fundamental challenge to understanding remapping is the fact ...
    • Holographic Embeddings of Knowledge Graphs 

      Nickel, Maximilian; Rosasco, Lorenzo; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-11-16)
      Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HolE) to learn ...
    • A Homogeneous Transformer Architecture 

      Gan, Yulu; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2023-09-18)
      While the Transformer architecture has made a substantial impact in the field of machine learning, it is unclear what purpose each component serves in the overall architecture. Heterogeneous nonlinear circuits such as ...
    • How Important is Weight Symmetry in Backpropagation? 

      Liao, Qianli; Leibo, Joel Z.; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-11-29)
      Gradient backpropagation (BP) requires symmetric feedforward and feedback connections—the same weights must be used for forward and backward passes. This “weight transport problem” [1] is thought to be one of the main ...
    • Human-Machine CRFs for Identifying Bottlenecks in Holistic Scene Understanding 

      Mottaghi, Roozbeh; Fidler, Sanja; Yuille, Alan L.; Urtasun, Raquel; Parikh, Devi (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-06-15)
      Recent trends in image understanding have pushed for holistic scene understanding models that jointly reason about various tasks such as object detection, scene recognition, shape analysis, contextual reasoning, and local ...