
    • A Definition of General Problem Solving 

      Liao, Qianli (2020-07-13)
      What is general intelligence? What is meant by general problem solving? We attempt to give a definition of general problem solving, characterize the common process of problem solving and provide a basic algorithm that ...
    • Detect What You Can: Detecting and Representing Objects using Holistic Models and Body Parts 

      Chen, Xianjie; Mottaghi, Roozbeh; Liu, Xiaobai; Fidler, Sanja; Urtasun, Raquel; e.a. (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-06-10)
      Detecting objects becomes difficult when we need to deal with large shape deformation, occlusion and low resolution. We propose a novel approach to i) handle large deformations and partial occlusions in animals (as examples ...
    • Detecting Semantic Parts on Partially Occluded Objects 

      Wang, Jianyu; Xie, Cihang; Zhang, Zhishuai; Zhu, Jun; Xie, Lingxi; e.a. (Center for Brains, Minds and Machines (CBMM), 2017-09-04)
      In this paper, we address the task of detecting semantic parts on partially occluded objects. We consider a scenario where the model is trained using non-occluded images but tested on occluded images. The motivation is ...
    • Discriminate-and-Rectify Encoders: Learning from Image Transformation Sets 

      Tacchetti, Andrea; Voinea, Stephen; Evangelopoulos, Georgios (Center for Brains, Minds and Machines (CBMM), arXiv, 2017-03-13)
      The complexity of a learning task is increased by transformations in the input space that preserve class identity. Visual object recognition for example is affected by changes in viewpoint, scale, illumination or planar ...
    • Do Deep Neural Networks Suffer from Crowding? 

      Volokitin, Anna; Roig, Gemma; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2017-06-26)
      Crowding is a visual effect suffered by humans, in which an object that can be recognized in isolation can no longer be recognized when other objects, called flankers, are placed close to it. In this work, we study the ...
    • Do Neural Networks for Segmentation Understand Insideness? 

      Villalobos, Kimberly; Štih, Vilim; Ahmadinejad, Amineh; Sundaram, Shobhita; Dozier, Jamell; e.a. (Center for Brains, Minds and Machines (CBMM), 2020-04-04)
      The insideness problem is an image segmentation modality that consists of determining which pixels are inside and outside a region. Deep Neural Networks (DNNs) excel in segmentation benchmarks, but it is unclear that they ...
    • Do You See What I Mean? Visual Resolution of Linguistic Ambiguities 

      Berzak, Yevgeni; Barbu, Andrei; Harari, Daniel; Katz, Boris; Ullman, Shimon (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-06-10)
      Understanding language goes hand in hand with the ability to integrate complex contextual information obtained via perception. In this work, we present a novel task for grounded language understanding: disambiguating a ...
    • Double descent in the condition number 

      Poggio, Tomaso; Kur, Gil; Banburski, Andrzej (Center for Brains, Minds and Machines (CBMM), 2019-12-04)
      In solving a system of n linear equations in d variables Ax=b, the condition number of the (n,d) matrix A measures how much errors in the data b affect the solution x. Bounds of this type are important in many inverse ...
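      The condition number this abstract refers to can be illustrated in a few lines (a minimal NumPy sketch; the specific matrix is an arbitrary ill-conditioned example, not taken from the paper):

      ```python
      import numpy as np

      # The condition number of A measures how much a relative error in the
      # data b can be amplified in the solution x of Ax = b.
      A = np.array([[1.0, 1.0],
                    [1.0, 1.0001]])  # nearly singular, hence ill-conditioned
      b = np.array([2.0, 2.0001])

      x = np.linalg.solve(A, b)                                # exact data: x ≈ [1, 1]
      x_pert = np.linalg.solve(A, b + np.array([0.0, 1e-4]))   # tiny error in b: x ≈ [0, 2]

      print(np.linalg.cond(A))  # large condition number (~4e4 here)
      print(x, x_pert)          # a 1e-4 perturbation in b changes x by order 1
      ```

      A well-conditioned matrix (e.g. the identity) would leave the solution essentially unchanged under the same perturbation.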
    • Dreaming with ARC 

      Banburski, Andrzej; Gandhi, Anshula; Alford, Simon; Dandekar, Sylee; Chin, Peter; e.a. (Center for Brains, Minds and Machines (CBMM), 2020-11-23)
      Current machine learning algorithms are highly specialized to whatever it is they are meant to do, e.g. playing chess, picking up objects, or object recognition. How can we extend this to a system that could solve a ...
    • The Effects of Image Distribution and Task on Adversarial Robustness 

      Kunhardt, Owen; Deza, Arturo; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2021-02-18)
      In this paper, we propose an adaptation to the area under the curve (AUC) metric to measure the adversarial robustness of a model over a particular ε-interval [ε_0, ε_1] (interval of adversarial perturbation strengths) ...
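      The general idea of an area-under-the-curve robustness score over an ε-interval can be sketched as follows (a hypothetical illustration with made-up accuracy values; the normalization choice is an assumption, not necessarily the paper's exact metric):

      ```python
      import numpy as np

      # Hypothetical accuracy of a model at increasing adversarial
      # perturbation strengths eps (values are illustrative only).
      eps = np.array([0.0, 0.01, 0.02, 0.03, 0.04])
      acc = np.array([0.95, 0.80, 0.55, 0.30, 0.10])

      # Restrict to an interval [eps0, eps1] and take the area under the
      # accuracy-vs-epsilon curve, normalized by the interval width so the
      # score lies in [0, 1].
      eps0, eps1 = 0.01, 0.03
      mask = (eps >= eps0) & (eps <= eps1)
      auc = np.trapz(acc[mask], eps[mask]) / (eps1 - eps0)
      print(auc)  # 0.55 for the values above
      ```

      A model whose accuracy decays more slowly over the same ε-interval would yield a higher score.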
    • Encoding formulas as deep networks: Reinforcement learning for zero-shot execution of LTL formulas 

      Kuo, Yen-Ling; Katz, Boris; Barbu, Andrei (Center for Brains, Minds and Machines (CBMM), The Ninth International Conference on Learning Representations (ICLR), 2020-10-25)
      We demonstrate a reinforcement learning agent which uses a compositional recurrent neural network that takes as input an LTL formula and determines satisfying actions. The input LTL formulas have never been seen before, ...
    • Exact Equivariance, Disentanglement and Invariance of Transformations 

      Liao, Qianli; Poggio, Tomaso (2017-12-31)
      Invariance, equivariance and disentanglement of transformations are important topics in the field of representation learning. Previous models like Variational Autoencoder [1] and Generative Adversarial Networks [2] attempted ...
    • An Exit Strategy from the Covid-19 Lockdown based on Risk-sensitive Resource Allocation 

      Shalev-Shwartz, Shai; Shashua, Amnon (Center for Brains, Minds and Machines (CBMM), 2020-04-15)
      We propose an exit strategy from the COVID-19 lockdown, which is based on risk-sensitive levels of social distancing. At the heart of our approach is the realization that the most effective, yet limited in number, resources ...
    • Fast, invariant representation for human action in the visual system 

      Isik, Leyla; Tacchetti, Andrea; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-01-06)
      The ability to recognize the actions of others from visual input is essential to humans' daily lives. The neural computations underlying action recognition, however, are still poorly understood. We use magnetoencephalography ...
    • Flexible Intelligence 

      Liao, Qianli (2020-06-18)
      We discuss the problem of flexibility in intelligence, a relatively little-studied topic in machine learning and AI. Flexibility can be understood as out-of-distribution generalization, and it can be achieved by converting ...
    • For interpolating kernel machines, the minimum norm ERM solution is the most stable 

      Rangamani, Akshay; Rosasco, Lorenzo; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2020-06-22)
      We study the average CVloo stability of kernel ridge-less regression and derive corresponding risk bounds. We show that the interpolating solution with minimum norm has the best CVloo stability, which in turn is controlled ...
    • Foveation-based Mechanisms Alleviate Adversarial Examples 

      Luo, Yan; Boix, Xavier; Roig, Gemma; Poggio, Tomaso; Zhao, Qi (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-01-19)
      We show that adversarial examples, i.e., the visually imperceptible perturbations that cause Convolutional Neural Networks (CNNs) to fail, can be alleviated with a mechanism based on foveations---applying the CNN in ...
    • From Associative Memories to Deep Networks 

      Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2021-01-12)
      About fifty years ago, holography was proposed as a model of associative memory. Associative memories with similar properties were soon after implemented as simple networks of threshold neurons by Willshaw and Longuet-Higgins. ...
    • From Marr’s Vision to the Problem of Human Intelligence 

      Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2021-09-01)
    • Full interpretation of minimal images 

      Ben-Yosef, Guy; Assif, Liav; Ullman, Shimon (Center for Brains, Minds and Machines (CBMM), 2017-02-08)
      The goal in this work is to model the process of ‘full interpretation’ of object images, which is the ability to identify and localize all semantic features and parts that are recognized by human observers. The task is ...