
    • Exact Equivariance, Disentanglement and Invariance of Transformations 

      Liao, Qianli; Poggio, Tomaso (2017-12-31)
      Invariance, equivariance and disentanglement of transformations are important topics in the field of representation learning. Previous models like Variational Autoencoder [1] and Generative Adversarial Networks [2] attempted ...
    • An Exit Strategy from the Covid-19 Lockdown based on Risk-sensitive Resource Allocation 

      Shalev-Shwartz, Shai; Shashua, Amnon (Center for Brains, Minds and Machines (CBMM), 2020-04-15)
      We propose an exit strategy from the COVID-19 lockdown, which is based on risk-sensitive levels of social distancing. At the heart of our approach is the realization that the most effective, yet limited in number, resources ...
    • Fast, invariant representation for human action in the visual system 

      Isik, Leyla; Tacchetti, Andrea; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-01-06)
      The ability to recognize the actions of others from visual input is essential to humans' daily lives. The neural computations underlying action recognition, however, are still poorly understood. We use magnetoencephalography ...
    • Flexible Intelligence 

      Liao, Qianli (2020-06-18)
      We discuss the problem of flexibility in intelligence, a relatively little-studied topic in machine learning and AI. Flexibility can be understood as out-of-distribution generalization, and it can be achieved by converting ...
    • For interpolating kernel machines, the minimum norm ERM solution is the most stable 

      Rangamani, Akshay; Rosasco, Lorenzo; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2020-06-22)
      We study the average CVloo stability of kernel ridge-less regression and derive corresponding risk bounds. We show that the interpolating solution with minimum norm has the best CVloo stability, which in turn is controlled ...
    • Foveation-based Mechanisms Alleviate Adversarial Examples 

      Luo, Yan; Boix, Xavier; Roig, Gemma; Poggio, Tomaso; Zhao, Qi (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-01-19)
      We show that adversarial examples, i.e., the visually imperceptible perturbations that cause Convolutional Neural Networks (CNNs) to fail, can be alleviated with a mechanism based on foveations---applying the CNN in ...
    • Full interpretation of minimal images 

      Ben-Yosef, Guy; Assif, Liav; Ullman, Shimon (Center for Brains, Minds and Machines (CBMM), 2017-02-08)
      The goal in this work is to model the process of ‘full interpretation’ of object images, which is the ability to identify and localize all semantic features and parts that are recognized by human observers. The task is ...
    • The Genesis Story Understanding and Story Telling System: A 21st Century Step toward Artificial Intelligence 

      Winston, Patrick Henry (Center for Brains, Minds and Machines (CBMM), 2014-06-10)
      Story understanding is an important differentiator of human intelligence, perhaps the most important differentiator. The Genesis system was built to model and explore aspects of story understanding using simply expressed, ...
    • Group Invariant Deep Representations for Image Instance Retrieval 

      Morère, Olivier; Veillard, Antoine; Lin, Jie; Petta, Julie; Chandrasekhar, Vijay; et al. (Center for Brains, Minds and Machines (CBMM), 2016-01-11)
      Most image instance retrieval pipelines are based on comparison of vectors known as global image descriptors between a query image and the database images. Due to their success in large scale image classification, ...
    • Hierarchically Local Tasks and Deep Convolutional Networks 

      Deza, Arturo; Liao, Qianli; Banburski, Andrzej; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2020-06-24)
      The main success stories of deep learning, starting with ImageNet, depend on convolutional networks, which on certain tasks perform significantly better than traditional shallow classifiers, such as support vector machines. ...
    • Hippocampal Remapping as Hidden State Inference 

      Sanders, Honi; Wilson, Matthew A.; Gershman, Samuel J. (Center for Brains, Minds and Machines (CBMM), bioRxiv, 2019-08-22)
      Cells in the hippocampus tuned to spatial location (place cells) typically change their tuning when an animal changes context, a phenomenon known as remapping. A fundamental challenge to understanding remapping is the fact ...
    • Holographic Embeddings of Knowledge Graphs 

      Nickel, Maximilian; Rosasco, Lorenzo; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-11-16)
      Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HolE) to learn ...
    • How Important is Weight Symmetry in Backpropagation? 

      Liao, Qianli; Leibo, Joel Z.; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-11-29)
      Gradient backpropagation (BP) requires symmetric feedforward and feedback connections—the same weights must be used for forward and backward passes. This “weight transport problem” [1] is thought to be one of the main ...
    • Human-like Learning: A Research Proposal 

      Liao, Qianli; Poggio, Tomaso (2017-09-28)
      We propose Human-like Learning, a new machine learning paradigm aiming at training generalist AI systems in a human-like manner with a focus on human-unique skills.
    • Human-Machine CRFs for Identifying Bottlenecks in Holistic Scene Understanding 

      Mottaghi, Roozbeh; Fidler, Sanja; Yuille, Alan L.; Urtasun, Raquel; Parikh, Devi (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-06-15)
      Recent trends in image understanding have pushed for holistic scene understanding models that jointly reason about various tasks such as object detection, scene recognition, shape analysis, contextual reasoning, and local ...
    • I-theory on depth vs width: hierarchical function composition 

      Poggio, Tomaso; Anselmi, Fabio; Rosasco, Lorenzo (Center for Brains, Minds and Machines (CBMM), 2015-12-29)
      Deep learning networks with convolution, pooling and subsampling are a special case of hierarchical architectures, which can be represented by trees (such as binary trees). Hierarchical as well as shallow networks can ...
    • Image interpretation above and below the object level 

      Ben-Yosef, Guy; Ullman, Shimon (Center for Brains, Minds and Machines (CBMM), 2018-05-10)
      Computational models of vision have advanced in recent years at a rapid rate, rivaling in some areas human-level performance. Much of the progress to date has focused on analyzing the visual scene at the object level – ...
    • The infancy of the human brain 

      Dehaene-Lambertz, G.; Spelke, Elizabeth S. (Center for Brains, Minds and Machines (CBMM), Neuron, 2015-10-07)
      The human infant brain is the only known machine able to master a natural language and develop explicit, symbolic, and communicable systems of knowledge that deliver rich representations of the external world. With the ...
    • The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex 

      Leibo, Joel Z.; Liao, Qianli; Anselmi, Fabio; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), bioRxiv, 2015-04-26)
      Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects, is only transferable to ...
    • The Language of Fake News: Opening the Black-Box of Deep Learning Based Detectors 

      O'Brien, Nicole; Latessa, Sophia; Evangelopoulos, Georgios; Boix, Xavier (Center for Brains, Minds and Machines (CBMM), 2018-11-01)
      The digital information age has generated new outlets for content creators to publish so-called “fake news”, a new form of propaganda that is intentionally designed to mislead the reader. With the widespread effects of the ...