
    • Deep Convolutional Networks are Hierarchical Kernel Machines 

      Anselmi, Fabio; Rosasco, Lorenzo; Tan, Cheston; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-08-05)
      We extend i-theory to incorporate not only pooling but also rectifying nonlinearities in an extended HW module (eHW) designed for supervised learning. The two operations roughly correspond to invariance and selectivity, ...
    • I-theory on depth vs width: hierarchical function composition 

      Poggio, Tomaso; Anselmi, Fabio; Rosasco, Lorenzo (Center for Brains, Minds and Machines (CBMM), 2015-12-29)
      Deep learning networks with convolution, pooling and subsampling are a special case of hierarchical architectures, which can be represented by trees (such as binary trees). Hierarchical as well as shallow networks can ...
    • The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex 

      Leibo, Joel Z; Liao, Qianli; Anselmi, Fabio; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), bioRxiv, 2015-04-26)
      Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects, is only transferable to ...
    • Notes on Hierarchical Splines, DCLNs and i-theory 

      Poggio, Tomaso; Rosasco, Lorenzo; Shashua, Amnon; Cohen, Nadav; Anselmi, Fabio (Center for Brains, Minds and Machines (CBMM), 2015-09-29)
      We define an extension of classical additive splines for multivariate function approximation that we call hierarchical splines. We show that the case of hierarchical, additive, piece-wise linear splines includes present-day ...
    • On Invariance and Selectivity in Representation Learning 

      Anselmi, Fabio; Rosasco, Lorenzo; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-03-23)
      We discuss data representations that can be learned automatically from data, are invariant to transformations, and are at the same time selective, in the sense that two points have the same representation only if they are one ...
    • Representation Learning in Sensory Cortex: a theory 

      Anselmi, Fabio; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2014-11-14)
      We review and apply a computational theory of the feedforward path of the ventral stream in visual cortex based on the hypothesis that its main function is the encoding of invariant representations of images. A key ...
    • Symmetry Regularization 

      Anselmi, Fabio; Evangelopoulos, Georgios; Rosasco, Lorenzo; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2017-05-26)
      The properties of a representation, such as smoothness, adaptability, generality, equivariance/invariance, depend on restrictions imposed during learning. In this paper, we propose using data symmetries, in the sense of ...
    • View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation 

      Leibo, Joel Z.; Liao, Qianli; Freiwald, Winrich; Anselmi, Fabio; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-06-03)
      The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and relatively robust against identity-preserving ...