DSpace@MIT

Generalization and Properties of the Neural Response

Research and Teaching Output of the MIT Community


dc.contributor.advisor Tomaso Poggio
dc.contributor.author Bouvrie, Jake en_US
dc.contributor.author Poggio, Tomaso en_US
dc.contributor.author Rosasco, Lorenzo en_US
dc.contributor.author Smale, Steve en_US
dc.contributor.author Wibisono, Andre en_US
dc.contributor.other Center for Biological and Computational Learning (CBCL) en_US
dc.date.accessioned 2010-11-22T22:15:09Z
dc.date.available 2010-11-22T22:15:09Z
dc.date.issued 2010-11-19
dc.identifier.uri http://hdl.handle.net/1721.1/60024
dc.description.abstract Hierarchical learning algorithms have enjoyed tremendous growth in recent years, with many new algorithms being proposed and applied to a wide range of applications. However, despite the apparent success of hierarchical algorithms in practice, the theory of hierarchical architectures remains at an early stage. In this paper we study the theoretical properties of hierarchical algorithms from a mathematical perspective. Our work is based on the framework of hierarchical architectures introduced by Smale et al. in the paper "Mathematics of the Neural Response", Foundations of Computational Mathematics, 2010. We propose a generalized definition of the neural response and derived kernel that allows us to integrate several existing hierarchical algorithms into our framework. We then use this generalized definition to analyze the theoretical properties of hierarchical architectures. Our analysis focuses on three particular aspects of the hierarchy. First, we show that a wide class of architectures suffers from range compression; essentially, the derived kernel becomes increasingly saturated at each layer. Second, we show that the complexity of a linear architecture is constrained by the complexity of the first layer, and in some cases the architecture collapses into a single-layer linear computation. Finally, we characterize the discrimination and invariance properties of the derived kernel in the case when the input data are one-dimensional strings. We believe that these theoretical results will provide a useful foundation for guiding future developments within the theory of hierarchical algorithms. en_US
dc.format.extent 59 p. en_US
dc.relation.ispartofseries MIT-CSAIL-TR-2010-051 en_US
dc.relation.ispartofseries CBCL-292 en_US
dc.rights Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported en
dc.rights.uri http://creativecommons.org/licenses/by-nc-nd/3.0/
dc.subject hierarchical learning en_US
dc.subject kernel methods en_US
dc.subject learning theory en_US
dc.title Generalization and Properties of the Neural Response en_US
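The derived-kernel recursion underlying the abstract's framework can be sketched concretely in the one-dimensional string setting that the paper's final analysis considers. The sketch below is illustrative only, not the paper's definitions: the function names, the single-character base kernel, and the use of contiguous substrings as the "transformations" are all assumptions made here for illustration; the precise framework is given in Smale et al. (2010).

```python
import itertools
import math

# Illustrative sketch (assumed names, not the paper's notation):
# a one-layer derived kernel on strings over a two-letter alphabet.
ALPHABET = "ab"

def base_kernel(u, v):
    # Initial kernel on length-1 strings: 1 if the characters match, else 0.
    return 1.0 if u == v else 0.0

def substrings(f, k):
    # The "transformations" here: restrictions of f to contiguous
    # substrings of length k.
    return [f[i:i + k] for i in range(len(f) - k + 1)]

def neural_response(f, templates, prev_kernel, k):
    # N(f)(t) = max over substrings u of f of K_prev(u, t):
    # max-pooling of the previous-layer kernel over translations.
    return [max(prev_kernel(u, t) for u in substrings(f, k))
            for t in templates]

def derived_kernel(f, g, templates, prev_kernel, k):
    # Normalized inner product of the two neural responses.
    nf = neural_response(f, templates, prev_kernel, k)
    ng = neural_response(g, templates, prev_kernel, k)
    dot = sum(a * b for a, b in zip(nf, ng))
    norm = math.sqrt(sum(a * a for a in nf)) * math.sqrt(sum(b * b for b in ng))
    return dot / norm if norm else 0.0

# Templates at this layer: all single characters.
templates = ["".join(t) for t in itertools.product(ALPHABET, repeat=1)]
K2 = lambda f, g: derived_kernel(f, g, templates, base_kernel, 1)

print(K2("abba", "baab"))  # → 1.0 (max-pooling makes this layer reversal-invariant)
print(K2("aaaa", "bbbb"))  # → 0.0 (no shared characters)
```

The first output hints at the invariance properties the abstract mentions (pooling discards character order at this layer), while repeated composition of such layers is where the range-compression effect studied in the paper arises.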


Files in this item

Name: MIT-CSAIL-TR-2010 ...   Size: 729.0Kb   Format: PDF

