Learning with group invariant features: A Kernel perspective
Author(s)
Mroueh, Youssef; Poggio, Tomaso A; Voinea, Stephen Constantin
Download: 1506.02544.pdf (319.2Kb)
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Terms of use
Metadata
Abstract
In this paper we analyze a random feature map based on a theory of invariance (I-theory) introduced in [1]. More specifically, a group invariant signal signature is obtained through cumulative distributions of group-transformed random projections. Our analysis bridges invariant feature learning with kernel methods, as we show that this feature map defines an expected Haar-integration kernel that is invariant to the specified group action. We show that this non-linear random feature map approximates the group invariant kernel uniformly on a set of N points. Moreover, we show that it defines a function space that is dense in the equivalent invariant Reproducing Kernel Hilbert Space. Finally, in a classical supervised learning setting, we quantify the convergence rate of empirical risk minimization, as well as the reduction in sample complexity achieved by a learning algorithm that uses such an invariant representation for signal classification.
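The feature map described above (cumulative distributions of group-transformed random projections) can be sketched concretely. The snippet below is a minimal illustration, not the authors' implementation: it assumes the group is cyclic shifts of a 1-D signal, uses Gaussian random templates, and pools the projections of the whole orbit with an empirical CDF evaluated at a fixed grid of thresholds. Because shifting the input only permutes the orbit, the CDF (and hence the feature vector) is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def invariant_features(x, templates, thresholds):
    """CDF-pooled features over the cyclic-shift group (illustrative sketch).

    For each random template w, project every shifted copy of x onto w,
    then record the fraction of projections below each threshold
    (an empirical CDF). Shifting x permutes the orbit's projections,
    so the pooled features are shift-invariant.
    """
    d = len(x)
    # All group-transformed copies of x: the d cyclic shifts, shape (d, d).
    orbit = np.stack([np.roll(x, g) for g in range(d)])
    proj = orbit @ templates.T                      # shape (d, n_templates)
    # Empirical CDF of each template's projections at each threshold.
    feats = (proj[:, :, None] <= thresholds[None, None, :]).mean(axis=0)
    return feats.ravel()                            # (n_templates * n_thresholds,)

d, n_templates, n_thresholds = 16, 8, 10
x = rng.standard_normal(d)
W = rng.standard_normal((n_templates, d))           # random templates
t = np.linspace(-3.0, 3.0, n_thresholds)            # threshold grid

f1 = invariant_features(x, W, t)
f2 = invariant_features(np.roll(x, 5), W, t)
print(np.allclose(f1, f2))  # True: shifted input gives identical features
```

The inner product of such feature vectors approximates the Haar-integration kernel discussed in the abstract, since averaging over the orbit is group integration for a finite group.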
Date issued
2015-12
Department
Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Journal
Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS '15)
Publisher
Association for Computing Machinery
Citation
Mroueh, Youssef, Stephen Voinea, and Tomaso Poggio. "Learning with Group Invariant Features: A Kernel Perspective." Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1 (NIPS '15), December 7-12, 2015, Montreal, Canada, Association for Computing Machinery, December 2015. © 2015 Association for Computing Machinery (ACM)
Version: Author's final manuscript