Eccentricity dependent deep neural networks: Modeling invariance in human vision
Author(s)
Chen, Francis X.; Roig Noguera, Gemma; Isik, Leyla; Boix Bosch, Xavier; Poggio, Tomaso A.
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
Humans can recognize objects in a way that is invariant to scale, translation, and clutter. We use invariance theory as a conceptual basis to computationally model this phenomenon. This theory describes the role of eccentricity in human visual processing and generalizes feedforward convolutional neural networks (CNNs). Our model explains some key psychophysical observations relating to invariant perception while maintaining important similarities with biological neural architectures. To our knowledge, this work is the first to unify explanations of all three types of invariance while leveraging the power and neurological grounding of CNNs.
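The abstract's eccentricity-dependent generalization of a feedforward CNN can be pictured, very roughly, as a front end whose effective receptive fields grow with distance from the fixation point. The sketch below is not the authors' model; it is a minimal illustration of that sampling idea, assuming a square grayscale image centered at fixation, a hypothetical foveal resolution fovea_size, and crops whose extent doubles at each scale before being pooled back to a common size for a shared CNN.

```python
import numpy as np

def eccentricity_samples(image, fovea_size=32, num_scales=4):
    """Illustrative sketch (not the paper's implementation): take concentric
    crops around fixation whose side length doubles at each scale, then
    block-average each crop down to fovea_size x fovea_size, so resolution
    falls off with eccentricity while every scale feeds the same CNN input."""
    h, w = image.shape
    cy, cx = h // 2, w // 2
    crops = []
    for s in range(num_scales):
        half = (fovea_size * (2 ** s)) // 2
        crop = image[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
        factor = crop.shape[0] // fovea_size
        if factor > 1:
            # Block-average so coarser (more peripheral) scales lose detail.
            crop = crop[:factor * fovea_size, :factor * fovea_size]
            crop = crop.reshape(fovea_size, factor, fovea_size, factor).mean(axis=(1, 3))
        crops.append(crop)
    return np.stack(crops)  # shape: (num_scales, fovea_size, fovea_size)

# Usage: a 256x256 image yields 4 "retinal" samples of 32x32 each.
img = np.random.rand(256, 256)
print(eccentricity_samples(img).shape)  # (4, 32, 32)
```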
Date issued
2017-03
Department
Center for Brains, Minds, and Machines; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
2017 AAAI Spring Symposium Series, Science of Intelligence: Computational Principles of Natural and Artificial Intelligence
Publisher
Association for the Advancement of Artificial Intelligence
Citation
Chen, Francis X. et al. "Eccentricity dependent deep neural networks: Modeling invariance in human vision." 2017 AAAI Spring Symposium Series, Science of Intelligence: Computational Principles of Natural and Artificial Intelligence, March 27-29, 2017, Stanford, California, Association for the Advancement of Artificial Intelligence, March 2017 © 2017 Association for the Advancement of Artificial Intelligence
Version: Author's final manuscript