Can a biologically-plausible hierarchy effectively replace face detection, alignment, and recognition pipelines?
(Center for Brains, Minds and Machines (CBMM), arXiv, 2014-03-27)
The standard approach to unconstrained face recognition in natural photographs is via a detection, alignment, recognition pipeline. While that approach has achieved impressive results, there are several reasons to be ...
I-theory on depth vs width: hierarchical function composition
(Center for Brains, Minds and Machines (CBMM), 2015-12-29)
Deep learning networks with convolution, pooling and subsampling are a special case of hierarchical architectures, which can be represented by trees (such as binary trees). Hierarchical as well as shallow networks can ...
Learning Real and Boolean Functions: When Is Deep Better Than Shallow
(Center for Brains, Minds and Machines (CBMM), arXiv, 2016-03-08)
We describe computational tasks - especially in vision - that correspond to compositional/hierarchical functions. While the universal approximation property holds both for hierarchical and shallow networks, we prove that ...
A Deep Representation for Invariance And Music Classification
(Center for Brains, Minds and Machines (CBMM), arXiv, 2014-03-17)
Representations in the auditory cortex might be based on mechanisms similar to the visual ventral stream; modules for building invariance to transformations and multiple layers for compositionality and selectivity. In this ...
Deep Convolutional Networks are Hierarchical Kernel Machines
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-08-05)
We extend i-theory to incorporate not only pooling but also rectifying nonlinearities in an extended HW module (eHW) designed for supervised learning. The two operations roughly correspond to invariance and selectivity, ...