Extensions of a Theory of Networks for Approximation and Learning: Dimensionality Reduction and Clustering
The theory developed in Poggio and Girosi (1989) shows the equivalence between regularization and a class of three-layer networks that we call regularization networks or Hyper Basis Functions. These networks are also ...
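The network class named above can be illustrated with a minimal Gaussian radial basis function regression sketch. This is a generic illustration, not the authors' exact Hyper Basis Function formulation: the center placement, Gaussian width `sigma`, and ridge term `lam` are illustrative assumptions.

```python
import numpy as np

def rbf_fit(X, y, centers, sigma=1.0, lam=1e-6):
    """Solve for the linear output weights of a Gaussian RBF network."""
    # Design matrix: one Gaussian unit per center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2 * sigma**2))
    # Regularized least squares for the output layer (ridge term lam).
    w = np.linalg.solve(G.T @ G + lam * np.eye(len(centers)), G.T @ y)
    return w

def rbf_predict(X, centers, w, sigma=1.0):
    """Evaluate the fitted RBF network at new inputs."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ w

# Usage: approximate a smooth 1-D function from samples.
X = np.linspace(0, 1, 50)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
centers = np.linspace(0, 1, 10)[:, None]
w = rbf_fit(X, y, centers, sigma=0.15)
err = np.abs(rbf_predict(X, centers, w, sigma=0.15) - y).max()
```

The hidden layer of Gaussian units plus the linear output layer is the three-layer structure the abstract refers to; the ridge term plays the role of the regularizer.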
Bringing the Grandmother Back into the Picture: A Memory-Based View of Object Recognition
We describe experiments with a versatile pictorial prototype-based learning scheme for 3D object recognition. The GRBF scheme seems to be amenable to realization in biophysical hardware because the only kind of ...
Extensions of a Theory of Networks for Approximation and Learning: Outliers and Negative Examples
Learning an input-output mapping from a set of examples can be regarded as synthesizing an approximation of a multi-dimensional function. From this point of view, this form of learning is closely related to regularization ...
A Theory of How the Brain Might Work
I wish to propose a quite speculative new version of the grandmother cell theory to explain how the brain, or parts of it, may work. In particular, I discuss how the visual system may learn to recognize 3D objects. The ...
Continuous Stochastic Cellular Automata that Have a Stationary Distribution and No Detailed Balance
Marroquin and Ramirez (1990) have recently discovered a class of discrete stochastic cellular automata with Gibbsian invariant measures that have a non-reversible dynamic behavior. Practical applications include more ...