Extensions of a Theory of Networks for Approximation and Learning: Outliers and Negative Examples
Author(s)
Girosi, Federico; Poggio, Tomaso; Caprile, Bruno
Download
AIM-1220.ps (3.231 MB)
Abstract
Learning an input-output mapping from a set of examples can be regarded as synthesizing an approximation of a multi-dimensional function. From this point of view, this form of learning is closely related to regularization theory. In this note, we extend the theory by introducing ways of dealing with two aspects of learning: learning in the presence of unreliable examples and learning from positive and negative examples. The first extension corresponds to dealing with outliers among the sparse data. The second one corresponds to exploiting information about points or regions in the range of the function that are forbidden.
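For context, the underlying theory approximates the unknown mapping f from sparse data (x_i, y_i) by minimizing a regularized functional. The sketch below is illustrative only; the robust cost V, the penalty C, the weight \mu, and the forbidden points z_j are names assumed here for exposition, not the memo's notation:

% Standard regularization functional: quadratic data term plus stabilizer P.
\[
  H[f] \;=\; \sum_{i=1}^{N} \bigl(y_i - f(\mathbf{x}_i)\bigr)^{2} \;+\; \lambda\,\|Pf\|^{2}
\]
% Outliers (illustrative): a robust cost V replaces the quadratic data term,
% so that large residuals from unreliable examples contribute less.
\[
  H_{\mathrm{robust}}[f] \;=\; \sum_{i=1}^{N} V\bigl(y_i - f(\mathbf{x}_i)\bigr) \;+\; \lambda\,\|Pf\|^{2}
\]
% Negative examples (illustrative): a penalty C, large on the forbidden region
% of the range, is added at the negative points z_j.
\[
  H_{\pm}[f] \;=\; H[f] \;+\; \mu \sum_{j} C\bigl(f(\mathbf{z}_j)\bigr)
\]

In the original theory, minimizing the standard functional yields a network solution built from Green's functions of the stabilizer P centered at the data points; the extensions above modify only the data and penalty terms of that variational problem.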
Date issued
1990-07-01
Other identifiers
AIM-1220
Series/Report no.
AIM-1220