A Statistical Learning Theory Framework for Supervised Pattern Discovery
Author(s)
Huggins, Jonathan H.; Rudin, Cynthia
Publisher Policy
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
Copyright © SIAM. This paper formalizes a latent variable inference problem we call supervised pattern discovery, the goal of which is to find sets of observations that belong to a single "pattern." We discuss two versions of the problem and prove uniform risk bounds for both. In the first version, collections of patterns can be generated in an arbitrary manner and the data consist of multiple labeled collections. In the second version, the patterns are assumed to be generated independently by identically distributed processes. These processes are allowed to take an arbitrary form, so observations within a pattern are not in general independent of each other. The bounds for the second version of the problem are stated in terms of a new complexity measure, the quasi-Rademacher complexity.
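As background for the risk bounds mentioned above: the paper's quasi-Rademacher complexity is a new measure defined in the paper itself, but it generalizes the standard empirical Rademacher complexity, which measures how well a function class can correlate with random signs. The sketch below is a minimal Monte Carlo estimate of the *standard* empirical Rademacher complexity for a finite function class (the function names and setup are illustrative, not from the paper):

```python
import numpy as np

def empirical_rademacher(predictions, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity
    R_hat(F) = E_sigma[ sup_{f in F} (1/n) * sum_i sigma_i f(x_i) ].

    predictions: (k, n) array; row j holds (f_j(x_1), ..., f_j(x_n))
    for the j-th function in a finite class F of size k.
    """
    rng = np.random.default_rng(seed)
    _, n = predictions.shape
    # Draw i.i.d. Rademacher signs sigma_i in {-1, +1}.
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))
    # Correlation of each sign draw with each function: (n_draws, k).
    corr = sigma @ predictions.T / n
    # Sup over the class, then average over sign draws.
    return corr.max(axis=1).mean()
```

For a class that contains every sign pattern on n points (a maximally rich class), the complexity is exactly 1, while a single fixed function has complexity near 0; this gap is what drives uniform risk bounds of the kind the paper proves.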
Date issued
2014-04
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science; Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Sloan School of Management
Publisher
Society for Industrial and Applied Mathematics
Citation
Huggins, Jonathan H. and Rudin, Cynthia. 2014. "A Statistical Learning Theory Framework for Supervised Pattern Discovery."
Version: Final published version