On Coresets for Support Vector Machines
Author(s)
Baykal, Cenk; Rus, Daniela L
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Terms of use
Abstract
We present an efficient coreset construction algorithm for large-scale Support Vector Machine (SVM) training in Big Data and streaming applications. A coreset is a small, representative subset of the original data points such that models trained on the coreset are provably competitive with those trained on the original data set. Since the coreset is generally much smaller than the original set, our preprocess-then-train scheme has the potential to yield significant speedups when training SVM models. We prove lower and upper bounds on the coreset size required to obtain small data summaries for the SVM problem. As a corollary, we show that our algorithm can be used to extend the applicability of any off-the-shelf SVM solver to streaming, distributed, and dynamic data settings. We evaluate the performance of our algorithm on real-world and synthetic data sets. Our experimental results reaffirm the favorable theoretical properties of our algorithm and demonstrate its practical effectiveness in accelerating SVM training.
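To illustrate the preprocess-then-train idea described above, here is a minimal sketch of a generic sensitivity-based importance-sampling coreset, followed by weighted training. This is an assumption-laden illustration of the general coreset framework, not the authors' specific algorithm or sensitivity bounds; the function names and the uniform-sensitivity example are hypothetical.

```python
import numpy as np

def sample_coreset(X, y, m, sensitivities, rng=None):
    """Draw a weighted coreset of size m by importance (sensitivity) sampling.

    Generic framework sketch (not the paper's exact construction): points are
    sampled with probability proportional to an upper bound on their
    sensitivity, and each sampled point gets weight 1 / (m * p_i) so that
    weighted sums over the coreset are unbiased estimates of sums over the
    full data set.
    """
    rng = np.random.default_rng(rng)
    p = sensitivities / sensitivities.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    weights = 1.0 / (m * p[idx])
    return X[idx], y[idx], weights

# Toy usage: with uniform sensitivities this reduces to uniform sampling.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = np.sign(X[:, 0])
Xc, yc, w = sample_coreset(X, y, m=50,
                           sensitivities=np.ones(len(X)), rng=0)
# An off-the-shelf solver that accepts per-sample weights can then be
# trained on (Xc, yc, w) instead of the full (X, y).
```

With uniform sampling probabilities p_i = 1/n, every weight equals n/m, so the weights sum to n and the coreset mimics the scale of the full data set; the paper's contribution is choosing non-uniform sensitivities with provable size bounds for the SVM objective.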
Date issued
2020-10
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Journal
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher
Springer International Publishing
Citation
Tukan, Murad et al. "On Coresets for Support Vector Machines." Lecture Notes in Computer Science, vol. 12337 LNCS, International Conference on Theory and Applications of Models of Computation (TAMC 2020), Changsha, China, 18-20 Oct. 2020, Springer International Publishing, pp. 287-299. © 2020 The Author(s)
Version: Original manuscript
ISSN
0302-9743