Doing more with less: characterizing dataset downsampling for AutoML
Author(s)
Zogaj, Fatjon; Cambronero, José Pablo; Rinard, Martin C; Cito, Jürgen
Published version (920.3 KB)
Publisher with Creative Commons License
Terms of use
Creative Commons Attribution
Abstract
Automated machine learning (AutoML) promises to democratize machine learning by automatically generating machine learning pipelines with little to no user intervention. Typically, a search procedure is used to repeatedly generate and validate candidate pipelines, maximizing a predictive performance metric, subject to a limited execution time budget. While this approach to generating candidates works well for small tabular datasets, the same procedure does not directly scale to larger tabular datasets with 100,000s of observations, often producing fewer candidate pipelines and yielding lower performance, given the same execution time budget. We carry out an extensive empirical evaluation of the impact that downsampling (reducing the number of rows in the input tabular dataset) has on the pipelines produced by a genetic-programming-based AutoML search for classification tasks.
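The downsampling the abstract describes can be sketched in a few lines. This is an illustrative example, not the paper's implementation: the function name `downsample`, the 10% sampling fraction, and the use of stratified (per-class) sampling are assumptions made here to show the idea of shrinking a classification dataset by rows before running an AutoML search.

```python
# Illustrative sketch (assumed names, not the paper's code): reduce the number
# of rows in a tabular classification dataset before an AutoML search, keeping
# the class-label distribution roughly intact via stratified sampling.
import pandas as pd

def downsample(df: pd.DataFrame, label_col: str, frac: float, seed: int = 0) -> pd.DataFrame:
    """Return a row-wise sample of `df`, sampled within each class so the
    label distribution of the smaller dataset mirrors the original."""
    return (
        df.groupby(label_col, group_keys=False)
          .apply(lambda g: g.sample(frac=frac, random_state=seed))
          .reset_index(drop=True)
    )

# Example: a 100,000-row dataset reduced to ~10,000 rows before search.
full = pd.DataFrame({
    "x": range(100_000),
    "y": [i % 2 for i in range(100_000)],  # binary class label
})
small = downsample(full, label_col="y", frac=0.1)
```

A smaller input like `small` lets the same time budget cover more candidate pipelines; the paper's empirical question is how this trades off against the predictive performance of the pipelines found.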
Date issued
2021
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Proceedings of the VLDB Endowment
Publisher
VLDB Endowment
Citation
Zogaj, Fatjon, Cambronero, José Pablo, Rinard, Martin C., and Cito, Jürgen. 2021. "Doing more with less: characterizing dataset downsampling for AutoML." Proceedings of the VLDB Endowment, 14 (11).
Version: Final published version