Machine Learning for Out of Distribution Database Workloads
Author(s)
Negi, Parimarjan
Advisor
Alizadeh, Mohammad
Abstract
DBMS query optimizers rely on several heuristics to make decisions, such as simplifying assumptions in cardinality estimation or cost-model assumptions for predicting query latencies. With the rise of cloud-first DBMS architectures, it is now possible to collect massive amounts of data on executed queries, which provides a way to improve these heuristics using models that learn from execution history. In particular, such models can be specialized to particular workloads; thus, they may do much better than the average case by learning patterns, for example that some joins are always unexpectedly slow or that some tables are always much larger than expected. This can be very beneficial for performance. However, deploying ML systems in the real world has a catch: it is hard to avoid out-of-distribution (OoD) scenarios in real workloads. ML models often fail in surprising ways in OoD scenarios, and this is an active area of research in the broader ML community. In this thesis, we introduce several such OoD scenarios in the context of database workloads and show that ML models can easily fail catastrophically in these cases. The scenarios range from new query patterns, such as a new column or a new join, to execution-time variance across different hardware and system loads. In each case, we use database-specific knowledge to develop techniques that yield ML models with more reliable and robust performance in OoD settings.
Date issued
2024-02
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology