Addressing misspecification in contextual optimization

Author(s)
Bennouna, Omar
Thesis PDF (413.6 KB)
Advisor
Ozdaglar, Asuman
Terms of use
In Copyright - Educational Use Permitted Copyright retained by author(s) https://rightsstatements.org/page/InC-EDU/1.0/
Abstract
We study the predict-then-optimize framework, which combines machine learning with a downstream optimization task. This approach entails forecasting unknown parameters of an optimization problem and then solving the optimization task based on these predictions. For example, consider an energy allocation problem in which the energy cost in different areas is uncertain. Despite the absence of precise energy cost values at the time of problem-solving, machine learning models are employed to predict these costs, and the resulting optimization problem, which consists, for example, of minimizing energy costs while meeting some minimal requirements, is solved using state-of-the-art optimization algorithms. When the chosen hypothesis set is well-specified (i.e., it contains the ground-truth predictor), the SLO (Sequential Learning and Optimization) approach performs best among state-of-the-art methods and has provable performance guarantees. In the misspecified setting (i.e., the hypothesis set does not contain the ground-truth predictor), the ILO (Integrated Learning and Optimization) approach appears to behave better in practice, but it does not enjoy theoretical optimality guarantees. We focus on the misspecified setting, for which no known algorithm rigorously solves this prediction problem. We provide a tractable ILO algorithm that finds an optimal solution in this setting. Our approach consists of minimizing a surrogate loss that enjoys theoretical optimality guarantees as well as good behavior in practice. In particular, we show that our approach experimentally outperforms SLO and previous ILO methods in the misspecified setting.
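To make the SLO/ILO distinction concrete, the sketch below contrasts the two training objectives on a toy contextual allocation problem. It is not the thesis's algorithm or its surrogate loss: the linear predictor, the one-hot "pick the cheapest area" decision rule, the softmin relaxation used as a differentiable stand-in for the decision cost, and all names in the code are illustrative assumptions.

```python
# Minimal sketch (assumptions only): SLO vs. ILO on a toy contextual allocation problem.
# The softmin surrogate below is a generic relaxation, not the surrogate loss from the thesis.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: context x in R^d, true cost vector c in R^m (one cost per area).
# True costs are nonlinear in x, so a linear predictor is deliberately misspecified.
d, m, n = 3, 4, 500
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, m))
C = np.abs(X @ W_true) + 0.1

def decide(c_hat):
    """Downstream optimization: allocate everything to the predicted-cheapest area."""
    w = np.zeros_like(c_hat)
    w[np.arange(len(c_hat)), np.argmin(c_hat, axis=1)] = 1.0
    return w

def decision_cost(c_true, c_hat):
    """Realized cost of the plan chosen from the predicted costs."""
    return np.mean(np.sum(c_true * decide(c_hat), axis=1))

# --- SLO: fit the predictor on prediction error alone (least squares) ---
W_slo, *_ = np.linalg.lstsq(X, C, rcond=None)

# --- ILO: fit the predictor on a differentiable surrogate of the decision cost ---
def softmin(c_hat, tau):
    """Soft allocation: a smooth relaxation of argmin over areas."""
    z = np.exp(-(c_hat - c_hat.min(axis=1, keepdims=True)) / tau)
    return z / z.sum(axis=1, keepdims=True)

W_ilo = W_slo.copy()          # warm start from the SLO solution
lr, tau = 0.01, 0.25
for _ in range(1000):
    c_hat = X @ W_ilo
    p = softmin(c_hat, tau)                        # soft allocation weights
    dot = np.sum(C * p, axis=1, keepdims=True)
    g_c = -(p * (C - dot)) / tau                   # gradient of E[c . p] w.r.t. c_hat
    W_ilo -= lr * X.T @ g_c / n                    # gradient step on the predictor

print("SLO decision cost:", decision_cost(C, X @ W_slo))
print("ILO decision cost:", decision_cost(C, X @ W_ilo))
```

Both predictors are linear, so neither can recover the nonlinear ground truth; the sketch only illustrates how SLO targets prediction accuracy while ILO targets the downstream decision cost directly, which is the distinction the abstract builds on.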
Date issued
2024-05
URI
https://hdl.handle.net/1721.1/156138
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Graduate Theses
