Tight mixed-integer optimization formulations for prescriptive trees

Author(s)
Biggs, Max; Perakis, Georgia
Terms of use
Creative Commons Attribution https://creativecommons.org/licenses/by/4.0/
Abstract
We focus on modeling the relationship between an input feature vector and the predicted outcome of a trained decision tree using mixed-integer optimization. This arises in many practical applications where a decision tree or a tree ensemble is incorporated into an optimization problem to model the predicted outcomes of a decision. We propose novel tight mixed-integer optimization formulations for this problem. Existing formulations can be shown to have linear relaxations with fractional extreme points, even for the simple case of modeling a single decision tree, or to require a very large number of constraints, which leads to slow solve times in practice. A formulation we propose, based on a projected union-of-polyhedra approach, is ideal (i.e., the extreme points of the linear relaxation are integer when required) for a single decision tree. Although this formulation is generally not ideal for tree ensembles, it typically has fewer fractional extreme points, leading to faster solve times. We also study formulations with a binary representation of the feature vector and present multiple approaches to tighten existing formulations. We show that fractional extreme points are removed when multiple splits are on the same feature. At an extreme, we prove that this yields an ideal formulation for a tree ensemble modeling a one-dimensional feature vector. Building on this result, we also show that these additional constraints result in significantly tighter linear relaxations when the feature vector is low dimensional.
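To make the modeling task concrete: a common family of MIO encodings for a trained tree (not necessarily the paper's specific formulation) introduces one binary indicator per leaf, requires the indicators to sum to one, and adds split constraints so that the active leaf is the one whose region contains the input. The pure-Python sketch below, using a hypothetical toy tree, enumerates leaf regions and checks the structural fact these constraints enforce: exactly one leaf indicator is active for any feature vector, and the prediction is the corresponding leaf value.

```python
# Illustrative sketch only: a toy depth-2 tree with heap-style node indexing
# (root = 1, children of node i are 2*i and 2*i + 1). Internal nodes branch on
# x[feature] <= threshold; leaves carry predictions. All names and values here
# are assumptions for illustration, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Split:
    feature: int     # branch on x[feature] <= threshold
    threshold: float

splits = {1: Split(0, 0.5), 2: Split(1, 0.3), 3: Split(1, 0.7)}
leaf_pred = {4: 10.0, 5: 20.0, 6: 30.0, 7: 40.0}

def leaf_region(leaf):
    """Collect the (split, goes_left) conditions on the root-to-leaf path."""
    conds, node = [], leaf
    while node > 1:
        parent = node // 2
        conds.append((splits[parent], node == 2 * parent))  # left child = 2*parent
        node = parent
    return conds

def leaf_indicators(x):
    """z[l] = 1 iff x satisfies every branching condition on the path to l."""
    return {
        leaf: int(all(
            (x[s.feature] <= s.threshold) == goes_left
            for s, goes_left in leaf_region(leaf)
        ))
        for leaf in leaf_pred
    }

# The structural property an MIO formulation encodes with sum_l z_l = 1:
# exactly one leaf is active, and the tree's output is that leaf's prediction.
for x in [(0.2, 0.1), (0.2, 0.9), (0.9, 0.5), (0.9, 0.8)]:
    z = leaf_indicators(x)
    assert sum(z.values()) == 1
    prediction = sum(leaf_pred[l] * z[l] for l in z)
```

In an actual solver model, `z` would be binary decision variables and the region conditions would become linear constraints; how tightly those constraints approximate the union of leaf polyhedra in the relaxation is precisely what the paper's formulations improve.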
Date issued
2025-05-29
URI
https://hdl.handle.net/1721.1/163756
Department
Sloan School of Management
Journal
Machine Learning
Publisher
Springer US
Citation
Biggs, M., Perakis, G. Tight mixed-integer optimization formulations for prescriptive trees. Mach Learn 114, 156 (2025).
Version: Final published version

Collections
  • MIT Open Access Articles
