dc.contributor.advisor: Dimitris Bertsimas.
dc.contributor.author: Dunn, Jack William
dc.contributor.other: Massachusetts Institute of Technology. Operations Research Center.
dc.date.accessioned: 2018-11-28T15:25:46Z
dc.date.available: 2018-11-28T15:25:46Z
dc.date.copyright: 2018
dc.date.issued: 2018
dc.identifier.uri: http://hdl.handle.net/1721.1/119280
dc.description: Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018.
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
dc.description: Cataloged from student-submitted PDF version of thesis.
dc.description: Includes bibliographical references (pages 217-226).
dc.description.abstract: For the past 30 years, decision tree methods have been among the most widely used approaches in machine learning across industry and academia, due in large part to their interpretability. However, this interpretability comes at a price: the performance of classical decision tree methods is typically not competitive with state-of-the-art methods like random forests and gradient boosted trees. A key limitation of classical decision tree methods is their use of a greedy heuristic for training. The tree is constructed one locally optimal split at a time, so the final tree as a whole may be far from globally optimal. Motivated by the increase in performance of mixed-integer optimization methods over the last 30 years, we formulate the problem of constructing the optimal decision tree using discrete optimization, allowing us to construct the entire decision tree in a single step and hence find the single tree that minimizes the training error. We develop high-performance local search methods that allow us to solve this problem efficiently and find optimal decision trees with both parallel (axis-aligned) and hyperplane splits (an illustrative sketch of this local-search idea follows the record below). We show that our approach using modern optimization results in decision trees that improve significantly upon classical decision tree methods. In particular, across a suite of synthetic and real-world classification and regression examples, our methods perform similarly to random forests and boosted trees whilst maintaining the interpretability advantage of a single decision tree, thus alleviating the need to choose between performance and interpretability. We also adapt our approach to the problem of prescription, where the goal is to make the optimal prescription for each observation. While constructing the tree, our method simultaneously infers the unknown counterfactuals in the data and learns to make optimal prescriptions. This results in a decision tree that optimizes both the predictive and prescriptive error, and delivers an interpretable solution that offers significant improvements upon the existing state of the art in prescriptive problems.
dc.description.statementofresponsibility: by Jack William Dunn.
dc.format.extent: 226 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Operations Research Center.
dc.title: Optimal trees for prediction and prescription
dc.type: Thesis
dc.description.degree: Ph. D.
dc.contributor.department: Massachusetts Institute of Technology. Operations Research Center
dc.contributor.department: Sloan School of Management
dc.identifier.oclc: 1065540491
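
The local-search idea in the abstract can be made concrete with a small sketch. The Python snippet below is an illustration under stated assumptions, not the thesis implementation (which handles arbitrary depths, hyperplane splits, and prescriptive trees): it runs coordinate descent over a fixed depth-2, axis-aligned classification tree, repeatedly re-optimizing one node's split against the whole tree's training error while holding the other splits fixed, and accepting only strict improvements. All function names and the depth-2 restriction are illustrative choices.

# Illustrative sketch only: coordinate-descent local search over a fixed
# depth-2, axis-aligned classification tree, in the spirit of the
# local-search approach described in the abstract. Names and structure
# are assumptions for illustration, not the thesis implementation.
import numpy as np

def leaf_error(y):
    # Misclassifications when a leaf predicts its majority class.
    if len(y) == 0:
        return 0
    return len(y) - np.bincount(y).max()

def tree_error(X, y, splits):
    # Training error of a depth-2 tree defined by three
    # (feature, threshold) splits: root, left child, right child.
    (f0, t0), (f1, t1), (f2, t2) = splits
    left = X[:, f0] < t0
    err = 0
    for mask, (f, t) in ((left, (f1, t1)), (~left, (f2, t2))):
        Xs, ys = X[mask], y[mask]
        lower = Xs[:, f] < t
        err += leaf_error(ys[lower]) + leaf_error(ys[~lower])
    return err

def reoptimize_node(X, y, splits, node):
    # Re-optimize one node's split over all candidate (feature, threshold)
    # pairs, holding the other splits fixed; keep only strict improvements.
    best_err, best_split = tree_error(X, y, splits), splits[node]
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            trial = list(splits)
            trial[node] = (f, t)
            e = tree_error(X, y, trial)
            if e < best_err:
                best_err, best_split = e, (f, t)
    return best_split

def local_search(X, y, seed=0):
    rng = np.random.default_rng(seed)
    # Random initial tree; a more careful implementation would restart
    # from many such trees and keep the best result.
    splits = []
    for _ in range(3):
        f = int(rng.integers(X.shape[1]))
        splits.append((f, float(rng.choice(X[:, f]))))
    improved = True
    while improved:
        # Sweep the nodes until no single-node change reduces the error.
        improved = False
        for node in range(3):
            new = reoptimize_node(X, y, splits, node)
            if new != splits[node]:
                splits[node] = new
                improved = True
    return splits, tree_error(X, y, splits)

# Toy XOR-style data: no single axis-aligned split reduces the
# misclassification error, so a greedy first split gains nothing,
# while jointly optimizing all three splits can recover the pattern.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)
splits, err = local_search(X, y)
print("splits:", splits, "training error:", err)

The toy data is deliberately XOR-like: since no single axis-aligned split reduces the misclassification error, a method that greedily commits to one locally optimal split at a time gains nothing from its first split, which is exactly the local-optimality trap the abstract describes. Optimizing the splits jointly, here via local search, can recover the pattern.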

