
Operational decisions and learning for multiproduct retail

Author(s)
Pixton, Clark (Clark Charles)
Download: Full printable version (8.411 MB)
Other Contributors
Massachusetts Institute of Technology. Operations Research Center.
Advisor
David Simchi-Levi.
Terms of use
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
We study multi-product revenue management problems, focusing on the role of uncertainty in both the seller's and the customer's decision processes. We begin by considering a logit model framework for personalized revenue management problems in which utilities are functions of customer attributes, so that data for any one customer can be generalized to others via regression. We establish finite-sample convergence guarantees on the model parameters. These parameter convergence guarantees are then extended to out-of-sample performance guarantees in terms of revenue, in the form of a high-probability bound on the gap between the expected revenue of the best action taken under the estimated parameters and the revenue generated by a decision-maker with full knowledge of the choice model.

In the second chapter, we study the static assortment optimization problem under weakly rational choice. This setting applies to most choice models studied and used in practice. We give a mixed-integer linear optimization formulation and present two branch-and-bound algorithms for solving this optimization problem. The formulation and algorithms require only black-box access to purchase probabilities, and thus provide exact solution methods for a general class of discrete choice models, in particular models without closed-form choice probabilities. We give approximation results for our algorithms in two special cases and test the performance of our algorithms with heuristic stopping criteria.

The third chapter, motivated by data from an online retailer, describes sales of durable goods online, focusing on the effects of uncertainty about product quality and learning from customer reviews. We describe the nature of the tradeoff between learning product quality over time and substitution effects between products offered in the same category on the same website. Specifically, small differences in product release times can be magnified substantially over time. The learning process takes longer in markets with more products. The process also takes longer in markets with higher prices, because customers take on more risk in these markets when purchasing under uncertainty. This results in both lower demand for new products in high-priced markets and more market concentration around fewer, well-established products. We discuss operational implications and show an application to break-even analysis.
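As context for the first chapter, the personalization setup can be pictured with a standard multinomial-logit specification in which utilities depend on customer attributes; the notation below is illustrative only and may differ from the thesis's exact model.

\[
\Pr(\text{choose } j \mid S, z) \;=\; \frac{\exp\!\big(\beta_j^\top z\big)}{1 + \sum_{k \in S} \exp\!\big(\beta_k^\top z\big)}, \qquad j \in S,
\]

where \(S\) is the offered assortment, \(z\) the customer's attribute vector, and \(\beta_j\) product-specific coefficients estimated by regression across customers. A revenue guarantee of the kind described in the abstract then takes the generic form

\[
\Pr\!\Big( R\big(x^\star(\theta^\ast)\big) - R\big(x^\star(\hat\theta_n)\big) \le \varepsilon(n,\delta) \Big) \ge 1-\delta,
\]

where \(\hat\theta_n\) is the estimate from \(n\) customer observations, \(x^\star(\theta)\) the revenue-maximizing action under parameters \(\theta\), and \(\varepsilon(n,\delta)\) shrinks as the sample grows.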
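The second chapter's solution methods require only black-box access to purchase probabilities. As a minimal, hypothetical sketch of that interface (not the thesis's algorithms), the code below evaluates expected revenue through a choice-probability oracle and finds the best assortment by enumeration; the branch-and-bound procedures described in the abstract are designed to prune exactly this kind of search.

from itertools import combinations
from typing import Callable, Dict, FrozenSet, Sequence, Tuple
import math

def expected_revenue(assortment: FrozenSet[int],
                     prices: Dict[int, float],
                     choice_prob: Callable[[int, FrozenSet[int]], float]) -> float:
    # Expected revenue under a black-box choice model: price times purchase probability, summed.
    return sum(prices[j] * choice_prob(j, assortment) for j in assortment)

def best_assortment(products: Sequence[int],
                    prices: Dict[int, float],
                    choice_prob: Callable[[int, FrozenSet[int]], float]) -> Tuple[FrozenSet[int], float]:
    # Exhaustive search over all non-empty assortments; tractable only for small product sets.
    best_set: FrozenSet[int] = frozenset()
    best_rev = 0.0
    for r in range(1, len(products) + 1):
        for subset in combinations(products, r):
            s = frozenset(subset)
            rev = expected_revenue(s, prices, choice_prob)
            if rev > best_rev:
                best_set, best_rev = s, rev
    return best_set, best_rev

# Usage with a multinomial-logit oracle (utilities and prices here are made up):
utils = {1: 1.0, 2: 0.5, 3: -0.2}
prices = {1: 10.0, 2: 14.0, 3: 20.0}

def mnl_prob(j: int, s: FrozenSet[int]) -> float:
    denom = 1.0 + sum(math.exp(utils[k]) for k in s)
    return math.exp(utils[j]) / denom

print(best_assortment([1, 2, 3], prices, mnl_prob))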
Description
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018.
 
Cataloged from PDF version of thesis.
 
Includes bibliographical references (pages 115-120).
 
Date issued
2018
URI
http://hdl.handle.net/1721.1/119352
Department
Massachusetts Institute of Technology. Operations Research Center; Sloan School of Management
Publisher
Massachusetts Institute of Technology
Keywords
Operations Research Center.

Collections
  • Doctoral Theses
