Theses - Operations Research
http://hdl.handle.net/1721.1/7902
2014-04-18T17:07:05Z
Data-driven optimization and analytics for operations management applications
http://hdl.handle.net/1721.1/85695
Data-driven optimization and analytics for operations management applications
Uichanco, Joline Ann Villaranda
In this thesis, we study data-driven decision making in operations management contexts, with a focus on both theoretical and practical aspects. The first part of the thesis analyzes the well-known newsvendor model under the assumption that, even though demand is stochastic, its probability distribution is not part of the input; instead, the only information available is a set of independent samples drawn from the demand distribution. We analyze the well-known sample average approximation (SAA) approach and obtain new tight analytical bounds on the accuracy of the SAA solution. Unlike previous work, these bounds match the empirical performance of SAA observed in extensive computational experiments. Our analysis reveals that a distribution's weighted mean spread (WMS) impacts SAA accuracy. Furthermore, we derive a parameter-free bound on SAA accuracy for log-concave distributions through an innovative optimization-based analysis that minimizes the WMS over the distribution family. In the second part of the thesis, we use spread information to introduce new families of demand distributions under the minimax regret framework. We propose ordering policies that require only a distribution's mean and spread information. These policies have several attractive properties. First, they take the form of simple closed-form expressions. Second, we can quantify an upper bound on the resulting regret. Third, in an environment of high profit margins, they are provably near-optimal under mild technical assumptions on the failure rate of the demand distribution. Finally, the information they require is easy to estimate from data. We show in extensive numerical simulations that when profit margins are high, even if the information in our policies is estimated from (sometimes few) samples, the policies often capture at least 99% of the optimal expected profit.
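As background for the first part, the classical SAA prescription for the newsvendor can be sketched in a few lines (the cost parameters b and h and this minimal implementation are illustrative; the thesis's contribution is the accuracy analysis of this estimator, not the estimator itself):

```python
import math

def saa_newsvendor(samples, b, h):
    """Sample average approximation for the newsvendor problem.

    With unit underage cost b and unit overage cost h, the SAA-optimal
    order quantity is the b/(b+h) empirical quantile of the demand samples.
    """
    q = b / (b + h)  # critical ratio
    s = sorted(samples)
    # smallest order quantity that covers at least a q-fraction of the samples
    k = math.ceil(q * len(s))
    return s[max(k, 1) - 1]
```

The accuracy question studied in the thesis is how close this sample-based quantile gets to the true optimal order quantity as the number of samples grows.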
The third part of the thesis describes both applied and analytical work in collaboration with a large multi-state gas utility. We address a major operational resource allocation problem in which some of the jobs are scheduled and known in advance, while others are unpredictable and must be addressed as they appear. We employ a novel decomposition approach that solves the problem in two phases. The first is a job scheduling phase, in which regular jobs are scheduled over a time horizon. The second is a crew assignment phase, which assigns jobs to maintenance crews under a stochastic number of future emergencies. We propose heuristics for both phases using linear programming relaxation and list scheduling. Using our models, we develop a decision support tool for the utility, which is currently being piloted at one of the company's sites. Based on the utility's data, we project that the tool will yield a 55% reduction in overtime hours.
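The crew assignment heuristic builds on list scheduling; a generic longest-processing-time list-scheduling sketch (the job durations, crew count, and priority rule below are illustrative assumptions, not the utility's actual data or the thesis's exact heuristic) looks like:

```python
import heapq

def list_schedule(job_hours, num_crews):
    """Greedy list scheduling: assign each job, longest first, to the
    crew that currently finishes earliest. Returns the makespan and
    the list of job indices assigned to each crew."""
    crews = [(0.0, i) for i in range(num_crews)]  # (current load, crew id)
    heapq.heapify(crews)
    assignment = {i: [] for i in range(num_crews)}
    for job, hours in sorted(enumerate(job_hours), key=lambda jh: -jh[1]):
        load, crew = heapq.heappop(crews)   # least-loaded crew so far
        assignment[crew].append(job)
        heapq.heappush(crews, (load + hours, crew))
    makespan = max(load for load, _ in crews)
    return makespan, assignment
```

In the thesis's setting the assignment must also reserve capacity for a stochastic number of emergency jobs, which is where the LP relaxation enters; the sketch above shows only the deterministic list-scheduling core.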
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013.; This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 163-166).
2013-01-01T00:00:00Z
Robust planning for unmanned underwater vehicles
http://hdl.handle.net/1721.1/84854
Robust planning for unmanned underwater vehicles
Frost, Emily Anne
In this thesis, I design and implement a novel method of schedule and path selection between predetermined waypoints for unmanned underwater vehicles under uncertainty. The problem is first formulated as a mixed-integer optimization model, and uncertainty is then addressed using a robust optimization approach. Solutions were tested through simulation, and the computational results presented indicate that the robust approach handles larger problems than could previously be solved in a reasonable running time while preserving a high level of robustness. This thesis demonstrates that the robust methods presented can solve realistic-sized problems in reasonable runtimes (a median of ten minutes and a mean of thirty minutes for 32 tasks) and that the methods perform well both in expected reward and in robustness to disturbances in the environment. The latter two results are obtained by simulating the solutions given by the deterministic method, a naive robust method, and the two restricted affine robust policies. The two restricted affine policies consistently achieve nearly 100% of the maximum possible expected reward, while the deterministic and naive robust methods achieve approximately 50%.
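The gap between deterministic and robust planning can be illustrated with a toy task-selection model (all numbers, the brute-force solver, and the worst-case duration model are hypothetical; the thesis uses a mixed-integer formulation with restricted affine policies, not this enumeration):

```python
from itertools import combinations

def best_plan(rewards, durations, budget):
    """Enumerate task subsets and return the max-reward subset whose
    total duration fits the time budget (brute force; toy sizes only)."""
    n = len(rewards)
    best, best_set = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(durations[i] for i in subset) <= budget:
                reward = sum(rewards[i] for i in subset)
                if reward > best:
                    best, best_set = reward, subset
    return best, best_set

rewards = [10, 7, 6]
nominal = [5, 4, 3]   # expected task durations
worst   = [8, 6, 4]   # durations under adverse conditions
budget  = 12

det_reward, det_plan = best_plan(rewards, nominal, budget)  # plans with nominal times
rob_reward, rob_plan = best_plan(rewards, worst, budget)    # hedges against worst case
# the deterministic plan can overrun the budget when the worst case occurs:
det_overrun = sum(worst[i] for i in det_plan) > budget
```

The deterministic plan packs in all three tasks and overruns under adverse conditions, while the robust plan gives up some nominal reward in exchange for a schedule that survives the worst case, mirroring the trade-off studied in the thesis.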
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 59-60).
2013-01-01T00:00:00Z
Marginal social cost auctions for congested airport facilities
http://hdl.handle.net/1721.1/84837
Marginal social cost auctions for congested airport facilities
Schorr, Raphael Avram, 1976-
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; and, (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2002.; "September 2002."; Includes bibliographical references (p. 96-97).
2002-01-01T00:00:00Z
Algorithms for routing problems in stochastic time-dependent networks
http://hdl.handle.net/1721.1/84786
Algorithms for routing problems in stochastic time-dependent networks
Kang, Seong-Cheol, 1968-
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; and, (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2002.; Includes bibliographical references (p. 185-187).
2002-01-01T00:00:00Z