Operations Research - Ph.D. / Sc.D.
http://hdl.handle.net/1721.1/7903
Tue, 19 Sep 2017 19:06:33 GMT
http://hdl.handle.net/1721.1/108916
Data-driven algorithms for operational problems
Cheung, Wang Chi
In this thesis, we propose algorithms for solving revenue maximization and inventory control problems in data-driven settings. First, we study the choice-based network revenue management problem. We propose the Approximate Column Generation (ACG) heuristic and the Potential Based (PB) algorithm for solving the Choice-based Deterministic Linear Program, an LP relaxation of the problem, to near-optimality. Both algorithms assume only the ability to approximate the underlying single-period problem. ACG inherits the empirical efficiency of the Column Generation heuristic, while PB enjoys a provable efficiency guarantee. Building on these tractability results, we design an earning-while-learning policy for the online problem under a Multinomial Logit choice model with unknown parameters. The policy is efficient, and achieves regret sublinear in the length of the sales horizon. Next, we consider the online dynamic pricing problem, where the underlying demand function is not known to the monopolist. Due to administrative constraints, the monopolist may make only a limited number of price changes during the sales horizon. For any integer m, we provide an information-theoretic lower bound on the regret incurred by any pricing policy with at most m price changes. The bound is the best possible, as it matches the regret upper bound achieved by our proposed policy, up to a constant factor. Finally, we study the data-driven capacitated stochastic inventory control problem, where the demand distributions can only be accessed through sampling from offline data. We apply the Sample Average Approximation (SAA) method, and establish a polynomial-size upper bound on the number of samples needed to achieve a near-optimal expected cost. Nevertheless, the underlying SAA problem is shown to be #P-hard. Motivated by the SAA analysis, we propose a randomized polynomial-time approximation scheme that also uses polynomially many samples. To complement our results, we establish an information-theoretic lower bound on the number of samples needed to achieve near-optimality.
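The SAA idea described above can be illustrated with a minimal sketch. This is a single-period capacitated newsvendor toy, not the multi-period model analyzed in the thesis; the demand distribution, prices, and grid search below are hypothetical choices made purely for illustration.

```python
import numpy as np

def saa_order_quantity(demand_samples, cost, price, capacity):
    """SAA for a single-period capacitated newsvendor: pick the order
    quantity q in [0, capacity] maximizing average profit over the
    offline demand samples (grid search for simplicity)."""
    candidates = np.linspace(0.0, capacity, 201)
    # For each candidate q, sales = min(q, demand) in every sample.
    sales = np.minimum(candidates[:, None], demand_samples[None, :])
    profits = (price * sales).mean(axis=1) - cost * candidates
    return candidates[np.argmax(profits)]

# Hypothetical offline data: 5000 demand samples from an exponential.
rng = np.random.default_rng(0)
samples = rng.exponential(scale=10.0, size=5000)
q_hat = saa_order_quantity(samples, cost=1.0, price=3.0, capacity=25.0)
# q_hat approximates the critical-fractile quantity under the
# empirical distribution; its quality depends on the sample size,
# which is exactly what the thesis's sample-complexity bounds address.
```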
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 173-180).
Sun, 01 Jan 2017 00:00:00 GMT
http://hdl.handle.net/1721.1/108834
Multiserver queueing systems in heavy traffic
Eschenfeldt, Patrick Clark
In the study of queueing systems, a question of significant current interest is that of large-scale behavior, where the size of the system increases without bound. This regime has become increasingly relevant with the rise of massive distributed systems like server farms, call centers, and health care management systems. To minimize underutilization of resources, the large-scale regime of most interest is one in which the work to be done increases as processing capability increases. In this thesis, we characterize the behavior of two such large-scale queueing systems. In the first part of the thesis we consider a Join the Shortest Queue (JSQ) policy in the so-called Halfin-Whitt heavy traffic regime. We establish that a scaled process counting the number of idle servers and queues of length two weakly converges to a two-dimensional reflected Ornstein-Uhlenbeck process, while processes counting longer queues converge to a deterministic system decaying to zero in constant time. This limiting system is similar to that of the traditional Halfin-Whitt model in its basic performance measures, but there are key differences in the queueing behavior of the JSQ model. In particular, only a vanishing fraction of customers will have to wait, but those who do will incur a constant-order waiting time. In the second part of the thesis we consider a widely studied so-called "supermarket model" in which arriving customers join the shortest of d randomly selected queues. Assuming Poisson arrivals at rate nλ_n and rate-1 exponentially distributed service times, our heavy traffic regime is described by λ_n → 1 as n → ∞. We give a simple expectation argument establishing that queues have steady-state length at least i* = log_d(1/(1 − λ_n)) with probability approaching one as n → ∞. Our main result for this system concerns the detailed behavior of queues with length smaller than i*. Assuming λ_n converges to 1 at rate at most √n, we show that the dynamics of such queues do not follow a diffusion process, as is typical for queueing systems in heavy traffic, but are described instead by a deterministic infinite system of linear differential equations, after an appropriate rescaling.
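The supermarket model's power-of-d dynamics can be sketched with a small jump-chain simulation. The parameter values and bookkeeping below are illustrative assumptions; the thesis's results concern the n → ∞ limit analytically, not simulation.

```python
import random

def supermarket_sim(n=500, d=2, lam=0.9, events=200_000, seed=1):
    """Jump-chain simulation of the supermarket model: arrivals occur
    at rate n*lam, each arrival samples d queues uniformly and joins
    the shortest; each of the n servers serves at rate 1."""
    random.seed(seed)
    queues = [0] * n
    busy, pos = [], {}        # busy server indices, with O(1) removal
    for _ in range(events):
        arr_rate = n * lam
        if random.random() * (arr_rate + len(busy)) < arr_rate:
            picks = random.sample(range(n), d)      # power-of-d choices
            i = min(picks, key=lambda j: queues[j])
            queues[i] += 1
            if queues[i] == 1:                      # server becomes busy
                pos[i] = len(busy)
                busy.append(i)
        else:
            i = busy[random.randrange(len(busy))]   # a service completes
            queues[i] -= 1
            if queues[i] == 0:                      # swap-remove from busy
                last = busy[-1]
                busy[pos[i]] = last
                pos[last] = pos[i]
                busy.pop()
                del pos[i]
    return queues

qs = supermarket_sim()
frac_ge_2 = sum(q >= 2 for q in qs) / len(qs)   # tail of queue lengths
```

For fixed λ < 1 the classical mean-field prediction for the fraction of queues with length ≥ i is λ^((d^i − 1)/(d − 1)), a much lighter tail than the geometric tail under purely random routing; the heavy-traffic analysis in the thesis studies what happens as λ_n → 1.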
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017.; This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 107-109).
Sun, 01 Jan 2017 00:00:00 GMT
http://hdl.handle.net/1721.1/107017
Dynamic trading and behavioral finance
Remorov, Alexander
The problem of investing over time remains an important open question, considering the recent large moves in the markets, such as the Financial Crisis of 2008, the subsequent rally in equities, and the decline in commodities over the past two years. We study this problem from three aspects. The first aspect lies in analyzing a particular dynamic strategy, called the stop-loss strategy. We derive closed-form expressions for the strategy returns while accounting for serial correlation and transaction costs. When applied to a large sample of individual U.S. stocks, we show that tight stop-loss strategies tend to underperform the buy-and-hold policy due to excessive trading costs. Outperformance is possible for stocks with sufficiently high serial correlation in returns. Certain strategies succeed at reducing downside risk, but not substantially. We also look at optimizing the stop-loss level for a class of these strategies. The second approach is more behavioral in nature and aims to elicit how various market players expect to react to large changes in asset prices. We use a global survey of individual investors, financial advisors, and institutional investors to do this. We find that most institutional investors expect to exhibit highly contrarian reactions to past returns in terms of their equity allocations. Financial advisors are also mostly contrarian; a few of them demonstrate passive behavior. In contrast, individual investors are, on average, extrapolative, and can be partitioned into four distinct types: passive investors, risk avoiders, extrapolators, and everyone else. The third part of the thesis studies how people actually trade. We propose a new model of dynamic trading in which an investor is affected by behavioral heuristics, and carry out extensive simulations to understand how the heuristics affect portfolio performance.
We propose an MCMC algorithm that is reasonably successful at estimating model parameters from simulated data, and look at the predictive ability of the model. We also provide preliminary results from looking at trading data obtained from a brokerage firm. We focus on understanding how people trade their portfolios conditional on past returns at various horizons, as well as on past trading behavior.
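A toy stop-loss overlay illustrates the kind of strategy studied in the first part. The one-period trigger, re-entry rule, and cost figure below are hypothetical simplifications; the stop-loss policy and closed-form return expressions in the thesis are more elaborate.

```python
import numpy as np

def stop_loss_returns(asset_returns, stop=-0.05, rf=0.0, cost=0.001):
    """Per-period returns of a naive stop-loss overlay: hold the asset
    until a one-period return breaches `stop`, then sit in cash at rate
    rf until a one-period asset return exceeds rf; charge `cost` on
    each switch.  (Hypothetical rule -- a practical stop-loss policy
    would typically trigger on cumulative, not single-period, losses.)"""
    invested = True
    out = []
    for r in asset_returns:
        if invested:
            realized = r
            if r < stop:
                invested = False
                realized -= cost      # cost of exiting the position
        else:
            realized = rf
            if r > rf:
                invested = True
                realized -= cost      # cost of re-entering
        out.append(realized)
    return np.array(out)

# Hypothetical i.i.d. daily returns; note that serial correlation,
# which the thesis shows is crucial for outperformance, is absent here,
# so the overlay mainly pays trading costs.
rng = np.random.default_rng(42)
rets = rng.normal(0.0005, 0.02, size=1000)
strat = stop_loss_returns(rets)
```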
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2016.; This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 198-204).
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/1721.1/106683
Methods for convex optimization and statistical learning
Grigas, Paul (Paul Edward)
We present several contributions at the interface of first-order methods for convex optimization and problems in statistical machine learning. In the first part of this thesis, we present new results for the Frank-Wolfe method, with a particular focus on: (i) novel computational guarantees that apply for any step-size sequence, (ii) a novel adjustment to the basic algorithm to better account for warm-start information, and (iii) extensions of the computational guarantees that hold in the presence of approximate subproblem and/or gradient computations. In the second part of the thesis, we present a unifying framework for interpreting "greedy" first-order methods -- namely Frank-Wolfe and greedy coordinate descent -- as instantiations of the dual averaging method of Nesterov, and we discuss the implications thereof. In the third part of the thesis, we present an extension of the Frank-Wolfe method that is designed to induce near-optimal low-rank solutions for nuclear norm regularized matrix completion and, for more general problems, induces near-optimal "well-structured" solutions. We establish computational guarantees that trade off efficiency in computing near-optimal solutions with upper bounds on the rank of iterates. We then present extensive computational results that show significant computational advantages over existing related approaches, in terms of delivering low rank and low run-time to compute a target optimality gap. In the fourth part of the thesis, we analyze boosting algorithms in linear regression from the perspective of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression can be viewed as subgradient descent to minimize the maximum absolute correlation between features and residuals. We also propose a slightly modified boosting algorithm that yields an algorithm for the Lasso, and that computes the Lasso path.
Our perspective leads to first-ever comprehensive computational guarantees for all of these boosting algorithms, which provide a precise theoretical description of the amount of data-fidelity and regularization imparted by running a boosting algorithm, for any dataset. In the fifth and final part of the thesis, we present several related results in the contexts of boosting algorithms for logistic regression and the AdaBoost algorithm.
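The boosting-as-subgradient-descent view described above admits a compact sketch: incremental forward stagewise regression, which at each step nudges the coefficient of the feature most correlated with the current residual. The step size, synthetic data, and fixed step count below are illustrative assumptions, not the thesis's exact setup.

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, steps=2000):
    """Incremental forward stagewise linear regression: each step finds
    the feature with maximum absolute correlation with the current
    residual and moves its coefficient by eps in that direction -- a
    subgradient step on max_j |x_j' r| when columns have unit norm."""
    beta = np.zeros(X.shape[1])
    r = y.astype(float).copy()
    for _ in range(steps):
        corr = X.T @ r                      # correlations with residual
        j = int(np.argmax(np.abs(corr)))    # most correlated feature
        delta = eps * np.sign(corr[j])
        beta[j] += delta
        r -= delta * X[:, j]                # update the residual
    return beta

# Hypothetical synthetic data with unit-norm columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X /= np.linalg.norm(X, axis=0)
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.5])
y = X @ beta_true + 0.05 * rng.normal(size=200)
beta_hat = forward_stagewise(X, y)
```

Each step decreases the squared residual norm whenever the maximum absolute correlation exceeds eps/2, so the iterates approach the least-squares fit; stopping earlier imparts regularization, which is the data-fidelity/regularization trade-off the computational guarantees quantify.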
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2016.; This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 219-225).
Fri, 01 Jan 2016 00:00:00 GMT