Management - Ph.D. / Sc.D.
http://hdl.handle.net/1721.1/7917
2017-05-25T08:50:51Z
http://hdl.handle.net/1721.1/108916
Data-driven algorithms for operational problems
Cheung, Wang Chi
In this thesis, we propose algorithms for solving revenue maximization and inventory control problems in data-driven settings. First, we study the choice-based network revenue management problem. We propose the Approximate Column Generation (ACG) heuristic and the Potential Based (PB) algorithm for solving the Choice-based Deterministic Linear Program, an LP relaxation of the problem, to near-optimality. Both algorithms assume only the ability to approximately solve the underlying single-period problem. ACG inherits the empirical efficiency of the Column Generation heuristic, while PB enjoys a provable efficiency guarantee. Building on these tractability results, we design an earning-while-learning policy for the online problem under a Multinomial Logit choice model with unknown parameters. The policy is efficient, and achieves regret sublinear in the length of the sales horizon. Next, we consider the online dynamic pricing problem, where the underlying demand function is not known to the monopolist, who is allowed only a limited number of price changes during the sales horizon due to administrative constraints. For any integer m, we provide an information-theoretic lower bound on the regret incurred by any pricing policy with at most m price changes. The bound is the best possible, as it matches, up to a constant factor, the regret upper bound achieved by our proposed policy. Finally, we study the data-driven capacitated stochastic inventory control problem, where the demand distributions can be accessed only through sampling from offline data. We apply the Sample Average Approximation (SAA) method and establish a polynomial upper bound on the number of samples needed to achieve a near-optimal expected cost. Nevertheless, the underlying SAA problem is shown to be #P-hard. Motivated by the SAA analysis, we propose a randomized polynomial-time approximation scheme which also uses polynomially many samples.
To complement our results, we establish an information-theoretic lower bound on the number of samples needed to achieve near-optimality.
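The Sample Average Approximation idea in the final part of this abstract can be illustrated on a toy single-period newsvendor instance — a minimal sketch, assuming hypothetical overage/underage costs and a simulated demand sample, not data or parameters from the thesis. The empirical cost minimizer is found by searching over the sample's support:

```python
import random

def saa_newsvendor(samples, c_over, c_under):
    """Pick the order quantity minimizing the average cost over demand samples.
    With overage cost c_over and underage cost c_under, the empirical optimum
    sits at the c_under / (c_over + c_under) quantile of the samples."""
    best_q, best_cost = None, float("inf")
    for q in sorted(set(samples)):  # an optimum lies at some sample point
        cost = sum(c_over * max(q - d, 0) + c_under * max(d - q, 0)
                   for d in samples) / len(samples)
        if cost < best_cost:
            best_q, best_cost = q, cost
    return best_q

random.seed(0)
demand_samples = [random.randint(0, 100) for _ in range(1000)]
q = saa_newsvendor(demand_samples, c_over=1, c_under=4)
# the empirical optimum should land near the 4/5 quantile of the samples
```

The thesis's setting is capacitated and multi-period, where the SAA problem becomes #P-hard; this uncapacitated single-period case is the tractable special instance.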
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 173-180).
2017-01-01T00:00:00Z
http://hdl.handle.net/1721.1/108834
Multiserver queueing systems in heavy traffic
Eschenfeldt, Patrick Clark
In the study of queueing systems, a question of significant current interest is that of large-scale behavior, where the size of the system increases without bound. This regime has become increasingly relevant with the rise of massive distributed systems like server farms, call centers, and health care management systems. To minimize underutilization of resources, the large-scale regime of most interest is one in which the work to be done increases as processing capability increases. In this thesis, we characterize the behavior of two such large-scale queueing systems. In the first part of the thesis we consider a Join the Shortest Queue (JSQ) policy in the so-called Halfin-Whitt heavy traffic regime. We establish that a scaled process counting the number of idle servers and queues of length two weakly converges to a two-dimensional reflected Ornstein-Uhlenbeck process, while processes counting longer queues converge to a deterministic system decaying to zero in constant time. This limiting system is similar to that of the traditional Halfin-Whitt model in its basic performance measures, but there are key differences in the queueing behavior of the JSQ model. In particular, only a vanishing fraction of customers will have to wait, but those who do will incur a constant-order waiting time. In the second part of the thesis we consider a widely studied so-called "supermarket model" in which arriving customers join the shortest of d randomly selected queues. Assuming Poisson arrivals at rate nλ_n and rate-1 exponentially distributed service times, our heavy traffic regime is described by λ_n → 1 as n → ∞. We give a simple expectation argument establishing that queues have steady-state length at least i* = log_d(1/(1 − λ_n)) with probability approaching one as n → ∞. Our main result for this system concerns the detailed behavior of queues with length smaller than i*.
Assuming λ_n converges to 1 at rate at most √n, we show that the dynamics of such queues do not follow a diffusion process, as is typical for queueing systems in heavy traffic, but are described instead by a deterministic infinite system of linear differential equations, after an appropriate rescaling.
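The supermarket model described above is easy to simulate; the sketch below is a simplified discrete-event version (the parameter values n = 50, λ = 0.95, d = 2 and the short horizon are illustrative choices, not from the thesis), together with the threshold i* = log_d(1/(1 − λ)) from the expectation argument:

```python
import math
import random

def supermarket_sim(n, lam, d, horizon, seed=0):
    """Discrete-event sketch of the supermarket model: n rate-1 exponential
    servers, Poisson arrivals at total rate n*lam, and each arrival joining
    the shortest of d uniformly sampled queues."""
    rng = random.Random(seed)
    queues = [0] * n
    t = 0.0
    while t < horizon:
        busy = sum(1 for q in queues if q > 0)
        total_rate = n * lam + busy          # arrivals plus active services
        t += rng.expovariate(total_rate)
        if rng.random() < (n * lam) / total_rate:  # next event: an arrival
            i = min(rng.sample(range(n), d), key=lambda j: queues[j])
            queues[i] += 1
        elif busy:                                  # next event: a departure
            j = rng.choice([i for i in range(n) if queues[i] > 0])
            queues[j] -= 1
    return queues

lam, d = 0.95, 2
i_star = math.log(1.0 / (1.0 - lam), d)  # threshold from the abstract, ~4.32
queues = supermarket_sim(n=50, lam=lam, d=d, horizon=20.0)
```

For λ = 0.95 and d = 2 the threshold i* = log₂ 20 ≈ 4.32, so typical steady-state queues are short even very close to heavy traffic — the logarithmic improvement that makes power-of-d policies attractive.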
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017.; This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 107-109).
2017-01-01T00:00:00Z
http://hdl.handle.net/1721.1/108220
Essays in financial economics
Sun, Yang, Ph. D. Massachusetts Institute of Technology
This thesis consists of three essays in corporate finance and capital markets. The first chapter estimates the effect of competition from low-cost index funds on fees in the money management industry. A difference-in-differences analysis exploiting the staggered entry of index funds finds that while actively managed funds sold directly to retail investors reduce fees by six percent, those sold through brokers increase fees by four percent. Additionally, actively managed funds, especially closet indexers, shift away from holding the index portfolio. The paper proposes a price-discrimination model to illustrate that the effect of low-cost passive fund competition depends on market segmentation. Beyond the price competition effect, the entry creates a selection effect that isolates the least-price-sensitive investors in the broker channel and results in a price increase for this group. Repeating the study using the entry of exchange-traded funds reveals similar but stronger findings. Overall, the results shed light on why aggregate mutual fund fees decline slowly despite increased competition from lower-cost passive alternatives. The second chapter, joint with Jean-Noel Barrot, examines the effects of imperfect investor risk adjustment on the behavior of mutual fund managers. We exploit a natural experiment in which a major fund rating company changed its rating methodology: while in the old system all equity funds were compared with one another in one pool, in the new algorithm funds are compared within narrow peer groups. This revision increases the ability of retail investors to compare funds based on risk-adjusted returns, and it has an important impact on fund managers' compensation. The sensitivity of retail fund flows to systematic returns is eliminated.
Using institutional funds as a control for retail funds in a difference-in-differences analysis, we find that this revision reduces fund managers' risk-taking behavior, in particular for funds in categories that had downward-biased ratings ex ante. The third chapter, joint with Carola Frydman and Eric Hilt, documents the dividend policy of firms in the United States during the first three decades of the twentieth century. This period features severe information asymmetry between insiders and outsiders, while other factors that could affect payout policy were relatively muted. In the years surrounding World War I, industrial firms increased their payout ratios and dividends became less sticky. The new industrial firms listed on the NYSE in the 1920s had the best fit with the Lintner (1956) model, and these firms refrained from committing to a sticky dividend policy. Consistent with the asymmetric information theory, the market reacted positively to dividend increase announcements, especially to those made by the new industrials, and reacted negatively to dividend cuts.
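The difference-in-differences design used in the first two chapters reduces, in its simplest two-group, two-period form, to comparing changes across groups. A minimal sketch (the fee numbers below are hypothetical, not from the thesis):

```python
def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Two-group, two-period DiD: the estimated treatment effect is the
    treated group's mean change minus the control group's mean change,
    which nets out common time trends."""
    def mean(xs):
        return sum(xs) / len(xs)
    return ((mean(treated_post) - mean(treated_pre))
            - (mean(control_post) - mean(control_pre)))

# hypothetical fee data (percent): retail funds (treated) vs. a control group
effect = diff_in_diff(treated_pre=[1.0, 1.2], treated_post=[0.9, 1.1],
                      control_pre=[0.8, 1.0], control_post=[0.8, 1.0])
# ≈ -0.1 (treated fees fell 0.1pp relative to control)
```

The staggered-entry and rating-change settings in the thesis generalize this to many groups and periods, but the identifying comparison is the same.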
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2015.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 157-163).
2015-01-01T00:00:00Z
http://hdl.handle.net/1721.1/108211
High dimensional revenue management
Ciocan, Dragos Florin
We present potential solutions to several problems that arise in making revenue management (RM) practical for online advertising and related modern applications. Principally, RM solutions for these problems must contend with (i) highly volatile demand processes that are hard to forecast, and (ii) massive scale that makes even basic optimization problems challenging. Our solutions to these problems are interesting in their own right in the areas of stochastic optimization, high-dimensional learning, and distributed optimization. In the first part of the thesis, we propose a model predictive control approach to combat volatile demand. This approach is conceptually simple, uses available demand data in a natural way, and, most importantly, can be shown to generate significant revenue advantages on real-world data from ad networks. Under mild restrictions, we prove that our algorithm achieves uniform relative performance guarantees vis-a-vis a clairvoyant in the face of arbitrary volatility, while simultaneously being optimal in the event that volatility is negligible. This is the first result of its kind for model predictive control. While our approach above is effective at hedging demand shocks that occur over "large" time horizons, it relies on the ability to estimate snapshots of the prevailing demand distribution over "short" time horizons. The second part of the thesis deals with learning the extremely high-dimensional demand distributions that are typical in display advertising applications. This work exploits the special structure of the display advertising version of the network revenue management (NRM) problem to achieve a sample complexity that scales gracefully in the dimensions of the problem. The third part of the thesis focuses on the problem of solving terabyte-sized LPs on an hourly basis given a distributed computational infrastructure; solving these massive LPs is the computational primitive required to make our model predictive control approach practical.
Here we design a linear optimization algorithm that fits a paradigm for distributed computation referred to as 'Map-Reduce'. An implementation of our solver in a shared-memory environment, where we can benchmark against solvers such as CPLEX, shows that the algorithm outperforms those solvers on the types of LPs that an ad network would have to solve in practice.
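The re-solving loop at the heart of a model-predictive-control approach to ad allocation can be sketched in miniature. This is a toy stand-in for the thesis's method: the "re-solved plan" here is just proportional budget pacing against a demand forecast, not the LP the thesis actually solves, and the advertiser names, budgets, and forecasts are hypothetical:

```python
def mpc_serve(budgets, forecast, arrivals):
    """Re-planning sketch in the spirit of model predictive control: at each
    period, re-plan by pacing every advertiser's remaining budget against the
    forecast of remaining demand, then serve the period's actual arrivals."""
    remaining = dict(budgets)
    served = {a: 0 for a in budgets}
    for t in range(len(arrivals)):
        demand_left = sum(forecast[t:]) or 1  # guard against a zero forecast
        # per-period pacing targets from the re-solved (here: trivial) plan
        pace = {a: remaining[a] * forecast[t] / demand_left for a in budgets}
        for _ in range(arrivals[t]):
            # serve the advertiser with the most pacing headroom left
            a = max(budgets, key=lambda x: min(pace[x], remaining[x]))
            if remaining[a] > 0:
                remaining[a] -= 1
                served[a] += 1
                pace[a] -= 1
    return served

served = mpc_serve({"A": 5, "B": 3}, forecast=[4, 4], arrivals=[4, 4])
```

The key feature shared with the thesis's approach is that the plan is recomputed every period from the current state and the latest forecast, so forecast errors are corrected as they are revealed rather than compounding.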
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 2014.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 149-153).
2014-01-01T00:00:00Z