Show simple item record

dc.contributor.advisor    Patrick Jaillet.    en_US
dc.contributor.author    Mastin, Dana Andrew    en_US
dc.contributor.other    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.    en_US
dc.date.accessioned    2015-07-17T19:12:25Z
dc.date.available    2015-07-17T19:12:25Z
dc.date.copyright    2015    en_US
dc.date.issued    2015    en_US
dc.identifier.uri    http://hdl.handle.net/1721.1/97761
dc.description    Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.    en_US
dc.description    This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.    en_US
dc.description    Cataloged from student-submitted PDF version of thesis.    en_US
dc.description    Includes bibliographical references (pages 249-260).    en_US
dc.description.abstract    We study a series of topics involving approximation algorithms and the presence of uncertain data in optimization. On the first theme of approximation, we derive performance bounds for rollout algorithms. Interpreted as an approximate dynamic programming algorithm, a rollout algorithm estimates the value-to-go at each decision stage by simulating future events while following a heuristic policy, referred to as the base policy. We provide a probabilistic analysis of knapsack problems, proving that rollout algorithms perform significantly better than their base policies. Next, we study the average performance of greedy algorithms for online matching on random graphs. In online matching problems, vertices arrive sequentially and reveal their neighboring edges. Vertices may be matched upon arrival, and matches are irrevocable. We determine asymptotic matching sizes obtained by a variety of greedy algorithms on random graphs, both for bipartite and non-bipartite graphs. Moving to the second theme of uncertainty, we analyze losses resulting from uncertain transition probabilities in Markov decision processes. We assume that policies are computed using exact dynamic programming with estimated transition probabilities, but the system evolves according to different, true transition probabilities. Given a bound on the total variation error of the estimated transition probability distributions, we derive a general tight upper bound on the loss of expected total reward. Finally, we consider a randomized model for minmax regret in combinatorial optimization under cost uncertainty. This problem can be viewed as a zero-sum game played between an optimizing player and an adversary, where the optimizing player selects a solution and the adversary selects costs with the intention of maximizing the player's regret. We analyze a model where the optimizing player selects a probability distribution over solutions and the adversary selects costs with knowledge of the player's distribution. We show that under this randomized model, the minmax regret version of any polynomially solvable combinatorial problem is polynomially solvable, both for interval and discrete scenario representations of uncertainty.    en_US
dc.description.statementofresponsibility    by Dana Andrew Mastin.    en_US
dc.format.extent    260 pages    en_US
dc.language.iso    eng    en_US
dc.publisher    Massachusetts Institute of Technology    en_US
dc.rights    M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.    en_US
dc.rights.uri    http://dspace.mit.edu/handle/1721.1/7582    en_US
dc.subject    Electrical Engineering and Computer Science.    en_US
dc.title    Analysis of approximation and uncertainty in optimization    en_US
dc.type    Thesis    en_US
dc.description.degree    Ph. D.    en_US
dc.contributor.department    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc    912305897    en_US
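
To make the rollout idea in the abstract concrete, the following is a minimal Python sketch of a rollout-style heuristic for the 0/1 knapsack problem with a greedy value-density base policy. The fixed item ordering, the choice of base policy, and the function names are illustrative assumptions, not the thesis's exact model or analysis.

```python
import random


def greedy_value(values, weights, capacity, items):
    """Base policy: greedily pack the given items in order of value density."""
    total = 0.0
    for i in sorted(items, key=lambda j: values[j] / weights[j], reverse=True):
        if weights[i] <= capacity:
            capacity -= weights[i]
            total += values[i]
    return total


def rollout_knapsack(values, weights, capacity):
    """At each stage, decide on one item by simulating the base policy on the
    remaining items after each candidate decision (take vs. skip) and keeping
    the decision with the larger estimated total value."""
    n = len(values)
    packed = 0.0
    for i in range(n):
        future = range(i + 1, n)
        skip_estimate = greedy_value(values, weights, capacity, future)
        take_estimate = float("-inf")
        if weights[i] <= capacity:
            take_estimate = values[i] + greedy_value(
                values, weights, capacity - weights[i], future)
        if take_estimate >= skip_estimate:
            packed += values[i]
            capacity -= weights[i]
    return packed


if __name__ == "__main__":
    # Random instance for illustration only.
    rng = random.Random(0)
    values = [rng.random() for _ in range(50)]
    weights = [rng.uniform(0.1, 1.0) for _ in range(50)]
    print(rollout_knapsack(values, weights, capacity=5.0))
```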
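The online matching setting can be illustrated the same way: a short sketch of a greedy matcher on a random bipartite graph, where arriving vertices are matched to an arbitrary unmatched neighbor and matches are never revoked. The independent-edge random graph model and the uniform choice among free neighbors are assumptions made for illustration; the thesis analyzes specific greedy variants and graph models in detail.

```python
import random


def greedy_online_matching(n_offline, n_online, edge_prob, seed=None):
    """Each arriving (online) vertex reveals its neighbors among the offline
    vertices and is matched to an arbitrary unmatched neighbor, if one exists;
    matches are irrevocable."""
    rng = random.Random(seed)
    matched = set()          # offline vertices already matched
    matching_size = 0
    for _ in range(n_online):
        # Reveal the arriving vertex's edges (independent-edge assumption).
        neighbors = [u for u in range(n_offline) if rng.random() < edge_prob]
        free = [u for u in neighbors if u not in matched]
        if free:
            matched.add(rng.choice(free))
            matching_size += 1
    return matching_size


if __name__ == "__main__":
    # Average matching size over a few sparse random bipartite instances.
    sizes = [greedy_online_matching(1000, 1000, 2.0 / 1000, seed=s)
             for s in range(5)]
    print(sum(sizes) / len(sizes))
```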

