Show simple item record

dc.contributor.advisor    Mohammad Alizadeh    en_US
dc.contributor.author    Mao, Hongzi    en_US
dc.contributor.other    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science    en_US
dc.date.accessioned    2021-01-06T20:17:07Z
dc.date.available    2021-01-06T20:17:07Z
dc.date.copyright    2020    en_US
dc.date.issued    2020    en_US
dc.identifier.uri    https://hdl.handle.net/1721.1/129297
dc.description    Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September 2020    en_US
dc.description    Cataloged from student-submitted PDF of thesis.    en_US
dc.description    Includes bibliographical references (pages 165-192).    en_US
dc.description.abstract    Networked systems rely on many control and decision-making algorithms. Classical approaches to designing and optimizing these algorithms, developed over the last four decades, are poorly suited to the diverse and demanding requirements of modern networks and applications. In the classical paradigm, the system designer assumes a simplified model of the system, specifies some low-level design goals, and develops a fixed algorithm to solve the problem. However, as networks and applications have grown in complexity and heterogeneity, designing fixed algorithms that work well across a variety of conditions has become exceedingly difficult. As a result, classical approaches often sacrifice performance for universality (e.g., TCP congestion control), or force designers to develop point solutions and specialized heuristics for each environment and application. In this thesis, we investigate a new paradigm for solving challenging system optimization problems. Rather than design fixed algorithms for each problem, we develop systems that learn to optimize performance on their own using modern reinforcement learning. In the proposed approach, the system designer does not develop specialized heuristics for low-level design goals using simplified models. Instead, the designer architects a framework for data collection, experimentation, and learning that automatically discovers the low-level actions that achieve high-level resource management objectives. We use this approach to build a series of practical network systems for important applications, including context-aware control protocols for adaptive video streaming and schedulers for data-parallel, large-scale data processing workloads. We also use the insights from these systems to identify common problem structures and develop new reinforcement learning techniques for designing robust data-driven network systems.    en_US
dc.description.statementofresponsibility    by Hongzi Mao.    en_US
dc.format.extent    xxv, 192 pages    en_US
dc.language.iso    eng    en_US
dc.publisher    Massachusetts Institute of Technology    en_US
dc.rights    MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided.    en_US
dc.rights.uri    http://dspace.mit.edu/handle/1721.1/7582    en_US
dc.subject    Electrical Engineering and Computer Science    en_US
dc.title    Network system optimization with reinforcement learning : methods and applications    en_US
dc.type    Thesis    en_US
dc.description.degree    Ph. D.    en_US
dc.contributor.department    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science    en_US
dc.identifier.oclc    1227704163    en_US
dc.description.collection    Ph.D. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science    en_US
dspace.imported    2021-01-06T20:17:06Z    en_US
mit.thesis.degree    Doctoral    en_US
mit.thesis.department    EECS    en_US
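
The abstract above describes replacing hand-designed heuristics with policies learned through reinforcement learning. The sketch below is a minimal illustration of that paradigm and is not code from the thesis: a tabular softmax policy is trained with REINFORCE on an invented bitrate-selection toy problem. The environment dynamics, reward shaping, and all names are assumptions made purely for illustration.

# Minimal, illustrative sketch (not from the thesis) of learning a resource-
# management policy with reinforcement learning instead of a fixed heuristic.
# A tabular softmax policy over discrete bitrate choices is trained with
# REINFORCE on a toy environment; all dynamics and rewards are invented.
import numpy as np

rng = np.random.default_rng(0)

BITRATES = np.array([0.3, 0.75, 1.2, 2.85])   # Mbps choices (hypothetical)
N_STATES = 4                                   # discretized throughput levels

def step(state, action):
    """Toy environment: reward = video quality minus a penalty when the
    chosen bitrate exceeds the available throughput (rebuffering)."""
    throughput = 0.5 + state                   # Mbps, stand-in for network state
    quality = np.log(1 + BITRATES[action])
    rebuffer_penalty = 2.0 * max(0.0, BITRATES[action] - throughput)
    reward = quality - rebuffer_penalty
    next_state = int(np.clip(state + rng.integers(-1, 2), 0, N_STATES - 1))
    return next_state, reward

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

# Tabular policy parameters: one logit vector per discretized state.
theta = np.zeros((N_STATES, len(BITRATES)))
lr = 0.05

for episode in range(2000):
    state, traj = rng.integers(N_STATES), []
    for _ in range(20):                        # short episodes
        probs = softmax(theta[state])
        action = rng.choice(len(BITRATES), p=probs)
        next_state, reward = step(state, action)
        traj.append((state, action, reward))
        state = next_state
    returns = np.cumsum([r for _, _, r in traj][::-1])[::-1]  # reward-to-go
    baseline = returns.mean()                  # simple variance-reduction baseline
    for (s, a, _), G in zip(traj, returns):
        grad = -softmax(theta[s])
        grad[a] += 1.0                         # grad of log pi(a|s) for softmax logits
        theta[s] += lr * (G - baseline) * grad # REINFORCE update

# After training, the learned policy tends to pick higher bitrates only when
# the (toy) throughput state can support them.
print(np.argmax(theta, axis=1))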

