dc.contributor.advisor | Mohammad Alizadeh. | en_US |
dc.contributor.author | Mao, Hongzi. | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2021-01-06T20:17:07Z | |
dc.date.available | 2021-01-06T20:17:07Z | |
dc.date.copyright | 2020 | en_US |
dc.date.issued | 2020 | en_US |
dc.identifier.uri | https://hdl.handle.net/1721.1/129297 | |
dc.description | Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020 | en_US |
dc.description | Cataloged from student-submitted PDF of thesis. | en_US |
dc.description | Includes bibliographical references (pages 165-192). | en_US |
dc.description.abstract | Networked systems rely on many control and decision-making algorithms. Classical approaches to designing and optimizing these algorithms, developed over the last four decades, are poorly suited to the diverse and demanding requirements of modern networks and applications. In the classical paradigm, the system designer assumes a simplified model of the system, specifies some low-level design goals, and develops a fixed algorithm to solve the problem. However, as networks and applications have grown in complexity and heterogeneity, designing fixed algorithms that work well across a variety of conditions has become exceedingly difficult. As a result, classical approaches often sacrifice performance for universality (e.g., TCP congestion control), or force designers to develop point solutions and specialized heuristics for each environment and application. In this thesis, we investigate a new paradigm for solving challenging system optimization problems. Rather than design fixed algorithms for each problem, we develop systems that can learn to optimize performance on their own using modern reinforcement learning. In the proposed approach, the system designer does not develop specialized heuristics for low-level design goals using simplified models. Instead, the designer architects a framework for data collection, experimentation, and learning that automatically discovers the low-level actions that achieve high-level resource management objectives. We use this approach to build a series of practical network systems for important applications, including context-aware control protocols for adaptive video streaming, and schedulers for data-parallel and large-scale data processing workloads. We also use the insights from these systems to identify common problem structures and develop new reinforcement learning techniques for designing robust data-driven network systems. | en_US |
dc.description.statementofresponsibility | by Hongzi Mao. | en_US |
dc.format.extent | xxv, 192 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | Network system optimization with reinforcement learning : methods and applications | en_US |
dc.type | Thesis | en_US |
dc.description.degree | Ph. D. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.identifier.oclc | 1227704163 | en_US |
dc.description.collection | Ph.D. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science | en_US |
dspace.imported | 2021-01-06T20:17:06Z | en_US |
mit.thesis.degree | Doctoral | en_US |
mit.thesis.department | EECS | en_US |