Centralized performance control for datacenter networks

Author(s)
Perry, Jonathan, Ph. D. Massachusetts Institute of Technology
Download: Full printable version (12.53 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Hari Balakrishnan and Devavrat Shah.
Terms of use
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
An ideal datacenter network should allow operators to specify policy for resource allocation between users or applications, while providing several properties, including low median and tail latency, high utilization (throughput), and congestion (loss) avoidance. Current datacenter networks inherit the principles that went into the design of the Internet, where packet transmission and path selection decisions are distributed among the endpoints and routers; this distribution impedes obtaining the desired properties. Instead, we propose that a centralized controller should tightly regulate senders' use of the network according to operator policy, and evaluate two architectures: Fastpass and Flowtune.

In Fastpass, the controller decides when each packet should be transmitted and what path it should follow. Fastpass incorporates two fast algorithms: the first determines the time at which each packet should be transmitted, while the second determines the path to use for that packet. We deployed and evaluated Fastpass in a portion of Facebook's datacenter network. Our results show that Fastpass achieves high throughput comparable to current networks with a 240× reduction in queue lengths, achieves much fairer and more consistent flow throughputs than the baseline TCP, scales to schedule 2.21 Terabits/s of traffic in software on eight cores, and achieves a 2.5× reduction in the number of TCP retransmissions in a latency-sensitive service at Facebook.

In Flowtune, congestion control decisions are made at the granularity of a flowlet, not a packet, so allocations change only when flowlets arrive or leave. The centralized allocator receives flowlet start and end notifications from endpoints, and computes optimal rates using a new, fast method for network utility maximization. A normalization algorithm ensures allocations do not exceed link capacities. Flowtune updates rate allocations for 4600 servers in 31 µs regardless of link capacities. Experiments show that Flowtune outperforms DCTCP, pFabric, sfqCoDel, and XCP on tail packet delays in various settings, and converges to optimal rates within a few packets rather than over several RTTs. EC2 benchmarks show a fairer rate allocation than Linux's Cubic.
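The abstract mentions a normalization algorithm that keeps Flowtune's rate allocations within link capacities. As an illustration only, below is a minimal Python sketch of one such scaling step, assuming candidate per-flowlet rates (e.g., from a network-utility-maximization solver) and known flow paths; the function name, data layout, and scaling rule are hypothetical and are not taken from the thesis.

```python
# Illustrative sketch (not the thesis's algorithm): scale candidate flowlet
# rates so that no link's total allocation exceeds its capacity.
from collections import defaultdict

def normalize_rates(rates, flow_paths, link_capacity):
    """Return rates scaled so every link's total allocation fits its capacity.

    rates         : dict flow_id -> candidate rate (Gbps)
    flow_paths    : dict flow_id -> list of link ids the flow traverses
    link_capacity : dict link_id -> capacity (Gbps)
    """
    # Total demanded rate on each link.
    link_load = defaultdict(float)
    for flow, rate in rates.items():
        for link in flow_paths[flow]:
            link_load[link] += rate

    # Scale each flow by the overload factor of its most congested link,
    # which guarantees every link ends up at or below capacity.
    normalized = {}
    for flow, rate in rates.items():
        worst = max(link_load[link] / link_capacity[link]
                    for link in flow_paths[flow])
        normalized[flow] = rate / worst if worst > 1.0 else rate
    return normalized

if __name__ == "__main__":
    rates = {"a": 6.0, "b": 6.0, "c": 4.0}                # Gbps
    paths = {"a": ["l1"], "b": ["l1", "l2"], "c": ["l2"]}
    caps = {"l1": 10.0, "l2": 10.0}
    # l1 is oversubscribed (12 Gbps demanded on a 10 Gbps link), so flows
    # a and b are scaled down; flow c keeps its full rate.
    print(normalize_rates(rates, paths, caps))
```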
Description
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 99-104).
Date issued
2017
URI
http://hdl.handle.net/1721.1/111907
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Doctoral Theses
