Network Requirements for Distributed Machine Learning Training in the Cloud
Author(s)
Salamy, James
Advisor
Ghobadi, Manya
Abstract
In this thesis, I characterize the impact of network bandwidth on distributed machine learning training. I run four popular machine learning models (ResNet, DenseNet, VGG, and BERT) on an NVIDIA A100 cluster to determine the impact of bursty and non-bursty cross traffic (such as web-search traffic and long-lived flows) on the iteration time and throughput of distributed training. By varying the cross-traffic load, I measure the impact of network congestion on training iteration times. I observe that under heavy web-search cross traffic (80% of link capacity), average training iteration time increases by up to 4× and 8× for the ResNet and BERT models, respectively. Further, I establish that the ring all-reduce communication collective is negatively impacted by network congestion even when the congestion affects only part of the ring. I also develop empirical models of the behavior of machine learning training under each type of cross traffic deployed. These results motivate the development of novel congestion control protocols tailored for distributed training environments.
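The sensitivity of ring all-reduce to congestion on any single link follows from its data flow: every worker forwards gradient chunks around the ring at every step, so a slow link delays every step that crosses it. The sketch below is a minimal Python simulation of that data flow (a hypothetical illustration for this abstract, not the thesis's measured GPU implementation; worker counts and gradient values are made up). With N workers, the collective takes N−1 scatter-reduce steps, after which each worker holds one fully summed chunk, followed by N−1 all-gather steps that circulate the summed chunks to everyone.

```python
def ring_allreduce(grads):
    """Simulate ring all-reduce over N workers.

    grads: list of N equal-length gradient vectors (one per worker).
    Returns the per-worker results; all workers end with the elementwise sum.
    """
    n = len(grads)
    size = len(grads[0]) // n  # each vector is split into n chunks
    chunks = [[list(g[i * size:(i + 1) * size]) for i in range(n)]
              for g in grads]

    # Scatter-reduce: in step s, worker w sends chunk (w - s) mod n to its
    # right neighbor, which adds it into its own copy of that chunk. After
    # n-1 steps, worker w holds the fully summed chunk (w + 1) mod n.
    for step in range(n - 1):
        sends = [(w, (w - step) % n, list(chunks[w][(w - step) % n]))
                 for w in range(n)]  # snapshot before anyone updates
        for w, c, data in sends:
            dst = (w + 1) % n
            chunks[dst][c] = [a + b for a, b in zip(chunks[dst][c], data)]

    # All-gather: circulate the summed chunks; in step s, worker w forwards
    # chunk (w + 1 - s) mod n, and the receiver simply overwrites its copy.
    for step in range(n - 1):
        sends = [(w, (w + 1 - step) % n, list(chunks[w][(w + 1 - step) % n]))
                 for w in range(n)]
        for w, c, data in sends:
            chunks[(w + 1) % n][c] = data

    return [[x for c in worker for x in c] for worker in chunks]


results = ring_allreduce([[1, 2, 3, 4],
                          [10, 20, 30, 40],
                          [100, 200, 300, 400],
                          [1000, 2000, 3000, 4000]])
# Every worker converges to the elementwise sum: [1111, 2222, 3333, 4444]
```

Note that every one of the 2(N−1) steps involves a transfer across every ring link simultaneously, so a congested link between any pair of neighbors gates the completion time of the entire collective, consistent with the observation above that congestion on part of the ring slows the whole operation.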
Date issued
2022-02
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology