TopoOpt: Co-optimizing Network Topology and Parallelization Strategy for Distributed Machine Learning Training Jobs
Author(s)
Wang, Weiyang
Advisor
Ghobadi, Manya
Abstract
This thesis explores a novel approach to building direct-connect DNN training clusters. The proposed system, called TopoOpt, co-optimizes the distributed training process across three dimensions: computation, communication, and network topology. TopoOpt uses a novel alternating optimization technique and a group theory-inspired algorithm to find the best network topology, routing plan, and parallelization strategy for distributed DNN training. To motivate this research, we measure the communication patterns of distributed DNN workloads at Meta. Simulations with six real distributed training models show that, compared with similar-cost Fat-tree interconnects, TopoOpt reduces DNN training time by up to 3.4× on a 128-server cluster. Importantly, TopoOpt’s performance matches that of an ideal network built from an abstract full-bisection-bandwidth switch, which costs 3.2× more. Experiments with a 12-node prototype demonstrate the feasibility of TopoOpt: with 4×25 Gbps interfaces, the prototype’s training throughput is comparable to the ideal baseline of a 100 Gbps full-bisection-bandwidth network. TopoOpt is the first system built entirely with commodity hardware that co-optimizes topology and parallelization strategy for DNN workloads, and it is currently being evaluated for deployment at Meta.
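To make the alternating-optimization idea concrete, the sketch below shows one plausible structure for such a loop: alternately fixing the parallelization strategy while choosing a topology, and fixing the topology while re-deriving the parallelization strategy, until the pair stabilizes. This is a minimal illustrative sketch, not TopoOpt's actual algorithm; the candidate sets, cost model, and function names (estimate_iteration_time, best_topology_for, best_strategy_for) are assumptions introduced only for demonstration.

```python
"""Illustrative sketch of an alternating optimization loop (not TopoOpt's code).

The loop alternates between two sub-problems:
  1. Fix the parallelization strategy; pick the topology that minimizes the
     estimated iteration time under that strategy's traffic pattern.
  2. Fix the topology; pick the parallelization strategy that minimizes the
     estimated iteration time on that topology.
All candidate spaces and cost numbers below are placeholder assumptions.
"""

import random

# Hypothetical candidate spaces; a real system searches far larger spaces.
PARALLELIZATION_STRATEGIES = ["data_parallel", "hybrid_dp_mp", "model_parallel"]
TOPOLOGIES = ["ring", "torus_2d", "expander"]


def estimate_iteration_time(strategy: str, topology: str) -> float:
    """Placeholder cost model: returns a pseudo per-iteration time in seconds."""
    base = {"data_parallel": 1.0, "hybrid_dp_mp": 0.8, "model_parallel": 1.2}[strategy]
    penalty = {"ring": 0.30, "torus_2d": 0.15, "expander": 0.05}[topology]
    return base + penalty


def best_topology_for(strategy: str) -> str:
    """Sub-problem 1: strategy fixed, choose the topology minimizing cost."""
    return min(TOPOLOGIES, key=lambda t: estimate_iteration_time(strategy, t))


def best_strategy_for(topology: str) -> str:
    """Sub-problem 2: topology fixed, choose the strategy minimizing cost."""
    return min(PARALLELIZATION_STRATEGIES,
               key=lambda s: estimate_iteration_time(s, topology))


def alternating_optimization(max_rounds: int = 10):
    """Alternate between the two sub-problems until the choice stabilizes."""
    strategy = random.choice(PARALLELIZATION_STRATEGIES)
    topology = best_topology_for(strategy)
    for _ in range(max_rounds):
        new_strategy = best_strategy_for(topology)
        new_topology = best_topology_for(new_strategy)
        if (new_strategy, new_topology) == (strategy, topology):
            break  # converged: neither sub-problem changes the other's choice
        strategy, topology = new_strategy, new_topology
    return strategy, topology, estimate_iteration_time(strategy, topology)


if __name__ == "__main__":
    s, t, cost = alternating_optimization()
    print(f"strategy={s}, topology={t}, estimated iteration time={cost:.2f}s")
```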
Date issued
2022-09
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology