DSpace@MIT

Load balancing and memory optimizations for expert parallel training of large language models

Author(s)
Wisdom, Daniel
Download
Thesis PDF (513.0 kB)
Advisor
Leiserson, Charles E.
Kaler, Tim
Terms of use
In Copyright - Educational Use Permitted Copyright retained by author(s) https://rightsstatements.org/page/InC-EDU/1.0/
Abstract
Large language models (LLMs) are an effective way to solve many text-based machine learning tasks, but they require huge amounts of computation to train and evaluate. Mixture-of-experts (MoE) models have emerged as a way to reduce the amount of computation required for LLMs without compromising accuracy. These large models must be distributed across several devices, which requires substantial communication between devices throughout training. Expert parallelism is a promising approach to distributing the model across devices and communicating the necessary information during training, especially for small batch sizes or models with large embedding sizes. Unfortunately, expert parallelism creates an imbalanced workload across devices, causes errors with existing memory-conservation strategies, and overlaps communication and computation poorly. Some existing works address the imbalanced workload by dropping the excess tokens sent to an expert beyond a fixed capacity, but doing so may reduce accuracy. In my thesis I introduce ModuleFormer-PRM, an expert parallel training system that addresses these issues without dropping tokens. I explain a subtle error that occurs when trying to save memory and a strategy to prevent it. I analyze the distribution of workload among experts and show two approaches to better balance the workload across devices, leading to more stable memory use and faster runtime. I evaluate ModuleFormer-PRM using pretrained MoE models and show that my optimizations improve expert parallel throughput by 2.1×.
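To illustrate the capacity-based token dropping that the abstract contrasts with its dropless approach, below is a minimal sketch (in PyTorch) of a top-1 MoE router that discards tokens routed to an expert beyond a fixed capacity. This is not the thesis's implementation; the function name `route_with_capacity` and all parameter choices are hypothetical, for illustration only.

    # Minimal sketch of capacity-based token dropping in a top-1 MoE router.
    # NOT the ModuleFormer-PRM implementation described in the thesis; a
    # dropless system must rebalance load without discarding overflow tokens.
    import torch

    def route_with_capacity(logits: torch.Tensor, capacity: int):
        """logits: (num_tokens, num_experts) router scores.
        Returns per-token expert id, or -1 if the token was dropped,
        plus the resulting per-expert load."""
        num_tokens, num_experts = logits.shape
        expert_choice = logits.argmax(dim=-1)           # top-1 routing
        assignment = torch.full((num_tokens,), -1, dtype=torch.long)
        load = torch.zeros(num_experts, dtype=torch.long)
        for t in range(num_tokens):                     # tokens in arrival order
            e = int(expert_choice[t])
            if load[e] < capacity:                      # room left on this expert
                assignment[t] = e
                load[e] += 1
            # else: token exceeds the expert's capacity and is dropped,
            # which keeps loads balanced but can hurt model accuracy
        return assignment, load

    if __name__ == "__main__":
        torch.manual_seed(0)
        logits = torch.randn(16, 4)                     # 16 tokens, 4 experts
        assignment, load = route_with_capacity(logits, capacity=5)
        print("per-expert load:", load.tolist())
        print("dropped tokens:", int((assignment == -1).sum()))

The sketch makes the trade-off concrete: the capacity cap bounds each expert's work and memory, but any token past the cap contributes nothing to training, which is the accuracy cost the dropless approach in the thesis avoids.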
Date issued
2024-02
URI
https://hdl.handle.net/1721.1/153897
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Graduate Theses
