DeltaZip: Efficient Serving of Multiple Full-Model-Tuned LLMs
Author(s)
Yao, Xiaozhe; Hu, Qinghao; Klimovic, Ana
Terms of use
Creative Commons Attribution (Publisher with Creative Commons License)
Abstract
Fine-tuning large language models (LLMs) greatly improves model quality for downstream tasks. However, serving many fine-tuned LLMs concurrently is challenging due to the sporadic, bursty, and varying request patterns of different LLMs. To address this challenge, we present DeltaZip, an LLM serving system that efficiently serves multiple full-parameter fine-tuned models concurrently by aggressively compressing model deltas by up to 10× while maintaining high model quality. The key insight behind this design is that fine-tuning results in small-magnitude changes to the pre-trained model. By co-designing the serving system with the compression algorithm, DeltaZip achieves a 2× to 12× throughput improvement over state-of-the-art systems.
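To make the abstract's key idea concrete, the following is a minimal sketch in PyTorch, not DeltaZip's published pipeline: the functions extract_delta, compress_delta, and reconstruct, the magnitude-pruning-plus-uniform-quantization scheme, and the sparsity/num_bits parameters are all illustrative assumptions chosen only to show how a fine-tuned model could be represented as the base model plus a compressed, small-magnitude delta.

# Illustrative sketch only -- NOT DeltaZip's actual compression algorithm.
# It demonstrates the general idea from the abstract: fine-tuning changes the
# pre-trained weights only slightly, so the per-tensor delta compresses well.
import torch

def extract_delta(base_state, finetuned_state):
    # Per-tensor difference between the fine-tuned model and its base model.
    return {name: finetuned_state[name] - base_state[name] for name in base_state}

def compress_delta(delta, sparsity=0.5, num_bits=4):
    # Hypothetical compressor: magnitude-prune each delta tensor, then apply
    # symmetric uniform quantization to num_bits (parameters are assumptions).
    compressed = {}
    for name, tensor in delta.items():
        flat = tensor.flatten().float()
        keep = max(1, int(flat.numel() * (1.0 - sparsity)))
        threshold = flat.abs().kthvalue(flat.numel() - keep + 1).values
        pruned = torch.where(flat.abs() >= threshold, flat, torch.zeros_like(flat))
        max_abs = pruned.abs().max()
        scale = (max_abs / (2 ** (num_bits - 1) - 1)).item() if max_abs > 0 else 1.0
        q = torch.clamp(torch.round(pruned / scale),
                        -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1)
        compressed[name] = (q.to(torch.int8), scale, tensor.shape)
    return compressed

def reconstruct(base_state, compressed):
    # Approximate fine-tuned weights = base weights + dequantized delta.
    return {name: base_state[name] + (q.float() * scale).reshape(shape)
            for name, (q, scale, shape) in compressed.items()}

Under this kind of scheme, only the shared base model needs to be kept in full precision, and each fine-tuned variant can be materialized from its much smaller compressed delta; the paper's actual compression and serving co-design differs in its details.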
Description
EuroSys ’25, March 30–April 3, 2025, Rotterdam, Netherlands
Date issued
2025-03-30
Department
Massachusetts Institute of Technology. Research Laboratory of Electronics
Publisher
ACM | Twentieth European Conference on Computer Systems
Citation
Xiaozhe Yao, Qinghao Hu, and Ana Klimovic. 2025. DeltaZip: Efficient Serving of Multiple Full-Model-Tuned LLMs. In Proceedings of the Twentieth European Conference on Computer Systems (EuroSys '25). Association for Computing Machinery, New York, NY, USA, 110–127.
Version: Final published version
ISBN
979-8-4007-1196-1