dc.contributor.advisor: Madden, Samuel
dc.contributor.author: Choi, Kenneth K.
dc.date.accessioned: 2025-09-18T14:27:48Z
dc.date.available: 2025-09-18T14:27:48Z
dc.date.issued: 2025-05
dc.date.submitted: 2025-06-23T14:01:39.704Z
dc.identifier.uri: https://hdl.handle.net/1721.1/162696
dc.description.abstract: Large language models (LLMs) have emerged as powerful tools for a wide array of applications. Serving multiple LLMs on shared GPUs has gained increasing attention as single providers need to support multiple applications (summarization, chat, code generation), different model versions (A/B testing), and various types of customers. However, multi-model serving is particularly challenging: static memory partitioning can lead to severe under-utilization, fragmentation, and latency spikes, while dynamically loading model weights can cause unacceptable downtime due to high model loading overheads. To address these issues, we introduce hierarchical paging, a novel key-value (KV) cache management strategy, and implement it within the vLLM serving engine. Hierarchical paging organizes GPU memory into a two-level hierarchy: large contiguous memory blocks are allocated to individual models and then subdivided into smaller blocks that are allocated to individual requests issued to that model. This design enables dynamic memory sharing across models, improving serving throughput and overcoming key problems of existing approaches. We detail our implementation and present end-to-end experiments that demonstrate these throughput improvements under different workloads. Further evaluations of the runtime overheads of our hierarchical paging implementation show that they are insignificant. Most importantly, we demonstrate that hierarchical paging is simple to implement, minimizing implementation effort and maintenance burden.
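The two-level hierarchy described in the abstract can be sketched as a simple allocator: a shared pool of large GPU blocks is handed out to models on demand, each large block is subdivided into small per-request KV-cache blocks, and fully idle large blocks return to the shared pool. This is a hypothetical minimal sketch for illustration; all class and method names are assumptions and do not reflect the thesis's actual vLLM implementation.

```python
class HierarchicalAllocator:
    """Illustrative two-level (hierarchical) paging allocator; not the thesis's code."""

    def __init__(self, total_blocks, small_per_large):
        # Level 1: large contiguous GPU blocks, shared across all models.
        self.free_large = list(range(total_blocks))
        self.small_per_large = small_per_large  # small blocks per large block
        self.model_blocks = {}  # model -> list of large-block ids it owns
        self.free_small = {}    # model -> free small slots as (large_id, offset)

    def grow_model(self, model):
        """Assign one more large block to `model` and subdivide it."""
        if not self.free_large:
            return False
        large_id = self.free_large.pop()
        self.model_blocks.setdefault(model, []).append(large_id)
        self.free_small.setdefault(model, []).extend(
            (large_id, off) for off in range(self.small_per_large))
        return True

    def alloc_request_block(self, model):
        """Level 2: hand one small KV-cache block to a request of `model`."""
        if not self.free_small.get(model):
            # No free small block: try to grow from the shared pool,
            # which is what enables dynamic sharing across models.
            if not self.grow_model(model):
                return None
        return self.free_small[model].pop()

    def free_request_block(self, model, slot):
        self.free_small[model].append(slot)

    def release_idle_blocks(self, model):
        """Return fully free large blocks to the shared pool."""
        by_large = {}
        for large_id, off in self.free_small.get(model, []):
            by_large.setdefault(large_id, set()).add(off)
        for large_id, offs in by_large.items():
            if len(offs) == self.small_per_large:  # no request uses this block
                self.model_blocks[model].remove(large_id)
                self.free_small[model] = [
                    s for s in self.free_small[model] if s[0] != large_id]
                self.free_large.append(large_id)
```

Under this sketch, a model that exhausts its small blocks transparently claims another large block from the shared pool, and memory freed by one model can later be reused by another, avoiding the static partitioning the abstract identifies as a source of under-utilization.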
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Hosting LLMs on Shared GPUs
dc.type: Thesis
dc.description.degree: M.Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Master
thesis.degree.name: Master of Engineering in Electrical Engineering and Computer Science

