Show simple item record

dc.contributor.author: Zhou, Zhuangzhuang
dc.contributor.author: Gogte, Vaibhav
dc.contributor.author: Vaish, Nilay
dc.contributor.author: Kennelly, Chris
dc.contributor.author: Xia, Patrick
dc.contributor.author: Kanev, Svilen
dc.contributor.author: Moseley, Tipp
dc.contributor.author: Delimitrou, Christina
dc.contributor.author: Ranganathan, Parthasarathy
dc.date.accessioned: 2024-05-02T19:28:36Z
dc.date.available: 2024-05-02T19:28:36Z
dc.date.issued: 2024-04-27
dc.identifier.isbn: 979-8-4007-0386-7
dc.identifier.uri: https://hdl.handle.net/1721.1/154383
dc.description: ASPLOS '24: Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, April 27-May 1, 2024, La Jolla, CA, USA
dc.description.abstract: Memory allocation constitutes a substantial component of warehouse-scale computation. Optimizing the memory allocator not only reduces the datacenter tax, but also improves application performance, leading to significant cost savings. We present the first comprehensive characterization study of TCMalloc, a memory allocator used by warehouse-scale applications in Google's production fleet. Our characterization reveals a profound diversity in the memory allocation patterns, allocated object sizes and lifetimes, for large-scale datacenter workloads, as well as in their performance on heterogeneous hardware platforms. Based on these insights, we optimize TCMalloc for warehouse-scale environments. Specifically, we propose optimizations for each level of its cache hierarchy that include usage-based dynamic sizing of allocator caches, leveraging hardware topology to mitigate inter-core communication overhead, and improving allocation packing algorithms based on statistical data. We evaluate these design choices using benchmarks and fleet-wide A/B experiments in our production fleet, resulting in a 1.4% improvement in throughput and a 3.4% reduction in RAM usage for the entire fleet. For the applications with the highest memory allocation usage, we observe up to 8.1% and 6.3% improvements in throughput and memory usage, respectively. At our scale, even a single percent CPU or memory improvement translates to significant savings in server costs.
dc.publisher: ACM
dc.relation.isversionof: 10.1145/3620666.3651350
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Association for Computing Machinery
dc.title: Characterizing a Memory Allocator at Warehouse Scale
dc.type: Article
dc.identifier.citation: Zhou, Zhuangzhuang, Gogte, Vaibhav, Vaish, Nilay, Kennelly, Chris, Xia, Patrick et al. 2024. "Characterizing a Memory Allocator at Warehouse Scale."
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.identifier.mitlicense: PUBLISHER_POLICY
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2024-05-01T07:45:44Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2024-05-01T07:45:45Z
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed

