Aggregating Funnels for Faster Fetch&Add and Queues
Author(s)
Roh, Younghun; Wei, Yuanhao; Ruppert, Eric; Fatourou, Panagiota; Jayanti, Siddhartha; Shun, Julian
Terms of use
Creative Commons Attribution (Publisher with Creative Commons License)
Abstract
Many concurrent algorithms require processes to perform fetch-and-add operations on a single memory location, which can become a hot spot of contention. We present a novel algorithm, called Aggregating Funnels, that reduces this contention by spreading the fetch-and-add operations across multiple memory locations. It aggregates fetch-and-add operations into batches so that each batch can be applied with a single hardware fetch-and-add instruction on one location, while every operation in the batch efficiently computes its own result by performing a fetch-and-add instruction on a different location. We show experimentally that this approach achieves higher throughput than previous combining techniques, such as Combining Funnels, and is substantially more scalable than applying hardware fetch-and-add instructions to a single memory location. We also show that replacing the fetch-and-add instructions in the fastest state-of-the-art concurrent queue with our Aggregating Funnels eliminates a bottleneck and greatly improves the queue's overall throughput.
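To make the batching idea in the abstract concrete, the following is a minimal, illustrative C++ sketch of the general pattern it describes, not the paper's Aggregating Funnels algorithm: operations first claim offsets in a per-batch counter via fetch-and-add on a side location, the last arrival applies the batch total to the hot central counter with a single hardware fetch-and-add, and each operation then reconstructs its own return value. The names (Batch, batched_faa, central) and the fixed-size, single-batch structure are assumptions made here for illustration only.

#include <atomic>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

// Illustrative sketch only; names and structure are assumptions, not the
// paper's algorithm. k operations aggregate their increments into a
// per-batch location with fetch-and-add, the last one to arrive applies the
// batch total to the hot central counter with a single hardware
// fetch-and-add, and every operation recovers its own return value as
// (batch base + the offset it claimed).

std::atomic<uint64_t> central{0};   // the contended fetch-and-add target

struct Batch {
    const uint64_t size;                 // number of operations in the batch
    std::atomic<uint64_t> local_sum{0};  // sum of deltas claimed so far
    std::atomic<uint64_t> arrived{0};    // how many operations have joined
    std::atomic<int64_t>  base{-1};      // central value before the batch; -1 = unset
    explicit Batch(uint64_t k) : size(k) {}
};

uint64_t batched_faa(Batch& b, uint64_t delta) {
    // Claim an offset inside the batch: a fetch-and-add on a *different*
    // location than the hot central counter.
    uint64_t offset = b.local_sum.fetch_add(delta);

    // The last operation to join performs the only fetch-and-add on the
    // central counter, applying the whole batch at once. At that point all
    // participants have already added their deltas to local_sum.
    if (b.arrived.fetch_add(1) + 1 == b.size) {
        b.base.store(static_cast<int64_t>(central.fetch_add(b.local_sum.load())));
    }
    // Wait for the batch's base value, then reconstruct this operation's result.
    while (b.base.load() < 0) { /* spin */ }
    return static_cast<uint64_t>(b.base.load()) + offset;
}

int main() {
    constexpr int k = 8;
    Batch batch(k);
    std::vector<std::thread> ts;
    for (int i = 0; i < k; i++)
        ts.emplace_back([&, i] {
            uint64_t r = batched_faa(batch, 1);
            std::printf("op %d got %llu\n", i, (unsigned long long)r);
        });
    for (auto& t : ts) t.join();
    std::printf("central = %llu (updated by one hardware fetch-and-add)\n",
                (unsigned long long)central.load());
    return 0;
}

In this sketch every operation's hardware fetch-and-add lands on the batch's local counter, so the contended central location is touched only once per batch; the real algorithm additionally handles batches that form dynamically under concurrency.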
Description
PPoPP ’25, Las Vegas, NV, USA
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Publisher
ACM | The 30th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming
Citation
Roh, Younghun, Wei, Yuanhao, Ruppert, Eric, Fatourou, Panagiota, Jayanti, Siddhartha et al. "Aggregating Funnels for Faster Fetch&Add and Queues." PPoPP '25, ACM, Las Vegas, NV, USA.
Version: Final published version
ISBN
979-8-4007-1443-6