Thread Migration Prediction for Distributed Shared Caches
Author(s): Shim, Keun Sup; Lis, Mieszko; Khan, Omer; Devadas, Srinivas
Chip multiprocessors (CMPs) have become the mainstream parallel architecture in recent years; for scalability reasons, designs with high core counts tend towards tiled CMPs with physically distributed shared caches. This naturally leads to a Non-Uniform Cache Access (NUCA) design, where on-chip access latencies depend on the physical distances between requesting cores and the home cores where the data is cached. Improving data locality is thus key to performance, and several studies have addressed this problem using data replication and data migration. In this paper, we consider another mechanism, hardware-level thread migration. This approach, we argue, can better exploit shared data locality for NUCA designs by effectively replacing multiple round-trip remote cache accesses with a smaller number of migrations. High migration costs, however, make it crucial to use thread migrations judiciously; we therefore propose a novel, online prediction scheme which decides at the instruction level whether to perform a remote access (as in traditional NUCA designs) or a thread migration. For a set of parallel benchmarks, our thread migration predictor improves performance by 24% on average over a shared-NUCA design that uses only remote accesses.
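To illustrate the decision the abstract describes, the following is a minimal software sketch of an instruction-level migration predictor. It is not the authors' actual hardware design: the class name, the run-length threshold, and the same-home-core run detection are all illustrative assumptions. The idea it captures is the one stated above: when a run of memory accesses would all incur round trips to the same remote home core, it is cheaper to migrate the thread once, so the predictor marks the PC (program counter) that starts such a run as migratory.

```python
# Hypothetical sketch of a per-PC migration predictor (illustrative only;
# the paper's hardware scheme, table sizes, and threshold differ).

class MigrationPredictor:
    """Decide, per memory-access PC, whether to remote-access or migrate."""

    def __init__(self, threshold=3):
        self.threshold = threshold   # consecutive same-home accesses needed
        self.migratory_pcs = set()   # PCs predicted to benefit from migration
        self.last_home = None        # home core of the previous access
        self.run_start_pc = None     # PC that started the current run
        self.run_length = 0

    def access(self, pc, home_core):
        """Record one memory access; return 'migrate' or 'remote'."""
        if home_core == self.last_home:
            self.run_length += 1
        else:
            self.last_home = home_core
            self.run_start_pc = pc
            self.run_length = 1
        # A long enough run of accesses to one home core means migrating at
        # the run's first PC would replace several round trips with one move.
        if self.run_length >= self.threshold:
            self.migratory_pcs.add(self.run_start_pc)
        return 'migrate' if pc in self.migratory_pcs else 'remote'
```

In use, the predictor learns online: the first time a run of accesses hits the same remote home core, it falls back to remote accesses while marking the run's starting PC; on later executions of that PC, it predicts a migration instead.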
Department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal: IEEE Computer Architecture Letters
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Citation: Shim, Keun Sup, Mieszko Lis, Omer Khan, and Srinivas Devadas. "Thread Migration Prediction for Distributed Shared Caches." IEEE Computer Architecture Letters 13, no. 1 (January 14, 2014): 53–56.
Version: Author's final manuscript