| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Nasr-Esfahany, Arash | |
| dc.contributor.author | Alizadeh, Mohammad | |
| dc.contributor.author | Lee, Victor | |
| dc.contributor.author | Alam, Hanna | |
| dc.contributor.author | Coon, Brett | |
| dc.contributor.author | Culler, David | |
| dc.contributor.author | Dadu, Vidushi | |
| dc.contributor.author | Dixon, Martin | |
| dc.contributor.author | Levy, Henry | |
| dc.contributor.author | Pandey, Santosh | |
| dc.contributor.author | Ranganathan, Parthasarathy | |
| dc.contributor.author | Yazdanbakhsh, Amir | |
| dc.date.accessioned | 2025-09-16T19:47:10Z | |
| dc.date.available | 2025-09-16T19:47:10Z | |
| dc.date.issued | 2025-06-20 | |
| dc.identifier.isbn | 979-8-4007-1261-6 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/162664 | |
| dc.description | ISCA ’25, Tokyo, Japan | en_US |
| dc.description.abstract | Cycle-level simulators such as gem5 are widely used in microarchitecture design, but they are prohibitively slow for large-scale design space explorations. We present Concorde, a new methodology for learning fast and accurate performance models of microarchitectures. Unlike existing simulators and learning approaches that emulate each instruction, Concorde predicts the behavior of a program based on compact performance distributions that capture the impact of different microarchitectural components. It derives these performance distributions using simple analytical models that estimate bounds on performance induced by each microarchitectural component, providing a simple yet rich representation of a program’s performance characteristics across a large space of microarchitectural parameters. Experiments show that Concorde is more than five orders of magnitude faster than a reference cycle-level simulator, with about 2% average Cycles-Per-Instruction (CPI) prediction error across a range of SPEC, open-source, and proprietary benchmarks. This enables rapid design-space exploration and performance sensitivity analyses that are currently infeasible, e.g., in about an hour, we conducted a first-of-its-kind fine-grained performance attribution to different microarchitectural components across a diverse set of programs, requiring nearly 150 million CPI evaluations. | en_US |
| dc.publisher | ACM|Proceedings of the 52nd Annual International Symposium on Computer Architecture | en_US |
| dc.relation.isversionof | https://doi.org/10.1145/3695053.3731037 | en_US |
| dc.rights | Creative Commons Attribution | en_US |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en_US |
| dc.source | Association for Computing Machinery | en_US |
| dc.title | Concorde: Fast and Accurate CPU Performance Modeling with Compositional Analytical-ML Fusion | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Arash Nasr-Esfahany, Mohammad Alizadeh, Victor Lee, Hanna Alam, Brett W. Coon, David Culler, Vidushi Dadu, Martin Dixon, Henry M. Levy, Santosh Pandey, Parthasarathy Ranganathan, and Amir Yazdanbakhsh. 2025. Concorde: Fast and Accurate CPU Performance Modeling with Compositional Analytical-ML Fusion. In Proceedings of the 52nd Annual International Symposium on Computer Architecture (ISCA '25). Association for Computing Machinery, New York, NY, USA, 1480–1494. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
| dc.identifier.mitlicense | PUBLISHER_POLICY | |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dc.date.updated | 2025-08-01T07:56:20Z | |
| dc.language.rfc3066 | en | |
| dc.rights.holder | The author(s) | |
| dspace.date.submission | 2025-08-01T07:56:21Z | |
| mit.license | PUBLISHER_CC | |
| mit.metadata.status | Authority Work and Publication Information Needed | en_US |