Show simple item record

dc.contributor.author: Ahmad, Wakeel
dc.contributor.author: Carpenter, Bryan
dc.contributor.author: Shafi, Aamir
dc.contributor.author: Shafi, Muhammad Aamir
dc.date.accessioned: 2015-03-10T16:31:21Z
dc.date.available: 2015-03-10T16:31:21Z
dc.date.issued: 2011-05
dc.identifier.issn: 1877-0509
dc.identifier.uri: http://hdl.handle.net/1721.1/95929
dc.description.abstract: The Message Passing Interface (MPI) standard continues to dominate the landscape of parallel computing as the de facto API for writing large-scale scientific applications. But critics argue that it is a low-level API that is harder to use than shared memory approaches. This paper addresses the issue of programming productivity by proposing a high-level, easy-to-use, and efficient programming API that hides and segregates complex low-level message passing code from the application-specific code. Our proposed API is inspired by communication patterns found in Gadget-2, an MPI-based parallel production code for cosmological N-body and hydrodynamic simulations. In this paper we analyze Gadget-2 with a view to understanding what high-level Single Program Multiple Data (SPMD) communication abstractions might be developed to replace the intricate use of MPI in such an irregular application, and to do so without compromising efficiency. Our analysis revealed that the use of low-level MPI primitives, bundled with the computation code, makes Gadget-2 difficult to understand and probably hard to maintain. In addition, we found that the original Gadget-2 code contains a small handful of complex and recurring message passing patterns. We also noted that these complex patterns can be reorganized into a higher-level communication library with some modifications to the Gadget-2 code. We present the implementation and evaluation of one such message passing pattern (or schedule) that we term Collective Asynchronous Remote Invocation (CARI). As the name suggests, CARI is a collective variant of Remote Method Invocation (RMI), which is an attractive, high-level, and established paradigm in distributed systems programming. The CARI API might be implemented in several ways; we develop and evaluate two versions of this API on a compute cluster. The performance evaluation reveals that the CARI versions of the Gadget-2 code perform as well as the original Gadget-2 code, while the level of abstraction is raised considerably.
dc.language.iso: en_US
dc.publisher: Elsevier
dc.relation.isversionof: http://dx.doi.org/10.1016/j.procs.2011.04.004
dc.rights: Creative Commons Attribution
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/
dc.source: Elsevier
dc.title: Collective Asynchronous Remote Invocation (CARI): A High-Level and Efficient Communication API for Irregular Applications
dc.type: Article
dc.identifier.citation: Ahmad, Wakeel, Bryan Carpenter, and Aamir Shafi. “Collective Asynchronous Remote Invocation (CARI): A High-Level and Efficient Communication API for Irregular Applications.” Procedia Computer Science 4 (2011): 26–35.
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.mitauthor: Shafi, Muhammad Aamir
dc.relation.journal: Procedia Computer Science
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dspace.orderedauthors: Ahmad, Wakeel; Carpenter, Bryan; Shafi, Aamir
mit.license: PUBLISHER_CC
mit.metadata.status: Complete
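
The abstract above describes CARI only at the concept level; the paper's actual interface is not reproduced in this record. As a rough illustration of the kind of pattern involved, the C/MPI sketch below shows one hypothetical way a collective remote-invocation exchange could be organized: every rank batches request values destined for remote ranks, a collective step delivers them, each rank runs a local handler on what it received, and a second collective step returns the results to the callers. The names (cari_exchange, handle_request), the use of MPI_Alltoall/MPI_Alltoallv, and the blocking structure are all assumptions made for this sketch, not the API or implementation evaluated in the paper.

#include <mpi.h>
#include <stdlib.h>

/* Application-supplied handler: compute a result for one incoming request.
   (Hypothetical example; the real handler would do application work.) */
static double handle_request(double req)
{
    return 2.0 * req;
}

/*
 * Hypothetical cari_exchange: each rank sends nreq[d] request values to every
 * destination rank d (packed contiguously in reqs, grouped by destination),
 * the handler is invoked on the receiving side, and the results come back to
 * the caller in the same order in results.
 */
static void cari_exchange(double *reqs, int *nreq, double *results, MPI_Comm comm)
{
    int size;
    MPI_Comm_size(comm, &size);

    /* 1. Exchange request counts so every rank knows how much it will receive. */
    int *recv_cnt = malloc(size * sizeof(int));
    MPI_Alltoall(nreq, 1, MPI_INT, recv_cnt, 1, MPI_INT, comm);

    /* Build displacement arrays for the variable-sized exchanges. */
    int *sdispl = malloc(size * sizeof(int));
    int *rdispl = malloc(size * sizeof(int));
    int stotal = 0, rtotal = 0;
    for (int d = 0; d < size; d++) {
        sdispl[d] = stotal;  stotal += nreq[d];
        rdispl[d] = rtotal;  rtotal += recv_cnt[d];
    }

    /* 2. Deliver the batched requests to their target ranks. */
    double *incoming = malloc(rtotal * sizeof(double));
    MPI_Alltoallv(reqs, nreq, sdispl, MPI_DOUBLE,
                  incoming, recv_cnt, rdispl, MPI_DOUBLE, comm);

    /* 3. "Remote invocation": run the local handler on every received request. */
    for (int i = 0; i < rtotal; i++)
        incoming[i] = handle_request(incoming[i]);

    /* 4. Return the results to the ranks that issued the requests. */
    MPI_Alltoallv(incoming, recv_cnt, rdispl, MPI_DOUBLE,
                  results, nreq, sdispl, MPI_DOUBLE, comm);

    free(recv_cnt);
    free(sdispl);
    free(rdispl);
    free(incoming);
}

A genuinely asynchronous variant, as the name CARI implies, would instead post the requests with non-blocking operations and service incoming invocations while the caller continues its own computation; the collective, blocking form here is only meant to show how the request/handler/response roles fit together.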

