
dc.contributor.advisor: Saman Amarasinghe (en_US)
dc.contributor.author: Ray, Jessica Morgan (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.date.accessioned: 2018-05-23T16:32:17Z
dc.date.available: 2018-05-23T16:32:17Z
dc.date.copyright: 2018 (en_US)
dc.date.issued: 2018 (en_US)
dc.identifier.uri: http://hdl.handle.net/1721.1/115731
dc.description: Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. (en_US)
dc.description: Cataloged from PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (pages 107-112). (en_US)
dc.description.abstract: In many of today's applications, achieving high performance is critical. Numerous architectures, such as shared memory systems, distributed memory systems, and heterogeneous systems, are all used in high-performance computing. However, all of these have different optimizations and programming models. Additionally, performance is not always portable across architectures: an optimization that provides a significant improvement on one system may have little to no effect on another. Writing high-performance code is a notoriously difficult task that requires significant trial and error. Writing high-performance code for multiple architectures is even harder, particularly when these architectural components are all part of a single heterogeneous system and the programmer has to make them cooperate in order to achieve the highest performance. Hand-optimization only goes so far; it is infeasible to try many compositions of optimizations by hand, so the resulting performance will likely be sub-optimal. This thesis employs a scheduling language approach to abstract optimizations and code generation for shared memory NUMA multicore, distributed memory, and GPU systems. To the best of our knowledge, we provide the first scheduling language approach that lets a programmer schedule cooperative execution on distributed, heterogeneous systems, all from a single algorithm. Our work extends an existing mid-level compiler, TIRAMISU, with several primitives and functions that present the programmer with a unified interface for generating code for several backends and execution configurations. Our results show that we are able to generate efficient MPI code and CUDA code for distributed memory and heterogeneous GPU systems from a single algorithm. From our unified scheduling abstraction, we are able to generate distributed, heterogeneous cooperative code, giving us OpenMP+MPI+CUDA capability without the extra complexities that come with using multiple programming models. (en_US)
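The abstract describes the scheduling-language idea of separating a single algorithm from the schedule that maps it onto a backend. The sketch below illustrates that separation in miniature; the class and method names here are purely hypothetical stand-ins and are not TIRAMISU's actual C++ API, and real backends would emit MPI or CUDA code rather than executing locally.

```python
# Hypothetical illustration of the algorithm/schedule separation behind
# scheduling-language compilers. All names here are invented for this sketch.

class Algorithm:
    """A backend-agnostic computation: here, elementwise doubling."""
    def __init__(self, data):
        self.data = list(data)

    def compute(self, chunk):
        return [2 * x for x in chunk]

class Schedule:
    """Maps the same algorithm onto different execution configurations."""
    def __init__(self, algorithm):
        self.algorithm = algorithm
        self.num_ranks = 1  # stand-in for MPI ranks

    def distribute(self, num_ranks):
        # Scheduling primitives typically chain, leaving the algorithm untouched.
        self.num_ranks = num_ranks
        return self

    def run(self):
        # Split the data across "ranks" and compute each chunk locally;
        # a real compiler backend would generate distributed code instead.
        n = len(self.algorithm.data)
        step = (n + self.num_ranks - 1) // self.num_ranks
        out = []
        for r in range(self.num_ranks):
            chunk = self.algorithm.data[r * step:(r + 1) * step]
            out.extend(self.algorithm.compute(chunk))
        return out

# The same Algorithm could be paired with a different Schedule (e.g. a GPU
# mapping) without rewriting the computation itself.
result = Schedule(Algorithm(range(8))).distribute(num_ranks=4).run()
```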
dc.description.statementofresponsibility: by Jessica Morgan Ray. (en_US)
dc.format.extent: 112 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Electrical Engineering and Computer Science. (en_US)
dc.title: A unified compiler backend for distributed, cooperative heterogeneous execution (en_US)
dc.type: Thesis (en_US)
dc.description.degree: S.M. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 1036986664 (en_US)