DSpace@MIT
A unified compiler backend for distributed, cooperative heterogeneous execution

Author(s)
Ray, Jessica Morgan
Download: Full printable version (13.09 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Saman Amarasinghe.
Terms of use
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
In many of today's applications, achieving high performance is critical. Numerous architectures, including shared memory, distributed memory, and heterogeneous systems, are used in high-performance computing, but each has its own optimizations and programming models. Additionally, performance is not always portable across architectures: an optimization that provides significant improvement on one system may have little to no effect on another. Writing high-performance code is a notoriously difficult task that requires significant trial and error, and writing it for multiple architectures is even harder, particularly when these architectural components sit together in a single heterogeneous system and the programmer must make them all cooperate to achieve the highest performance. Hand-optimization only goes so far; it is infeasible to try many compositions of optimizations by hand, so the resulting performance will likely be suboptimal. This thesis employs a scheduling language approach to abstract optimizations and code generation for shared-memory NUMA multicore, distributed memory, and GPU systems. To the best of our knowledge, we provide the first scheduling language approach that lets a programmer schedule cooperative execution on distributed, heterogeneous systems, all from a single algorithm. Our work extends an existing mid-level compiler, TIRAMISU, with several primitives and functions that present the programmer with a unified interface for generating code for several backends and execution configurations. Our results show that we are able to generate efficient MPI code and CUDA code for distributed memory and heterogeneous GPU systems from a single algorithm. From our unified scheduling abstraction, we are able to generate distributed, heterogeneous, cooperative code, giving us OpenMP+MPI+CUDA capability without the extra complexity that comes with using multiple programming models.
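
To make the scheduling-language approach concrete, the following is a minimal, hypothetical sketch in the TIRAMISU style: the algorithm is stated once, and separate schedule commands map it onto MPI ranks and GPUs before a single code-generation call. Every primitive named here (init, computation, split, distribute, tag_gpu_level, codegen) is an assumption for illustration and may differ from the exact interface the thesis adds to TIRAMISU.

// Hypothetical sketch of single-algorithm, multi-backend scheduling.
// Primitive names are illustrative assumptions, not the thesis's API.
#include <tiramisu/tiramisu.h>
using namespace tiramisu;

int main() {
    init("blur");  // assumed: begin a new TIRAMISU function

    // Algorithm: a 3-point blur, written once, independent of backend.
    var i("i", 0, 1024), j("j", 0, 1024);
    input in("in", {i, j}, p_float32);
    computation blur("blur", {i, j},
                     (in(i, j) + in(i + 1, j) + in(i, j + 1)) / 3.0f);

    // Schedule: distribute row blocks across MPI ranks, then offload
    // each rank's block to its local GPU (hypothetical primitives).
    var r("r"), ii("ii");
    blur.split(i, 256, r, ii);   // 4 row blocks of 256 rows each
    blur.distribute(r);          // assumed: one block per MPI rank
    blur.tag_gpu_level(ii, j);   // assumed: block loops become CUDA kernels

    // Code generation: one call emits the cooperative MPI+CUDA code.
    buffer b_in("b_in", {1025, 1025}, p_float32, a_input);
    buffer b_out("b_out", {1024, 1024}, p_float32, a_output);
    in.store_in(&b_in);
    blur.store_in(&b_out);
    codegen({&b_in, &b_out}, "generated_blur.o");
    return 0;
}

The point of the sketch is the separation the abstract claims: the computation is defined once, and only the three schedule lines change when retargeting the same algorithm to a different distributed or heterogeneous configuration.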
Description
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 107-112).
Date issued
2018
URI
http://hdl.handle.net/1721.1/115731
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Graduate Theses
