Language Evolution for Parallel and Scientific Computing
Author(s)
Churavy, Valentin
Advisor
Edelman, Alan
Abstract
Scientists working on the biggest problems facing humanity today write and run large-scale computer simulations. Making high-performance computing easier to develop for and to use has been a decades-long dream of both scientists and programming language designers. Many attempts have failed: perhaps because this is a hard problem, perhaps because the social motivation and the required technical steps have never come together, and perhaps because solutions to date address only part of the problem and thus never fully solve it. This thesis proposes that a particular combination of features is necessary to form a solution, starting from a bedrock that combines performance with high-level abstractions in a single language. The language needs to enable composable abstractions, or we are doomed to keep developing the single-shot applications of the past. These abstractions should enable code reuse across different compute architectures, so that users can keep up with the fluid landscape of accelerators. They should enable code reuse across different mathematical objects such as dense, sparse, and structured matrices. They should enable code reuse for differentiable programming, allowing the integration of techniques such as sensitivity analysis and scientific machine learning. With the right methodology, these abstractions can compose with each other and specialize to the domain.

I will demonstrate that high-level array-based abstractions combined with a low-level performance-portable kernel programming framework form a potent foundation for large-scale scientific computing, and I will show their efficacy on real-world scientific codes. Furthermore, I will introduce a differentiable programming framework built on top of a general automatic differentiation engine operating at the compiler level. This automatic differentiation framework outperforms the state of the art, is capable of synthesizing gradient functions from GPU kernels, and can differentiate a wide variety of parallel constructs. As the infrastructure supporting such a language needs to be more sophisticated than that of yesteryear, new problems arise. This thesis solves some of these problems and demonstrates the solutions on a fluid dynamics code used in climate modelling, one of many imaginable applications.
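As a concrete illustration of the kind of code the abstract describes, the sketch below writes a single vector-update kernel against a high-level array interface using a portable kernel framework. It assumes Julia with the KernelAbstractions.jl package, which is one plausible reading of the "performance portable kernel programming framework" mentioned above; the kernel name axpy!, the CPU() backend, and the array sizes are illustrative choices, not details taken from the thesis.

    # Minimal sketch: one kernel definition that can run on the CPU backend or,
    # by swapping the backend object, on a GPU backend.
    using KernelAbstractions

    # A device-agnostic kernel computing y .= a .* x .+ y
    @kernel function axpy!(y, a, @Const(x))
        i = @index(Global)               # global linear index into the ndrange
        @inbounds y[i] = a * x[i] + y[i]
    end

    backend = CPU()                      # swap for a GPU backend to target accelerators
    x = rand(Float32, 1024)
    y = rand(Float32, 1024)

    kernel! = axpy!(backend)             # specialize the kernel for the chosen backend
    kernel!(y, 2.0f0, x; ndrange = length(y))
    KernelAbstractions.synchronize(backend)

The same kernel body would be reused unchanged when the backend is an accelerator, which is the kind of architecture-portable code reuse the abstract argues for.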
Date issued
2024-09
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology