DSpace@MIT

Sparse and Structured Tensor Programming

Author(s)
Ahrens, Willow
Download
Thesis PDF (5.962 MB)
Advisor
Amarasinghe, Saman
Terms of use
Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0). Copyright retained by author(s). https://creativecommons.org/licenses/by-nc-nd/4.0/
Abstract
From FORTRAN to NumPy, tensors have revolutionized how we express computation. However, tensors in these and almost all other prominent systems can handle only dense rectilinear grids of values. Real-world tensors are often structured, containing patterns that allow us to optimize storage or computation, such as sparsity (mostly zero), runs of repeated values, or symmetry. Specializing implementations for structure yields significant speedups, but support for structured tensors is fragmented and incomplete. The heart of the problem is coiteration: simultaneously iterating over multiple tensors in a program, where each tensor format may have different internal structure. Because each combination of structures requires a unique coiteration algorithm, existing frameworks struggle to abstract over the design space, instead hard-coding support for a few programs and/or a few structures. In this thesis, we build an abstraction for coiteration, enabling us to support both a wide range of programs and diverse tensor structures. We use a language, looplets, to describe the structure of tensors in tensor programs. Looplets allow the compiler to generate code to coiterate over any combination of structured tensor formats. The looplets language decomposes loops over sparse and structured formats hierarchically. This decomposition simplifies compilation, allowing us to capture key mathematical properties (such as x ∗ 0 = 0, which motivates sparsity) with simple term rewriting. Building on looplets, we introduce a new language, Finch, for general structured tensor programming. Finch makes it easier to compute with structured tensors by combining program control flow and tensor structures into a common representation where they can be co-optimized. Finch automatically specializes control flow to data so that performance engineers can focus on experimenting with many algorithms.
Finch supports a familiar programming language of loops, statements, ifs, breaks, etc., over a wide variety of tensor structures, such as sparsity, run-length encoding, symmetry, triangles, padding, or blocks. Finch reliably utilizes the key properties of each structure, making it easier to write and optimize structured tensor programs. In our case studies, we show that this leads to dramatic speedups in diverse applications, including linear algebra, image processing, and graph analytics. Our abstracted design makes it easier to extend Finch to new tensor structures and programming models. Finch has been separately extended to support a DSL for symmetry-aware tensor programs and to support real-valued indexing.
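To make the coiteration problem the abstract describes concrete, here is a minimal Python sketch (not Finch code, and not the thesis's looplets mechanism) of coiterating two sparse vectors stored as sorted coordinate lists. The two-pointer merge below is the hand-written algorithm for exactly one pair of formats; every other combination of structures (dense × sparse, run-length × sparse, etc.) would need a different merge loop, which is the explosion that looplets let a compiler generate automatically. The property x ∗ 0 = 0 is what lets the loop skip positions where either input is zero.

```python
# Hypothetical illustration: a sparse vector is a list of (index, value)
# pairs, sorted by index, with all omitted entries implicitly zero.

def sparse_dot(a, b):
    """Dot product of two sparse vectors via two-pointer coiteration."""
    i = j = 0
    total = 0.0
    while i < len(a) and j < len(b):
        ia, va = a[i]
        jb, vb = b[j]
        if ia == jb:      # both nonzero at this index: multiply, advance both
            total += va * vb
            i += 1
            j += 1
        elif ia < jb:     # b is zero here, so the product is zero: skip
            i += 1
        else:             # a is zero here: skip
            j += 1
    return total

# Dense view: a = [0, 2, 0, 3, 0], b = [5, 0, 0, 4, 1].
a = [(1, 2.0), (3, 3.0)]
b = [(0, 5.0), (3, 4.0), (4, 1.0)]
print(sparse_dot(a, b))  # only index 3 overlaps: 3.0 * 4.0 = 12.0
```

The loop does work proportional to the number of stored entries rather than the dense length, which is the payoff of specializing coiteration to structure.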
Date issued
2024-09
URI
https://hdl.handle.net/1721.1/158477
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Doctoral Theses
