dc.contributor.author | Williams, Virginia Vassilevska | |
dc.contributor.author | Demaine, Erik | |
dc.contributor.author | Lincoln, Andrea | |
dc.contributor.author | Liu, Quanquan C. | |
dc.contributor.author | Lynch, Jayson | |
dc.date.accessioned | 2021-11-08T14:58:10Z | |
dc.date.available | 2021-11-08T14:58:10Z | |
dc.date.issued | 2018 | |
dc.identifier.uri | https://hdl.handle.net/1721.1/137672 | |
dc.description.abstract | © Erik D. Demaine, Andrea Lincoln, Quanquan C. Liu, Jayson Lynch, and Virginia Vassilevska Williams. This paper initiates the study of I/O algorithms (minimizing cache misses) from the perspective of fine-grained complexity (conditional polynomial lower bounds). Specifically, we aim to answer why sparse graph problems are so hard, and why the Longest Common Subsequence problem gets a savings of a factor of the size of cache times the length of a cache line, but no more. We take the reductions and techniques from complexity and fine-grained complexity and apply them to the I/O model to generate new (conditional) lower bounds as well as faster algorithms. We also prove the existence of a time hierarchy for the I/O model, which motivates the fine-grained reductions. Using fine-grained reductions, we give an algorithm for distinguishing 2 vs. 3 diameter and radius that runs in O(|E|²/(MB)) cache misses, which for sparse graphs improves over the previous O(|V|²/B) running time. We give new reductions from radius and diameter to Wiener index and median. These reductions are new in both the RAM and I/O models. We show meaningful reductions between problems that have linear-time solutions in the RAM model. The reductions use low I/O complexity (typically O(n/B)), and thus help to finely capture the relationship between “I/O linear time” (n/B) and RAM linear time (n). We generate new I/O assumptions based on the difficulty of improving sparse graph problem running times in the I/O model. We create conjectures that the current best known algorithms for Single Source Shortest Paths (SSSP), diameter, and radius are optimal. From these I/O-model assumptions, we show that many of the known reductions in the word-RAM model can naturally extend to hold in the I/O model as well (e.g., a lower bound on the I/O complexity of Longest Common Subsequence that matches the best known running time). We prove an analog of the Time Hierarchy Theorem in the I/O model, further motivating the study of fine-grained algorithmic differences. | en_US
dc.language.iso | en | |
dc.relation.isversionof | 10.4230/LIPIcs.ITCS.2018.34 | en_US |
dc.rights | Creative Commons Attribution 4.0 International license | en_US |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en_US |
dc.source | DROPS | en_US |
dc.title | Fine-grained I/O complexity via reductions: new lower bounds, faster algorithms, and a time hierarchy | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Williams, Virginia Vassilevska, Demaine, Erik, Lincoln, Andrea, Liu, Quanquan C. and Lynch, Jayson. 2018. "Fine-grained I/O complexity via reductions: new lower bounds, faster algorithms, and a time hierarchy." | |
dc.eprint.version | Final published version | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
dc.date.updated | 2019-06-04T13:07:11Z | |
dspace.date.submission | 2019-06-04T13:07:11Z | |
mit.license | PUBLISHER_CC | |
mit.metadata.status | Authority Work and Publication Information Needed | en_US |