Accelerating dynamic programming
Author(s)
Weimann, Oren
Other Contributors
Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.
Advisor
Erik D. Demaine.
Abstract
Dynamic Programming (DP) is a fundamental problem-solving technique that has been widely used for solving a broad range of search and optimization problems. While DP can be invoked when more specialized methods fail, this generality often incurs a cost in efficiency. We explore a unifying toolkit for speeding up DP, and algorithms that use DP as subroutines. Our methods and results can be summarized as follows.

- Acceleration via Compression. Compression is traditionally used to store data efficiently. We use compression to identify repeats in the table that imply a redundant computation. Utilizing these repeats requires a new DP, and often different DPs for different compression schemes. We present the first provable speedup of the celebrated Viterbi algorithm (1967), which is used for the decoding and training of Hidden Markov Models (HMMs). Our speedup relies on the compression of the HMM's observable sequence (a toy illustration of the reuse idea follows this abstract).

- Totally Monotone Matrices. It is well known that a wide variety of DPs can be reduced to the problem of finding row minima in totally monotone matrices (also sketched below). We introduce this scheme in the context of planar graph problems. In particular, we show that planar graph problems such as shortest paths, feasible flow, bipartite perfect matching, and replacement paths can be accelerated by DPs that exploit a total-monotonicity property of the shortest paths.

- Combining Compression and Total Monotonicity. We introduce a method for accelerating string edit distance computation by combining compression and totally monotone matrices. At the heart of this method are algorithms for computing the edit distance between two straight-line programs. These enable us to exploit the compressibility of strings, even if each string is compressed using a different compression scheme.

- Partial Tables. In typical DP settings, a table is filled in its entirety, where each cell corresponds to some subproblem. In some cases, by changing the DP, it is possible to compute asymptotically fewer cells of the table. We show that Θ(n³) subproblems are both necessary and sufficient for computing the similarity between two trees. This improves all known solutions and brings the idea of partial tables to its full extent.

- Fractional Subproblems. In some DPs, the solution to a subproblem is a data structure rather than a single value. The entire data structure of a subproblem is then processed and used to construct the data structure of larger subproblems. We suggest a method for reusing parts of a subproblem's data structure. In some cases, such fractional parts remain unchanged when constructing the data structure of larger subproblems. In these cases, it is possible to copy this part of the data structure to the larger subproblem using only a constant number of pointer changes. We show how this idea can be used for finding the optimal tree searching strategy in linear time. This is a generalization of the well-known binary search technique from arrays to trees (a simplified, non-optimal variant is sketched below).
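The first item above can be illustrated with a toy version of the compression idea. The sketch below is not the thesis's algorithm (which compresses the observable sequence with run-length encoding, LZ78, or straight-line programs and analyzes both decoding and training); it only shows the underlying mechanism: each Viterbi step is a (max, +) matrix product, so a block of observations that repeats can be collapsed once into a precomputed matrix and reused. All names here (`viterbi_score_blocked`, `block_len`, the example HMM) are illustrative, not from the thesis.

```python
import numpy as np


def maxplus_matmul(A, B):
    """(max, +) product on log-probabilities: C[i, j] = max_k (A[i, k] + B[k, j])."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)


def step_matrix(log_trans, log_emit, symbol):
    """M[i, j] = log P(transition i -> j) + log P(state j emits `symbol`)."""
    return log_trans + log_emit[:, symbol][None, :]


def viterbi_score_blocked(obs, log_init, log_trans, log_emit, block_len=4):
    """Best log-probability of any state path explaining `obs`.

    Observations after the first are cut into blocks of length `block_len`;
    each *distinct* block is collapsed once into a single (max, +) matrix and
    cached, so a repeated block later costs one matrix-vector product instead
    of `block_len` full Viterbi steps.
    """
    cache = {}
    v = log_init + log_emit[:, obs[0]]            # first symbol handled directly
    rest = obs[1:]
    for start in range(0, len(rest), block_len):
        block = tuple(rest[start:start + block_len])
        if block not in cache:                    # build the block matrix only once
            M = step_matrix(log_trans, log_emit, block[0])
            for s in block[1:]:
                M = maxplus_matmul(M, step_matrix(log_trans, log_emit, s))
            cache[block] = M
        v = maxplus_matmul(v[None, :], cache[block])[0]
    return float(np.max(v))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_states, n_symbols = 3, 2
    trans = rng.random((n_states, n_states)); trans /= trans.sum(1, keepdims=True)
    emit = rng.random((n_states, n_symbols)); emit /= emit.sum(1, keepdims=True)
    log_init = np.log(np.full(n_states, 1.0 / n_states))
    obs = [0, 1, 1, 0] * 25                       # a highly repetitive observation sequence
    print(viterbi_score_blocked(obs, log_init, np.log(trans), np.log(emit)))
```

On repetitive sequences most blocks hit the cache, so the work per repeated block drops from `block_len` full Viterbi steps to one matrix-vector product; the thesis obtains its provable speedups through a considerably more refined version of this reuse.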
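The second item relies on the classical fact that all row minima of a totally monotone matrix can be found without inspecting most entries (the SMAWK algorithm does this in time linear in the matrix dimensions). As an illustration only, here is a minimal divide-and-conquer sketch that uses the weaker monotonicity property implied by total monotonicity: the column of each row's minimum never moves left as the row index grows. The function name and the toy matrix are mine, not the thesis's.

```python
def monotone_row_minima(entry, n_rows, n_cols):
    """Column index of each row's minimum, assuming those indices are
    non-decreasing from top to bottom (implied by total monotonicity).

    Only O((n_rows + n_cols) log n_rows) entries are ever evaluated;
    `entry(i, j)` returns the (i, j) value and the matrix is never stored.
    (SMAWK improves this to linear time.)"""
    result = [0] * n_rows

    def solve(top, bottom, left, right):
        if top > bottom:
            return
        mid = (top + bottom) // 2
        # Scan row `mid`, restricted to the column window allowed so far.
        best_col, best_val = left, entry(mid, left)
        for j in range(left + 1, right + 1):
            v = entry(mid, j)
            if v < best_val:
                best_col, best_val = j, v
        result[mid] = best_col
        # Monotonicity: rows above have their minima at columns <= best_col,
        # rows below at columns >= best_col.
        solve(top, mid - 1, left, best_col)
        solve(mid + 1, bottom, best_col, right)

    solve(0, n_rows - 1, 0, n_cols - 1)
    return result


if __name__ == "__main__":
    # Toy Monge (hence totally monotone) matrix: M[i][j] = (j - i) ** 2.
    n = 8
    print(monotone_row_minima(lambda i, j: (j - i) ** 2, n, n))
    # -> [0, 1, 2, 3, 4, 5, 6, 7]: row i is minimized at column i.
```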
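The last item generalizes binary search from arrays to trees. The sketch below uses a vertex-query model (a query either identifies the target or points to the neighboring branch containing it) and the simple centroid strategy, which guarantees O(log n) queries; it is meant only to convey the array-to-tree analogy and is not the thesis's linear-time algorithm, which computes an optimal strategy and whose query model and guarantees differ. All identifiers are illustrative.

```python
from collections import deque


def centroid(adj, alive):
    """A vertex of the subtree induced by `alive` whose removal leaves
    components of size at most len(alive) // 2."""
    root = next(iter(alive))
    parent, order, stack = {root: None}, [], [root]
    while stack:                                  # DFS to compute sizes bottom-up
        u = stack.pop()
        order.append(u)
        for w in adj[u]:
            if w in alive and w not in parent:
                parent[w] = u
                stack.append(w)
    size = {u: 1 for u in alive}
    for u in reversed(order):
        if parent[u] is not None:
            size[parent[u]] += size[u]
    best, best_heaviest = None, len(alive) + 1
    for u in alive:
        heaviest = len(alive) - size[u]           # the piece containing u's parent
        for w in adj[u]:
            if w in alive and parent.get(w) == u:
                heaviest = max(heaviest, size[w])
        if heaviest < best_heaviest:
            best, best_heaviest = u, heaviest
    return best


def search_tree(adj, answer):
    """Locate a hidden target vertex.  `answer(v)` returns None if v is the
    target, otherwise v's neighbor on the path toward the target.
    Querying centroids halves the candidate set, so O(log n) queries suffice."""
    alive, queries = set(adj), 0
    while True:
        v = centroid(adj, alive)
        queries += 1
        nxt = answer(v)
        if nxt is None:
            return v, queries
        comp, stack = {nxt}, [nxt]                # component of alive - {v} holding the target
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w in alive and w != v and w not in comp:
                    comp.add(w)
                    stack.append(w)
        alive = comp


def make_oracle(adj, target):
    """Simulated answers for a known target (BFS parents give the direction)."""
    def answer(v):
        if v == target:
            return None
        parent, q = {v: None}, deque([v])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in parent:
                    parent[w] = u
                    q.append(w)
        step = target
        while parent[step] != v:
            step = parent[step]
        return step
    return answer


if __name__ == "__main__":
    # A path 0-1-...-14: centroid queries behave like classic binary search.
    n = 15
    adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
    found, used = search_tree(adj, make_oracle(adj, target=11))
    print(found, used)                            # finds vertex 11 within O(log n) queries
```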
Description
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 129-136).
Date issued
2009
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.