Show simple item record

dc.contributor.advisor  Charles E. Leiserson.  en_US
dc.contributor.author  Suksompong, Warut  en_US
dc.contributor.other  Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.  en_US
dc.date.accessioned  2014-11-24T18:41:41Z
dc.date.available  2014-11-24T18:41:41Z
dc.date.copyright  2014  en_US
dc.date.issued  2014  en_US
dc.identifier.uri  http://hdl.handle.net/1721.1/91874
dc.description  Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.  en_US
dc.description  Cataloged from PDF version of thesis.  en_US
dc.description  Includes bibliographical references (pages 61-62).  en_US
dc.description.abstract  Blumofe and Leiserson [6] gave the first provably good work-stealing scheduler for multithreaded computations with dependencies. Their scheduler executes a fully strict (i.e., well-structured) computation on P processors in expected time $T_1/P + O(T_\infty)$, where $T_1$ denotes the minimum serial execution time of the multithreaded computation, and $T_\infty$ denotes the minimum execution time with an infinite number of processors. This thesis extends the existing literature in two directions. Firstly, we analyze the number of successful steals in multithreaded computations. The existing literature has dealt with the number of steal attempts without distinguishing between successful and unsuccessful steals. While that approach leads to a fruitful probabilistic analysis, it does not yield an interesting result for a worst-case analysis. We obtain tight upper bounds on the number of successful steals when the computation can be modeled by a computation tree. In particular, if the computation starts with a complete k-ary tree of height h, the maximum number of successful steals is $\sum_{i=1}^{P}(k-1)^i\binom{h}{i}$. Secondly, we investigate a variant of the work-stealing algorithm that we call the localized work-stealing algorithm. The intuition behind this variant is that, because of locality, processors can benefit from working on their own work. Consequently, when a processor is free, it makes a steal attempt to get back its own work. We call this type of steal a steal-back. We show that under the "even distribution of free agents assumption", the expected running time of the algorithm is $T_1/P + O(T_\infty \lg P)$. In addition, we obtain another running-time bound based on ratios between the sizes of serial tasks in the computation. If M denotes the maximum ratio between the largest and the smallest serial tasks of a processor after removing a total of O(P) serial tasks across all processors from consideration, then the expected running time of the algorithm is $T_1/P + O(T_\infty M)$.  en_US
dc.description.statementofresponsibility  by Warut Suksompong.  en_US
dc.format.extent  62 pages  en_US
dc.language.iso  eng  en_US
dc.publisher  Massachusetts Institute of Technology  en_US
dc.rights  M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.  en_US
dc.rights.uri  http://dspace.mit.edu/handle/1721.1/7582  en_US
dc.subject  Electrical Engineering and Computer Science.  en_US
dc.title  Bounds on multithreaded computations by work stealing  en_US
dc.type  Thesis  en_US
dc.description.degree  M. Eng.  en_US
dc.contributor.department  Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc  894357510  en_US
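
To make the steal-back idea from the abstract concrete, the following is a minimal Python sketch, not code from the thesis: the Processor class, the (owner, payload) task encoding, the scan-all-deques policy for finding the thief's own work, and stealing from the top of a random victim's deque are simplifying assumptions made here for illustration; the thesis specifies the actual localized work-stealing algorithm and its deque discipline.

import random
from collections import deque

class Processor:
    """Toy processor with a work deque; tasks are (owner_pid, payload) pairs."""
    def __init__(self, pid):
        self.pid = pid
        # Right end = bottom (owner works here); left end = top (thieves steal here).
        self.deque = deque()

    def push(self, task):
        self.deque.append(task)

    def steal_from_top(self):
        return self.deque.popleft() if self.deque else None

def localized_steal(thief, procs):
    """One steal attempt by a free processor under a simplified localized
    work-stealing rule: first try to take back a task the thief owns from
    another processor's deque (a "steal-back"); otherwise steal from the
    top of a uniformly random non-empty victim's deque."""
    # Steal-back: look for the thief's own work held by another processor.
    for victim in procs:
        if victim is thief:
            continue
        for task in victim.deque:
            if task[0] == thief.pid:
                victim.deque.remove(task)  # reclaim the thief's own task
                return task
    # Ordinary randomized steal when no one holds the thief's work.
    candidates = [p for p in procs if p is not thief and p.deque]
    return random.choice(candidates).steal_from_top() if candidates else None

if __name__ == "__main__":
    random.seed(0)
    procs = [Processor(i) for i in range(4)]
    # Processor 0 holds a mix of tasks, some of which are owned by processor 1.
    for owner in (0, 1, 2, 1, 3, 0):
        procs[0].push((owner, "task owned by %d" % owner))
    # Processor 1 is free: its first steal attempt is a steal-back.
    print("processor 1 stole:", localized_steal(procs[1], procs))

The sketch only illustrates the steal-back decision rule; the deque semantics, the randomization, and the accounting that yield the $T_1/P + O(T_\infty \lg P)$ and $T_1/P + O(T_\infty M)$ bounds quoted in the abstract are developed in the thesis itself.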

