Approximation algorithms for stochastic scheduling problems
Author(s)
Dean, Brian C. (Brian Christopher), 1975-
Other Contributors
Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.
Advisor
Michel X. Goemans.
Abstract
In this dissertation we study a broad class of stochastic scheduling problems characterized by the presence of hard deadline constraints. The input to such a problem is a set of jobs, each with an associated value, processing time, and deadline. We would like to schedule these jobs on a set of machines over time. In our stochastic setting, the processing time of each job is random, known in advance only as a probability distribution (and we make no assumptions about the structure of this distribution). Only after a job completes do we know its actual "instantiated" processing time with certainty. Each machine can process only a single job at a time, and each job must be assigned to only one machine for processing. After a job starts processing, we require that it be allowed to complete: it cannot be canceled or "preempted" (put on hold and resumed later). Our goal is to devise a scheduling policy that maximizes the expected value of jobs that are scheduled by their deadlines. A scheduling policy observes the state of our machines over time, and any time a machine becomes available for use, it selects a new job to execute on that machine.

Scheduling policies can be classified as adaptive or non-adaptive based on whether or not they utilize information learned from the instantiated processing times of previously completed jobs in their future scheduling decisions. A novel aspect of our work lies in studying the benefit one can obtain through adaptivity: we show that for all of our stochastic scheduling problems, adaptivity can improve the expected value obtained by an optimal policy by at most a small constant factor.

All of the problems we consider are at least NP-hard, since they contain the deterministic 0/1 knapsack problem as a special case. We therefore seek to develop approximation algorithms: algorithms that run in polynomial time and compute a policy whose expected value is provably close to that of an optimal adaptive policy. For all the problems we consider, we can approximate the expected value obtained by an optimal adaptive policy to within a small constant factor (which depends on the problem under consideration, but is always less than 10). A small handful of our results are pseudo-approximation algorithms, delivering an approximately optimal policy that is feasible with respect to a slightly expanded set of deadlines. Our algorithms utilize a wide variety of techniques, ranging from fairly well-established methods like randomized rounding to more novel techniques, such as those we use to bound the expected value obtained by an optimal adaptive policy.

In the scheduling literature to date, and also in practice, the "deadline" of a job refers to the time by which the job must be completed. We introduce a new model, called the start deadline model, in which the deadline of a job instead governs the time by which we must start the job. While there is no difference between this model and the standard "completion deadline" model in a deterministic setting, we show that for our stochastic problems one can generally obtain much stronger approximation results, with much simpler analyses, in the start deadline model.

The simplest problem variant we consider is the so-called stochastic knapsack problem, where all jobs share a common deadline and we schedule them on a single machine.
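To make the stochastic knapsack setting concrete, the following is a minimal sketch (in Python) of the model and of one natural non-adaptive policy: fix an ordering of the jobs by value per unit of expected processing time, run them in that order on the single machine, and collect a job's value only if it finishes by the common deadline. This sketch is illustrative only; it is not the thesis's algorithm and carries no approximation guarantee, the names and the example instance are hypothetical, and the expected value is simply estimated by Monte Carlo simulation.

    import random

    def simulate_policy(jobs, order, deadline, trials=20000):
        """Estimate the expected value collected by a fixed (non-adaptive) ordering.

        jobs: list of (value, sample_size) pairs, where sample_size() draws one
        processing time from the job's distribution.  A job's value is collected
        only if the job finishes by the common deadline (the "completion
        deadline" model described in the abstract); once started, a job always
        runs to completion (no preemption).
        """
        total = 0.0
        for _ in range(trials):
            t = 0.0
            for i in order:
                if t >= deadline:          # no time left to start another job
                    break
                value, sample_size = jobs[i]
                t += sample_size()         # job occupies the machine until it finishes
                if t <= deadline:
                    total += value
        return total / trials

    def greedy_order(jobs, samples=5000):
        """Order jobs by value per unit of (empirically estimated) mean processing time."""
        def mean_size(sample_size):
            return sum(sample_size() for _ in range(samples)) / samples
        ratios = [(value / max(mean_size(s), 1e-9), i) for i, (value, s) in enumerate(jobs)]
        return [i for _, i in sorted(ratios, reverse=True)]

    if __name__ == "__main__":
        random.seed(0)
        # Hypothetical instance: three jobs with exponentially distributed sizes.
        jobs = [(5.0, lambda: random.expovariate(1.0)),   # mean size 1.0
                (8.0, lambda: random.expovariate(0.5)),   # mean size 2.0
                (3.0, lambda: random.expovariate(2.0))]   # mean size 0.5
        order = greedy_order(jobs)
        print("ordering:", order)
        print("estimated expected value:", simulate_policy(jobs, order, deadline=3.0))

An adaptive policy, by contrast, could reconsider which job to start next after observing how much time the already-completed jobs actually consumed; the adaptivity-gap results described above bound how much such reconsideration can help.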
The most general variant we consider involves scheduling jobs with individual deadlines on a set of "unrelated" parallel machines, where the value of a job and its processing time distribution can vary depending on the machine to which it is assigned.

We also discuss algorithms based on dynamic programming for stochastic scheduling problems and their relatives in a discrete-time setting (where processing times are small integers), and we show how to use a new technique from signal processing called zero-delay convolution to improve the running time of dynamic programming algorithms for some of these problems.
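The discrete-time dynamic programming mentioned above lends itself to a small worked example. The sketch below, which assumes integer processing times and a common deadline in the start deadline model, computes the exact expected value of a fixed job ordering by repeatedly convolving processing-time distributions to obtain each job's start-time distribution. It uses a plain quadratic-time convolution; zero-delay convolution is a way to accelerate exactly this kind of repeated, truncated convolution and is not reproduced here. The function names and the example instance are hypothetical.

    def convolve(p, q, horizon):
        """Plain O(horizon^2) convolution of two distributions, truncated at `horizon`."""
        r = [0.0] * (horizon + 1)
        for a, pa in enumerate(p):
            if pa == 0.0:
                continue
            for b, qb in enumerate(q):
                if a + b <= horizon:
                    r[a + b] += pa * qb
        return r

    def expected_value_start_deadline(jobs, deadline):
        """Expected value of a fixed job ordering in the start deadline model.

        jobs: list of (value, dist) pairs in the chosen order, where dist[t] is
        the probability that the job's processing time equals the integer t.
        A job contributes its value whenever it is *started* by the common
        deadline, i.e. with probability Pr[start time <= deadline].
        """
        start_dist = [1.0] + [0.0] * deadline   # the first job starts at time 0
        total = 0.0
        for value, dist in jobs:
            total += value * sum(start_dist)    # probability mass still within the deadline
            # The next job's start time is this job's start time plus its size.
            start_dist = convolve(start_dist, dist, deadline)
        return total

    if __name__ == "__main__":
        # Hypothetical instance: each job's size is 1 or 2 with equal probability.
        size_dist = [0.0, 0.5, 0.5]
        jobs = [(4.0, size_dist), (3.0, size_dist), (2.0, size_dist)]
        print(expected_value_start_deadline(jobs, deadline=3))   # 4 + 3 + 2*0.75 = 8.5

Roughly speaking, zero-delay convolution produces the entries of a convolution incrementally as they are needed, which is what makes it suitable for speeding up the repeated convolutions in a dynamic program of this shape; the details are developed in the thesis itself.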
Description
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. [109]-113).
Date issued
2005
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.