Provably Efficient Adaptive Scheduling for Parallel Jobs
Author(s): He, Yuxiong; Hsu, Wen Jing; Leiserson, Charles E.
Scheduling competing jobs on multiprocessors has long been an important problem for parallel and distributed systems. The challenge is to ensure global, system-wide efficiency while offering a level of fairness to user jobs. Various degrees of success have been achieved over the years, but few existing schemes address both efficiency and fairness over a wide range of workloads. Moreover, to obtain analytical results, most require prior information about jobs, which may be difficult to obtain in real applications. This paper presents two novel adaptive scheduling algorithms: GRAD for centralized scheduling, and WRAD for distributed scheduling. Both GRAD and WRAD ensure fair allocation under all levels of workload, and they offer provable efficiency without requiring prior information about a job's parallelism. Moreover, they provide effective control over scheduling overhead and ensure efficient utilization of processors. To the best of our knowledge, they are the first non-clairvoyant scheduling algorithms that offer such guarantees. We also believe that our new approach of a resource request-allotment protocol deserves further exploration. Specifically, both GRAD and WRAD are O(1)-competitive with respect to mean response time for batched jobs, and O(1)-competitive with respect to makespan for non-batched jobs with arbitrary release times. Simulation results show that, for non-batched jobs, the makespan produced by GRAD is no more than 1.39 times the optimal on average, and it never exceeds 4.5 times. For batched jobs, the mean response time produced by GRAD is no more than 2.37 times the optimal on average, and it never exceeds 5.5 times.
Computer Science (CS)
Adaptive Scheduling, Competitive Analysis, Data-parallel Computing, Greedy Scheduling, Instantaneous Parallelism, Job Scheduling, Makespan, Mean Response Time, Multiprocessing, Multiprogramming, Parallelism Feedback, Parallel Computation, Processor Allocation, Span, Thread Scheduling, Two-level Scheduling, Space Sharing, Trim Analysis, Work, Work-stealing
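To give a flavor of the request-allotment idea mentioned in the abstract, the sketch below shows one allocation round of a generic desire-driven protocol: each job reports how many processors it wants, and the allocator grants equal shares capped at each job's request, redistributing leftovers. This is an illustrative sketch only (the function name `allot_round` and all details are our own, in the spirit of dynamic equipartitioning); the actual allocators used by GRAD and WRAD, and their parallelism-feedback mechanisms, are specified in the paper itself.

```python
def allot_round(desires, total_procs):
    """One round of a desire-driven request-allotment protocol (illustrative
    sketch, not the paper's algorithm). `desires` maps job id -> requested
    processor count; returns a map of job id -> allotted processor count."""
    grants = {job: 0 for job in desires}
    # Jobs that still want more processors than they have been granted.
    active = {job for job, d in desires.items() if d > 0}
    remaining = total_procs
    while active and remaining > 0:
        # Offer each unsatisfied job an equal share (at least 1 processor).
        share = max(remaining // len(active), 1)
        satisfied = set()
        for job in sorted(active):
            # Grant no more than the share, the job's deficit, or what is left.
            grant = min(share, desires[job] - grants[job], remaining)
            grants[job] += grant
            remaining -= grant
            if grants[job] == desires[job]:
                satisfied.add(job)
            if remaining == 0:
                break
        # Surplus from satisfied jobs is redistributed in the next pass.
        active -= satisfied
    return grants
```

For example, with 8 processors and desires `{'A': 2, 'B': 10, 'C': 10}`, job A's modest request is fully satisfied while B and C split the remainder evenly, yielding `{'A': 2, 'B': 3, 'C': 3}`.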