cs294-ai-sys-fa19

RL for Spark Job Scheduling

simon-mo opened this issue on Sep 11, 2019 · 0 comments

https://web.mit.edu/decima/

Learning Scheduling Algorithms for Data Processing Clusters

Efficiently scheduling data processing jobs on distributed compute clusters requires complex algorithms. Current systems use simple, generalized heuristics and ignore workload characteristics, since developing and tuning a scheduling policy for each workload is infeasible. In this paper, we show that modern machine learning techniques can generate highly-efficient policies automatically. Decima uses reinforcement learning (RL) and neural networks to learn workload-specific scheduling algorithms without any human instruction beyond a high-level objective, such as minimizing average job completion time. However, off-the-shelf RL techniques cannot handle the complexity and scale of the scheduling problem. To build Decima, we had to develop new representations for jobs’ dependency graphs, design scalable RL models, and invent RL training methods for dealing with continuous stochastic job arrivals. Our prototype integration with Spark on a 25-node cluster shows that Decima improves average job completion time by at least 21% over hand-tuned scheduling heuristics, achieving up to 2× improvement during periods of high cluster load.
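
To make the core idea concrete, here is a minimal, illustrative sketch (not Decima's actual implementation, which uses graph neural network embeddings over job DAGs and a much richer executor-assignment action space): a learned scoring function picks which runnable DAG stage to schedule next, and its parameters are trained with a REINFORCE-style policy gradient to reduce average job completion time. The toy job generator, the hand-crafted per-stage features standing in for a graph embedding, and helper names like `run_episode` are assumptions made purely for illustration.

```python
# Toy sketch: policy-gradient scheduling of job DAG stages on one executor.
# This simplifies Decima drastically; the features below stand in for the
# graph embedding Decima learns, and the environment is a made-up example.
import numpy as np

rng = np.random.default_rng(0)

def random_job(num_stages=4):
    """A toy job: a chain DAG of stages, each with a random duration."""
    durations = rng.uniform(1.0, 5.0, size=num_stages)
    # parents[i] lists stage indices that must finish before stage i can run
    parents = [list(range(max(0, i - 1), i)) for i in range(num_stages)]
    return {"durations": durations, "parents": parents,
            "done": np.zeros(num_stages, dtype=bool)}

def features(job, stage):
    """Hand-crafted per-stage features (placeholder for a learned embedding)."""
    remaining = job["durations"][~job["done"]].sum()
    return np.array([job["durations"][stage], remaining, 1.0])

def ready_stages(job):
    """Stages whose parents have all finished and that are not yet done."""
    return [i for i in range(len(job["durations"]))
            if not job["done"][i] and all(job["done"][p] for p in job["parents"][i])]

def run_episode(theta, num_jobs=5):
    """Schedule stages one at a time; return the trajectory and average JCT."""
    jobs = [random_job() for _ in range(num_jobs)]
    t, completion, traj = 0.0, {}, []
    while len(completion) < num_jobs:
        # Gather all (job, stage) actions that are currently runnable.
        actions = [(j, s) for j, job in enumerate(jobs) if j not in completion
                   for s in ready_stages(job)]
        feats = np.array([features(jobs[j], s) for j, s in actions])
        scores = feats @ theta
        probs = np.exp(scores - scores.max()); probs /= probs.sum()
        k = rng.choice(len(actions), p=probs)
        traj.append((feats, k, probs))
        j, s = actions[k]
        t += jobs[j]["durations"][s]
        jobs[j]["done"][s] = True
        if jobs[j]["done"].all():
            completion[j] = t
    return traj, float(np.mean(list(completion.values())))

# REINFORCE update: the reward is the negative average job completion time.
theta, baseline = np.zeros(3), None
for it in range(200):
    traj, avg_jct = run_episode(theta)
    reward = -avg_jct
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    grad = np.zeros_like(theta)
    for feats, k, probs in traj:
        # Gradient of log-softmax w.r.t. theta for the chosen action.
        grad += feats[k] - probs @ feats
    theta += 1e-3 * (reward - baseline) * grad
```

The real system replaces the hand-crafted features with message passing over the stage dependency graph, handles parallel executors and continuous job arrivals, and uses a variance-reduction scheme tailored to stochastic arrival sequences; this sketch only shows the score-stages-then-policy-gradient loop in miniature.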
