
Deployment Policies

Open mrocklin opened this issue 9 years ago • 2 comments

Currently, deployment is handled by manually scaling the cluster up and down:

jobids = cluster.start_workers(10)
cluster.stop_workers(jobids)

This system is straightforward and effective, but there are other options that we could consider, such as adaptively scaling up or down based on load or on memory needs.
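One way to sketch an adaptive policy like this, assuming the hypothetical `start_workers`/`stop_workers` API above and an invented load metric (neither is something dask-drmaa implements today):

```python
def target_worker_count(pending_tasks, tasks_per_worker=10,
                        min_workers=1, max_workers=100):
    """Pick a worker count proportional to outstanding work.

    Illustrative policy only: aim for roughly `tasks_per_worker`
    queued tasks per worker, clamped to [min_workers, max_workers].
    """
    wanted = -(-pending_tasks // tasks_per_worker)  # ceiling division
    return max(min_workers, min(max_workers, wanted))


# A scaling loop over the manual API above might then look like
# (pseudocode; `scheduler`, `pick_idle`, etc. are assumed names):
#
#     current = {}  # jobid -> worker
#     while cluster_is_running():
#         n = target_worker_count(len(scheduler.pending_tasks))
#         if n > len(current):
#             current.update(cluster.start_workers(n - len(current)))
#         elif n < len(current):
#             cluster.stop_workers(pick_idle(current, len(current) - n))
```

The interesting design questions are in the loop, not the formula: how often to poll, and how to pick idle workers to retire without losing their in-memory results.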

Some preliminary work was done in this direction a while ago, described here: http://matthewrocklin.com/blog/work/2016/09/22/cluster-deployments

mrocklin avatar Oct 13 '16 17:10 mrocklin

This might be kind of a dumb question, but how well is Dask able to predict a priori the memory requirements of a task before it is dispatched to a worker? If Dask had 10 workers with 1 GB of RAM each, would it know when the graph contained operations that required 10 GB workers?

davidr avatar Oct 17 '16 16:10 davidr

Dask only knows the size of a task's result after the task has computed. It could instead expand the cluster on the fly, as it saw total memory use increasing while more tasks remained to compute.
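That trigger could be as simple as the following rule (a sketch; the threshold and names are assumptions, not Dask internals):

```python
def should_add_workers(bytes_in_use, bytes_available,
                       tasks_remaining, high_water=0.8):
    """Expand the cluster when memory pressure is high and work remains.

    Mirrors the idea above: result sizes are only known after tasks
    run, so the policy reacts to observed memory use rather than
    predicting it up front.
    """
    if tasks_remaining == 0:
        return False
    return bytes_in_use >= high_water * bytes_available
```

For example, with 9 GB in use out of 10 GB available and tasks still queued, this fires; once the queue drains, it stops regardless of memory use.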

In normal operation Dask assumes that all tasks can run on all workers unless the user explicitly states otherwise (doc page, issue for expansion), so Dask would fail on the 10 GB task when only 1 GB workers are present (unless we were to implement the issue linked above).
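That failure mode (a 10 GB task in a cluster of 1 GB workers) can be illustrated with a simple feasibility check; this is an explanatory sketch, not Dask's actual scheduling logic:

```python
def placeable(task_bytes, worker_memory_limits):
    """Return True if at least one worker could hold the task's result."""
    return any(task_bytes <= limit for limit in worker_memory_limits)


workers = [1e9] * 10              # ten workers with 1 GB each
print(placeable(10e9, workers))   # a 10 GB task fits nowhere -> False
print(placeable(0.5e9, workers))  # a 500 MB task fits -> True
```

An adaptive deployment policy that only added more 1 GB workers would never make such a task placeable; it would need to request a larger worker instead.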

The motivation for different deployment policies would be to remove the need for users to think about the size of their cluster, both in terms of scaling out, and in terms of cleaning up afterwards.

mrocklin avatar Oct 17 '16 16:10 mrocklin