
Support HPC & GPU local clusters

Open oguitart opened this issue 5 years ago • 8 comments

Is it possible to use a local HPC or GPU cluster? I understand that it works perfectly with AWS, but what about when using AWS is not possible and other resources are available? Can it be configured to use those resources? Thanks,

oriol

oguitart avatar Dec 13 '19 14:12 oguitart

Please check #29 .

savingoyal avatar Dec 13 '19 17:12 savingoyal

To add a bit of flavor on #29, which was very specific to Slurm and accelerators: the batch plugin that is currently included is basically a thin way of launching a process on a remote machine (in this case an AWS Batch instance). You could write a similar plugin for your specific environment. You would basically need to provide:

  • a commonly accessible object store (Batch uses S3, but you could use NFS-mounted partitions, Lustre partitions, or whatever shared file system you have)
  • a way to remotely launch, monitor, and collect stdout/stderr from a process. This could technically be as simple as a slightly fancier ssh -c

Without knowing your exact setup, it's hard for me to help further but hopefully this helps a little bit.
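To make the second bullet concrete, a bare-bones launcher really can be little more than a wrapper around ssh. A minimal sketch follows; the host name, shared working directory, and step command are all hypothetical placeholders, not anything Metaflow itself defines:

```python
import shlex

def build_remote_command(host, step_cmd, workdir="/shared/metaflow"):
    """Construct an ssh invocation that would run a Metaflow step on a
    remote node, inside a directory on a shared file system, so that
    stdout/stderr stream back to the local process that spawned it.
    The host and workdir here are illustrative assumptions."""
    remote = "cd {} && {}".format(shlex.quote(workdir), step_cmd)
    return ["ssh", host, remote]

# Example: the list returned here could be handed to subprocess.Popen,
# which would then be monitored by the local Metaflow runtime.
cmd = build_remote_command("node01", "python flow.py step start")
```

A real plugin would of course also need retry handling and a way to propagate exit codes, but the core "remote launch + collect output" requirement is genuinely this small.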

romain-intel avatar Dec 15 '19 07:12 romain-intel

Thank you for the explanation. We have an HPC cluster with SGE and a GPU cluster with Slurm, so I think your suggestion of creating a plugin is the right way to go. I need to start reading the documentation and code to understand how to build this kind of plugin.

oguitart avatar Dec 16 '19 13:12 oguitart

We currently have little documentation regarding the internals of Metaflow (our initial release is primarily targeted at users of Metaflow rather than developers of Metaflow). Feel free, however, to ask any questions here. To get you started, the batch plugin is in plugins/aws/batch. In there, a few things to keep in mind:

  • When launching something on Batch, Metaflow actually launches two things: a local process that is responsible for submitting the batch job and monitoring it, and then the actual batch job itself.
  • The local process is launched through batch_cli.py (i.e., a command line that contains batch step ...).
  • That command line is generated in batch_decorator.py, in the runtime_step_cli function. When a step needs to execute, the runtime (see runtime.py) launches a subprocess to execute the step; to do so, it generates a command line to call metaflow with (see the use of runtime_step_cli in runtime.py).
  • batch.py provides the functionality called by batch_cli.py.
  • batch_client.py contains the code that communicates with the AWS Batch backend. This is probably the part you will need to change the most to talk to Slurm (or whatever else you want).
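As a rough illustration of the batch_client.py point: a Slurm-backed client would mostly translate a step's resource requests into sbatch flags on submission, then poll squeue/sacct for status. A hypothetical sketch of the submission side (the function name and defaults are my own, not Metaflow API; the flags shown are standard Slurm options):

```python
def build_sbatch_command(job_name, step_cmd, cpus=1, gpus=0, mem_mb=4096):
    """Build (but do not execute) an sbatch invocation for a Metaflow
    step, mirroring how batch_client.py maps resource requests onto the
    AWS Batch API. --wrap lets sbatch run a shell command directly
    without a separate job script."""
    args = [
        "sbatch",
        "--job-name={}".format(job_name),
        "--cpus-per-task={}".format(cpus),
        "--mem={}".format(mem_mb),
    ]
    if gpus:
        args.append("--gres=gpu:{}".format(gpus))
    args.append("--wrap={}".format(step_cmd))
    return args

# Example: a step requesting 2 CPUs and 1 GPU.
cmd = build_sbatch_command("flow-start", "python flow.py step start",
                           cpus=2, gpus=1)
```

The monitoring half (parsing the job id from sbatch's output and polling it) would live alongside this, playing the role the job-status polling loop plays in the Batch client.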

Let me know if you have more questions. As I mentioned, I'm happy to give you more information if needed. Please also see the caveats I posted in #29.

romain-intel avatar Dec 16 '19 17:12 romain-intel

Thank you very much for all the information. I'll let you know if I have any questions,

oguitart avatar Dec 17 '19 13:12 oguitart

You may want to consider opening this up to arbitrary workload managers such as Parsl, Dask (job queue), RADICAL, etc. While these are typically full workflow/workload managers, their core workload capability can be used to execute arbitrary tasks on these machines. Some serious thought will need to go into this, as you would manage your own environments as well, but there has been some pretty hefty work getting these kinds of task-management systems onto Slurm/PBS and general academic clusters/leadership platforms.

dgasmith avatar Dec 19 '19 14:12 dgasmith

Is it possible to run Metaflow across a local cluster of machines? I've got a cluster of machines locally, and I'd rather use that than prematurely deploy to AWS when I may not need it. I've tried googling "metaflow local cluster", but this was the first result and the rest didn't look particularly relevant (all of them advocate training and verifying locally before scaling to AWS).

IanQS avatar Apr 26 '22 04:04 IanQS

@IanQS If you can deploy Kubernetes on top of this cluster, then our latest release, which adds support for Kubernetes, will get you going.

savingoyal avatar Apr 26 '22 04:04 savingoyal