
Parallel local

Open jwhite242 opened this issue 4 years ago • 2 comments

Add a parallel local adapter type for improved throughput on multi-core machines/allocations. The adapter is resource aware and can support non-uniform task/job sizes.
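To make the resource-aware idea concrete, here is a minimal sketch of a local executor that packs jobs of non-uniform core counts onto one machine under a fixed core budget. The names (`LocalParallelExecutor`, `submit`) are hypothetical, and this is not maestrowf's actual adapter code:

```python
# Minimal sketch (not the actual maestrowf implementation): a resource-aware
# local executor that never oversubscribes the machine's cores.
import os
import subprocess
import threading

class LocalParallelExecutor:
    """Run shell commands locally within a fixed core budget."""

    def __init__(self, max_cores=None):
        self.max_cores = max_cores or os.cpu_count()
        self.free_cores = self.max_cores
        self.cv = threading.Condition()

    def submit(self, cmd, cores=1):
        if cores > self.max_cores:
            raise ValueError("job requests more cores than the budget allows")
        # Block until enough cores are free, then launch the job in a thread.
        with self.cv:
            self.cv.wait_for(lambda: self.free_cores >= cores)
            self.free_cores -= cores
        t = threading.Thread(target=self._run, args=(cmd, cores), daemon=True)
        t.start()
        return t

    def _run(self, cmd, cores):
        try:
            subprocess.run(cmd, shell=True, check=False)
        finally:
            # Return the cores and wake any submitters waiting for capacity.
            with self.cv:
                self.free_cores += cores
                self.cv.notify_all()

# Usage: jobs of different sizes share an 8-core budget.
if __name__ == "__main__":
    ex = LocalParallelExecutor(max_cores=8)
    threads = [ex.submit("sleep 1", cores=n) for n in (4, 2, 2, 4)]
    for t in threads:
        t.join()
```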

jwhite242 · Jan 28 '21

OK, just circling back to this after far too long. How about this for a plan: add it now as a separate adapter, let it get tested on user workflows for a bit, and then in a follow-up replace it with a stable version of the pyaestro executor and drop the standard local adapter entirely?

Replacing the local adapter is likely a good follow-on, given that local parallel runs have more in common with the scheduled adapters than with the plain local one. Launcher tokens then become a feature of all adapters, with threaded and mpirun-type launcher tokens enabled locally; a rough sketch of the idea follows.
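As a sketch of what uniform launcher-token handling could look like, the snippet below substitutes a `$(LAUNCHER)` token per adapter, so the same step command works locally or under a scheduler. The `LAUNCHERS` table and `expand_launcher` helper are illustrative assumptions, not maestrowf's real token machinery:

```python
# Illustrative sketch only; adapter names and launcher flags are assumptions.
LAUNCHERS = {
    "local": lambda procs: f"mpirun -n {procs}",  # mpirun-type local launch
    "slurm": lambda procs: f"srun -n {procs}",
    "lsf":   lambda procs: f"jsrun -p {procs}",
}

def expand_launcher(cmd, adapter, procs):
    """Replace a $(LAUNCHER) token with the adapter-appropriate launcher."""
    return cmd.replace("$(LAUNCHER)", LAUNCHERS[adapter](procs))

# The same step command runs under any adapter:
step_cmd = "$(LAUNCHER) ./my_sim --input params.yaml"
print(expand_launcher(step_cmd, "local", 4))  # mpirun -n 4 ./my_sim ...
print(expand_launcher(step_cmd, "slurm", 4))  # srun -n 4 ./my_sim ...
```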

jwhite242 · Feb 02 '22

One more thought on this particular PR: do we want to enable launcher tokens on this adapter right now and hide the pain of mpirun/srun/etc., or save that for a second PR that enables more general token-replacement facilities (i.e., per-step specification of tokens, possibly even different user-specified named tokens to mix and match within single steps, ...)?
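A rough sketch of the more general, user-named token idea (every name here, `TOKENS`, `expand_tokens`, and the token strings themselves, is hypothetical and not from maestrowf):

```python
# Hypothetical sketch of per-step, user-named launcher tokens.
TOKENS = {
    "$(MPI_LAUNCHER)":    "mpirun -n {procs}",
    "$(SERIAL_LAUNCHER)": "",  # run the command directly, no launcher
}

def expand_tokens(cmd, **params):
    """Expand every named token in a step command, then fill in parameters."""
    for token, template in TOKENS.items():
        cmd = cmd.replace(token, template)
    # Collapse extra whitespace left by empty launchers.
    return " ".join(cmd.format(**params).split())

# A single step mixing two differently-launched commands:
step = "$(MPI_LAUNCHER) ./solve && $(SERIAL_LAUNCHER) ./postprocess"
print(expand_tokens(step, procs=8))
# -> mpirun -n 8 ./solve && ./postprocess
```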

jwhite242 · Feb 02 '22