maestrowf
Parallel local
Add a parallel local adapter type for improved throughput on multi-core machines/allocations. The adapter is resource aware and can support non-uniform task counts/sizes across jobs.
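For illustration, here's a minimal sketch of what resource-aware local scheduling with non-uniform job sizes could look like. The `Job` and `run_jobs` names are hypothetical and not part of the maestrowf API; it just shows the core-accounting idea under a simple FIFO policy.

```python
# Hypothetical sketch: run jobs concurrently without oversubscribing cores.
import os
import subprocess
import threading
from collections import deque
from dataclasses import dataclass

@dataclass
class Job:
    cmd: list           # command to run, e.g. ["bash", "step.sh"]
    cores: int = 1      # cores this job needs; sizes may vary per job

def run_jobs(jobs, total_cores=None):
    """Run jobs concurrently, never exceeding total_cores in flight."""
    total_cores = total_cores or os.cpu_count()
    pending = deque(jobs)
    cv = threading.Condition()
    free = total_cores
    threads = []

    def worker(job):
        nonlocal free
        subprocess.run(job.cmd)
        with cv:
            free += job.cores       # return this job's cores to the pool
            cv.notify_all()

    with cv:
        while pending:
            job = pending.popleft()
            # Simple FIFO policy: block until the head job's cores fit.
            while free < job.cores:
                cv.wait()
            free -= job.cores
            t = threading.Thread(target=worker, args=(job,))
            threads.append(t)
            t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    run_jobs([Job(["echo", "small"]), Job(["echo", "big"], cores=4)])
```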
Ok, just circling back to this after far too long. So, how about this for a plan: add it now as a separate adapter and let it get tested on user workflows for a bit; then, in a follow-up, replace it with a stable version of the pyaestro executor and also just drop the standard local adapter?
The local adapter replacement is likely a good follow-on, given that local parallel runs share more in common with the scheduled adapters than with the current local one. Launcher tokens then become a feature of all adapters, with threaded/mpirun-type launcher tokens enabled locally.
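A sketch of the idea, assuming a Maestro-style `$(LAUNCHER)` token in step commands: the same step text expands differently per adapter. The `substitute` helper and the launcher strings are illustrative, not the actual maestrowf token machinery.

```python
# Per-adapter expansion of a launcher token (illustrative only).
def substitute(cmd: str, launcher: str, procs: int) -> str:
    """Replace the $(LAUNCHER) token with an adapter-specific prefix."""
    return cmd.replace("$(LAUNCHER)", launcher.format(procs=procs))

step_cmd = "$(LAUNCHER) ./simulate --input params.yaml"

# A parallel local adapter might expand the token to a bare mpirun...
print(substitute(step_cmd, "mpirun -n {procs}", procs=8))
# ...while a SLURM adapter would expand the same token to srun.
print(substitute(step_cmd, "srun -n {procs}", procs=8))
```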
One more thought on this particular PR: do we want to enable launcher tokens on this adapter right now and hide the pain of mpirun/srun/etc., or save that for a second PR enabling more general token-replacement facilities (i.e. per-step specifications of tokens, possibly even user-specified named tokens to mix and match within single steps, ...)?
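To make the "named tokens" idea concrete, here's a hypothetical illustration of mixing user-specified tokens within one step; neither the token names nor the mapping format exist in maestrowf today.

```python
# Hypothetical named-token expansion within a single step command.
import re

tokens = {
    "LAUNCHER-MPI": "mpirun -n 16",
    "LAUNCHER-OMP": "env OMP_NUM_THREADS=8",
}

step_cmd = "$(LAUNCHER-OMP) $(LAUNCHER-MPI) ./hybrid_app"

# Expand every $(NAME) occurrence using the user-supplied token map.
expanded = re.sub(r"\$\(([\w-]+)\)", lambda m: tokens[m.group(1)], step_cmd)
print(expanded)  # env OMP_NUM_THREADS=8 mpirun -n 16 ./hybrid_app
```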