Kevin Paul
In terms of a user CLI to allow complete control over worker placement, the _complete_ solution would involve asking the user to specify a list of one `int` per MPI...
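As a concrete (purely hypothetical) sketch of what such a placement list might look like, each `int` could be the number of workers to start on the corresponding rank. None of the names below exist in Dask-MPI; this only illustrates the parsing and validation such a CLI would need.

```python
# Hypothetical sketch: parse a comma-separated placement list with one
# int per MPI rank, where each int is the number of workers to start on
# that rank (0 meaning "no worker here", e.g. a scheduler-only rank).
# This is NOT an existing Dask-MPI interface, just an illustration.

def parse_placement(spec: str, comm_size: int) -> list[int]:
    """Validate and return the per-rank worker counts."""
    counts = [int(s) for s in spec.split(",")]
    if len(counts) != comm_size:
        raise ValueError(
            f"placement list has {len(counts)} entries for {comm_size} ranks"
        )
    if any(c < 0 for c in counts):
        raise ValueError("worker counts must be non-negative")
    return counts

# e.g. 4 ranks: no workers on ranks 0-1, one worker each on ranks 2-3
print(parse_placement("0,0,1,1", 4))  # -> [0, 0, 1, 1]
```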
So, currently Dask-MPI _in the stand-alone script mode_ only works if you have an MPI comm size of 3 or larger. This proposed change would make it possible to run...
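The comm-size-3 minimum follows from the conventional rank layout in stand-alone script mode: rank 0 runs the scheduler, rank 1 runs the client script, and ranks 2 and up run workers. A minimal sketch of that mapping (illustrative only, not Dask-MPI's actual implementation):

```python
# Sketch of the rank-to-role mapping implied by the comm-size-3 minimum.
# Illustrative only, not Dask-MPI's actual code: it assumes the
# conventional layout of rank 0 = scheduler, rank 1 = client script,
# ranks 2+ = workers.

def role_for_rank(rank: int, comm_size: int) -> str:
    """Return the Dask role a given MPI rank would play."""
    if comm_size < 3:
        # With fewer than 3 ranks there is no rank left over for a
        # worker, which is why stand-alone script mode cannot run.
        raise ValueError("stand-alone script mode needs an MPI comm size >= 3")
    if rank == 0:
        return "scheduler"
    if rank == 1:
        return "client"
    return "worker"

# With 4 ranks: one scheduler, one client, and two workers.
print([role_for_rank(r, 4) for r in range(4)])
```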
> My desire to have a worker on the ranks 0 and 1 is that I don't know enough about the scheduler/job submission system I am using to request different...
> For "why not more than one worker": I learnt that the word "worker" is overloaded. For example when I instantiate one CUDAWorker I will end up with as many...
@jacobtomlinson: Oooh! Excited to hear about progress on the `dask-hpc-runner`. Been interested in that for a while. And I agree that the use cases are compelling. Thanks for chiming in...
Ok. I've been playing around with this today and there are solutions, but I don't think they are as pretty as anyone was hoping. So, I'm sharing my experiences in...
I don't think it is true that one MPI rank typically equates to one machine/node. In mixed threading/processing jobs, one MPI rank typically equates to one machine, with the assumption...
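The mixed threading/processing layout above can be made concrete with a little arithmetic; the core counts below are made-up numbers for illustration, not a recommendation.

```python
# Illustrative arithmetic for a mixed MPI + threading job: a small
# number of ranks per node, with threads filling the remaining cores.
# The numbers used below are assumptions, not measured values.

def threads_per_rank(cores_per_node: int, ranks_per_node: int) -> int:
    """How many threads each rank gets if ranks evenly split a node's cores."""
    if ranks_per_node <= 0 or cores_per_node % ranks_per_node != 0:
        raise ValueError("ranks must evenly divide the node's cores")
    return cores_per_node // ranks_per_node

# One rank on a 36-core node -> 36 threads; two ranks -> 18 threads each.
print(threads_per_rank(36, 1))  # -> 36
print(threads_per_rank(36, 2))  # -> 18
```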
To be clear about this, MPI typically doesn't decide where to place its ranks. Or rather, the user does not usually need to tell MPI where to place its ranks....
@jacobtomlinson: Thanks for the reply! I agree that there is enough "magic" going on here that I don't want to charge forward with anything that makes things worse. You are...
Ok. I'm working on a solution to this, but I think it needs to change the fundamental way that the `dask_mpi.initialize()` function works. In fact, it's so large of a...