dask-mpi
Deploy Dask using MPI4Py
updates:
- [github.com/psf/black: 24.4.0 → 24.4.2](https://github.com/psf/black/compare/24.4.0...24.4.2)
I believe that in some simple cases we don't need a rank dedicated to the Scheduler and a rank dedicated to the Client. We should provide a way...
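For context, here is a minimal sketch of the batch-mode layout that this proposal would relax. The rank assignments in the comments follow dask-mpi's documented default (scheduler on rank 0, client script on rank 1, workers on the rest); the script name and the toy computation are illustrative.

```python
# hello_batch.py -- run with: mpirun -np 4 python hello_batch.py
from dask_mpi import initialize
from distributed import Client

# initialize() splits the MPI ranks by role: rank 0 runs the scheduler,
# rank 1 falls through to the code below as the client, and the
# remaining ranks become workers.
initialize()

client = Client()  # connects to the scheduler started on rank 0
print(client.submit(sum, [1, 2, 3]).result())  # prints 6
```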
# Update - Ready for Review

I have added a `dask_mpi.execute()` function, though it looks a little different from the sketch in the "Previous Header" below. The idea is that...
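A rough usage sketch of the idea, heavily hedged: the exact signature and behaviour of `execute()` shown here are assumptions based on the description above, not the merged API, and the toy client function is illustrative.

```python
# Hypothetical use of the proposed dask_mpi.execute(); the signature
# and behaviour shown here are assumptions, not the merged API.
from dask_mpi import execute
from distributed import Client

def client_code():
    # Assumed to run only on the rank chosen as the client, while the
    # other ranks serve as scheduler and workers for the duration.
    with Client() as client:
        print(client.submit(sum, range(10)).result())  # prints 45

# execute() is expected to wire up scheduler, workers, and client
# across the MPI ranks, run client_code on one of them, and then
# tear everything down.
execute(client_code)
```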
Previously, `initialize()` allowed creating the MPI comm world after `import distributed`:

```python
from distributed import Client, Nanny, Scheduler
from distributed.utils import import_term
...

def initialize(...):
    if comm is None:
        from ...
```
Since a "major makeover" was mentioned for dask-mpi in dask/distributed#7192, I thought I would ask if support for non-MPI backends was a possibility. That is, use something like Slurm's `srun`...
**Describe the issue**: I'm trying out a simple hello-world style `dask-mpi` example, and the computation returns the right result, but I'm getting exceptions when the client finishes. I'm running the...
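For reference, a minimal hello-world of the kind described, assuming batch mode; closing the client via a context manager makes the teardown order explicit, which can matter when chasing exceptions raised at interpreter exit (it is not claimed to resolve this report).

```python
# hello_world.py -- run with: mpirun -np 3 python hello_world.py
from dask_mpi import initialize
from distributed import Client

initialize()

# The context manager closes the client connection before the
# interpreter starts tearing down modules and event loops.
with Client() as client:
    assert client.submit(lambda x: x + 1, 10).result() == 11
```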
I would like to explore the possibility of Dask starting MPI. This is sort of the reverse behavior of what the dask-mpi package does today. To clarify the situation I'm...
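To make the "reverse" direction concrete, one conceivable mechanism is mpi4py's dynamic process management, where an already-running process (for example, a function submitted to a Dask worker) spawns an MPI job. This is purely an illustration of the idea, not something dask-mpi provides; the child script name `mpi_task.py` and the message exchange are assumptions.

```python
import sys
from mpi4py import MPI

def run_mpi_job(nprocs=4):
    # Spawn an MPI job from within an already-running Python process.
    # The children are assumed to do their work and have their rank 0
    # send a result back to the parent via MPI.Comm.Get_parent().
    comm = MPI.COMM_SELF.Spawn(sys.executable, args=["mpi_task.py"], maxprocs=nprocs)
    result = comm.recv(source=0, tag=0)  # result sent by child rank 0 (assumed)
    comm.Disconnect()
    return result
```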
**What happened**: When Dask-MPI is used in *batch* mode (i.e., using `initialize()`) on Linux with Python >3.8, it does not properly shut down the scheduler and worker processes when the...
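Not a fix for the bug itself, but as a point of reference: calling `Client.shutdown()` at the end of the batch script asks the scheduler (and through it the workers) to exit explicitly, instead of relying on the implicit teardown described above. Whether this works around the hang on the affected Python versions is not verified here.

```python
from dask_mpi import initialize
from distributed import Client

initialize()

client = Client()
# ... batch-mode work ...

# Explicitly ask the scheduler and workers to exit, rather than
# relying on the teardown that runs when the script ends.
client.shutdown()
```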
Using `pip install dask-mpi`:

```
$ pip install dask-mpi
$ mpirun -np 2 dask-mpi --name=test-worker --nthreads=1 --memory-limit=0 --scheduler-file=test.json
distributed.http.proxy - INFO - To route to workers diagnostics web server please...
```
**What happened**: This may be an issue in astropy, so my apologies if this is in the wrong location; however, it appears to happen only when using `dask-mpi`. I'm using `dask-mpi`...