Embed parallelization into the multi_voxel_fit decorator.
I've started playing around with the idea that the multi_voxel_fit decorator could use paramap instead of iterating over voxels. If we can make this work generally, that would be pretty cool. So far, I've only tested this with the fwdti model, and in that case the additional changes to the code are rather minimal, which gives me hope that we might be able to use this wherever we use this decorator, i.e., in csd, dsi, forecast, fwdti, gqi, ivim, mapmri, mcsd, qtdmri, and shore (!).
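To make the idea concrete, here is a minimal sketch of what a paramap-based multi_voxel_fit could look like. This is illustrative only: it is not dipy's actual decorator, and the `paramap` helper and `engine` keyword here are simplified stand-ins for the real API.

```python
# Illustrative sketch -- NOT dipy's actual multi_voxel_fit implementation.
from functools import wraps
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def paramap(func, in_list, engine="serial", n_jobs=4):
    """Map `func` over `in_list`, serially or via a simple parallel engine."""
    if engine == "serial":
        return [func(x) for x in in_list]
    with ThreadPoolExecutor(max_workers=n_jobs) as ex:
        return list(ex.map(func, in_list))


def multi_voxel_fit(single_voxel_fit):
    """Turn a single-voxel fit function into a fit over a masked volume."""
    @wraps(single_voxel_fit)
    def new_fit(data, mask, **kwargs):
        vox = data[mask]  # (n_voxels, n_measurements)
        fits = paramap(single_voxel_fit, vox, **kwargs)
        out = np.zeros(mask.shape, dtype=object)
        out[mask] = fits  # scatter fits back into the volume
        return out
    return new_fit


@multi_voxel_fit
def mean_fit(voxel_signal):
    # Toy "model": the fit is just the mean signal in the voxel.
    return voxel_signal.mean()


data = np.arange(24, dtype=float).reshape(2, 3, 4)
mask = data[..., 0] > 0  # boolean (2, 3) mask
result = mean_fit(data, mask, engine="serial")
```

Switching `engine="serial"` to a parallel value is then the only change the caller needs, which is what makes reusing this across all the reconstruction models attractive.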
Hello @arokem, thank you for updating!
Cheers! There are no PEP8 issues in this Pull Request. :beers:
Comment last updated at 2024-04-26 11:29:59 UTC
Thank you for starting this @arokem!
Have you looked at https://github.com/dipy/dipy/pull/1418? I think some ideas can be reused here.
Oh yeah - that’s a great pointer! I’ll try to incorporate the ideas you implemented there into this PR.
Codecov Report
Attention: Patch coverage is 80.85106%, with 9 lines in your changes missing coverage. Please review.
Project coverage is 83.71%. Comparing base (60669b8) to head (cd0e653).
Additional details and impacted files
```
@@            Coverage Diff             @@
##           master    #2593      +/-   ##
==========================================
- Coverage   83.75%   83.71%   -0.04%
==========================================
  Files         153      153
  Lines       21317    21338      +21
  Branches     3440     3446       +6
==========================================
+ Hits        17853    17863      +10
- Misses       2608     2618      +10
- Partials      856      857       +1
```
| Files | Coverage Δ |
|---|---|
| dipy/reconst/csdeconv.py | 87.38% <100.00%> (ø) |
| dipy/reconst/dsi.py | 80.21% <100.00%> (ø) |
| dipy/reconst/forecast.py | 92.82% <100.00%> (ø) |
| dipy/reconst/fwdti.py | 94.28% <100.00%> (ø) |
| dipy/reconst/gqi.py | 54.00% <100.00%> (ø) |
| dipy/reconst/ivim.py | 96.00% <100.00%> (ø) |
| dipy/reconst/mapmri.py | 92.09% <100.00%> (ø) |
| dipy/reconst/mcsd.py | 88.69% <100.00%> (ø) |
| dipy/reconst/qtdmri.py | 93.87% <100.00%> (ø) |
| dipy/reconst/shore.py | 91.90% <100.00%> (ø) |
| ... and 2 more | |
I ran a benchmark on a beefy 24-CPU compute server with the recent commit. I get a roughly 13x speedup for fitting the fwdti model with engine="joblib" relative to the default serial mode. I should mention that the server is also doing a bunch of other work, so it's not the cleanest benchmark, but still quite promising.
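For reference, this kind of comparison can be made with a timing harness as simple as the following sketch. The `timed` helper and `fake_fit` workload are illustrative, not the actual benchmark script used here:

```python
# Hedged sketch of a best-of-N timing harness for serial vs. parallel fits.
import time


def timed(fn, *args, repeat=3):
    """Return the best-of-`repeat` wall-clock time for fn(*args)."""
    best = float("inf")
    for _ in range(repeat):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best


def fake_fit(n):
    # Stand-in CPU-bound workload for a model fit.
    return sum(i * i for i in range(n))


serial_t = timed(fake_fit, 100_000)
# speedup = timed(serial_fit, data) / timed(parallel_fit, data), same data
```

Best-of-N timing reduces (but does not eliminate) the noise from other jobs sharing the machine, which is the caveat mentioned above.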
Does anyone understand why half the CI actions are still pending? They have been pending since Friday!
No, but I will restart them first
Hi @arokem,
It seems we have a new issue with the DIPY installation. I do not know yet what changed. The CIs are failing in all PRs. I will start to dig into it.
Just rebased on top of #2595
Does anyone understand these CI failures? I don't think they are related to the content of the PR, but I might be missing something.
> Does anyone understand these CI failures? I don't think they are related to the content of the PR, but I might be missing something.
Both failures are on the parallelization CIs, with a memory leak issue. This might be due to some of the parallel packages changing environment variable flags, and those flags could have an impact on this parallelized function.
This is all supposition; it's what comes to mind first.
The failing functions use OpenMP.
Hey @skoudoro, I noticed that you did not pin the ray version in #2600, instead pinning only protobuf, but I am seeing this again on the CI: https://github.com/dipy/dipy/runs/6634820045?check_suite_focus=true#step:9:119, which suggests to me that I should pin ray to 0.11 for now. Does that make sense to you? I'll give it a try here.
Or possibly 1.11.1
We're back to this failure: https://github.com/dipy/dipy/runs/6645881563?check_suite_focus=true#step:9:3751
Interestingly, I can't get this to fail locally on my machine (in an env with dask, ray and joblib installed). I also don't exactly understand how this is related to openmp. Does set_number_of_points use openmp?
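One way to probe the environment-variable hypothesis is to pin the OpenMP/BLAS thread counts before any workers start, which avoids oversubscription when the outer loop is already parallelized over voxels. This is a generic sketch (the helper name is made up, and nothing like it is currently in the PR):

```python
# Hedged sketch: limiting native threading per worker. These environment
# variables are read at library startup by OpenMP, OpenBLAS, and MKL, so
# they must be set before the relevant extensions are loaded/forked.
import os


def limit_native_threads(n_threads=1):
    """Set common threading env vars so each worker uses `n_threads` threads."""
    for var in ("OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS"):
        os.environ[var] = str(n_threads)


limit_native_threads(1)
```

If the CI failures disappear with the thread counts pinned, that would point at an interaction between the parallel backend and OpenMP rather than at the decorator itself.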
Plan to make progress here:
- [ ] Set up experimental datasets: All of the models except for DSI can use multi-shell data. Only CSD (I think) can run on single-shell data. For multi-shell datasets we can use HBN and HCP. For DSI, I guess we can use the dsi dataset we have in our data fetchers. We'll need to set up fetchers for HBN data (see #2695) and for HCP (see #2696).
- [ ] Set up experimental scripts (separate repo, probably): these should run every one of the models that are decorated in this PR with: 1. Serial mode. 2. Parallelized by voxel with dask, ray, joblib. 3. Parallelized by chunk with dask, ray, joblib. 4. Parallelized with different backends if possible. 5. For ray/dask, parallelize on a big distributed AWS cluster.
- [ ] Run the experiments. We'll need to have some uniform hardware settings. We'll want to run this on different OS (Windows, Linux, Mac OS) and maybe on different kinds of computational architectures (e.g., distributed cluster vs. one big machine).
- [ ] Separately benchmark timing (this is straightforward) and memory (using https://github.com/pythonprofilers/memory_profiler).
- [ ] Compare and contrast 😄
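The per-voxel vs. per-chunk split in the second item can be sketched like this (the `chunks` helper is illustrative, not part of the PR):

```python
# Illustrative sketch of per-voxel vs. per-chunk task splitting.
import numpy as np


def chunks(seq, n_chunks):
    """Split `seq` into roughly equal contiguous chunks."""
    bounds = np.linspace(0, len(seq), n_chunks + 1).astype(int)
    return [seq[bounds[i]:bounds[i + 1]] for i in range(n_chunks)]


voxels = list(range(10))
per_voxel_tasks = [[v] for v in voxels]  # one task per voxel: max parallelism
per_chunk_tasks = chunks(voxels, 3)      # fewer, larger tasks: less overhead
```

The trade-off the experiments should expose is scheduling/serialization overhead (per-voxel) versus load imbalance and coarser granularity (per-chunk).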
Hey @skoudoro : just to let you know that we are still running some experiments with this, and looking into some potentially problematic behaviors (e.g., ray spilling objects to disk, which can fill up the hard drive when parallelizing with ray). I think it's best that we continue to sort this out before we merge this pr. So, I think that we should push this back to a subsequent release for now. I'll keep updating here.
The errors on the CI are really puzzling and hard to reproduce locally, and we worry that they might be some funky interaction with joblib, so I am removing the multi_voxel decorator for that model; we can reinstate it in a separate PR later on, once we fix everything else here.
OK - having removed the decorator from the SFM model, it seems that the only remaining error on the CI is completely unrelated. Or am I missing something?
At any rate, @asagilmore has now completed an extensive set of experiments with this PR, and we are glad to say that Ray in particular provides a substantial speedup for reconstruction models (on the order of 10x), across two different models and a pretty wide range of hardware setups and chunking schemes. We're writing up a report about this, and we'd be happy to have input on the results and ideas we are developing there (the repo for that report is here: https://github.com/nrdg/2024-dipy-parallelization).
With that said, I think that this code and the results we report are ready for a review, and are hopefully close to a shape where they can be merged for an upcoming release.
Looks like installing pytables on mac is (newly?) broken: https://github.com/dipy/dipy/actions/runs/8847440312/job/24295269319?pr=2593#step:6:125
Yes, I saw that with pytables.
Not sure what we can do apart from reporting it.
The last release was 6 months ago, so not sure what changed last week.
Maybe a release of h5py...
There was one on April 10th: https://pypi.org/project/h5py/3.11.0/
I'm trying to pin to 3.10 in https://github.com/dipy/dipy/pull/2593/commits/cd0e653bdb56e6bb6accdce637483bc1d1837f5b. Let's see what we learn.
Following #3202, what's the right way to catch a warning thrown when importing one of the optional dependencies? I am getting:
```
reconst/tests/test_multi_voxel.py::test_multi_voxel_fit
  /home/runner/work/dipy/dipy/venv/lib/python3.10/site-packages/ray/_private/pydantic_compat.py:2: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
    from pkg_resources import packaging

reconst/tests/test_multi_voxel.py::test_multi_voxel_fit
  /home/runner/work/dipy/dipy/venv/lib/python3.10/site-packages/pkg_resources/__init__.py:2832: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    declare_namespace(pkg)
```
I believe this is emitted when importing ray. Should I explicitly ignore it in a context manager around that import?
The "right way" would be to add the warning message with ignore status in the pyproject.toml.
see here: https://github.com/dipy/dipy/blob/60669b888e7bc322a8b973f3a0b8121e4fd7f7d1/pyproject.toml#L184-L186
If it is too much for the pyproject.toml, we could add/create a specific Python file for that.
For example, mne-python does it directly in conftest.py instead of pyproject.toml.
see here: https://github.com/mne-tools/mne-python/blob/main/mne/conftest.py#L131-L178
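For concreteness, the conftest.py approach could look something like this sketch, using the standard pytest_configure hook and config.addinivalue_line (the warning patterns are taken from the messages above; this is not committed code):

```python
# Sketch of a conftest.py-based warning filter (the mne-python approach).
# `pytest_configure` is a standard pytest hook; `addinivalue_line` appends
# entries to the ini-style `filterwarnings` list at startup.


def pytest_configure(config):
    warning_lines = (
        "ignore:pkg_resources is deprecated as an API.*:DeprecationWarning",
        "ignore:Deprecated call to `pkg_resources.declare_namespace.*"
        ":DeprecationWarning",
    )
    for line in warning_lines:
        config.addinivalue_line("filterwarnings", line)
```

Keeping the filters in Python rather than TOML also allows comments and conditional filters (e.g., only when ray is installed).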
So, this is something to decide together, what is your opinion @arokem and @jhlegarreta ?
I like it better in the conftest
Hats off for this work, Ariel.
> So, this is something to decide together, what is your opinion @arokem and @jhlegarreta ?
Having filtering rules in both pyproject.toml and the conftest.py file would make things harder to follow, as we would need to look at two files instead of one. Also, I'm not sure whether pytest would end up overriding the rules in one file with the ones it reads last.
So, as Serge says, if the list of rules becomes too lengthy, better to keep them all in the conftest.py file.
OK, thank you for your feedback @arokem and @jhlegarreta.
I will create a PR later today to update that. Also, I think we should document the warnings policy in the developer guide.