Lawrence Mitchell
/ok to test
RMM has no concept of distributed memory parallelism built in, nor does it need to. What you need to arrange is that the different ranks in your process correctly select...
Can you try removing the lines:

```
local_rank = int(os.environ.get("LOCAL_RANK", 0))
rmm.reinitialize(pool_allocator=True, devices=local_rank)
```

And in the `demo_basic` function, move the `reinitialize` call to after you have set the device with...
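To make the suggested ordering concrete, here is a minimal sketch, assuming the script follows the PyTorch DDP tutorial's `demo_basic(rank, world_size)` structure; the process-group setup and toy model below are placeholders from that tutorial pattern, not part of RMM, and the only point being illustrated is that the device is selected before `rmm.reinitialize` runs:

```
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

import rmm


def demo_basic(rank, world_size):
    # Placeholder process-group setup (address/port are assumptions for the sketch).
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    # Bind this process to its GPU first...
    torch.cuda.set_device(rank)

    # ...and only then set up RMM's pool, so it targets the device that is
    # now current for this rank rather than device 0.
    rmm.reinitialize(pool_allocator=True, devices=rank)

    # Placeholder model and wrapper; the training loop is elided.
    model = torch.nn.Linear(10, 10).to(rank)
    ddp_model = DDP(model, device_ids=[rank])

    dist.destroy_process_group()
```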
Ah, I _suspect_ that the problem was that 23.12 did not have https://github.com/rapidsai/rmm/pull/1407, but 24.02 does. I'll go ahead and close.
@harrism good point, let me pull that out into a separate change (I wanted these so I could actually make the recommendation to do the right thing in the docs).
> I think there's no real need to separate it; just to clarify the title and description.

Ok, done. I also added a test of the new functionality.
> Can we close #1132 with this change?

Yes, though I didn't cover as many APIs as that one (exposing everywhere is tracked in #1515).
> This all looks great. My only question is whether `README.md` is getting too big. Maybe this belongs in the Python docs instead? e.g. `guide.md`?

Arguably yes, a question then...