[REQ] Rapids Memory Manager (RMM) Support
Description
https://github.com/rapidsai/rmm
Note that Warp does not currently support any external memory allocators, which makes this task non-trivial.
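For context, RMM already exposes pluggable allocator hooks for other frameworks. Below is a minimal sketch (not Warp code) of routing PyTorch's CUDA allocations through a shared RMM pool using RMM's existing `rmm.allocators.torch.rmm_torch_allocator`; an equivalent hook for Warp is what this issue asks for and does not exist today:

```python
import rmm
import torch
from rmm.allocators.torch import rmm_torch_allocator

# Create a single memory pool on the current device; every library
# routed through RMM allocates from and frees back to this pool.
rmm.reinitialize(pool_allocator=True, initial_pool_size=2**30)  # 1 GiB

# Route PyTorch's CUDA allocations through RMM. This must run before
# PyTorch makes its first CUDA allocation.
torch.cuda.memory.change_current_allocator(rmm_torch_allocator)

# This tensor is now served from the shared RMM pool rather than
# PyTorch's private caching allocator.
x = torch.zeros(1024, device="cuda")
```

CuPy (and through it much of RAPIDS) can be pointed at the same pool via `rmm.allocators.cupy.rmm_cupy_allocator`; Warp is the missing participant.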
Context
Improve interop with PyTorch
Thank you for opening this issue!
For posterity: if/when this feature is implemented, the physicsnemo package has several model pipelines combining PyTorch + RAPIDS + Warp that could take advantage of this support. In particular, the DoMINO datapipeline + model uses Warp kernels both in preprocessing and in the PyTorch model implementation. Preprocessing also uses RAPIDS' cuML package, and combining all of these bumps up against memory limits, since PyTorch will reserve most of the GPU memory even when it is not in use.
I'm happy to be a tester if/when you need one.
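The contention described above can be reproduced with a small sketch: PyTorch's caching allocator keeps freed blocks reserved, so other libraries in the same process cannot use that memory (exact numbers vary by device and driver; the workaround shown costs re-allocation overhead on the next PyTorch call):

```python
import torch

device = "cuda:0"

# Allocate ~1 GiB through PyTorch's caching allocator.
x = torch.empty(1024**3, dtype=torch.uint8, device=device)
print(torch.cuda.memory_allocated(device))  # ~1 GiB in live tensors

del x
# The tensor is gone, but the caching allocator keeps the block
# reserved, so Warp or cuML cannot allocate from that memory.
print(torch.cuda.memory_allocated(device))  # ~0
print(torch.cuda.memory_reserved(device))   # still ~1 GiB

# Workaround: return cached blocks to the driver. A shared RMM pool
# would make this unnecessary, since all libraries would draw from
# and release back to the same pool.
torch.cuda.empty_cache()
print(torch.cuda.memory_reserved(device))   # ~0
```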
Happy to help on the RMM side; reach out if I can assist with a review or design discussion!
Hi @shi-eric - just wanted to check, is this something Warp can support?
We would need @nvlukasz to weigh in on the design. For now I'm pushing this work to the v1.12 release.