AdaNeRF
Potential for an AMD ROCm port?
Hi! Just stumbled upon this and I'm incredibly impressed! I only have an AMD GPU at the moment, and I'm curious how feasible it would be to support ROCm as a backend. PyTorch has had support for it since March 2021: https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/
I notice that there's only one file that actually requires CUDA: https://github.com/thomasneff/AdaNeRF/blob/c6a64b8433d11684eb6b397cbc4666653d8018ae/src/native/disc_depth_multiclass_cuda.cu. Do you know how easy it'd be to port this to generic PyTorch Python code so that it could be used across different backends?
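For context, a quick way to check whether an installed PyTorch build ships with ROCm support (a minimal sketch; in the ROCm wheels the HIP backend is exposed through the familiar `torch.cuda` namespace, so existing device strings keep working):

```python
import torch

# ROCm wheels report a HIP version here; CUDA-only builds leave it as None.
print("HIP version:", torch.version.hip)

# On ROCm builds the HIP backend is surfaced through torch.cuda,
# so the usual availability check and "cuda" device strings still apply.
if torch.cuda.is_available():
    x = torch.randn(4, device="cuda")  # lands on the AMD GPU under ROCm
    print(x.device)
```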
Thanks!
Hi!
I don't have any experience with AMD/ROCm, but I can tell you that the CUDA kernel is only used for training the DONeRF reference, and even there it is optional: standard PyTorch training is used as a fallback. I don't know whether anything else in the code base would break with a different backend (device placement of tensors on CPU/GPU might), but the custom CUDA extension kernel should not cause any issues in that regard!
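To illustrate the pattern described above, here is a minimal sketch of an optional compiled extension with a pure-PyTorch fallback and device-agnostic tensor placement. The extension module name, its `forward` signature, and the toy depth-bucketing logic are all hypothetical stand-ins, not the actual AdaNeRF implementation:

```python
import torch

try:
    # Hypothetical name for a compiled extension; the real kernel lives in
    # src/native/disc_depth_multiclass_cuda.cu and is optional for training.
    import disc_depth_multiclass_cuda as _ext
except ImportError:
    _ext = None

def discretize_depth(depth: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Toy stand-in: bucket normalized depth values into class indices."""
    if _ext is not None and depth.is_cuda:
        # Fast compiled path, only if the extension was built and the
        # tensor lives on the GPU (hypothetical API).
        return _ext.forward(depth, num_classes)
    # Pure-PyTorch fallback runs on any backend (CPU, CUDA, or ROCm/HIP).
    return (depth.clamp(0, 1) * (num_classes - 1)).round().long()

device = "cuda" if torch.cuda.is_available() else "cpu"
depth = torch.rand(1024, device=device)   # stays backend-agnostic
labels = discretize_depth(depth, num_classes=128)
print(labels.shape, labels.device)
```

With a structure like this, removing or skipping the compiled extension leaves training functional on any backend PyTorch supports; only the speed of that one path changes.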
- Thomas