tiny-cuda-nn
Possibility to integrate tiny-cuda-nn with my own custom CUDA kernel?
Hi,
First of all, thanks for this amazing library! I was wondering if the following is doable (or how complicated it would be) with the tiny-cuda-nn framework.
I have a PyTorch model that uses a custom CUDA kernel to implement some of its forward/backward passes. The gradients from that kernel are connected back to PyTorch so that autodiff works with the other modules defined in PyTorch.
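Concretely, the current setup follows the usual custom C++/CUDA extension pattern built on `torch::autograd::Function`. A simplified sketch (all names such as `my_forward_kernel` and `MyCustomOp` are placeholders, not real library functions):

```cpp
// Simplified sketch of the existing extension; my_forward_kernel / my_backward_kernel
// stand in for the real CUDA kernel launchers implemented in .cu files.
#include <torch/extension.h>

using namespace torch::autograd;

// Declarations of the CUDA kernel launchers (implementations not shown).
torch::Tensor my_forward_kernel(const torch::Tensor& input);
torch::Tensor my_backward_kernel(const torch::Tensor& input, const torch::Tensor& grad_output);

struct MyCustomOp : public Function<MyCustomOp> {
	static torch::Tensor forward(AutogradContext* ctx, torch::Tensor input) {
		ctx->save_for_backward({input});
		return my_forward_kernel(input);
	}

	static tensor_list backward(AutogradContext* ctx, tensor_list grad_outputs) {
		auto saved = ctx->get_saved_variables();
		// Gradient w.r.t. the input, handed back to PyTorch's autograd.
		return {my_backward_kernel(saved[0], grad_outputs[0])};
	}
};

torch::Tensor my_custom_op(torch::Tensor input) {
	return MyCustomOp::apply(input);
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
	m.def("my_custom_op", &my_custom_op, "custom CUDA op with autograd support");
}
```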
Now, I would like to integrate tiny-cuda-nn into my model. The caveat is that the tiny-cuda-nn input is actually calculated inside my custom CUDA kernel (for design and efficiency reasons, it is not practical to expose this calculation to PyTorch), so I cannot use the PyTorch bindings you already provide. This means I would have to initialize a tiny-cuda-nn instance directly in my custom CUDA/C++ code; is that correct?
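If that is the right direction, I imagine the instantiation on the C++ side would follow the native API example in the README, building the same kind of JSON config the PyTorch bindings accept. A rough, untested sketch (names like `create_from_config` and `TrainableModel` are taken from the README I've seen and may differ across versions; the config values are just placeholders):

```cpp
// Sketch of instantiating tiny-cuda-nn from C++, following the README's native
// API example; config values are placeholders and may need adjusting.
#include <tiny-cuda-nn/config.h>

using namespace tcnn;

TrainableModel create_model(uint32_t n_input_dims, uint32_t n_output_dims) {
	nlohmann::json config = {
		{"loss", {{"otype", "L2"}}},
		{"optimizer", {{"otype", "Adam"}, {"learning_rate", 1e-3}}},
		{"encoding", {{"otype", "HashGrid"}}},
		{"network", {
			{"otype", "FullyFusedMLP"},
			{"activation", "ReLU"},
			{"output_activation", "None"},
			{"n_neurons", 64},
			{"n_hidden_layers", 2},
		}},
	};

	// Bundles loss, optimizer, network (with input encoding), and trainer.
	return create_from_config(n_input_dims, n_output_dims, config);
}
```

Since the weights would live in PyTorch, the loss/optimizer parts of the config might end up unused; I'm not sure whether there is a lower-level factory for just the network + encoding.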
From my understanding, what I'll have to do is (a rough sketch follows the list):
- Define the tiny-cuda-nn weights in PyTorch
- Pass the weights to my custom CUDA kernel from the Python process
- Initialize a tiny-cuda-nn instance in the CUDA/C++ code
- Set the tiny-cuda-nn parameters manually, using the weights passed in from the Python process
- Connect the forward/backward passes of my CUDA kernel with those of tiny-cuda-nn
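Continuing from the `model` created above, the last three steps might look something like the following. This is only a sketch: the exact signatures of `set_params`, `forward`, and `backward` have changed between tiny-cuda-nn versions (see `tiny-cuda-nn/object.h` in the checkout), and the precision choices, matrix layouts, and buffer ownership below are assumptions on my part, not verified code.

```cpp
#include <tiny-cuda-nn/config.h>
#include <tiny-cuda-nn/gpu_matrix.h>

using namespace tcnn;
using precision_t = network_precision_t; // typically __half

void tcnn_forward_backward(
	TrainableModel& model,
	precision_t* params_from_torch,   // device pointer to the weights defined in PyTorch (steps 1-2)
	precision_t* param_gradients,     // device buffer PyTorch's optimizer will read back (step 4)
	uint32_t n_input_dims,
	uint32_t batch_size,
	cudaStream_t stream
) {
	// Step 4: point tiny-cuda-nn at the externally owned parameters. Signature is
	// approximate; recent versions take (params, inference_params, gradients).
	model.network->set_params(params_from_torch, params_from_torch, param_gradients);

	// Step 5: the custom CUDA kernel writes the network inputs directly into
	// input.data() (column-major, one column per sample).
	GPUMatrix<float> input(n_input_dims, batch_size);
	GPUMatrix<precision_t> output(model.network->padded_output_width(), batch_size);

	// Forward pass; the returned context must be kept around for the backward pass.
	auto ctx = model.network->forward(stream, input, &output,
		/*use_inference_params=*/false, /*prepare_input_gradients=*/true);

	// dL/doutput comes from whatever loss sits downstream (possibly back in PyTorch);
	// dL/dinput is what gets propagated back through the custom kernel.
	GPUMatrix<precision_t> dL_doutput(model.network->padded_output_width(), batch_size);
	GPUMatrix<float> dL_dinput(n_input_dims, batch_size);

	// Backward pass: fills dL_dinput and accumulates parameter gradients
	// into the buffer passed via set_params.
	model.network->backward(stream, *ctx, input, output, dL_doutput, &dL_dinput);
}
```

The part I'd still need to figure out is how to wire `input.data()`, `dL_doutput.data()`, and `dL_dinput.data()` to my kernel's own buffers without extra copies.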
Thank you so much!