tiny-cuda-nn
GPU Compatibility
Are there any minimum requirements for the GPUs to be used? I am trying to run the PyTorch bindings on V100 GPUs, but I am getting some strange results. I am wondering whether this is because the V100 does not support some operations, or whether I simply did not build the PyTorch extension correctly.
The V100 should be supported. Could you shed more light on the strange results you experience? A self-contained example for reproduction would be great.
@Tom94
Thanks for your reply! I just tried the latest version and could no longer reproduce the issue. Perhaps my environment was simply misconfigured before?
A related question: do you think it would make sense to allow the Python bindings to target multiple architectures? Currently they build for only one target:
https://github.com/NVlabs/tiny-cuda-nn/blob/fe6e3ae75b2e0d444b63994fc43590fb4cff29cc/bindings/torch/setup.py#L45
But the tiny-cuda-nn library itself seems to support targeting multiple architectures, so it would be great if the Python bindings supported this as well! Thanks!
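For context, a setup.py that targets a single architecture typically detects the compute capability of one GPU and emits a single `-gencode` flag for nvcc. A simplified sketch of that pattern (illustrative only, not the actual tiny-cuda-nn code; the helper name is made up):

```python
# Hypothetical sketch: build nvcc flags for exactly ONE target architecture,
# which is why a wheel built on e.g. an RTX 3090 (sm_86) won't have
# optimal (or any) code for a V100 (sm_70).

def single_arch_nvcc_flags(major: int, minor: int) -> list:
    """Return -gencode flags for a single compute capability.

    E.g. (7, 0) for a V100 yields code compiled only for sm_70.
    """
    cc = f"{major}{minor}"
    return [f"-gencode=arch=compute_{cc},code=sm_{cc}"]
```

In a real setup.py, `major` and `minor` would come from something like `torch.cuda.get_device_capability()`, so whichever GPU is visible at build time determines the one architecture the extension supports.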
Hi there, unfortunately tiny-cuda-nn doesn't support generating optimally efficient code for multiple architectures simultaneously -- although it can target the lowest common denominator with the caveat that it'll run suboptimally on newer architectures.
This is difficult to work around (and not planned for now), so your best bet is to simply compile the PyTorch bindings for the lowest architecture you want to run on. I'd be open to adding a CLI argument to setup.py to avoid users having to hardcode this.
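One low-effort way to avoid hardcoding would be an environment-variable override rather than a CLI argument, since setup.py invocations via pip don't pass custom flags through easily. A hedged sketch (the variable name `TCNN_TARGET_CC` is hypothetical, not an existing tiny-cuda-nn option):

```python
import os

# Hypothetical sketch: let the user pin the build target, e.g.
#   TCNN_TARGET_CC=70 pip install ./bindings/torch
# falling back to the auto-detected capability otherwise.

def resolve_target_cc(detected_cc: str) -> str:
    """Return the compute capability to build for.

    An explicit override (for the lowest architecture you intend to
    run on) takes precedence over the capability detected at build time.
    """
    return os.environ.get("TCNN_TARGET_CC", detected_cc)
```

With something like this, a user building on an A100 box but deploying to V100s could set the override to `70` and get a binary that runs (if suboptimally) on both.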