pytorch_backend
Disable cuDNN option
We've occasionally had issues when running multiple models that use cuDNN (we sometimes see CUDNN_INTERNAL_ERROR, and sometimes GPU memory spikes when a cuDNN kernel runs), so we have found it beneficial to disable cuDNN in our own fork of the repo. It would be helpful to have an option to do this upstream.
If it would be helpful, I could try to find a repro of the CUDNN_INTERNAL_ERROR issue, but that may take a bit more time.
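For illustration, here is a minimal sketch of the kind of change in our fork, assuming cuDNN is simply turned off globally through the LibTorch/ATen context; the exact change proposed in this PR may differ (e.g. gating the call behind a per-model option):

```cpp
// Minimal sketch, not the exact patch in this PR: disable cuDNN globally
// via the ATen context so convolution/RNN kernels fall back to native
// CUDA implementations instead of cuDNN.
#include <torch/torch.h>

#include <iostream>

int main() {
  // Turn cuDNN off before any model is loaded or executed.
  at::globalContext().setUserEnabledCuDNN(false);

  // Everything from here on avoids cuDNN code paths.
  std::cout << "cuDNN enabled: " << std::boolalpha
            << at::globalContext().userEnabledCuDNN() << std::endl;
  return 0;
}
```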
@Tabrizian are you okay to take a look at this? Thanks
Thanks for your contribution! Could you also add some documentation regarding this in the readme?
Done!
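For reference, a sketch of what the documented usage could look like in a model's config.pbtxt. The parameter name DISABLE_CUDNN is an assumption here, chosen to match the style of the backend's other boolean parameters (e.g. DISABLE_OPTIMIZED_EXECUTION); the README is the authoritative reference for the exact name and default:

```
# Hypothetical example: the parameter name DISABLE_CUDNN is assumed,
# not confirmed by this thread. Check the backend README for the
# exact name and accepted values.
parameters: {
  key: "DISABLE_CUDNN"
  value: {
    string_value: "true"
  }
}
```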
@Tabrizian is this being looked at?
@kthui do you know who can review this change?
@HennerM I'm so sorry, I was out of the office when I was mentioned and may have missed the notification for this PR. This looks good to me, thanks for your contribution. We need to run this PR through CI and add some testing before merging; we'll merge it once the CI looks green.
@Tabrizian Thanks for approving. Can you help with merging as well?
Should be able to merge it soon. Sorry for the delay.