Kaden Uhlig
I also just compiled jaxlib from source (pointing it at my CUDA and cuDNN installations) successfully:

```
python build/build.py --enable_cuda --cuda_path /usr/local/cuda-11.2 --cudnn_path /usr/ --cuda_version 11.2 --cudnn_version 8
```

I...
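For reference, after installing the locally built wheel I sanity-check that jaxlib actually sees the GPU with something like this (just the standard JAX device queries, nothing specific to my setup):

```python
import jax

# Should list GPU devices rather than falling back to CPU.
print(jax.devices())
print(jax.default_backend())  # expect "gpu" on a working CUDA build
```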
I have two graphics cards (one integrated and one discrete; currently the integrated one shouldn't be used in any of the following, as verified by running `nvidia-smi` while using JAX)....
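In case it matters, this is roughly how I keep the integrated card out of the picture (assuming the discrete GPU shows up as index 0 in `nvidia-smi`; that index is a guess you'd adjust per machine):

```python
import os

# Hide everything except the discrete card from CUDA before importing jax.
# "0" is assumed to be the discrete GPU's index as reported by nvidia-smi.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import jax
print(jax.devices())  # should now list only the discrete GPU
```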
I've tried looking for any other files that might be left over from past installations/other libraries like PyTorch. I removed all those (even though I know PyTorch, for example, is...
@hawkinsp I get the same error, even after adding a symlink from `/usr/local/cuda` to `/opt/cuda`
@skye I'm still seeing the exact same errors for the provided sample (and anything involving convolution), but literally everything else in JAX works. I'm currently using Flax for another project...
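For completeness, a minimal repro along these lines (my stand-in for the provided sample, not the sample itself) is enough to trigger it here, while non-convolution code runs fine:

```python
import jax.numpy as jnp
from jax import lax

# NCHW input and OIHW kernel; any convolution like this hits the error on my machine.
x = jnp.ones((1, 1, 28, 28))
k = jnp.ones((8, 1, 3, 3))

y = lax.conv(x, k, window_strides=(1, 1), padding="SAME")
print(y.shape)  # (1, 8, 28, 28)
```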
@hawkinsp `/opt/cuda/targets/x86_64-linux/lib/libcudnn.so.8`
@hawkinsp It is symlinked to both `/usr/local/cuda` and `/usr/local/cuda-11.0`, but I can try adding it to my `LD_LIBRARY_PATH`. I also just tried using the exact versions that @astanziola used, and...
So I've tried both of those before; I can't remember the exact output, but I know they either didn't work or only delayed this issue temporarily. I can try those and the...
> Can you share the complete log with `TF_CPP_MIN_LOG_LEVEL=0` when you run a convolution? The log at the top of the bug is missing a few things, I'm hoping maybe...
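For reference, one way to capture that log is to set the variable before JAX is imported (my understanding is that the C++ logging level is read at import time), along these lines:

```python
import os

# Enable verbose XLA/jaxlib C++ logging; must be set before importing jax.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "0"

import jax.numpy as jnp
from jax import lax

# Hypothetical stand-in for the failing convolution from the sample.
x = jnp.ones((1, 1, 28, 28))
k = jnp.ones((8, 1, 3, 3))
lax.conv(x, k, window_strides=(1, 1), padding="SAME")
```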
@hawkinsp That's fair; I just thought it was a bug because it was only happening to me with JAX and not with PyTorch (I can use much, much larger models...