Cannot use faster-whisper
Error: Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
Me too, I ran into this issue with NVIDIA driver version 535.
I did some research on Google and found https://github.com/SYSTRAN/faster-whisper/issues/516
It seems to say that I need CUDA 11.8, but I am running the repo with the official Docker image, and I checked that the PyTorch inside it ships with CUDA 11.8.
So now I have no idea how to deal with this issue. Can anyone help?
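In case it helps with diagnosis, here is a quick check (just a sketch using ctypes on Linux, nothing faster-whisper-specific) that shows whether the dynamic loader can find the library at all and what LD_LIBRARY_PATH the process actually sees:

```python
# Diagnostic sketch (assumes Linux + Python 3): try to load the missing cuDNN
# library directly and print the LD_LIBRARY_PATH visible to this process.
import ctypes
import os

print("LD_LIBRARY_PATH =", os.environ.get("LD_LIBRARY_PATH", "<not set>"))

try:
    ctypes.CDLL("libcudnn_ops_infer.so.8")
    print("libcudnn_ops_infer.so.8 loaded OK")
except OSError as exc:
    print("could not load libcudnn_ops_infer.so.8:", exc)
```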
Same issue here.
+1
Export LD_LIBRARY_PATH so the loader can find the cuDNN libraries.
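Concretely, assuming the cuBLAS and cuDNN 8.x pip wheels (nvidia-cublas-cu11, nvidia-cudnn-cu11) are installed in the active environment, this sketch prints the directories that LD_LIBRARY_PATH needs to include and checks that the missing library is actually there:

```python
# Sketch (assumes the nvidia-cublas-cu11 and nvidia-cudnn-cu11 pip wheels are
# installed): locate their library directories, which LD_LIBRARY_PATH must
# include, and confirm that the cuDNN 8.x library from the error exists there.
import os
import nvidia.cublas.lib
import nvidia.cudnn.lib

cublas_dir = os.path.dirname(nvidia.cublas.lib.__file__)
cudnn_dir = os.path.dirname(nvidia.cudnn.lib.__file__)

print("cuBLAS libs:", cublas_dir)
print("cuDNN libs: ", cudnn_dir)
print("has libcudnn_ops_infer.so.8:",
      "libcudnn_ops_infer.so.8" in os.listdir(cudnn_dir))
```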
+1
I installed CUDA 11.8 on my Ubuntu machine and switched to it, which solved this problem. Another issue is that a core dump occurs during inference; the fix is to run the command below in the terminal before using faster-whisper:
export LD_LIBRARY_PATH=$(python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))')
Hope this helps.
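Once the path is exported, a quick end-to-end check (a sketch based on the usage example in the faster-whisper README; the model size "small" and "audio.wav" are placeholders) confirms that GPU inference works:

```python
# Verification sketch: run one transcription on the GPU. "small" and
# "audio.wav" are placeholders; replace them with your own model size and file.
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.wav")

print("detected language:", info.language)
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```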