faster-whisper
Could not load library libcudnn_ops_infer.so.8
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
I'm using the "nvidia/cuda:12.2.0-base-ubuntu20.04" image on Google Cloud with NVIDIA T4 GPUs.
The regular whisper package works fine on CUDA.
You need CUDA 11.8.
Could not load library libcudnn_ops_infer.so.8
You can find cuBLAS and cuDNN libs for Linux under Releases at https://github.com/Purfview/whisper-standalone-win
Not tested, report if they work.
Check whether your LD_LIBRARY_PATH is set and points to your CUDA installation.
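A quick way to do that check is to walk LD_LIBRARY_PATH yourself and look for the library from the error message. A minimal sketch in Python (the helper name `find_on_ld_path` is hypothetical, not part of faster-whisper):

```python
import os

def find_on_ld_path(libname, search_path=None):
    """Return the first directory on the colon-separated search path
    that contains `libname`, or None if it is not found."""
    if search_path is None:
        search_path = os.environ.get("LD_LIBRARY_PATH", "")
    for d in search_path.split(":"):
        if d and os.path.isfile(os.path.join(d, libname)):
            return d
    return None

# None means the loader will not find the library via LD_LIBRARY_PATH
print(find_on_ld_path("libcudnn_ops_infer.so.8"))
```

If this prints None, the error above is expected: the dynamic loader has no directory on its search path containing the cuDNN 8 library.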
I am also having a similar problem. I tried uninstalling CUDA 12.2 and cuDNN 9.x, then installing and pointing at CUDA 11.8.0. I also used the pip-based command in the instructions and set my $LD_LIBRARY_PATH in the terminal prior to running the script.
Is it possible there are disconnects either between Jupyter Notebook and the actual virtual environment, or maybe the virtual environment and the base OS?
@justinthelaw try adding the path to .../**/torch/lib to your LD_LIBRARY_PATH.
@bestasoff @Benny739 I was able to fix this particular issue by uninstalling all of the NVIDIA dependencies for CUDA 12.x and reinstalling CUDA 11.8. Now I am running into a different problem that I'll discuss in a separate issue.
Having a similar issue. I'm trying to get Faster Whisper to run off a docker build.
I'm trying to use the docker image: pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime
Unfortunately, getting this libcudnn_ops_infer.so.8 issue as well. Anyone know how I might add the necessary additional libraries? I can't use the official Nvidia one it seems (was too large for my smaller system to handle).
need cuda 11.8
As in "at least 11.8" or "exactly 11.8"? I have CUDA Version: 12.0 installed (in WSL2/Ubuntu) but get this error.
If you are installing CUDA via pip in a virtual environment (and the same goes for on host, VM, or in a container):
# point to VENV's local CUDA 11.8 python lib
export LD_LIBRARY_PATH=${PWD}/.venv/lib64/python3.11/site-packages/nvidia/cublas/lib:${PWD}/.venv/lib64/python3.11/site-packages/nvidia/cudnn/lib
My previous comment about needing to downgrade my host CUDA toolkit and drivers was wrong. You just need to have a host system with drivers that supports up to or past the CUDA version required by the library.
If you continue to have trouble, please provide the pip dependencies installed in your dev/prod environment, where those deps are located in the environment, and also post the outputs of the following:
nvidia-smi
nvcc --version
Hi everybody, and thank you for helping me in solving this issue!
Expanding on @justinthelaw's comment, I used the following command instead:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/path/to/venv/lib64/python3.x/site-packages/nvidia/cublas/lib:$HOME/path/to/venv/lib64/python3.x/site-packages/nvidia/cudnn/lib
With this you append the paths to the $LD_LIBRARY_PATH rather than overwriting it. In the path, /path/to/venv must be substituted with the actual location (and name) of your virtual environment. The same applies to python3.x, where the x must be substituted with the Python version in use.
As a final note, export applies only to the terminal in which it is issued. You may want to append the line to $HOME/.bashrc to make it persistent.
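If you'd rather not hardcode the python3.x segment of those paths, a short Python sketch can derive them from the running interpreter. This assumes the nvidia-cublas / nvidia-cudnn pip wheels are installed in the active environment:

```python
import os
import sysconfig

# site-packages directory of the interpreter that is currently running,
# so the python3.x segment never needs to be spelled out by hand
site = sysconfig.get_paths()["purelib"]

# directories where the pip-installed NVIDIA wheels place their shared libraries
paths = [os.path.join(site, "nvidia", sub, "lib") for sub in ("cublas", "cudnn")]

# a value suitable for appending to LD_LIBRARY_PATH
print(":".join(paths))
```

You could then use it as, for example, `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(python3 print_nvidia_libs.py)` (script name hypothetical).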
@justinthelaw I am facing the same issue. Here are the answers to the questions you asked:
NVIDIA-SMI
NVCC --VERSION
PIP DEPENDENCIES INSTALLED packages.txt
LOCATION:
/home/anaconda3/envs/my_env/bin
Sweet. I was able to get this working.
I installed the NVIDIA software from the README. That caused issues: I had the same libcudnn_ops_infer.so.8 error as the original poster.
Steps to fix:
- Went back to the factory GPU driver:
sudo apt-get purge nvidia-*
sudo apt autoremove
sudo apt install system76-driver-nvidia
Your factory driver will likely be different from mine.
- Ran the alternative instructions in the README (I use Linux)
pip install nvidia-cublas-cu11 nvidia-cudnn-cu11
export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`
pip install faster-whisper
- Tried the quickstart code in the README with the jfk.flac file from the tests/data directory
- The first time it ran, it downloaded the model
- The second time, it transcribed the data
- The key was setting the LD_LIBRARY_PATH env var
me@me:~/projects/speech$ source .venv/bin/activate
(.venv) me@me:~/projects/speech$ python3 test3.py
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
Aborted (core dumped)
(.venv) me@me:~/projects/speech$ export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`
(.venv) me@me:~/projects/speech$ python3 test3.py
Detected language 'en' with probability 0.929688
[0.00s -> 3.00s] And so my fellow Americans,
[3.00s -> 8.00s] ask not what your country can do for you,
[8.00s -> 11.00s] ask what you can do for your country.
Hope this helps someone!
This happened in docker-compose / Docker. To solve it, I had to execute the following command inside the container:
export LD_LIBRARY_PATH=/usr/local/lib/python3.9/site-packages/torch/lib:$LD_LIBRARY_PATH
Problem: Could not load library libcudnn_ops_infer.so.8. Error: libcublas.so.11: cannot open shared object file: No such file or directory Aborted (core dumped)
Use Python to check the path of the libs:
import os
import nvidia.cublas.lib
import nvidia.cudnn.lib
print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))
Add the LD_LIBRARY_PATH variable to your .bashrc; its value is the path printed by Python:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/usr/.local/lib/python3.10/site-packages/nvidia/cublas/lib:/home/usr/.local/lib/python3.10/site-packages/nvidia/cudnn/lib
After modifying it, close the current terminal and open a new one so the configuration takes effect.
It worked for me:
pip install torch --index-url https://download.pytorch.org/whl/cu121
For everyone that has this issue, what fixed it for me was to include the path to torch too in LD_LIBRARY_PATH.
The docker image I run is pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime
The line below adds torch as well as cudnn and cublas to the path.
export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; import torch; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__) + ":" + os.path.dirname(torch.__file__) +"/lib")'`
I do not have an NVIDIA GPU, do not want to use CUDA and cannot install CUDA.
How can I use this program without installing any cuda packages?
@otonoton I think that your best bet would be to use Whisper C++
I have been using it but I was hoping to use faster-whisper for obvious reasons...
For everyone that has this issue, what fixed it for me was to include the path to torch too in
LD_LIBRARY_PATH.
For posterity: if you need to add torch library path, you don't need to add other libraries as a set of cuda libraries are also bundled with it:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:`python3 -c 'import os; import torch; print(os.path.dirname(torch.__file__) +"/lib")'`
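To confirm which CUDA libraries a wheel actually bundles, a small helper like this can list a package's lib directory (the function name is hypothetical, for illustration; for torch you would pass `os.path.dirname(torch.__file__)`):

```python
import glob
import os

def list_bundled_libs(package_dir, pattern="libcudnn*"):
    """Return the basenames of shared libraries under <package_dir>/lib
    that match `pattern`, e.g. the cuDNN libs bundled with a torch wheel."""
    return sorted(os.path.basename(p)
                  for p in glob.glob(os.path.join(package_dir, "lib", pattern)))
```

If the list includes libcudnn_ops_infer.so.8, adding that one directory to LD_LIBRARY_PATH should be enough.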
I had this issue when using 12.4.1-cudnn-devel-ubuntu22.04 in my Dockerfile; switching to nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04 resolved it for me without resorting to the LD_LIBRARY_PATH hackery or pip-installing the libraries.
I think the issue with the latest CUDA image is that it ships with cuDNN 9, which according to the README.md isn't supported.
I hope this helps!
Full Dockerfile for context:
FROM nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04
RUN apt-get update && \
apt-get install -y curl software-properties-common && \
add-apt-repository ppa:deadsnakes/ppa && \
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get -y install python3.11 python3.11-distutils && \
curl -sS https://bootstrap.pypa.io/get-pip.py | python3.11
RUN pip install --no-cache-dir poetry==1.8.2
WORKDIR /root/speaches
COPY pyproject.toml poetry.lock ./
RUN poetry install
COPY ./speaches ./speaches
ENTRYPOINT ["poetry", "run"]
CMD ["uvicorn", "speaches.main:app"]
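One way to check which cuDNN major version the dynamic loader can actually resolve inside an image is to probe the library sonames with ctypes. This sketch assumes cuDNN 9 renamed the ops library to libcudnn_ops.so.9 (the helper name is hypothetical):

```python
import ctypes

def cudnn_major_loadable():
    """Return the first cuDNN major version (9, then 8) whose ops library
    the dynamic loader can resolve, or None if neither is found."""
    candidates = {9: "libcudnn_ops.so.9", 8: "libcudnn_ops_infer.so.8"}
    for major, name in candidates.items():
        try:
            ctypes.CDLL(name)  # raises OSError when the soname is not found
            return major
        except OSError:
            continue
    return None

print(cudnn_major_loadable())
```

If this prints 9 (or None) in your container, the faster-whisper error above is expected, since it looks for the cuDNN 8 soname.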
For me installing the cuDNN 8 libraries using sudo apt install libcudnn8 on Ubuntu 22.04 fixed the issue!
Having a similar issue. I'm trying to get Faster Whisper to run off a docker build.
I'm trying to use the docker image: pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime
Unfortunately, getting this libcudnn_ops_infer.so.8 issue as well. Anyone know how I might add the necessary additional libraries? I can't use the official Nvidia one it seems (was too large for my smaller system to handle).
Did you manage to fix this on the docker container?
For me installing the cuDNN 8 libraries using sudo apt install libcudnn8 on Ubuntu 22.04 fixed the issue!
Just for others - I had used sudo apt install libcudnn9 - that did not work - but 8 does work.
For me installing the cuDNN 8 libraries using sudo apt install libcudnn8 on Ubuntu 22.04 fixed the issue!
It works for me as well. After you install libcudnn8, you get libcudnn_ops_infer.so installed along with the other libs. I just wonder why the libcudnn9 installation is missing those libs.
Then you can put the path into the system LD_LIBRARY_PATH.
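To find where apt actually placed libcudnn8, you can parse the output of `ldconfig -p`. A sketch (the helper `cudnn_paths` is hypothetical; pass no argument to query the real system):

```python
import subprocess

def cudnn_paths(ldconfig_output=None):
    """Return the file paths that `ldconfig -p` lists for libcudnn sonames."""
    if ldconfig_output is None:
        ldconfig_output = subprocess.run(
            ["ldconfig", "-p"], capture_output=True, text=True).stdout
    hits = []
    for line in ldconfig_output.splitlines():
        line = line.strip()
        # ldconfig -p lines look like: "libfoo.so.1 (libc6,x86-64) => /path/libfoo.so.1"
        if line.startswith("libcudnn") and "=>" in line:
            hits.append(line.split("=>", 1)[1].strip())
    return hits

# print(cudnn_paths())  # queries the real system's loader cache
```

The directory part of any returned path is what you would append to LD_LIBRARY_PATH, though libraries in the loader cache usually need no extra configuration.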
For me installing the cuDNN 8 libraries using sudo apt install libcudnn8 on Ubuntu 22.04 fixed the issue!
OMG you saved me ! Thank you so much.
On Ubuntu 24.04 I used:
sudo apt install nvidia-cudnn
Just a note: the vast.ai machine was an Ubuntu 22.04 x86_64 machine, and I tested it on this machine only.
Here is what helped me:
- Install the CUDA toolkit (provides nvcc)
sudo apt install nvidia-cuda-toolkit
- Install cuDNN (Version 8!)
It is not trivial to install version 8! If you go to the official website to install cuDNN and enter your machine's details, you will only get the latest version, version 9. I tried it, but it doesn't work (even when you symlink the version 8 library names to the version 9 libraries).
Official Website
So you need to go to the archive and get version 8 from there.
After that, you will get a .deb file. In my case it was cudnn-local-repo-ubuntu2204-8.9.7.29_1.0-1_amd64.deb
Here is how you install it:
sudo dpkg -i cudnn-local-repo-ubuntu2204-8.9.7.29_1.0-1_amd64.deb
sudo cp /var/cudnn-local-repo-ubuntu2204-8.9.7/cudnn-local-*.gpg /usr/share/keyrings/
Maybe you would also need to do that:
sudo apt-get update
sudo apt-get install --reinstall libcudnn8 libcudnn8-dev libcudnn8-samples
- Verify installation
ls /usr/lib/x86_64-linux-gnu/libcudnn* | grep libcudnn_ops_infer.so.8
ls /usr/lib/x86_64-linux-gnu/libcudnn* | grep libcudnn_cnn_infer.so.8
I installed whisperX. It uses faster-whisper, and it works out of the box.
https://github.com/m-bain/whisperX
For me installing the cuDNN 8 libraries using sudo apt install libcudnn8 on Ubuntu 22.04 fixed the issue!
Just doing this command fixed the issue for me. No need for export LD_LIBRARY_PATH=... in my case.