
Could not load library libcudnn_ops_infer.so.8

Open Benny739 opened this issue 2 years ago • 47 comments

Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory

I'm using the "nvidia/cuda:12.2.0-base-ubuntu20.04" image on Google Cloud with NVIDIA T4 GPUs.

The normal whisper package model works fine on CUDA.

Benny739 avatar Oct 16 '23 03:10 Benny739

need cuda 11.8

phineas-pta avatar Oct 16 '23 13:10 phineas-pta

Could not load library libcudnn_ops_infer.so.8

You can find cuBLAS and cuDNN libs for Linux under Releases at https://github.com/Purfview/whisper-standalone-win

Not tested, report if they work.

Purfview avatar Oct 16 '23 15:10 Purfview

Check if your LD_LIBRARY_PATH is specified and points to your cuda location.
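
A quick way to check (a sketch, not specific to any setup): ask the dynamic loader directly from Python whether it can resolve the library named in the error message.

```python
import ctypes
import os

def can_load(libname: str) -> bool:
    """Return True if the dynamic loader can resolve libname."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

# Show what the loader will search, then probe the library from the error.
print("LD_LIBRARY_PATH =", os.environ.get("LD_LIBRARY_PATH", "<unset>"))
print("libcudnn_ops_infer.so.8 loadable:", can_load("libcudnn_ops_infer.so.8"))
```

If this prints False, faster-whisper will fail with the same error, so any fix below can be re-verified with the same two lines.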

bestasoff avatar Oct 16 '23 18:10 bestasoff

I am also having a similar problem. I tried uninstalling CUDA 12.2 and cuDNN 9.x, and installing and pointing at CUDA 11.8.0. I also used the pip-based command in the instructions and set my $LD_LIBRARY_PATH in the terminal prior to running the script.

Is it possible there are disconnects either between Jupyter Notebook and the actual virtual environment, or maybe the virtual environment and the base OS?

justinthelaw avatar Oct 16 '23 21:10 justinthelaw

@justinthelaw try adding the path to .../**/torch/lib to LD_LIBRARY_PATH.

bestasoff avatar Oct 17 '23 15:10 bestasoff

@bestasoff @Benny739 I was able to fix this particular issue by uninstalling all of the NVIDIA dependencies for CUDA 12.x and reinstalling CUDA 11.8. Now I am running into a different problem that I'll discuss in a separate issue.

justinthelaw avatar Oct 17 '23 22:10 justinthelaw

Having a similar issue. I'm trying to get Faster Whisper to run off a docker build.

I'm trying to use the docker image: pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime

Unfortunately, I'm getting this libcudnn_ops_infer.so.8 issue as well. Does anyone know how I might add the necessary additional libraries? It seems I can't use the official NVIDIA image (it was too large for my smaller system to handle).

bakermanbrian avatar Nov 01 '23 00:11 bakermanbrian

need cuda 11.8

As in "at least 11.8" or "exactly 11.8"? I have CUDA Version: 12.0 installed (in WSL2/Ubuntu) but get this error.

einarpersson avatar Jan 21 '24 13:01 einarpersson

If you are installing CUDA via pip in a virtual environment (and the same goes for on host, VM, or in a container):

# point to VENV's local CUDA 11.8 python lib
export LD_LIBRARY_PATH=${PWD}/.venv/lib64/python3.11/site-packages/nvidia/cublas/lib:${PWD}/.venv/lib64/python3.11/site-packages/nvidia/cudnn/lib

My previous comment about needing to downgrade my host CUDA toolkit and drivers was wrong. You just need to have a host system with drivers that supports up to or past the CUDA version required by the library.

If you continue to have trouble, please provide the pip dependencies installed in your dev/prod environment, where those deps are located in the environment, and also post the outputs of the following:

nvidia-smi
nvcc --version
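
A sketch that gathers both diagnostics in one go, reporting tools that are missing from PATH instead of crashing:

```python
import shutil
import subprocess

def tool_report(cmd):
    """Run a diagnostic command, or report that it is missing from PATH."""
    if shutil.which(cmd[0]) is None:
        return f"{cmd[0]}: not on PATH"
    out = subprocess.run(cmd, capture_output=True, text=True)
    return out.stdout or out.stderr

# The two commands requested above.
for cmd in (["nvidia-smi"], ["nvcc", "--version"]):
    print(tool_report(cmd))
```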

justinthelaw avatar Jan 21 '24 16:01 justinthelaw

Hi everybody, and thank you for helping me in solving this issue!

Expanding on @justinthelaw's comment, I have used the following command instead:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/path/to/venv/lib64/python3.x/site-packages/nvidia/cublas/lib:$HOME/path/to/venv/lib64/python3.x/site-packages/nvidia/cudnn/lib

This appends the paths to $LD_LIBRARY_PATH rather than overwriting it. In the path, /path/to/venv must be substituted with the actual location (and name) of your virtual environment. The same applies to python3.x, where x must be substituted with the Python version in use.

As a final comment, export applies only to the terminal in which it is issued. One may consider appending the line to the $HOME/.bashrc script to make it persistent.
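
For reference, a Python sketch that computes those venv-local paths without hardcoding the Python version (it assumes the nvidia-cublas/nvidia-cudnn pip wheels are installed; the directories simply won't be found otherwise):

```python
import os
import sysconfig

def nvidia_lib_dirs():
    """Candidate cuBLAS/cuDNN lib dirs inside the active environment's
    site-packages; present only if the nvidia-* pip wheels are installed."""
    site_packages = sysconfig.get_paths()["purelib"]
    candidates = [
        os.path.join(site_packages, "nvidia", "cublas", "lib"),
        os.path.join(site_packages, "nvidia", "cudnn", "lib"),
    ]
    return [p for p in candidates if os.path.isdir(p)]

# Append to (not overwrite) the existing LD_LIBRARY_PATH, as above.
parts = [os.environ.get("LD_LIBRARY_PATH", "")] + nvidia_lib_dirs()
print("export LD_LIBRARY_PATH=" + ":".join(p for p in parts if p))
```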

Luca-Pozzi avatar Feb 01 '24 14:02 Luca-Pozzi

@justinthelaw I am facing the same issue. Here are the answers to the questions you asked:

nvidia-smi: (screenshot attached, 2024-02-27)

nvcc --version: (screenshot attached, 2024-02-27)

Pip dependencies installed: packages.txt

Location: /home/anaconda3/envs/my_env/bin

fusesid avatar Feb 27 '24 05:02 fusesid

Sweet. I was able to get this working.

I installed the NVIDIA software listed in the README. That caused issues: I had the same libcudnn_ops_infer.so.8 error as the original poster.

Steps to fix:

  1. Went back to the factory GPU driver: sudo apt-get purge nvidia-*, then sudo apt autoremove, then sudo apt install system76-driver-nvidia
    • Your factory driver will likely be different from mine
  2. Ran the alternative instructions in the README (I use Linux)
    • pip install nvidia-cublas-cu11 nvidia-cudnn-cu11
    export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`
    
    • pip install faster-whisper
  3. Tried the quickstart code in the README with the jfk.flac from the test/data directory
    • The first time it ran, it downloaded the model
    • Second time, it transcribed the data
    • The key was setting the LD_LIBRARY_PATH env var
me@me:~/projects/speech$ source .venv/bin/activate
(.venv) me@me:~/projects/speech$ python3 test3.py 
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
Aborted (core dumped)
(.venv) me@me:~/projects/speech$ export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`
(.venv) me@me:~/projects/speech$ python3 test3.py 
Detected language 'en' with probability 0.929688
[0.00s -> 3.00s]  And so my fellow Americans,
[3.00s -> 8.00s]  ask not what your country can do for you,
[8.00s -> 11.00s]  ask what you can do for your country.

Hope this helps someone!

SeaDude avatar Mar 01 '24 06:03 SeaDude

This happened in docker-compose / Docker. To solve it, I had to execute the following command inside the container:

 export LD_LIBRARY_PATH=/usr/local/lib/python3.9/site-packages/torch/lib:$LD_LIBRARY_PATH

uumami avatar Mar 16 '24 14:03 uumami

Problem:

Could not load library libcudnn_ops_infer.so.8. Error: libcublas.so.11: cannot open shared object file: No such file or directory
Aborted (core dumped)

Use Python to check the paths of the libs:

import os
import nvidia.cublas.lib
import nvidia.cudnn.lib

print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))

Add the LD_LIBRARY_PATH variable to your .bashrc; the content is the path printed by Python:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/usr/.local/lib/python3.10/site-packages/nvidia/cublas/lib:/home/usr/.local/lib/python3.10/site-packages/nvidia/cudnn/lib

After modifying it, close the current terminal and open a new one so that the configuration takes effect.

HsinChiaChen avatar Mar 30 '24 12:03 HsinChiaChen

It worked for me:

pip install torch --index-url https://download.pytorch.org/whl/cu121

CrazyBunQnQ avatar Apr 02 '24 11:04 CrazyBunQnQ

For everyone that has this issue, what fixed it for me was to include the path to torch in LD_LIBRARY_PATH as well. The Docker image I run is pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime.

The line below adds torch as well as cudnn and cublas to the path.

export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; import torch; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__) + ":" + os.path.dirname(torch.__file__) +"/lib")'`

disbullief avatar Apr 10 '24 21:04 disbullief

I do not have an NVIDIA GPU, do not want to use CUDA and cannot install CUDA.

How can I use this program without installing any cuda packages?

otonoton avatar Apr 13 '24 22:04 otonoton

@otonoton I think that your best bet would be to use Whisper C++

disbullief avatar Apr 14 '24 03:04 disbullief

I have been using it but I was hoping to use faster-whisper for obvious reasons...

otonoton avatar Apr 14 '24 17:04 otonoton

For everyone that has this issue, what fixed it for me was to include the path to torch too in LD_LIBRARY_PATH.

For posterity: if you need to add the torch library path, you don't need to add the other libraries, as a set of CUDA libraries is also bundled with it:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:`python3 -c 'import os; import torch; print(os.path.dirname(torch.__file__) +"/lib")'`

Prashant446 avatar May 10 '24 21:05 Prashant446

I've had this issue when using 12.4.1-cudnn-devel-ubuntu22.04 in my Dockerfile; switching to nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04 resolved the issue for me without resorting to the LD_LIBRARY_PATH workaround or pip-installing the libraries.

I think the issue with using the latest cuda image is because it ships with cuDNN 9 which according to the README.md isn't supported.
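
A quick sketch to probe which cuDNN major versions the loader inside a container can actually see (version 8, i.e. libcudnn.so.8, is what this error is about):

```python
import ctypes

def cudnn_majors_available():
    """Probe which cuDNN major versions the dynamic loader can resolve."""
    found = []
    for major in (8, 9):
        try:
            ctypes.CDLL(f"libcudnn.so.{major}")
            found.append(major)
        except OSError:
            pass
    return found

print("cuDNN majors visible to the loader:", cudnn_majors_available() or "none")
```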

I hope this helps!

Full Dockerfile for context:

FROM nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04
RUN apt-get update && \
    apt-get install -y curl software-properties-common && \
    add-apt-repository ppa:deadsnakes/ppa && \
    apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get -y install python3.11 python3.11-distutils && \
    curl -sS https://bootstrap.pypa.io/get-pip.py | python3.11
RUN pip install --no-cache-dir poetry==1.8.2
WORKDIR /root/speaches
COPY pyproject.toml poetry.lock ./
RUN poetry install
COPY ./speaches ./speaches
ENTRYPOINT ["poetry", "run"]
CMD ["uvicorn", "speaches.main:app"]

fedirz avatar May 18 '24 21:05 fedirz

For me installing the cuDNN 8 libraries using sudo apt install libcudnn8 on Ubuntu 22.04 fixed the issue!

storytracer avatar Jul 01 '24 21:07 storytracer

Having a similar issue. I'm trying to get Faster Whisper to run off a docker build.

I'm trying to use the docker image: pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime

Unfortunately, getting this libcudnn_ops_infer.so.8 issue as well. Anyone know how I might add the necessary additional libraries? I can't use the official Nvidia one it seems (was too large for my smaller system to handle).

Did you manage to fix this on the docker container?

wolfassi123 avatar Jul 16 '24 13:07 wolfassi123

For me installing the cuDNN 8 libraries using sudo apt install libcudnn8 on Ubuntu 22.04 fixed the issue!

Just for others: I had used sudo apt install libcudnn9, and that did not work, but libcudnn8 does.

msj121 avatar Jul 28 '24 21:07 msj121

For me installing the cuDNN 8 libraries using sudo apt install libcudnn8 on Ubuntu 22.04 fixed the issue!

It works for me as well. After you install libcudnn8, you get libcudnn_ops_infer.so installed along with the other libs; I just wonder why the libcudnn9 installation is missing those libs.

Then you can put the path into the system LD_LIBRARY_PATH.

dantanqikun avatar Aug 22 '24 11:08 dantanqikun

For me installing the cuDNN 8 libraries using sudo apt install libcudnn8 on Ubuntu 22.04 fixed the issue!

OMG you saved me ! Thank you so much.

Nazza77 avatar Sep 01 '24 18:09 Nazza77

On Ubuntu 24.04, I used sudo apt install nvidia-cudnn

miaohf avatar Sep 06 '24 08:09 miaohf

Just a note: the vast.ai machine was an Ubuntu 22.04 x86_64 machine, and I tested this on that machine only.

Here is what helped me:

  1. Install the CUDA toolkit (nvcc)
sudo apt install nvidia-cuda-toolkit
  2. Install cuDNN (version 8!)

It is not trivial to install version 8! If you go to the official website to install cuDNN and enter your machine's details, you will only get the latest version (version 9). I tried it, but it doesn't work (even when you symlink the version 8 names to the version 9 libraries).

Official Website

So you need to go to the cuDNN archive and download version 8.

After that, you will get a .deb file. In my case, I got cudnn-local-repo-ubuntu2204-8.9.7.29_1.0-1_amd64.deb

Here is how you install it:

sudo dpkg -i cudnn-local-repo-ubuntu2204-8.9.7.29_1.0-1_amd64.deb
sudo cp /var/cudnn-local-repo-ubuntu2204-8.9.7/cudnn-local-*.gpg /usr/share/keyrings/

You may also need to run:

sudo apt-get update
sudo apt-get install --reinstall libcudnn8 libcudnn8-dev libcudnn8-samples
  3. Verify the installation
ls /usr/lib/x86_64-linux-gnu/libcudnn* | grep libcudnn_ops_infer.so.8
ls /usr/lib/x86_64-linux-gnu/libcudnn* | grep libcudnn_cnn_infer.so.8
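
The same verification as a Python sketch (the library directory is the standard Ubuntu x86_64 location; adjust if yours differs):

```python
import glob

def find_cudnn8_libs(libdir="/usr/lib/x86_64-linux-gnu"):
    """Look for the two cuDNN 8 libraries checked by the ls | grep commands."""
    names = ("libcudnn_ops_infer.so.8", "libcudnn_cnn_infer.so.8")
    return {name: glob.glob(f"{libdir}/{name}*") for name in names}

for name, hits in find_cudnn8_libs().items():
    print(name, "->", hits if hits else "not found")
```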

Nik-Kras avatar Sep 16 '24 17:09 Nik-Kras

I installed whisperX. It uses faster-whisper, and it works out of the box.

https://github.com/m-bain/whisperX

koen-dejonghe avatar Sep 17 '24 16:09 koen-dejonghe

For me installing the cuDNN 8 libraries using sudo apt install libcudnn8 on Ubuntu 22.04 fixed the issue!

Just running this command fixed the issue for me. No need for export LD_LIBRARY_PATH=... in my case.

saiakarsh-augnito avatar Oct 02 '24 04:10 saiakarsh-augnito