[BUG] 2.4.0-gpu is broken, can't load libcudnn_ops
Is there an existing issue for this?
- [x] I have searched the existing issues
Current Behavior
When invoking faster-whisper from Home Assistant using the image with the current gpu tag (which is the same as 2.4.0-gpu), I get the following response:
whisper | INFO:__main__:Ready
whisper | Connection to localhost (127.0.0.1) 10300 port [tcp/*] succeeded!
whisper | INFO:faster_whisper:Processing audio with duration 00:02.520
whisper | Unable to load any of {libcudnn_ops.so.9.1.0, libcudnn_ops.so.9.1, libcudnn_ops.so.9, libcudnn_ops.so}
whisper | Invalid handle. Cannot load symbol cudnnCreateTensorDescriptor
whisper | Traceback (most recent call last):
whisper | File "<string>", line 1, in <module>
whisper | File "<frozen posixpath>", line 181, in dirname
whisper | TypeError: expected str, bytes or os.PathLike object, not NoneType
Expected Behavior
Downgrading the tag in my compose file to 2.3.0-gpu makes everything work as expected.
Steps To Reproduce
1. Use the image with the gpu tag.
2. Invoke a request from Home Assistant.
Environment
CPU architecture
x86-64
Docker creation
services:
  whisper:
    container_name: whisper
    image: lscr.io/linuxserver/faster-whisper:gpu
    pull_policy: always
    restart: unless-stopped
    environment:
      PUID: 1000
      PGID: 1000
      TZ: Europe/Amsterdam
      WHISPER_MODEL: medium.en
      WHISPER_LANG: en
    ports:
      - 10300:10300
    volumes:
      - ./data/whisper:/config
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities:
                - gpu
                - utility
                - compute
Container logs
whisper | INFO:__main__:Ready
whisper | Connection to localhost (127.0.0.1) 10300 port [tcp/*] succeeded!
whisper | INFO:faster_whisper:Processing audio with duration 00:02.520
whisper | Unable to load any of {libcudnn_ops.so.9.1.0, libcudnn_ops.so.9.1, libcudnn_ops.so.9, libcudnn_ops.so}
whisper | Invalid handle. Cannot load symbol cudnnCreateTensorDescriptor
whisper | Traceback (most recent call last):
whisper | File "<string>", line 1, in <module>
whisper | File "<frozen posixpath>", line 181, in dirname
whisper | TypeError: expected str, bytes or os.PathLike object, not NoneType
Thanks for opening your first issue here! Be sure to follow the relevant issue templates, or risk having this issue marked as invalid.
I'm experiencing the same issue.
It might be related to the version of ctranslate2:
- https://github.com/m-bain/whisperX/issues/901
- https://github.com/jhj0517/Whisper-WebUI/issues/346
Same issue here
Same issue as well. I tried to install cuDNN 9.x specifically, but unfortunately it installs 9.9 (the latest; I haven't found a way to install a specific version), and ctranslate2 seems to look specifically for the 9.1 libs:
Connection to localhost (127.0.0.1) 10300 port [tcp/*] succeeded!
[ls.io-init] done.
INFO:faster_whisper:Processing audio with duration 00:04.000
Unable to load any of {libcudnn_ops.so.9.1.0, libcudnn_ops.so.9.1, libcudnn_ops.so.9, libcudnn_ops.so}
Invalid handle. Cannot load symbol cudnnCreateTensorDescriptor
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<frozen posixpath>", line 181, in dirname
TypeError: expected str, bytes or os.PathLike object, not NoneType
INFO:__main__:Ready
Downgrading manually to ctranslate2 4.4.0 inside the container via pip, as suggested in the related issues mentioned above, just brings another error (this time looking for cuDNN 8.x, of course), so that's likely not the right way to do it (or maybe I'm doing part of it wrong).
A temporary fix is using the 2.3.0 image and blocking Watchtower (note to self: I knew letting it auto-update everything was a risky idea!) from updating this container, just in case:
image: lscr.io/linuxserver/faster-whisper:gpu
labels:
- "com.centurylinklabs.watchtower.enable=false"
Edit: I also tried adding symlinks to the 9.9 cuDNN libraries for this container to use, but ctranslate2 apparently can't use them at runtime and they aren't picked up.
Unfortunately, while digging into the original faster-whisper repo, I noticed the issue is not specific to the lscr.io image and could be more widespread: https://github.com/rhasspy/wyoming-faster-whisper/issues/35#issuecomment-2564704354
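As an aside, the loader's fallback over progressively less specific sonames, visible in the "Unable to load any of {libcudnn_ops.so.9.1.0, ...}" line above, can be sketched roughly like this (the helper names and the use of `ctypes` here are illustrative, not ctranslate2's actual code):

```python
import ctypes

def soname_candidates(base, version):
    """Build candidate sonames from most to least specific,
    e.g. 9.1.0 -> .so.9.1.0, .so.9.1, .so.9, then the bare .so."""
    parts = version.split(".")
    names = [f"{base}.so.{'.'.join(parts[:i])}" for i in range(len(parts), 0, -1)]
    names.append(f"{base}.so")
    return names

def try_load(base, version):
    """Return the first candidate that dlopen succeeds on, else None."""
    for name in soname_candidates(base, version):
        try:
            return ctypes.CDLL(name)
        except OSError:
            continue
    return None

print(soname_candidates("libcudnn_ops", "9.1.0"))
# ['libcudnn_ops.so.9.1.0', 'libcudnn_ops.so.9.1', 'libcudnn_ops.so.9', 'libcudnn_ops.so']
```

This is why installing cuDNN 9.9 alone does not necessarily help: the final bare `libcudnn_ops.so` candidate is only found if a matching file is actually on the loader's search path.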
Any chance, while we wait for a fix, you can share the image? I tried to rebuild it from the repo zip but couldn't get it to build right.
Use
lscr.io/linuxserver/faster-whisper:2.3.0-gpu
Same here
whisper | [migrations] started
whisper | [migrations] no migrations found
whisper | usermod: no changes
whisper | ───────────────────────────────────────
whisper |
whisper | ██╗ ███████╗██╗ ██████╗
whisper | ██║ ██╔════╝██║██╔═══██╗
whisper | ██║ ███████╗██║██║ ██║
whisper | ██║ ╚════██║██║██║ ██║
whisper | ███████╗███████║██║╚██████╔╝
whisper | ╚══════╝╚══════╝╚═╝ ╚═════╝
whisper |
whisper | Brought to you by linuxserver.io
whisper | ───────────────────────────────────────
whisper |
whisper | To support LSIO projects visit:
whisper | https://www.linuxserver.io/donate/
whisper |
whisper | ───────────────────────────────────────
whisper | GID/UID
whisper | ───────────────────────────────────────
whisper |
whisper | User UID: 1000
whisper | User GID: 1000
whisper | ───────────────────────────────────────
whisper | Linuxserver.io version: v2.4.0-ls72
whisper | Build-date: 2025-05-04T06:42:53+00:00
whisper | ───────────────────────────────────────
whisper |
whisper | [custom-init] No custom files found, skipping...
whisper | Traceback (most recent call last):
whisper | File "<string>", line 1, in <module>
whisper | File "<frozen posixpath>", line 181, in dirname
whisper | TypeError: expected str, bytes or os.PathLike object, not NoneType
whisper | INFO:__main__:Ready
whisper | Connection to localhost (127.0.0.1) 10300 port [tcp/*] succeeded!
whisper | [ls.io-init] done.
whisper | INFO:faster_whisper:Processing audio with duration 00:01.780
whisper | Unable to load any of {libcudnn_ops.so.9.1.0, libcudnn_ops.so.9.1, libcudnn_ops.so.9, libcudnn_ops.so}
whisper | Invalid handle. Cannot load symbol cudnnCreateTensorDescriptor
whisper | Traceback (most recent call last):
whisper | File "<string>", line 1, in <module>
whisper | File "<frozen posixpath>", line 181, in dirname
whisper | TypeError: expected str, bytes or os.PathLike object, not NoneType
whisper | INFO:__main__:Ready
whisper | Connection to localhost (127.0.0.1) 10300 port [tcp/*] succeeded!
gpu-v2.4.0-ls71 works well
The problem is the LD_LIBRARY_PATH set in /etc/s6-overlay/s6-rc.d/svc-whisper/run. The easy fix is to change line 4 of that file to:
export LD_LIBRARY_PATH=$(python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__path__[0]) + "/lib:" + os.path.dirname(nvidia.cudnn.lib.__path__[0]) + "/lib")')
This can be done by editing the file inside the container, or by mounting a fixed copy via -v /path_on_host_with_fix/run:/etc/s6-overlay/s6-rc.d/svc-whisper/run
This issue has been automatically marked as stale because it has not had recent activity. This might be due to missing feedback from OP. It will be closed if no further activity occurs. Thank you for your contributions.
Bumping to keep the bot away.
Looks like this was fixed by a PR last month? https://github.com/linuxserver/docker-faster-whisper/pull/43