
CUDA libs missing in Unraid docker

Jpereyra316 opened this issue · 3 comments

I'm running into this issue. I had Tdarr set up and working on Unraid for a while, and it saved me a lot of storage space. However, a while ago it stopped working. I'm finally looking into it and seeing this issue. Here are my configuration, environment, and the issue.

Command used to reproduce (run on Tdarr node):

/usr/local/bin/tdarr-ffmpeg -c:v h264_cuvid -i "/mnt/media/TV Shows/Wipeout (US)/Season 4/Wipeout (US) - S04E02 - Winter Wipeout - The Musical HDTV-720p.mkv" -map 0 -c:v hevc_nvenc -cq:v 19 -b:v 1715k -minrate 1200k -maxrate 2229k -bufsize 3431k -spatial_aq:v 1 -rc-lookahead:v 32 -c:a copy -c:s copy -max_muxing_queue_size 9999 "/temp/Wipeout (US) - S04E02 - Winter Wipeout - The Musical HDTV-720p-TdarrCacheFile-U0drK1dKb.mkv"

Tdarr server configuration:

/usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='tdarr' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'serverIP'='192.168.1.10' -e 'internalNode'='false' -e 'nodeIP'='192.168.1.10' -e 'nodeID'='MyInternalNode' -e 'PUID'='99' -e 'PGID'='100' -e 'Extra Parameters'='--runtime=nvidia' -e 'NVIDIA_VISIBLE_DEVICES'='GPU-d6aa6938-c1fe-85b3-1a76-0d475eec33bf' -e 'NVIDIA_DRIVER_CAPABILITIES'='all' -p '8266:8266/tcp' -p '8265:8265/tcp' -p '8264:8264/tcp' -v '/mnt/user/appdata/tdarr/server':'/app/server':'rw' -v '/mnt/user/appdata/tdarr/configs':'/app/configs':'rw' -v '/mnt/user/appdata/tdarr/logs':'/app/logs':'rw' -v '/mnt/user/Media/':'/mnt/media':'rw' -v '/mnt/user/Tdarr/temp/':'/temp':'rw' 'ghcr.io/haveagitgat/tdarr'

Tdarr node configuration:

/usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='tdarr_node' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'serverIP'='192.168.1.10' -e 'serverPort'='8266' -e 'nodeIP'='192.168.1.10' -e 'nodeID'='Tdarr Node' -e 'PUID'='99' -e 'PGID'='100' -e 'Extra Parameters'='--runtime=nvidia' -e 'NVIDIA_VISIBLE_DEVICES'='GPU-d6aa6938-c1fe-85b3-1a76-0d475eec33bf' -e 'NVIDIA_DRIVER_CAPABILITIES'='all' -p '8267:8267/tcp' -v '/mnt/user/appdata/tdarr/configs':'/app/configs':'rw' -v '/mnt/user/appdata/tdarr/logs':'/app/logs':'rw' -v '/mnt/user/Media/':'/mnt/media':'rw' -v '/mnt/user/Tdarr/temp/':'/temp':'rw' 'ghcr.io/haveagitgat/tdarr_node'

Server env:

# env
LANGUAGE=en_US.UTF-8
LIBVA_DRIVERS_PATH=/usr/lib/x86_64-linux-gnu/dri
HOSTNAME=ce7b46888913
serverIP=192.168.1.10
LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu
internalNode=false
HOME=/home/Tdarr
HOST_OS=Unraid
PGID=100
NODE_PORT=8267
NVIDIA_DRIVER_CAPABILITIES=all
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
nodeID=MyInternalNode
LANG=en_US.UTF-8
WEB_UI_PORT=8265
PUID=99
UMASK=002
PWD=/
nodeIP=192.168.1.10
SERVER_PORT=8266
NVIDIA_VISIBLE_DEVICES=GPU-d6aa6938-c1fe-85b3-1a76-0d475eec33bf
TZ=America/New_York
HANDBRAKE=1.5.1

Node env:

# env
LANGUAGE=en_US.UTF-8
LIBVA_DRIVERS_PATH=/usr/lib/x86_64-linux-gnu/dri
HOSTNAME=2c8a7b3b2418
serverIP=192.168.1.10
LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu
HOME=/home/Tdarr
HOST_OS=Unraid
PGID=100
NODE_PORT=8267
NVIDIA_DRIVER_CAPABILITIES=all
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
nodeID=Tdarr Node
LANG=en_US.UTF-8
WEB_UI_PORT=8265
PUID=99
UMASK=002
PWD=/
nodeIP=192.168.1.10
SERVER_PORT=8266
NVIDIA_VISIBLE_DEVICES=GPU-d6aa6938-c1fe-85b3-1a76-0d475eec33bf
TZ=America/New_York
HANDBRAKE=1.5.1
serverPort=8266

Issue:

# cd /usr/local/bin
# ./tdarr-ffmpeg -c:v h264_cuvid -i "/mnt/media/TV Shows/Wipeout (US)/Season 4/Wipeout (US) - S04E02 - Winter Wipeout - The Musical HDTV-720p.mkv" -map 0 -c:v hevc_nvenc -cq:v 19 -b:v 1715k -minrate 1200k -maxrate 2229k -bufsize 3431k -spatial_aq:v 1 -rc-lookahead:v 32 -c:a copy -c:s copy -max_muxing_queue_size 9999 "/temp/Wipeout (US) - S04E02 - Winter Wipeout - The Musical HDTV-720p-TdarrCacheFile-U0drK1dKb.mkv"
ffmpeg version 4.3.2-Jellyfin Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.3.0-17ubuntu1~20.04)
  configuration: --prefix=/usr/lib/jellyfin-ffmpeg --target-os=linux --extra-version=Jellyfin --disable-doc --disable-ffplay --disable-shared --disable-libxcb --disable-sdl2 --disable-xlib --enable-gpl --enable-version3 --enable-static --enable-libfontconfig --enable-fontconfig --enable-gmp --enable-gnutls --enable-libass --enable-libbluray --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libdav1d --enable-libwebp --enable-libvpx --enable-libx264 --enable-libx265 --enable-libzvbi --enable-libzimg --arch=amd64 --enable-opencl --enable-vaapi --enable-amf --enable-libmfx --enable-vdpau --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvenc --enable-nvdec --enable-ffnvcodec
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
Input #0, matroska,webm, from '/mnt/media/TV Shows/Wipeout (US)/Season 4/Wipeout (US) - S04E02 - Winter Wipeout - The Musical HDTV-720p.mkv':
  Metadata:
    encoder         : libebml v0.7.9 + libmatroska v0.8.1
    creation_time   : 2011-01-13T23:44:20.000000Z
  Duration: 00:43:27.10, start: 0.000000, bitrate: 3598 kb/s
    Stream #0:0(eng): Video: h264 (High), yuv420p(progressive), 1280x720, SAR 1:1 DAR 16:9, 29.97 fps, 29.97 tbr, 1k tbn, 59.94 tbc
    Stream #0:1: Audio: ac3, 48000 Hz, 5.1(side), fltp, 384 kb/s (default)
File '/temp/Wipeout (US) - S04E02 - Winter Wipeout - The Musical HDTV-720p-TdarrCacheFile-U0drK1dKb.mkv' already exists. Overwrite? [y/N] y
[h264_cuvid @ 0x5632480f1980] Cannot load libnvcuvid.so.1
[h264_cuvid @ 0x5632480f1980] Failed loading nvcuvid.
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (h264_cuvid) -> hevc (hevc_nvenc))
  Stream #0:1 -> #0:1 (copy)
Error while opening decoder for input stream #0:0 : Operation not permitted

libnvcuvid not found:

# ldconfig -p | grep nvcuvid
# 
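For anyone comparing against a working container, a quick way to check what the NVIDIA runtime is exposing inside each container is to grep `ldconfig` from the host (a sketch; container names and library versions will differ on your system):

```shell
# Check whether the NVIDIA userspace libraries are mounted inside the container.
# libnvcuvid is needed for NVDEC decoding (h264_cuvid); libnvidia-encode for NVENC.
docker exec tdarr_node bash -c "ldconfig -p | grep -E 'nvcuvid|nvidia-encode|libcuda'"

# Confirm the container can see the GPU at all (requires the nvidia runtime).
docker exec tdarr_node nvidia-smi
```

If the first command prints nothing while the same command in a working Plex container lists the libraries, the container was started without the NVIDIA runtime rather than with a broken driver install.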

Originally posted by @Jpereyra316 in https://github.com/HaveAGitGat/Tdarr/issues/479#issuecomment-1231055898

Jpereyra316 (Sep 04 '22)

I think, as usual, it's Spaceinvader One to the rescue: https://www.youtube.com/watch?v=KD6G-tpsyKw

I run my Tdarr node on Ubuntu and had to install nvidia-docker2 to get things working. I don't know Unraid, but it's possible you just need to install, or perhaps update, the NVIDIA runtime for Docker on Unraid.
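On Ubuntu, the install was roughly the following (a sketch; the repository setup steps change between releases, so follow NVIDIA's current container-toolkit install guide):

```shell
# Install the NVIDIA container runtime for Docker
# (assumes NVIDIA's libnvidia-container apt repository has already been added)
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker

# Sanity check: a throwaway CUDA container should be able to run nvidia-smi
docker run --rm --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi
```

If that last command lists your GPU, the runtime is working and any container started with `--runtime=nvidia` (or `--gpus`) should get the CUDA libraries mounted in.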

That video is more oriented toward saving power, but I'd try reinstalling the Nvidia plugins.

adefaria (Sep 10 '22)

Thanks for the suggestion; I hadn't tried that yet.

I tried reinstalling the Nvidia plugin, restarting, and everything else, but didn't have any luck. Since HW transcoding is working in my Plex container, I checked whether it finds the proper libs. Here is the result:

Plex container:

# ldconfig -p | grep nv
        libvdpau_nvidia.so.1 (libc6,x86-64, OS ABI: Linux 2.3.99) => /usr/lib/x86_64-linux-gnu/libvdpau_nvidia.so.1
        libnvidia-ptxjitcompiler.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.1
        libnvidia-opticalflow.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvidia-opticalflow.so.1
        libnvidia-opencl.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.1
        libnvidia-ml.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
        libnvidia-encode.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvidia-encode.so.1
        libnvidia-compiler.so.510.73.05 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvidia-compiler.so.510.73.05
        libnvidia-cfg.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.1
        libnvidia-allocator.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvidia-allocator.so.1
        libnvcuvid.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvcuvid.so.1
# ldconfig -p | grep cuda
        libcuda.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcuda.so.1

Tdarr container again after reinstalling Nvidia plugins:

# ldconfig -p | grep nv
# ldconfig -p | grep cuda
        libicudata.so.66 (libc6,x86-64) => /lib/x86_64-linux-gnu/libicudata.so.66
        libicudata.so (libc6,x86-64) => /lib/x86_64-linux-gnu/libicudata.so
#

As you can see, the Plex container finds libnvcuvid.so.1. I still need to figure out how to get the Tdarr container updated the same way.

Jpereyra316 (Sep 10 '22)

Hmmm...

Plex Docker container running on my PMS (Synology):

Jupiter:docker exec -it plex bash -c "ldconfig -p | grep nv"
Jupiter:docker exec -it plex bash -c "ldconfig -p | grep cuda"
	libicudata.so.66 (libc6,x86-64) => /lib/x86_64-linux-gnu/libicudata.so.66
Jupiter:

Tdarr_server Docker container running on my PMS (Synology):

Jupiter:docker exec -it tdarr_server bash -c "ldconfig -p | grep nv"
Jupiter:docker exec -it tdarr_server bash -c "ldconfig -p | grep cuda"
	libicudata.so.66 (libc6,x86-64) => /lib/x86_64-linux-gnu/libicudata.so.66
	libicudata.so (libc6,x86-64) => /lib/x86_64-linux-gnu/libicudata.so
Jupiter:

Tdarr_node Docker container running on my Ubuntu desktop:

Earth:docker exec -it tdarr_node bash -c "ldconfig -p | grep nv"
	libnvoptix.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvoptix.so.1
	libnvidia-tls.so.510.73.05 (libc6,x86-64, OS ABI: Linux 2.3.99) => /lib/x86_64-linux-gnu/libnvidia-tls.so.510.73.05
	libnvidia-rtcore.so.510.73.05 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-rtcore.so.510.73.05
	libnvidia-ptxjitcompiler.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.1
	libnvidia-opticalflow.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-opticalflow.so.1
	libnvidia-opticalflow.so (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-opticalflow.so
	libnvidia-opencl.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-opencl.so.1
	libnvidia-ngx.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-ngx.so.1
	libnvidia-ml.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-ml.so.1
	libnvidia-glvkspirv.so.510.73.05 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.510.73.05
	libnvidia-glsi.so.510.73.05 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-glsi.so.510.73.05
	libnvidia-glcore.so.510.73.05 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-glcore.so.510.73.05
	libnvidia-fbc.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-fbc.so.1
	libnvidia-encode.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-encode.so.1
	libnvidia-eglcore.so.510.73.05 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-eglcore.so.510.73.05
	libnvidia-compiler.so.510.73.05 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-compiler.so.510.73.05
	libnvidia-cfg.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-cfg.so.1
	libnvidia-allocator.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvidia-allocator.so.1
	libnvcuvid.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libnvcuvid.so.1
	libGLX_nvidia.so.0 (libc6,x86-64) => /lib/x86_64-linux-gnu/libGLX_nvidia.so.0
	libGLESv2_nvidia.so.2 (libc6,x86-64) => /lib/x86_64-linux-gnu/libGLESv2_nvidia.so.2
	libGLESv1_CM_nvidia.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libGLESv1_CM_nvidia.so.1
	libEGL_nvidia.so.0 (libc6,x86-64) => /lib/x86_64-linux-gnu/libEGL_nvidia.so.0
Earth:docker exec -it tdarr_node bash -c "ldconfig -p | grep cuda"
	libicudata.so.66 (libc6,x86-64) => /lib/x86_64-linux-gnu/libicudata.so.66
	libicudata.so (libc6,x86-64) => /lib/x86_64-linux-gnu/libicudata.so
	libcuda.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libcuda.so.1
	libcuda.so (libc6,x86-64) => /lib/x86_64-linux-gnu/libcuda.so
Earth:

Tdarr version: 2.00.18

adefaria (Sep 11 '22)

Retry on 2.00.19, but this seems to be something going wrong outside of the Tdarr image/container, as NVENC hardware transcoding is working fine for many users, as @adefaria has demonstrated. Update this issue if needed, thanks.

HaveAGitGat (Mar 05 '23)

Since I encountered the problem this morning, I can provide the fix for Unraid:

  • Open the properties for the Tdarr Node container.
  • Click Advanced View in the top-right corner.
  • In the Extra Parameters field, add: --runtime=nvidia

And it should work fine.
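In plain Docker terms, the Unraid steps above amount to passing `--runtime=nvidia` as an actual flag on the node's `docker run` line (rather than as an environment variable). A minimal sketch, substituting your own GPU UUID, ports, and volume mappings:

```shell
# Start the Tdarr node with the NVIDIA runtime so the CUDA userspace
# libraries (libnvcuvid, libnvidia-encode, libcuda) get mounted in.
docker run -d --name=tdarr_node \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  ghcr.io/haveagitgat/tdarr_node

# Verify the decode library is now visible inside the container:
docker exec tdarr_node bash -c "ldconfig -p | grep nvcuvid"
```

Note that the `NVIDIA_VISIBLE_DEVICES`/`NVIDIA_DRIVER_CAPABILITIES` variables only take effect when the container actually runs under the nvidia runtime, which is why setting them alone (as in the configs earlier in this thread) is not enough.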

butch2k (Mar 15 '23)