Tdarr
Can Tdarr do HW transcoding AMD Radeon R9 M370X?
I have a laptop with an AMD Radeon R9 M370X. I'd like to use it as a tdarr_node with HW transcoding, but it doesn't seem to work. Instead, I see:
[hevc_nvenc @ 0x5573a1564c00] Cannot load libcuda.so.1
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
...
[h264_cuvid @ 0x55752fe2f880] Cannot load libnvcuvid.so.1
[h264_cuvid @ 0x55752fe2f880] Failed loading nvcuvid.
To Reproduce: In a Docker container on a machine with an AMD Radeon R9 M370X, run:
ffmpeg -c:v h264_cuvid -i "/input/TV/The Expanse/The Expanse S04E04 - Episode 4.mkv" -map 0 -c:v hevc_nvenc -cq:v 19 -b:v 1339k -minrate 937k -maxrate 1740k -bufsize 2679k -spatial_aq:v 1 -rc-lookahead:v 32 -c:a copy -c:s copy -max_muxing_queue_size 9999 "/temp/The Expanse S04E04 - Episode 4-TdarrCacheFile-T5Z-HWIOm.mkv"
The above is the ffmpeg command line that Tdarr composed.
Expected behavior: Video transcodes using HW.
- Config files
{
"nodeID": "Mars",
"nodeIP": "192.168.0.100",
"nodePort": "8268",
"serverIP": "192.168.0.251",
"serverPort": "8266",
"handbrakePath": "",
"ffmpegPath": "",
"mkvpropeditPath": "",
"pathTranslators": [
{
"server": "",
"node": ""
}
],
"platform_arch": "linux_x64_docker_true"
}
- OS: Ubuntu 21.10
- Browser N/A
- Version 2.00.15
I haven't yet been able to get AMD encoding to work well for all media. Your issue here is that you are using the Nvidia tasks, and those will never work with an AMD card. You have to use a task that uses 'vaapi' instead of the Nvidia ones; however, that only works for some media.
How do I configure a task that uses vaapi instead of Nvidia? To be clear, I have two Tdarr nodes, one with Nvidia and the other with AMD. And if a video will not work with vaapi, would the Nvidia node be able to pick it up later on and process it?
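For context, a VAAPI version of the command from the top of this issue would swap the Nvidia decoder/encoder for the vaapi ones and upload frames to the GPU. This is only a sketch: the render-node path /dev/dri/renderD128 and the exact rate-control flags are assumptions and vary per system.

```shell
# Hypothetical VAAPI equivalent of the nvenc command above.
# /dev/dri/renderD128 is an assumed render node; check `ls /dev/dri`.
ffmpeg -vaapi_device /dev/dri/renderD128 \
  -i input.mkv \
  -map 0 \
  -vf 'format=nv12,hwupload' \
  -c:v hevc_vaapi -b:v 1339k -maxrate 1740k -bufsize 2679k \
  -c:a copy -c:s copy \
  -max_muxing_queue_size 9999 \
  output.mkv
```

The format=nv12,hwupload filter step is what moves decoded software frames into GPU surfaces that hevc_vaapi can consume.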
I was looking for the same thing, and it turns out the Tdarr ffmpeg is compiled without VAAPI support:
ffmpeg version 5.0.1-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2022 the FFmpeg developers
built with gcc 8 (Debian 8.3.0-6)
configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg
libavutil 57. 17.100 / 57. 17.100
libavcodec 59. 18.100 / 59. 18.100
libavformat 59. 16.100 / 59. 16.100
libavdevice 59. 4.100 / 59. 4.100
libavfilter 8. 24.100 / 8. 24.100
libswscale 6. 4.100 / 6. 4.100
libswresample 4. 3.100 / 4. 3.100
libpostproc 56. 3.100 / 56. 3.100
Here are compile instructions for anyone curious https://gist.github.com/Brainiarc7/95c9338a737aa36d9bb2931bed379219
tdarr_server takes ffmpeg from Jellyfin, but it looks to be out of date. Tdarr downloads ffmpeg using a hardcoded version, and that version seems to be out of date (https://github.com/HaveAGitGat/Tdarr/blob/master/docker/Dockerfile#L35), as jellyfin-ffmpeg has released lots of 5.x and even 6.0.1 ffmpeg: https://repo.jellyfin.org/releases/server/ubuntu/versions/jellyfin-ffmpeg/
@agilob the latest container is using 5.1.2 FYI, so it might be worth trying that.
I pasted the ffmpeg log from version 2.00.19, which I can see is the latest stable.
@agilob from the container? That uses Jellyfin FFmpeg 5.1.2 for x64 containers. It also has the Van Sickle FFmpeg (which is included in the Tdarr package), but Tdarr uses the Jellyfin one in that environment.
I see there are multiple ffmpeg binaries included in the container. Looks like the one that's used on the page above is the wrong ffmpeg? The screenshot shows the lack of VAAPI support.
I can kubectl exec into the container and see that vaapi support is compiled:
root@tdarr-5cdd655cc5-sqjc9:/# which ffmpeg
/usr/local/bin/ffmpeg
root@tdarr-5cdd655cc5-sqjc9:/# /usr/local/bin/ffmpeg -buildconf
ffmpeg version 5.1.2-Jellyfin Copyright (c) 2000-2022 the FFmpeg developers
built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
configuration: --prefix=/usr/lib/jellyfin-ffmpeg --target-os=linux --extra-libs=-lfftw3f --extra-version=Jellyfin --disable-doc --disable-ffplay --disable-ptx-compression --disable-shared --disable-libxcb --disable-sdl2 --disable-xlib --enable-lto --enable-gpl --enable-version3 --enable-static --enable-gmp --enable-gnutls --enable-chromaprint --enable-libdrm --enable-libass --enable-libfreetype --enable-libfribidi --enable-libfontconfig --enable-libbluray --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libdav1d --enable-libwebp --enable-libvpx --enable-libx264 --enable-libx265 --enable-libzvbi --enable-libzimg --enable-libfdk-aac --arch=amd64 --enable-libsvtav1 --enable-libshaderc --enable-libplacebo --enable-vulkan --enable-opencl --enable-vaapi --enable-amf --enable-libmfx --enable-ffnvcodec --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvdec --enable-nvenc
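A quicker way to tell whether any given binary has VAAPI support is to grep its encoder list rather than reading the whole buildconf. The paths below are the two binaries discussed in this thread:

```shell
# A VAAPI-capable build will print entries such as h264_vaapi and
# hevc_vaapi; a build configured without --enable-vaapi prints nothing.
/usr/local/bin/ffmpeg -hide_banner -encoders | grep -i vaapi
/app/Tdarr_Node/node_modules/ffmpeg-static/ffmpeg -hide_banner -encoders | grep -i vaapi
```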
Anyway, I gave it a shot: adding the jellyfin-ffmpeg repo to the Tdarr Dockerfile and including intel-compute as well:
FROM lsiobase/ubuntu:focal
ARG VERSION
ARG MODULE
# https://github.com/intel/compute-runtime/releases
ARG GMMLIB_VERSION=22.0.2
ARG IGC_VERSION=1.0.10395
ARG NEO_VERSION=22.08.22549
ARG LEVEL_ZERO_VERSION=1.3.22549
ENV \
LIBVA_DRIVERS_PATH="/usr/lib/x86_64-linux-gnu/dri" \
LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu" \
NVIDIA_DRIVER_CAPABILITIES="compute,video,utility" \
NVIDIA_VISIBLE_DEVICES="all" \
HANDBRAKE=1.5.1
ENV WEB_UI_PORT="8265" SERVER_PORT="8266" NODE_PORT="8267" PUID="1000" PGID="1000" UMASK="002" TZ="Etc/UTC" HOME="/home/Tdarr"
COPY root/ /
# handle deps
RUN apt-get update && \
apt-get install -y \
software-properties-common \
git \
trash-cli && \
mkdir -p \
/app \
/logs \
/temp \
"${HOME}" && \
useradd -u ${PUID} -U -d ${HOME} -s /bin/false Tdarr && \
usermod -G users Tdarr && \
apt-get update && apt-get install -y curl unzip mkvtoolnix libtesseract-dev && \
apt-get install --no-install-recommends --no-install-suggests -y ca-certificates gnupg wget curl && \
wget -O - https://repo.jellyfin.org/jellyfin_team.gpg.key | apt-key add - && \
echo "deb [arch=$( dpkg --print-architecture )] https://repo.jellyfin.org/$( awk -F'=' '/^ID=/{ print $NF }' /etc/os-release ) $( awk -F'=' '/^VERSION_CODENAME=/{ print $NF }' /etc/os-release ) main" | tee /etc/apt/sources.list.d/jellyfin.list && \
apt-get update && \
apt-get install -y jellyfin-ffmpeg5 && \
if uname -m | grep -q x86; then \
apt-get install --no-install-recommends --no-install-suggests -y \
mesa-va-drivers \
openssl \
locales && \
# FFmpeg
ln -s /usr/lib/jellyfin-ffmpeg/ffmpeg /usr/local/bin/ffmpeg && \
ln -s /usr/lib/jellyfin-ffmpeg/ffmpeg /usr/local/bin/tdarr-ffmpeg && \
# Intel deps
curl -s https://repositories.intel.com/graphics/intel-graphics.key | apt-key add - && \
echo 'deb [arch=amd64] https://repositories.intel.com/graphics/ubuntu focal main' > /etc/apt/sources.list.d/intel-graphics.list && \
apt-get update && \
apt-get install -y --no-install-recommends \
intel-media-va-driver-non-free \
vainfo \
mesa-va-drivers && \
# As in jellyfin container
# https://github.com/jellyfin/jellyfin/blob/master/Dockerfile#L47
mkdir intel-compute-runtime && \
cd intel-compute-runtime && \
wget https://github.com/intel/compute-runtime/releases/download/${NEO_VERSION}/intel-gmmlib_${GMMLIB_VERSION}_amd64.deb && \
wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-${IGC_VERSION}/intel-igc-core_${IGC_VERSION}_amd64.deb && \
wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-${IGC_VERSION}/intel-igc-opencl_${IGC_VERSION}_amd64.deb && \
wget https://github.com/intel/compute-runtime/releases/download/${NEO_VERSION}/intel-opencl-icd_${NEO_VERSION}_amd64.deb && \
wget https://github.com/intel/compute-runtime/releases/download/${NEO_VERSION}/intel-level-zero-gpu_${LEVEL_ZERO_VERSION}_amd64.deb && \
dpkg -i *.deb && \
cd .. && \
rm -rf intel-compute-runtime && \
# HandBrake deps
apt-get install -y \
autoconf \
automake \
autopoint \
appstream \
build-essential \
cmake \
git \
libass-dev \
libbz2-dev \
libfontconfig1-dev \
libfreetype6-dev \
libfribidi-dev \
libharfbuzz-dev \
libjansson-dev \
liblzma-dev \
libmp3lame-dev \
libnuma-dev \
libogg-dev \
libopus-dev \
libsamplerate-dev \
libspeex-dev \
libtheora-dev \
libtool \
libtool-bin \
libturbojpeg0-dev \
libvorbis-dev \
libx264-dev \
libxml2-dev \
libvpx-dev \
m4 \
make \
meson \
nasm \
ninja-build \
patch \
pkg-config \
python \
tar \
zlib1g-dev \
libva-dev \
libdrm-dev && \
rm -rdf /tmp/handbrake && \
mkdir -p /tmp/handbrake && \
git clone \
--branch ${HANDBRAKE} \
--depth 1 https://github.com/HandBrake/HandBrake.git \
/tmp/handbrake && \
cd /tmp/handbrake && \
./configure \
--enable-nvenc \
--enable-qsv \
--enable-x265 \
--disable-gtk \
--launch-jobs=14 \
--launch \
--force && \
make --directory=build install && \
cp /tmp/handbrake/build/HandBrakeCLI /usr/local/bin/HandBrakeCLI && \
rm -rdf /tmp/handbrake/ ; \
fi && \
if uname -m | grep -q aarch64; then \
apt-get install -y handbrake-cli ffmpeg && \
ln -s /usr/bin/ffmpeg /usr/local/bin/tdarr-ffmpeg ; \
fi && \
if uname -m | grep -q armv7l; then \
apt-get install -y handbrake-cli ffmpeg && \
ln -s /usr/bin/ffmpeg /usr/local/bin/tdarr-ffmpeg ; \
fi
# handle tdarr binaries
RUN if [ "$MODULE" = "Tdarr_Node" ]; then \
echo removing /tdarr_server && \
rm -rdf /etc/services.d/tdarr_server ; \
fi && \
apt-get update && apt-get install -y curl unzip mkvtoolnix libtesseract-dev && \
if uname -m | grep -q x86; then \
curl --connect-timeout 120 --retry 5 -o /tmp/$MODULE.zip -L \
"https://tdarrs.s3.us-west-000.backblazeb2.com/versions/$VERSION/linux_x64/$MODULE.zip" && \
unzip -q /tmp/$MODULE.zip -d /app/$MODULE -x *.exe && \
if [ "$MODULE" = "Tdarr_Server" ]; then \
curl --connect-timeout 120 --retry 5 -o /tmp/Tdarr_Node.zip -L \
"https://tdarrs.s3.us-west-000.backblazeb2.com/versions/$VERSION/linux_x64/Tdarr_Node.zip" && \
unzip -q /tmp/Tdarr_Node.zip -d /app/Tdarr_Node -x *.exe ; \
fi ; \
fi && \
if uname -m | grep -q aarch64; then \
curl --connect-timeout 120 --retry 5 -o /tmp/$MODULE.zip -L \
"https://tdarrs.s3.us-west-000.backblazeb2.com/versions/$VERSION/linux_arm64/$MODULE.zip" && \
unzip -q /tmp/$MODULE.zip -d /app/$MODULE -x *.exe && \
if [ "$MODULE" = "Tdarr_Server" ]; then \
curl --connect-timeout 120 --retry 5 -o /tmp/Tdarr_Node.zip -L \
"https://tdarrs.s3.us-west-000.backblazeb2.com/versions/$VERSION/linux_arm64/Tdarr_Node.zip" && \
unzip -q /tmp/Tdarr_Node.zip -d /app/Tdarr_Node -x *.exe ; \
fi ; \
fi && \
if uname -m | grep -q armv7l; then \
curl --connect-timeout 120 --retry 5 -o /tmp/$MODULE.zip -L \
"https://tdarrs.s3.us-west-000.backblazeb2.com/versions/$VERSION/linux_arm/$MODULE.zip" && \
unzip -q /tmp/$MODULE.zip -d /app/$MODULE -x *.exe && \
if [ "$MODULE" = "Tdarr_Server" ]; then \
curl --connect-timeout 120 --retry 5 -o /tmp/Tdarr_Node.zip -L \
"https://tdarrs.s3.us-west-000.backblazeb2.com/versions/$VERSION/linux_arm/Tdarr_Node.zip" && \
unzip -q /tmp/Tdarr_Node.zip -d /app/Tdarr_Node -x *.exe ; \
fi ; \
fi && \
rm -rdf /tmp/$MODULE.zip && \
rm -rdf /tmp/Tdarr_Node.zip && \
trash-empty && \
apt-get autoremove -y
EXPOSE ${NODE_PORT}
EXPOSE ${WEB_UI_PORT}
EXPOSE ${SERVER_PORT}
ENTRYPOINT ["/init"]
BTW, when you wget jellyfin-ffmpeg you forgot to delete the .deb file; this contributes an extra 43 MiB to the container:
❯ k exec -it tdarr-5cdd655cc5-sqjc9 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
# ls /
app boot command defaults etc home jellyfin-ffmpeg5_5.1.2-5-focal_amd64.deb lib32 libx32 media package root sbin sys tmp var
bin cache config dev docker-mods firstRun init lib lib64 logs mnt opt proc run srv temp usr
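The usual fix is to delete the archive in the same RUN layer that installs it, so it never lands in a committed image layer. A hedged Dockerfile sketch; the URL variable is a placeholder, not an actual build arg in the Tdarr Dockerfile:

```shell
# Download, install, and remove the package in one layer; once a layer
# is committed, deleting the file later cannot reclaim the space.
RUN wget -O /tmp/jellyfin-ffmpeg5.deb "$JELLYFIN_FFMPEG_DEB_URL" && \
    dpkg -i /tmp/jellyfin-ffmpeg5.deb && \
    rm -f /tmp/jellyfin-ffmpeg5.deb
```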
I confirm Tdarr isn't using the ffmpeg from Jellyfin. I can log in to the container and check the ffmpeg config:
root@tdarr-6764b48469-g8nrc:/# /app/Tdarr_Node/node_modules/ffmpeg-static/ffmpeg -buildconf
ffmpeg version 5.0.1-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2022 the FFmpeg developers
built with gcc 8 (Debian 8.3.0-6)
configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg
libavutil 57. 17.100 / 57. 17.100
libavcodec 59. 18.100 / 59. 18.100
libavformat 59. 16.100 / 59. 16.100
libavdevice 59. 4.100 / 59. 4.100
libavfilter 8. 24.100 / 8. 24.100
libswscale 6. 4.100 / 6. 4.100
libswresample 4. 3.100 / 4. 3.100
libpostproc 56. 3.100 / 56. 3.100
This is the server container, not the node.
@agilob please post your run command or compose. The container is using jellyfin ffmpeg (has been for the last year and a half or so).
You can simply check this by loading the container with default settings and running --help on the Help tab.
I ran a test file through the default plugins, where it shows it's using tdarr-ffmpeg, which is symlinked to the Jellyfin ffmpeg.
server:
containers:
- name: tdarr
image: ghcr.io/haveagitgat/tdarr
volumeMounts:
- name: tdarr-config
mountPath: /app/configs
- name: tdarr-server
mountPath: /app/server
- name: disk-data
mountPath: /disk
- name: nextcloud
mountPath: /nextcloud
- name: cache
mountPath: /cache
ports:
- containerPort: 8265
- containerPort: 8266
env:
- name: PGID
value: "1000"
- name: PUID
value: "1000"
- name: internalNode
value: "true"
- name: webUIPort
value: "8265"
- name: serverPort
value: "8266"
- name: serverIP
value: "0.0.0.0"
resources:
limits:
amd.com/gpu: 1 # requesting a GPU
cpu: "4"
memory: "6Gi"
requests:
cpu: "2"
memory: "2Gi"
securityContext:
privileged: true #Needed for /dev
capabilities:
drop: ["ALL"]
node:
containers:
- name: tdarr
image: ghcr.io/haveagitgat/tdarr_node
volumeMounts:
- name: tdarr-config
mountPath: /app/configs
- name: disk-data
mountPath: /disk
- name: nextcloud
mountPath: /nextcloud
- name: cache
mountPath: /cache
ports:
- containerPort: 8266
env:
- name: PGID
value: "1000"
- name: PUID
value: "1000"
- name: serverPort
value: "8266"
- name: serverIP
value: "tdarr-server"
resources:
limits:
gpu.intel.com/i915: 1
requests:
cpu: "1"
memory: "2Gi"
securityContext:
privileged: true #Needed for /dev
capabilities:
drop: ["ALL"]
My image hashes:
Image: ghcr.io/haveagitgat/tdarr
Image ID: ghcr.io/haveagitgat/tdarr:2.00.19.1/tdarr@sha256:fe3ebf399c3829da9645ff17473f00927f5135b64c9533d3533b0d2909f3a875
I haven't modified any run commands.
I can kubectl exec into an instance and run cp /usr/lib/jellyfin-ffmpeg/ffmpeg /app/Tdarr_Node/node_modules/ffmpeg-static/ffmpeg to fix encoding per instance; then I have VAAPI and AMF working.
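An alternative to copying over the bundled binary: the node config shown at the top of this issue has an ffmpegPath field, and pointing it at the Jellyfin build might achieve the same thing without modifying the image. Whether the containerized node honors this field is an assumption.

```json
{
  "ffmpegPath": "/usr/lib/jellyfin-ffmpeg/ffmpeg"
}
```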
Ok thanks, from what I can see the application isn't detecting that it's running in a container.
You can verify this by checking the Server or Node log on startup:
| [2023-03-27T11:37:52.379] [INFO] Tdarr_Server - linux_x64_docker_true
I don't see this line on the server or any node. Is there an env var I could set to enforce container mode?
@agilob sorry, was at work, but no, you can't force-set it atm. I've added an env var, inContainer, to the dev images which will force it; can try it if you like:
docker pull haveagitgat/tdarr_acc:dev_2.00.20_2023_03_27T19_19_13z
docker pull haveagitgat/tdarr_node_acc:dev_2.00.20_2023_03_27T19_19_13z
So you would set inContainer=true.
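For a plain Docker setup that would look something like the following; only the image tag comes from the pull commands above, and everything else (ports, other env vars) is illustrative:

```shell
docker run -d \
  -e inContainer=true \
  -e serverIP=0.0.0.0 -e serverPort=8266 -e webUIPort=8265 \
  -p 8265:8265 -p 8266:8266 \
  haveagitgat/tdarr_acc:dev_2.00.20_2023_03_27T19_19_13z
```

In a Kubernetes manifest like the ones above, the equivalent is adding inContainer with value "true" to the container's env list.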
The page crashes:
Failed to load resource: the server responded with a status of 404 ()
/undefinedtdarr/api/v2/cruddb:1
Failed to load resource: the server responded with a status of 404 ()
2.e35fd529.chunk.js:2 Uncaught (in promise) Error: Request failed with status code 404
at e.exports (2.e35fd529.chunk.js:2:867694)
at e.exports (2.e35fd529.chunk.js:2:1373450)
at XMLHttpRequest.w (2.e35fd529.chunk.js:2:866083)
/undefinedtdarr/socket.io/?EIO=4&transport=polling&t=OSaUaby:1
Failed to load resource: the server responded with a status of 404 ()
Think the custom URL you're using might be interfering, so try using localhost on the machine it's running on. Can also use the latest dev build, as I have done a few since then.
https://hub.docker.com/r/haveagitgat/tdarr_acc/tags
All plugins failed to load :facepalm:
I recreated the library and it's detecting ffmpeg correctly.
Don't want to spam up this issue as it's separate so please use the Discord if you need support: https://discord.com/invite/GF8X8cq
But it's failing to convert any video that worked before:
2023-03-28T06:19:59.443Z frame= 2769 fps=119 q=-0.0 size= 6912kB time=00:00:46.37 bitrate=1221.1kbits/s dup=2 drop=0 speed= 2x
2023-03-28T06:19:59.443Z frame= 2828 fps=119 q=-0.0 size= 7168kB time=00:00:47.34 bitrate=1240.3kbits/s dup=2 drop=0 speed= 2x
2023-03-28T06:19:59.443Z frame= 2889 fps=119 q=-0.0 size= 7168kB time=00:00:48.36 bitrate=1214.1kbits/s dup=2 drop=0 speed= 2x
2023-03-28T06:19:59.443Z Impossible to convert between the formats supported by the filter 'Parsed_null_0' and the filter 'auto_scale_0'
2023-03-28T06:19:59.443Z
2023-03-28T06:19:59.443Z Error reinitializing filters!
2023-03-28T06:19:59.443Z Failed to inject frame into filter network: Function not implemented
2023-03-28T06:19:59.443Z Error while processing the decoded data for stream #0:0
2023-03-28T06:19:59.443Z
2023-03-28T06:19:59.443Z Conversion failed!
2023-03-28T06:19:59.443Z
2023-03-28T06:19:59.443Z
2023-03-28T06:19:59.443Z C2UhobDakQ:Node[slave]:Worker[nifty-nag]:[-error-]
Don't want to spam up this issue as it's separate so please use the Discord if you need support:
That's what GitHub is for: support and issues. It can be searched, and our comments are in Google. Discord is not for support at all.
Discord is absolutely for support issues like this. Having back-and-forths on here over several hours/days for issues is very time consuming. GitHub is for specific technical issues. This issue is specifically for "Can Tdarr do HW transcoding AMD Radeon R9 M370X?". Discord too can be searched.
Having back and forths here over several hours/days on here for issues is very time consuming
As much as asking issue reporters to register on another chat platform to continue discussing an open issue?
This issue is specifically for Can Tdarr do HW transcoding AMD Radeon R9 M370X?
And I have an MSI Radeon R9 380; can I stay in this thread or should I open another issue?
Discord too can be searched.
Using Google, Bing, and DDG?
Was just a suggestion, as on here it's just me (got work all day, unfortunately), but on Discord there are others who can help. Don't think you even need to sign up, as it gives you a temporary user.
Added some changes so it should now work fine when using your custom url.
Can just keep it on here.
I pulled the same image tag and the GPU is working on the server and nodes, but 2/4 libraries have damaged plugins now:
Still doesn't work with non-localhost access
All GPU transcodings fail with:
2023-03-28T11:10:24.008Z Impossible to convert between the formats supported by the filter 'Parsed_null_0' and the filter 'auto_scale_0'
2023-03-28T11:10:24.008Z Error reinitializing filters!
2023-03-28T11:10:24.008Z Failed to inject frame into filter network: Function not implemented
2023-03-28T11:10:24.008Z Error while processing the decoded data for stream #0:0
2023-03-28T11:10:24.008Z Conversion failed!
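Before digging further into the filter chain, it may be worth confirming that VA-API itself initializes inside the container; vainfo is installed by the Dockerfile above (the device path is an assumption):

```shell
# Prints the loaded VA driver and the supported profiles/entrypoints.
# If this fails, every vaapi encode in ffmpeg will fail too.
vainfo --display drm --device /dev/dri/renderD128
```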
Hi, sorry I missed this, but your first issue typically happens when Tdarr can't download or extract the plugins. You can check the Tdarr server log; it should show something like:
[2023-04-02T20:23:52.398] [INFO] Tdarr_Server - Cloning plugins
[2023-04-02T20:23:52.936] [INFO] Tdarr_Server - Finished downloading plugins!
[2023-04-02T20:24:13.980] [INFO] Tdarr_Server - [21589ms]Plugin update finished
So it could either be a network issue or a permission issue.
Not sure about the second issue as I don't have any AMD GPUs.
Would need the network request info for the other error.
Can perhaps try using the 'Set Video Encoder' in a flow as it has options to auto detect which hardware encoders are available:
The Nvidia nvenc encoder mentioned in the opening post won't work on an AMD GPU.
Reopen if needed ty.