Tdarr
Tdarr can 'skip' processing a plugin if both CPU and GPU workers are enabled
Describe the bug
Under certain circumstances, Tdarr can 'skip' or 'bypass' a plugin if both CPU and GPU workers are enabled for the node.
To Reproduce
I have two plugins. The first is a modified version of 'Tdarr_Plugin_s7x9_winsome_h265_nvenc', built specifically to transcode AVI files (avi/mpeg4/mp3, with no audio language metadata) to mkv/hevc/aac-lc (with eng language metadata). This must be done in two passes:
PASS 1: 'BTM---Tdarr_Plugin_avi2mkv_h265_aac_nvenc' transcodes to mkv/hevc/mp3. This pass uses response.preset = '-Z "H.265 MKV 1080p30" --all-audio --all-subtitles -e nvenc_h265'; It takes about 1.5 minutes, so it is easy to monitor, showing plugin 1/1 each time.
PASS 2: 'BTM---Tdarr_Plugin_avi2mkv_h265_aac_nvenc' transcodes to mkv/hevc/aac-lc and adds 'eng' and 'mono' language meta-tags to the aac stream. This pass uses response.preset = ',-c copy -map 0:v -map 0:1 -metadata:s:a:0 language=eng -c:1 ${preferredCodec} -map 0:s? -map 0:d? -max_muxing_queue_size 9999'; It also takes about 1.5 minutes and is easy to monitor, showing plugin 1/1 each time.
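For context, the two passes above could be sketched as a single classic-style Tdarr plugin that picks its preset per run. This is a hypothetical illustration, not the author's actual plugin: the `plugin(file)`/`response` shape and `file.ffProbeData.streams` follow the usual classic-plugin pattern, and the hardcoded 'aac' stands in for the original's ${preferredCodec}.

```javascript
// Hypothetical sketch of a two-pass classic Tdarr plugin (NOT the
// reporter's actual code). Pass 1 fires while the container is still
// .avi; pass 2 fires once the file is remuxed but the audio stream
// still lacks an 'eng' language tag.
function plugin(file) {
  const response = {
    processFile: false,
    preset: '',
    container: '.mkv',
    infoLog: '',
  };

  // Find the first audio stream via the standard ffprobe data shape.
  const audio = (file.ffProbeData.streams || []).find(
    (s) => s.codec_type === 'audio',
  );

  if (file.container === 'avi') {
    // PASS 1: AVI -> MKV/HEVC via NVENC, keeping all audio/subtitles.
    response.processFile = true;
    response.preset =
      '-Z "H.265 MKV 1080p30" --all-audio --all-subtitles -e nvenc_h265';
    response.infoLog += 'Pass 1: AVI -> MKV/HEVC via NVENC\n';
  } else if (audio && !(audio.tags && audio.tags.language === 'eng')) {
    // PASS 2: copy video, re-encode audio ('aac' assumed here in place
    // of the original's ${preferredCodec}), tag audio stream 0 as eng.
    response.processFile = true;
    response.preset =
      ',-c copy -map 0:v -map 0:1 -metadata:s:a:0 language=eng '
      + '-c:1 aac -map 0:s? -map 0:d? -max_muxing_queue_size 9999';
    response.infoLog += 'Pass 2: tag audio eng, transcode to AAC\n';
  } else {
    response.infoLog += 'Nothing to do\n';
  }

  return response;
}
```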
If Tdarr is configured with only this plugin, it works perfectly on my 5 test files and other similar files I tested. However, if I enable the community plugin 'Tdarr_Plugin_MC93_Migz3CleanAudio' downstream (meant to clean up non-avi/mp3 files), Tdarr processes about half of the files properly but fails the others with the following log entries (truncated to conserve bits):
Safety check [-error-]: The new transcode arguments were the exact same as the last ones, meaning the file/worker would most likely be stuck in an infinite transcode loop if not stopped.
Last arguments: , -map 0 -metadata:s:a:0 language=eng -c copy -max_muxing_queue_size 9999 in .avi
New arguments: , -map 0 -metadata:s:a:0 language=eng -c copy -max_muxing_queue_size 9999 in .avi
Plugin: Community Tdarr_Plugin_MC93_Migz3CleanAudio
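Judging by that message (inferred from the log text alone, not from Tdarr's source), the safety check compares the arguments a plugin just produced against the arguments from the previous run and aborts on an exact match, since re-running identical arguments would transcode forever. A minimal sketch of that idea, with hypothetical names:

```javascript
// Hypothetical reconstruction of the safety check described in the log:
// identical consecutive transcode arguments imply an infinite loop.
function safetyCheck(lastArgs, newArgs) {
  if (lastArgs !== null && lastArgs.trim() === newArgs.trim()) {
    return {
      abort: true,
      reason: 'identical transcode arguments; probable infinite loop',
    };
  }
  return { abort: false };
}
```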
Watching Tdarr process these files, you can see that the failed files never show the expected plugin 1/2 running but instead move directly to 2/2. This fails because plugin 2 attempts to add an eng tag to the avi/mpeg4/mp3 file, which is unsupported in this configuration. The files which pass through the first plugin properly already have an eng meta-tag when they reach the second plugin, so they pass through #2 quickly.
Expected behavior
I am, as yet, unable to determine why Tdarr appears to 'skip' plugin one on some of the files, but I know that it does so only when both CPU and GPU workers are enabled, and it does not skip all files. If we assume that plugin processing is cyclical and linear, meaning that each file is processed from top to bottom with no plugin ever 'skipped', then for some reason a decision is being made to bypass the first plugin. If a plugin runs and requires an unavailable worker type (which appears to be what is happening), shouldn't the file be queued, waiting for a free worker of that type? Given my limited understanding, the behavior I am seeing is absolutely NOT expected.
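To make the expectation concrete, here is a sketch of the top-to-bottom model described above. This is hypothetical (the function and field names are invented, not Tdarr internals): a file walks the plugin stack in order, and a plugin that needs a busy worker type causes the file to queue rather than being skipped.

```javascript
// Hypothetical model of the expected scheduling behavior: never skip a
// plugin that wants to process the file; queue if its worker type is busy.
function nextAction(plugins, file, freeWorkerTypes) {
  for (const p of plugins) {
    const res = p.plugin(file);
    if (!res.processFile) continue; // plugin declines; fall through to next
    if (!freeWorkerTypes.includes(p.workerType)) {
      return { action: 'queue', plugin: p.name }; // wait for a worker, never skip
    }
    return { action: 'process', plugin: p.name };
  }
  return { action: 'done' }; // no plugin wants the file
}
```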
Screenshots
Please provide the following information:
- Config files: Tdarr_Node_Config.json.txt, Tdarr_Server_Config.json.txt
- Log files [can be found in /app/logs/ when using Docker or in the /logs folder next to Tdarr_Updater if not using Docker]: faillog.txt, passlog.txt
- Worker error [can be found on the 'Tdarr' tab by pressing the 'i' button on a failed item in the staged file section or in the transcode error section at the bottom]: (see logs)
- OS: Windows 10 Version 10.0.19044 (64-bit) Build 19044
- Browser: Chrome Version 105.0.5195.126 (Official Build) (64-bit)
- Version: 2.00.18
Additional context
BTM---Tdarr_Plugin_avi2mkv_h265_aac_nvenc.js.txt