[HW Accel Support]: Minimum Nvidia driver version may not work with CUDA/NVDEC
Describe the problem you are having
I bought an old Fermi-era Tesla M2090 for use in my server. It is the best-spec card that is part of an officially supported configuration. Due to its age, the latest Nvidia driver version that supports it is 390.157, which is above the minimum driver version of 361.93 listed in the Jellyfin documentation.
Following all the documentation, I built the driver and loaded it successfully:
# nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.157 Driver Version: 390.157 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla M2090 Off | 00000000:42:00.0 Off | 0 |
| N/A N/A P0 79W / N/A | 0MiB / 5301MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
As I am on Alpine Linux (with @sgerrand's alpine-pkg-glibc), I did not want to bother trying to get libnvidia-container to compile; it is essentially just a shim that lets Docker automatically mount host binaries and libraries into the container to ensure driver compatibility, as things apparently break between dot versions. So I did what Singularity does and simply mounted the binaries and libraries manually into the container:
# cat docker-compose.yml
...
    devices:
      - /dev/nvidia0
...
      # https://github.com/sylabs/singularity/blob/main/etc/nvliblist.conf
      # binaries, only mount what's needed
      - /usr/bin/nvidia-smi:/usr/bin/nvidia-smi:ro
      - /usr/bin/nvidia-debugdump:/usr/bin/nvidia-debugdump:ro
      - /usr/bin/nvidia-persistenced:/usr/bin/nvidia-persistenced:ro
      - /usr/bin/nvidia-cuda-mps-control:/usr/bin/nvidia-cuda-mps-control:ro
      - /usr/bin/nvidia-cuda-mps-server:/usr/bin/nvidia-cuda-mps-server:ro
      # libs, only mount what exists
      - /usr/lib/libcuda.so:/usr/lib/libcuda.so.1:ro
      - /usr/lib/libEGL.so:/usr/lib/libEGL.so.1:ro
      - /usr/lib/libGLESv1_CM.so:/usr/lib/libGLESv1_CM.so.1:ro
      - /usr/lib/libGLESv2.so:/usr/lib/libGLESv2.so.1:ro
      - /usr/lib/libGL.so:/usr/lib/libGL.so.1:ro
      - /usr/lib/libGLX.so:/usr/lib/libGLX.so.1:ro
      - /usr/lib/libnvcuvid.so:/usr/lib/libnvcuvid.so.1:ro
      - /usr/lib/libnvidia-cfg.so:/usr/lib/libnvidia-cfg.so.1:ro
      - /usr/lib/libnvidia-encode.so:/usr/lib/libnvidia-encode.so.1:ro
      - /usr/lib/libnvidia-fbc.so:/usr/lib/libnvidia-fbc.so.1:ro
      - /usr/lib/libnvidia-ifr.so:/usr/lib/libnvidia-ifr.so.1:ro
      - /usr/lib/libnvidia-ml.so:/usr/lib/libnvidia-ml.so.1:ro
      - /usr/lib/libnvidia-ptxjitcompiler.so:/usr/lib/libnvidia-ptxjitcompiler.so.1:ro
      - /usr/lib/libOpenCL.so:/usr/lib/libOpenCL.so.1:ro
      - /usr/lib/libOpenGL.so:/usr/lib/libOpenGL.so.1:ro
      - /usr/lib/libvdpau_nvidia.so:/usr/lib/libvdpau_nvidia.so.1:ro
Before anyone complains: yes, this is ugly, but it does work:
# sudo docker exec -it frigate nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.157 Driver Version: 390.157 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla M2090 Off | 00000000:42:00.0 Off | 0 |
| N/A N/A P0 81W / N/A | 0MiB / 5301MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
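For reference, the mount list above can be generated mechanically from a library-name list, the way Singularity consumes nvliblist.conf. A minimal sketch (the three library names are just examples, and the /usr/lib prefix plus the .so-to-.so.1 renaming are assumptions matching the compose file above):

```shell
# Emit compose volume entries for a list of driver libraries.
# The host's unversioned .so is mounted as the .so.1 name the
# dynamic loader inside the container actually looks for.
libs="libcuda.so
libnvcuvid.so
libnvidia-ml.so"
for lib in $libs; do
  printf -- "- /usr/lib/%s:/usr/lib/%s.1:ro\n" "$lib" "$lib"
done
```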
The issue, based on what I can see from the logs, appears to stem from libnvcuvid.so missing symbols:
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264_cuvid @ 0x55aec7b97840] Cannot load cuvidGetDecodeStatus
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264_cuvid @ 0x55aec7b97840] Failed loading nvcuvid.
I believe this is because FFmpeg seems to only support Nvidia drivers newer than 520.56.06. I bought this graphics card rather than a newer one in the first place because I believed it would work despite the old driver version: the Jellyfin documentation, which Frigate points at, says it should. If it is indeed the case that older drivers will not work, then either Jellyfin's documentation needs to be updated, if it is even relevant to them (they may use an older FFmpeg version, I don't know), or Frigate's documentation needs a note that its minimum supported driver version differs from the Jellyfin docs.
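Since FFmpeg's cuvid backend resolves its symbols from libnvcuvid.so.1 at runtime, one way to confirm this diagnosis on the host is to check whether the driver's library actually exports the symbol the load failed on. A hedged sketch (assumes binutils' nm is installed on the host, and that the library path matches the mounts above):

```shell
# has_symbol LIB SYM: does the shared library export the dynamic symbol?
has_symbol() {
  nm -D "$1" 2>/dev/null | grep -q "$2"
}

# cuvidGetDecodeStatus is the symbol FFmpeg failed to load in the logs.
if has_symbol /usr/lib/libnvcuvid.so cuvidGetDecodeStatus; then
  echo "driver exports cuvidGetDecodeStatus"
else
  echo "symbol missing: this FFmpeg build cannot use NVDEC with this driver"
fi
```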
If I am wrong about any of this, please correct me. I am not at all familiar with hardware accelerated video encoding/decoding.
P.S.: As I did not install libnvidia-container, I am not able to explicitly allocate the GPU resource from the host via the deploy element in the sample docker-compose.yml from the docs. Based on my research, however, that element only directs Docker to use the Nvidia Docker runtime and to ensure the GPU is available.
So... I know this is a hacky and likely unsupported configuration, but the important part is that all the moving parts work up until the libraries are loaded, at which point the error does not appear to come from my hacks, but from the driver being unsupported by upstream libraries.
One possibility would be to use Nvidia's VDPAU implementation, which based on my reading has been shown to work with FFmpeg on similar-generation cards, but Frigate's FFmpeg is not compiled with --enable-vdpau, so I am not even able to test it.
As it stands, I am probably going to return this card and get an AMD one, officially supported configuration or not...
Version
0.11.1-2eada21
Frigate config file
ffmpeg:
  hwaccel_args: []
  input_args: -avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -rtsp_transport tcp -timeout 5000000 -use_wallclock_as_timestamps 1
  output_args:
    detect: -f rawvideo -pix_fmt yuv420p
    record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -c:a aac
    rtmp: -c copy -f flv
cameras:
  cam1:
    ffmpeg:
      hwaccel_args: -loglevel debug -c:v h264_cuvid
    inputs:
      - path: rtsp://cam1:554/h264Preview_01_main
        roles:
          - record
          - rtmp
      - path: rtsp://cam1:554/h264Preview_01_sub
        roles:
          - detect
    detect:
      width: 640
      height: 480
docker-compose file or Docker CLI command
version: "3.9"
services:
  frigate:
    container_name: frigate
    privileged: true
    restart: unless-stopped
    image: blakeblackshear/frigate:stable
    shm_size: "128mb"
    devices:
      - /dev/bus/usb:/dev/bus/usb
      - /dev/nvidia0
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./config.yml:/config/config.yml:ro
      - /media/plains/frigate:/media/frigate
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000
      # https://github.com/sylabs/singularity/blob/main/etc/nvliblist.conf
      # binaries, only mount what's needed
      - /usr/bin/nvidia-smi:/usr/bin/nvidia-smi:ro
      - /usr/bin/nvidia-debugdump:/usr/bin/nvidia-debugdump:ro
      - /usr/bin/nvidia-persistenced:/usr/bin/nvidia-persistenced:ro
      - /usr/bin/nvidia-cuda-mps-control:/usr/bin/nvidia-cuda-mps-control:ro
      - /usr/bin/nvidia-cuda-mps-server:/usr/bin/nvidia-cuda-mps-server:ro
      # libs, only mount what exists
      - /usr/lib/libcuda.so:/usr/lib/libcuda.so:ro
      #- /usr/lib/libEGL_installertest.so:/usr/lib/libEGL_installertest.so.1:ro
      #- /usr/lib/libEGL_nvidia.so:/usr/lib/libEGL_nvidia.so.1:ro
      - /usr/lib/libEGL.so:/usr/lib/libEGL.so.1:ro
      #- /usr/lib/libGLdispatch.so:/usr/lib/libGLdispatch.so.1:ro
      #- /usr/lib/libGLESv1_CM_nvidia.so:/usr/lib/libGLESv1_CM_nvidia.so.1:ro
      - /usr/lib/libGLESv1_CM.so:/usr/lib/libGLESv1_CM.so.1:ro
      #- /usr/lib/libGLESv2_nvidia.so:/usr/lib/libGLESv2_nvidia.so.1:ro
      - /usr/lib/libGLESv2.so:/usr/lib/libGLESv2.so.1:ro
      - /usr/lib/libGL.so:/usr/lib/libGL.so.1:ro
      #- /usr/lib/libGLX_installertest.so:/usr/lib/libGLX_installertest.so.1:ro
      #- /usr/lib/libGLX_nvidia.so:/usr/lib/libGLX_nvidia.so.1:ro
      #- /usr/lib/libglx.so:/usr/lib/libglx.so.1:ro
      - /usr/lib/libGLX.so:/usr/lib/libGLX.so.1:ro
      - /usr/lib/libnvcuvid.so:/usr/lib/libnvcuvid.so.1:ro
      #- /usr/lib/libnvidia-cbl.so:/usr/lib/libnvidia-cbl.so.1:ro
      - /usr/lib/libnvidia-cfg.so:/usr/lib/libnvidia-cfg.so.1:ro
      #- /usr/lib/libnvidia-compiler.so:/usr/lib/libnvidia-compiler.so.1:ro
      #- /usr/lib/libnvidia-eglcore.so:/usr/lib/libnvidia-eglcore.so.1:ro
      #- /usr/lib/libnvidia-egl-wayland.so:/usr/lib/libnvidia-egl-wayland.so.1:ro
      - /usr/lib/libnvidia-encode.so:/usr/lib/libnvidia-encode.so.1:ro
      #- /usr/lib/libnvidia-fatbinaryloader.so:/usr/lib/libnvidia-fatbinaryloader.so.1:ro
      - /usr/lib/libnvidia-fbc.so:/usr/lib/libnvidia-fbc.so.1:ro
      #- /usr/lib/libnvidia-glcore.so:/usr/lib/libnvidia-glcore.so.1:ro
      #- /usr/lib/libnvidia-glsi.so:/usr/lib/libnvidia-glsi.so.1:ro
      #- /usr/lib/libnvidia-glvkspirv.so:/usr/lib/libnvidia-glvkspirv.so.1:ro
      #- /usr/lib/libnvidia-gtk2.so:/usr/lib/libnvidia-gtk2.so.1:ro
      #- /usr/lib/libnvidia-gtk3.so:/usr/lib/libnvidia-gtk3.so.1:ro
      - /usr/lib/libnvidia-ifr.so:/usr/lib/libnvidia-ifr.so.1:ro
      - /usr/lib/libnvidia-ml.so:/usr/lib/libnvidia-ml.so.1:ro
      #- /usr/lib/libnvidia-opencl.so:/usr/lib/libnvidia-opencl.so.1:ro
      #- /usr/lib/libnvidia-opticalflow.so:/usr/lib/libnvidia-opticalflow.so.1:ro
      - /usr/lib/libnvidia-ptxjitcompiler.so:/usr/lib/libnvidia-ptxjitcompiler.so.1:ro
      #- /usr/lib/libnvidia-rtcore.so:/usr/lib/libnvidia-rtcore.so.1:ro
      #- /usr/lib/libnvidia-tls.so:/usr/lib/libnvidia-tls.so.1:ro
      #- /usr/lib/libnvidia-wfb.so:/usr/lib/libnvidia-wfb.so.1:ro
      #- /usr/lib/libnvoptix.so.1:/usr/lib/libnvoptix.so.1.1:ro
      - /usr/lib/libOpenCL.so:/usr/lib/libOpenCL.so.1:ro
      - /usr/lib/libOpenGL.so:/usr/lib/libOpenGL.so.1:ro
      - /usr/lib/libvdpau_nvidia.so:/usr/lib/libvdpau_nvidia.so.1:ro
      #- /usr/lib/nvidia_drv.so:/usr/lib/nvidia_drv.so.1:ro
      #- /usr/lib/tls_test_.so:/usr/lib/tls_test_.so.1:ro
    ports:
      - "127.0.0.1:5000:5000"
      - "127.0.0.1:1935:1935" # RTMP feeds
    extra_hosts: ["host.docker.internal:host-gateway"]
Relevant log output
sudo docker logs --tail=10 -f frigate
[2022-12-14 20:12:02] frigate.app INFO : Starting Frigate (0.11.1-2eada21)
[2022-12-14 20:12:22] watchdog.cam1 ERROR : Ffmpeg process crashed unexpectedly for cam1.
[2022-12-14 20:12:22] watchdog.cam1 ERROR : The following ffmpeg logs include the last 100 lines prior to exit.
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : c=IN IP4 0.0.0.0
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : b=AS:500
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : a=rtpmap:96 H264/90000
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : a=fmtp:96 packetization-mode=1;profile-level-id=640033;sprop-parameter-sets=Z2QAM6wVFKCgPZA=,aO48sA==
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : a=control:track1
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : m=audio 0 RTP/AVP 97
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : c=IN IP4 0.0.0.0
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : b=AS:256
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : a=rtpmap:97 MPEG4-GENERIC/16000
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : a=fmtp:97 streamtype=5;profile-level-id=1;mode=AAC-hbr;sizelength=13;indexlength=3;indexdeltalength=3;config=1408
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : a=control:track2
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR :
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Failed to parse interval end specification ''
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] video codec set to: h264
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] RTP Packetization Mode: 1
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] RTP Profile IDC: 64 Profile IOP: 0 Level: 33
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] Extradata set to 0x55aec7b7db00 (size: 23)
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] audio codec set to: aac
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] audio samplerate set to: 16000
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] audio channels set to: 1
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] setting jitter buffer size to 0
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Last message repeated 1 times
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] hello state=0
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Failed to parse interval end specification ''
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264 @ 0x55aec7b81a00] nal_unit_type: 7(SPS), nal_ref_idc: 3
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264 @ 0x55aec7b81a00] nal_unit_type: 8(PPS), nal_ref_idc: 3
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264 @ 0x55aec7b81a00] nal_unit_type: 7(SPS), nal_ref_idc: 3
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264 @ 0x55aec7b81a00] nal_unit_type: 8(PPS), nal_ref_idc: 3
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] DTS discontinuity in stream 1: packet 3 with DTS 26737125184936, packet 4 with DTS 26737125187534
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264 @ 0x55aec7b81a00] nal_unit_type: 7(SPS), nal_ref_idc: 3
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264 @ 0x55aec7b81a00] nal_unit_type: 8(PPS), nal_ref_idc: 3
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264 @ 0x55aec7b81a00] nal_unit_type: 5(IDR), nal_ref_idc: 3
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264 @ 0x55aec7b81a00] Format yuv420p chosen by get_format().
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264 @ 0x55aec7b81a00] Reinit context to 640x480, pix_fmt: yuv420p
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264 @ 0x55aec7b81a00] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 1
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Last message repeated 5 times
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] max_analyze_duration 5000000 reached at 5056000 microseconds st:1
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 2.916667 0.009279
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 3.000000 0.008424
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 6.750000 0.009708
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 6.833333 0.010258
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Last message repeated 1 times
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 9.666667 0.010239
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 9.750000 0.000507
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Last message repeated 1 times
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 9.833333 0.009628
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Last message repeated 1 times
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 12.666667 0.009315
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 12.750000 0.008155
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 16.500000 0.012005
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Last message repeated 1 times
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 16.583333 0.012250
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 19.416667 0.012065
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Last message repeated 1 times
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 19.500000 0.002028
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Last message repeated 1 times
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 19.583333 0.010844
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Last message repeated 1 times
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 22.416667 0.010364
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Last message repeated 1 times
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 22.500000 0.008899
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Last message repeated 1 times
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 26.250000 0.015317
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 26.333333 0.015256
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 29.166667 0.014904
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 29.250000 0.004562
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 29.333333 0.013074
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 36.000000 0.019642
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 39.000000 0.008111
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [rtsp @ 0x55aec7b7ab80] rfps: 42.000000 0.013429
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Input #0, rtsp, from 'rtsp://cam1:554/h264Preview_01_sub':
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Metadata:
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : title : Session streamed by "preview"
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : comment : h264Preview_01_sub
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Duration: N/A, start: 1671070323.868938, bitrate: N/A
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Stream #0:0, 14, 1/90000: Video: h264 (High), 1 reference frame, yuv420p(progressive), 640x480, 0/1, 9.75 tbr, 90k tbn
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Stream #0:1, 81, 1/16000: Audio: aac (LC), 16000 Hz, mono, fltp
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Successfully opened the file.
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Parsing a group of options: output url pipe:.
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Applying option r (set frame rate (Hz value, fraction or abbreviation)) with argument 5.
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Applying option s (set frame size (WxH or abbreviation)) with argument 640x480.
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Applying option f (force format) with argument rawvideo.
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Applying option pix_fmt (set pixel format) with argument yuv420p.
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Successfully parsed a group of options.
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Opening an output file: pipe:.
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [pipe @ 0x55aec7cff400] Setting default whitelist 'crypto,data'
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Successfully opened the file.
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264_mp4toannexb @ 0x55aec7b83980] The input looks like it is Annex B already
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264_cuvid @ 0x55aec7b97840] Format nv12 chosen by get_format().
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264_cuvid @ 0x55aec7b97840] Loaded lib: libnvcuvid.so.1
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264_cuvid @ 0x55aec7b97840] Loaded sym: cuvidGetDecoderCaps
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264_cuvid @ 0x55aec7b97840] Loaded sym: cuvidCreateDecoder
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264_cuvid @ 0x55aec7b97840] Loaded sym: cuvidDestroyDecoder
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264_cuvid @ 0x55aec7b97840] Loaded sym: cuvidDecodePicture
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264_cuvid @ 0x55aec7b97840] Cannot load cuvidGetDecodeStatus
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [h264_cuvid @ 0x55aec7b97840] Failed loading nvcuvid.
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Stream mapping:
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Stream #0:0 -> #0:0 (h264 (h264_cuvid) -> rawvideo (native))
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : Error while opening decoder for input stream #0:0 : Operation not permitted
[2022-12-14 20:12:22] ffmpeg.cam1.detect ERROR : [AVIOContext @ 0x55aec7d20cc0] Statistics: 0 bytes written, 0 seeks, 0 writeouts
FFprobe output from your camera
ffprobe version n5.1-2-g915ef932a3-20220731 Copyright (c) 2007-2022 the FFmpeg developers
built with gcc 12.1.0 (crosstool-NG 1.25.0.55_3defb7b)
configuration: --prefix=/ffbuild/prefix --pkg-config-flags=--static --pkg-config=pkg-config --cross-prefix=x86_64-ffbuild-linux-gnu- --arch=x86_64 --target-os=linux --enable-gpl --enable-version3 --disable-debug --enable-iconv --enable-libxml2 --enable-zlib --enable-libfreetype --enable-libfribidi --enable-gmp --enable-lzma --enable-fontconfig --enable-libvorbis --enable-opencl --enable-libpulse --enable-libvmaf --enable-libxcb --enable-xlib --enable-amf --enable-libaom --enable-libaribb24 --enable-avisynth --enable-libdav1d --enable-libdavs2 --disable-libfdk-aac --enable-ffnvcodec --enable-cuda-llvm --enable-frei0r --enable-libgme --enable-libass --enable-libbluray --enable-libjxl --enable-libmp3lame --enable-libopus --enable-mbedtls --enable-librist --enable-libtheora --enable-libvpx --enable-libwebp --enable-lv2 --enable-libmfx --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-librav1e --enable-librubberband --disable-schannel --enable-sdl2 --enable-libsoxr --enable-libsrt --enable-libsvtav1 --enable-libtwolame --enable-libuavs3d --enable-libdrm --enable-vaapi --enable-libvidstab --enable-vulkan --enable-libshaderc --enable-libplacebo --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libzimg --enable-libzvbi --extra-cflags=-DLIBTWOLAME_STATIC --extra-cxxflags= --extra-ldflags=-pthread --extra-ldexeflags=-pie --extra-libs='-ldl -lgomp' --extra-version=20220731
libavutil 57. 28.100 / 57. 28.100
libavcodec 59. 37.100 / 59. 37.100
libavformat 59. 27.100 / 59. 27.100
libavdevice 59. 7.100 / 59. 7.100
libavfilter 8. 44.100 / 8. 44.100
libswscale 6. 7.100 / 6. 7.100
libswresample 4. 7.100 / 4. 7.100
libpostproc 56. 6.100 / 56. 6.100
Input #0, rtsp, from 'rtsp://cam1:554/h264Preview_01_sub':
Metadata:
title : Session streamed by "preview"
comment : h264Preview_01_sub
Duration: N/A, start: 0.000313, bitrate: N/A
Stream #0:0: Video: h264 (High), yuv420p(progressive), 640x480, 90k tbr, 90k tbn
Stream #0:1: Audio: aac (LC), 16000 Hz, mono, fltp
Operating system
Other Linux
Install method
Docker Compose
Network connection
Mixed
Camera make and model
Reolink RLC-542WA
Any other information that may be helpful
No response
We had a LOT of trouble finding a workable ffmpeg version that fit as many use cases as possible during the 0.11 beta, so I don't think we're looking to change that or compile it ourselves, especially given the nicheness of the use case.
You're more than welcome to mount your own (or any other) build of ffmpeg: https://docs.frigate.video/configuration/advanced#custom-ffmpeg-build
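As a rough sketch, that amounts to one extra volume entry in the compose file. The host path here is an example, and the container-side path is an assumption; confirm the correct location against the linked docs for your Frigate version:

```yaml
# Hedged sketch: overlay a custom/static ffmpeg binary over the one the
# container ships. Verify the container-side path in the custom-ffmpeg docs.
services:
  frigate:
    volumes:
      - /opt/ffmpeg-static/ffmpeg:/usr/lib/btbn-ffmpeg/bin/ffmpeg:ro
```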
It's less about tweaks to Frigate and more that I don't know for sure whether things are as I described. I am mildly confident in my hunch, but it is still just that. If I'm correct, the best way to keep someone from making the same mistakes as me in the future would be to update the most obvious parts of the docs.
I have no plans to look for an ancient [statically compiled] ffmpeg, or to do something as extreme as building a version that supports my ancient driver, when I could easily get an AMD card for as cheap as or cheaper than what I paid for the Nvidia one, and have it work with less of a headache as well.
Adding --enable-vdpau to official builds would be super cool though, as long as it wouldn't break anything. I'll assume there is a reason it's disabled.
More realistically, I could find or build an ffmpeg with VDPAU support to mount and test, but I don't know for sure that it will work at all; I am weighing whether it is worth the hassle beyond what I've already done.
To be clear, I'm just suggesting that's how it could be done; given the age of that card, I'm not sure the results would be worth the effort.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I made a PR for this.
Sorry to comment on a nearly closed thread. Wasn't sure how else I could ask.
@sevmonster I assume you're using an older server like mine (PowerEdge R620 here). Just wondering if you found a workable card and, if so, if you'd mind sharing your compose and config. I made the same mistake buying an old Fermi card because the docs looked like it would support it. $20 for the card so isn't worth the effort/cost to return it but I'd rather not buy 5 or 6 more cards trying to find one that works. :) Appreciate any pointers.
Sorry, I haven't tried to get a new card. My CPU can eat the difference so it's not a big deal. Any GPU that FFmpeg supports should be fine. For the record, I recorded my trials and tribulations here.
As mentioned elsewhere, you can always mount an old [statically compiled] version of ffmpeg that supports your driver version. It's more effort and you'll likely miss out on newer FFmpeg performance and features, but it will work.
No worries. Appreciate the response and the link. I'm thinking the same thing, at this point. I've spent too many hours trying to get this going and I've got extra CPU cycles to waste so I may just let it ride with the CPU's. I was just hoping to get the noise from the fans down a bit more.
As for the old ffmpeg, I'm beyond my element in Linux as it is so I'm probably better off just letting it be before I break anything and have to rebuild the box, at least for now.
Thanks again!
If you can find an old statically compiled ffmpeg for your architecture, it is as easy as mounting it as a volume over top of the Frigate ffmpeg. In theory it should "just work". But you would probably want to figure out the newest ffmpeg version that still supports your driver, which would probably be the least fun part.
I didn't go that route simply because I didn't want to mess with it :)