Ability to record sub stream along with main stream
Describe what you are trying to accomplish and why in non-technical terms
I have cameras with very high resolutions. I want to do continuous recording from the lower-resolution input stream, but whenever a detection occurs, I want the associated clips to be taken from the high-resolution input stream instead of the lower-resolution one.
Describe the solution you'd like
A new camera role called something like "Clips" would allow users to choose which input stream to use for generating clips instead of assuming that this should come from the continuous recording role.
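A minimal sketch of what that could look like in the camera config. The "clips" role name, stream URLs, and retained roles are illustrative, not existing Frigate options:

ffmpeg:
  inputs:
    - path: rtsp://192.168.0.2:5678    # low-res sub stream
      roles:
        - detect
        - record                       # continuous recording stays on the sub stream
    - path: rtsp://192.168.0.2:1234    # high-res main stream
      roles:
        - clips                        # hypothetical role: detection clips would be cut from this stream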
Describe alternatives you've considered
I could duplicate the camera in Frigate to achieve this; however, this would also mean running detection twice for the same stream, which is highly undesirable.
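For reference, the duplicate-camera workaround would look roughly like the sketch below (camera names and retain keys are illustrative and may differ by Frigate version). The high-res copy needs its own detect role so that event recordings come from it, which is where the doubled detection load comes from:

cameras:
  front_sub:
    ffmpeg:
      inputs:
        - path: rtsp://192.168.0.2:5678    # sub stream
          roles:
            - detect
            - record                       # continuous 24/7 recording
  front_main:
    ffmpeg:
      inputs:
        - path: rtsp://192.168.0.2:1234    # main stream
          roles:
            - detect                       # detection runs a second time on the same scene
            - record
    record:
      enabled: true
      retain:
        days: 0                            # no continuous retention on the high-res copy
      alerts:
        retain:
          days: 10                         # keep only alert recordings from the main stream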
I could also do continuous recording of the high-resolution input stream, but that would consume a very large amount of disk space, which is not ideal.
+1 for this. It would also help when using Doubletake and Compreface. I have it up and running, but the resolution of the clip used is too low for face detection, so there are too many incorrect results. The same goes for using the clip for a doorbell notification.
My understanding is that Doubletake and Compreface use still images for detection, not video clips. This won't help increase the resolution of snapshots.
Yes, but doubletake grabs the snapshot from Frigate and passes that to compreface. The snapshot resolution comes from the substream, because that is the stream used for detection. It would be awesome if we could pass a snapshot from the main stream.
/edit I now see my mistake, I mixed up clips (video) and snapshots (stills).
Below is a proposed config to implement continuous recording of the sub stream and recording only active_objects from the main stream. Of course, while I see this as easy to configure from the user's side, I have no idea whether it would work behind the scenes. Just for consideration:
go2rtc:
  streams:
    camera_one_main: rtsp://192.168.0.2:1234  # camera one main stream
    camera_one_sub: rtsp://192.168.0.2:5678   # camera one sub stream

cameras:
  camera_one:
    enabled: true
    ffmpeg:
      inputs:
        - path: rtsp://localhost:8554/camera_one_main
          roles:
            - record
            - audio
        - path: rtsp://localhost:8554/camera_one_sub
          roles:
            - detect
            - record
            - audio
    record:
      enabled: true
      retain:
        mode: all
        days: -1  # '-1' to record continuously (or until storage is consumed, deleting the oldest 2 hours when 1 hr remains), or state a number of days as is currently implemented
        stream: camera_one_sub
      alerts:
        mode: active_objects
        days: -1
        stream: camera_one_main
My issue is streaming high-res from my house with only 11 Mb/s upload speed; clips can buffer for 20-30 seconds before playback starts. Basically, as it sits now, I can't use my 4K cameras to their full potential.
My idea for this implementation would be the ability to have a high-res recording accessible on the LAN, and a secondary low-res recording for remote viewing. Possibly some logic to detect if I'm on the LAN or remote when presenting the stream.
A secondary option would be on-the-fly transcoding from high-res to low-res when remote viewing, possibly with logic to detect when streaming is too slow. Seems like this is how some other NVR software handles it.
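For the live-view side of this, go2rtc (which Frigate bundles) can already expose a transcoded low-bitrate copy of the main stream. A rough sketch, assuming the ffmpeg source options shown here (verify the exact syntax against the go2rtc docs for your version), and noting this covers live streams only, not playback of recorded clips:

go2rtc:
  streams:
    camera_one_main: rtsp://192.168.0.2:1234
    # transcoded copy for remote viewing; '#video=h264#width=1280' is an assumption,
    # check the go2rtc ffmpeg source documentation for the exact options
    camera_one_remote:
      - ffmpeg:camera_one_main#video=h264#width=1280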
This is probably the top feature I feel is missing from Frigate. I think the implementation suggested in the OP or the one above by @partytimeexcellent would be fantastic. If it were implemented via on-the-fly transcoding, however, that would be a somewhat mutually exclusive feature, as it wouldn't save disk space like the other suggestions.
For me, disk space is cheap. What I desire is eliminating as much buffering/clip loading time as possible while maximizing viewing resolution.
Has anyone figured out a way to always record the sub stream and use the main stream only for events? Or do the devs have any plans to implement this?
Yes, that's why it's pinned