
0.12.0 Release

Open blakeblackshear opened this issue 2 years ago • 48 comments

For this release, the goals are to focus on stability, diagnostics, and device compatibility. No promises, but these are the guardrails.

Docs preview: https://deploy-preview-4055--frigate-docs.netlify.app/

Stability

  • [x] Proper daylight saving time fix by switching to storing files with unix timestamps rather than YYYY-HH folders (client should specify timezone in API calls to avoid any assumptions about timezone on backend)
  • [x] Ensure recording durations are a reasonable amount of time to avoid corruption
  • [x] Incorporate go2rtc to get rid of common RTMP related issues and provide new live view options
  • [x] Ensure Frigate doesn't fill up storage and crash
  • [x] Hwaccel presets so defaults can be updated without requiring users to change config files
  • [x] Attempt to replace the VOD module with a direct m3u8 playlist builder
  • [x] Make MQTT optional and more gracefully handle connection failures and reconnects
  • [x] Detect if segments stop being written to tmp when record is enabled and restart the ffmpeg process responsible for record

Diagnostics/Troubleshooting

  • [x] CPU/GPU stats per process per camera
  • [x] Storage information (per camera too)
  • [x] Logs in UI
  • [x] Logs from nginx/ffmpeg/go2rtc etc more easily available
  • [x] Simple pattern matching for secret scrubbing in logs, etc
  • [ ] GitHub Action to scrub secrets in issues
  • [x] Ability to execute ffprobe/vainfo etc. in the container and get output for diagnosis
  • [x] Replace green screen with helpful placeholder message
  • [x] Error out on duplicate keys in the config file (see #4213)
  • [x] More helpful output from config validation errors for things like extra keys
  • [x] Show yaml config in the UI to reduce posts with json config

Device support

  • [x] TensorRT
  • [x] OpenVINO

Known issues

  • [x] BASE_PATH not being properly replaced in the Monaco editor
  • [x] When watching MSE live view on Android, if you scroll down past the video and then back up, playback is broken
  • [x] iOS does not support MSE; instead of loading forever, it should show an error message
  • [x] Selecting an hour in recordings playback starts playback ~10 minutes before the hour (in America/Chicago, possibly others)

blakeblackshear avatar Oct 09 '22 11:10 blakeblackshear

Deploy Preview for frigate-docs ready!

Latest commit: 0e61ea77230bd31192ed0b5c2639e59caca7ae45
Latest deploy log: https://app.netlify.com/sites/frigate-docs/deploys/643161a6ad0c5d000821847b
Deploy Preview: https://deploy-preview-4055--frigate-docs.netlify.app

netlify[bot] avatar Oct 09 '22 11:10 netlify[bot]

Proper daylight saving time fix by switching to storing files with unix timestamps rather than YYYY-HH folders (client should specify timezone in API calls to avoid any assumptions about timezone on backend)

What about keeping YYYY-HH folders but using UTC instead to avoid DST issues?

pdecat avatar Oct 09 '22 14:10 pdecat

What about keeping YYYY-HH folders but using UTC instead to avoid DST issues?

That could work too

blakeblackshear avatar Oct 09 '22 15:10 blakeblackshear
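
For illustration (not Frigate's actual code), a minimal Python sketch of the UTC-folder idea: derive the folder name from a segment's unix timestamp in UTC, so no hour is ever repeated or skipped the way local-time folder names are around DST transitions. The helper name and layout are hypothetical:

    from datetime import datetime, timezone
    import time

    def utc_folder(epoch_seconds: float) -> str:
        # Hypothetical helper: map a segment's unix timestamp to a
        # YYYY-MM-DD/HH folder name in UTC. UTC has no DST transitions,
        # so the mapping is unambiguous year-round.
        ts = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
        return ts.strftime("%Y-%m-%d/%H")

    print(utc_folder(time.time()))  # e.g. "2022-10-09/15"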

I like Frigate's feature where the camera's stream fills the computer screen when you click on the image of the camera's live view. This feature would really be great if it would switch over to the camera's stream identified as "rtmp" in the config file when it does this. For example, I use the low res stream for "detect", but use the high res stream for "rtmp". I would like to see the high res stream when I click on the live image to fill the screen. Would this be possible given that you're planning to incorporate go2rtc?

Cold-Lemonade avatar Oct 10 '22 02:10 Cold-Lemonade

I like Frigate's feature where the camera's stream fills the computer screen when you click on the image of the camera's live view. This feature would really be great if it would switch over to the camera's stream identified as "rtmp" in the config file when it does this. For example, I use the low res stream for "detect", but use the high res stream for "rtmp". I would like to see the high res stream when I click on the live image to fill the screen. Would this be possible given that you're planning to incorporate go2rtc?

Yes. The current live view requires decoding the video stream which would require lots of CPU. go2rtc would allow a direct passthrough of the video to the frontend so the higher resolution can be used.

blakeblackshear avatar Oct 10 '22 11:10 blakeblackshear
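
A hedged sketch of how this could look with go2rtc integrated: the camera keeps its low-res detect stream, while the fullscreen live view points at a high-res go2rtc restream. All stream names and URLs below are placeholders, and the exact keys are whatever the final 0.12 docs specify:

    go2rtc:
      streams:
        front_door_hd:                    # high-res restream for live view
          - rtsp://user:pass@192.168.1.10:554/main
    cameras:
      front_door:
        ffmpeg:
          inputs:
            - path: rtsp://user:pass@192.168.1.10:554/sub   # low-res stream
              roles:
                - detect
        live:
          stream_name: front_door_hd      # fullscreen view uses the restream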

Yes. The current live view requires decoding the video stream which would require lots of CPU. go2rtc would allow a direct passthrough of the video to the frontend so the higher resolution can be used.

Given that go2rtc allows for direct passthrough of the video to the frontend, could the "camera" tab in the Frigate UI show the low-res live feeds instead of snapshots as it currently does?

Cold-Lemonade avatar Oct 10 '22 12:10 Cold-Lemonade

Given that go2rtc allows for direct passthrough of the video to the frontend, could the "camera" tab in the Frigate UI show the low-res live feeds instead of snapshots as it currently does?

That's technically already possible without go2rtc. There are existing feature requests for that.

blakeblackshear avatar Oct 10 '22 13:10 blakeblackshear

  • Proper daylight saving time fix by switching to storing files with unix timestamps rather than YYYY-HH folders (client should specify timezone in API calls to avoid any assumptions about timezone on backend)

Would this mean that all the files would be in one folder?

What about keeping YYYY-HH folders but using UTC instead to avoid DST issues?

Personally, I would much prefer this solution; unix timestamps are annoying to work with as a human.

thinkloop avatar Oct 13 '22 17:10 thinkloop

Would this mean that all the files would be in one folder?

Not necessarily.

Personally, I would much prefer this solution; unix timestamps are annoying to work with as a human.

This will probably be just as easy, but why do you need to interact with the segment data? I think of them as internal binary blob storage for the database, not intended to be used directly.

blakeblackshear avatar Oct 13 '22 17:10 blakeblackshear

but why do you need to interact with the segment data? I think of them as internal binary blob storage for the database, not intended to be used directly.

That's always the dream, but in the end we still rely on our laptop fans to discover errant processes 😛. In my case I concatenated segments to make a timelapse, which ended up being exceptionally easy to accomplish given how nicely organized everything in Frigate is.

thinkloop avatar Oct 13 '22 18:10 thinkloop
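
As an aside, a timelapse like that can be assembled with ffmpeg's concat demuxer. A sketch, assuming a hypothetical recordings path and a 60x speedup:

    # Build a concat list of one day's segments, oldest first (path is hypothetical)
    printf "file '%s'\n" /media/frigate/recordings/2022-10-13/*.mp4 > list.txt
    # Concatenate, speed up 60x, cap output at 30 fps, and drop audio
    ffmpeg -f concat -safe 0 -i list.txt -vf "setpts=PTS/60" -r 30 -an timelapse.mp4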

With go2rtc being integrated directly, does this mean we can stop using the go2rtc add-on within Home Assistant? If Frigate is running on a host other than HA, how would the WebRTC streams be exposed via HA when external to the network? Thanks for continually making Frigate better!!!

rsnodgrass avatar Oct 14 '22 19:10 rsnodgrass

With go2rtc being integrated directly, does this mean we can stop using the go2rtc add-on within Home Assistant? If Frigate is running on a host other than HA, how would the WebRTC streams be exposed via HA when external to the network? Thanks for continually making Frigate better!!!

It depends what all you're using go2rtc for. In this initial implementation, not all go2rtc features will necessarily be available. There will also be some caveats with things like WebRTC, where you may need to run Frigate in host network mode.

NickM-27 avatar Oct 14 '22 19:10 NickM-27

It depends what all you're using go2rtc for. In this initial implementation, not all go2rtc features will necessarily be available. There will also be some caveats with things like WebRTC, where you may need to run Frigate in host network mode.

Basically the bare minimum, just to get fast streams viewable via Home Assistant without lagginess on startup. Streaming the RTMP feeds directly through HA from a separate Frigate host is slow. I planned to use go2rtc to provide a much faster WebRTC stream via the Frigate lovelace card/integration.

rsnodgrass avatar Oct 14 '22 19:10 rsnodgrass

It depends what all you're using go2rtc for. In this initial implementation, not all go2rtc features will necessarily be available. There will also be some caveats with things like WebRTC, where you may need to run Frigate in host network mode.

Basically the bare minimum, just to get fast streams viewable via Home Assistant without lagginess on startup. Streaming the RTMP feeds directly through HA from a separate Frigate host is slow. I planned to use go2rtc to provide a much faster WebRTC stream via the Frigate lovelace card/integration.

Too bad it doesn't work well with UI generators like Dwain's Dashboard. On this last rebuild of HA I went with it just for ease of setup. Boy, I wish I hadn't and had just hand-coded the yaml. DD is nice and looks good, but it's very prone to bugs, breaks with updates, can't work with some outside plugins like the WebRTC card, and lacks any real follow-through development that would allow it to work with WebRTC.

I tried adding it and it broke the setup to the point where I could not remove it except by manually editing the file.

LordNex avatar Oct 23 '22 18:10 LordNex

Can't wait to see TensorRT. I would love to have Frigate on my 4 gig Jetson Nano with the Coral TPU attached, to get the best out of object detection while also having a GPU to encode and decode streams.

LordNex avatar Oct 23 '22 18:10 LordNex

Can't wait to see TensorRT. I would love to have Frigate on my 4 gig Jetson Nano with the Coral TPU attached, to get the best out of object detection while also having a GPU to encode and decode streams.

I'm confused; that setup wouldn't use TensorRT unless you mean using that and a Coral.

What you're describing should be possible today.

NickM-27 avatar Oct 23 '22 18:10 NickM-27

Yeah, you should already be able to accomplish what you stated. TensorRT is for using the GPU for detection. Jetsons have an NVDEC chip in them. You should be able to follow the NVIDIA hwaccel docs to get where you want. If you have performance issues, apply the settings -threads 1 -surfaces 10 in this line: -c:v h264_cuvid -threads 1 -surfaces 10

This will limit the decoding hardware to the minimum memory needed for a successful decode. (According to the NVIDIA docs, 8 surfaces is the minimum needed for a good decode, so you can probably get away with less if needed; play with it.) I don't know if the one you have has DLA cores or what, but if there are multiple GPUs displayed when you run nvidia-smi, you need to add -gpus "corresponding GPU number". So your hwaccel should look like -c:v h264_cuvid -gpus "1" -threads 1 -surfaces 10

The -gpus setting is not needed if there is only a single GPU, or if the one you want to use is GPU 0. If it doesn't work, let me know, as there is another way to engage NVIDIA hardware decoding with ffmpeg. It just consumes more memory on the GPU and isn't ideal unless all work stays on the GPU, which with Frigate it currently doesn't.

The other setting NVIDIA explicitly recommends when decoding with ffmpeg is -vsync 0 before the hwaccel args, to prevent NVDEC from accidentally duplicating frames when it shouldn't. I have not really seen much of a difference either way with that setting, but it is stated that it should always be used when decoding if possible.

kdill00 avatar Oct 24 '22 18:10 kdill00
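
To show where those flags would live, a sketch of a camera's ffmpeg section with the suggested args; the camera name is a placeholder, and the -gpus pair only belongs there when nvidia-smi lists multiple GPUs, as described above:

    cameras:
      back_yard:
        ffmpeg:
          hwaccel_args:
            - -vsync
            - "0"          # NVIDIA-recommended decode setting
            - -c:v
            - h264_cuvid   # NVDEC hardware decoder
            - -gpus
            - "1"          # omit when GPU 0 is the only/desired device
            - -threads
            - "1"
            - -surfaces
            - "10"         # minimum memory for a clean decode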

@user897943 all of these are potential future improvements, but as the release goals at the top show, the focus for 0.12 is to have Frigate be more stable: in this case, meaning it won't crash and will continue to record even when storage is almost full.

NickM-27 avatar Oct 28 '22 23:10 NickM-27

Awesome, but what strategy will be used to manage it?

As was implemented in https://github.com/blakeblackshear/frigate/pull/3942, if Frigate detects that there is not enough space for 1 hour of recordings, it will delete recordings from oldest to newest until there is space for a total of 2 hours of recordings, and it continues this cycle. If a user fills their storage with unrelated files and Frigate has no more recordings to delete, then it will not crash on being unable to move recordings to the recordings drive.

NickM-27 avatar Oct 29 '22 00:10 NickM-27
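
A minimal Python sketch of that strategy, assuming a precomputed hourly usage estimate and a list of segment files ordered oldest-first; the names here are hypothetical, and the real logic lives in Frigate's storage maintenance code (#3942):

    import os
    import shutil

    def ensure_free_space(recordings_dir, segments_oldest_first, hourly_bytes):
        # If less than ~1 hour of space remains, delete the oldest
        # recordings until there is room for ~2 hours, then stop.
        free = shutil.disk_usage(recordings_dir).free
        if free >= hourly_bytes:
            return
        for path in segments_oldest_first:
            if free >= 2 * hourly_bytes:
                break
            free += os.path.getsize(path)  # reclaimed once removed
            os.remove(path)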

Hi everyone,

I see that it's planned for the next release to support GPU inferencing with TensorRT. I've been wondering whether it's also planned to support using both GPU (TensorRT) and Coral for inferencing at the same time. Something like:

    detectors:
      coral:
        type: edgetpu
        device: usb
      cuda:
        type: tensorrt

If so, it will probably require different models per detector type (I presume no one will want a different model for different instances of the same detector type).

So then the config will probably have to look like:

    detectors:
      coral:
        type: edgetpu
        device: usb
        model: <optional model config>
      cuda:
        type: tensorrt
        model: <optional model config>

Is that something being considered?

felalex avatar Nov 02 '22 08:11 felalex

I've been wondering whether it's also planned to support using both GPU (TensorRT) and Coral for inferencing at the same time

Yes. A mixed set of detectors is already supported.

blakeblackshear avatar Nov 02 '22 10:11 blakeblackshear

I've been wondering whether it's also planned to support using both GPU (TensorRT) and Coral for inferencing at the same time

Yes. A mixed set of detectors is already supported.

For a mixed set of detectors, I think the model configuration will be detector-framework-specific. Do we need to tweak the model config to account for this?

NateMeyer avatar Nov 06 '22 17:11 NateMeyer

For a mixed set of detectors, I think the model configuration will be detector-framework-specific. Do we need to tweak the model config to account for this?

I think that will be necessary, yea. Perhaps the model config should now be nested under the detector.

blakeblackshear avatar Nov 06 '22 17:11 blakeblackshear
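
One illustrative shape for nesting the model under each detector; every key and value below is hypothetical, not the final config:

    detectors:
      coral:
        type: edgetpu
        device: usb
        model:
          path: /edgetpu_model.tflite   # detector-specific model
      cuda:
        type: tensorrt
        model:
          path: /trt_model.engine
          width: 320
          height: 320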

Yeah, you should already be able to accomplish what you stated. TensorRT is for using the GPU for detection. Jetsons have an NVDEC chip in them. You should be able to follow the NVIDIA hwaccel docs to get where you want. If you have performance issues, apply the settings -threads 1 -surfaces 10 in this line: -c:v h264_cuvid -threads 1 -surfaces 10

This will limit the decoding hardware to the minimum memory needed for a successful decode. (According to the NVIDIA docs, 8 surfaces is the minimum needed for a good decode, so you can probably get away with less if needed; play with it.) I don't know if the one you have has DLA cores or what, but if there are multiple GPUs displayed when you run nvidia-smi, you need to add -gpus "corresponding GPU number". So your hwaccel should look like -c:v h264_cuvid -gpus "1" -threads 1 -surfaces 10

The -gpus setting is not needed if there is only a single GPU, or if the one you want to use is GPU 0. If it doesn't work, let me know, as there is another way to engage NVIDIA hardware decoding with ffmpeg. It just consumes more memory on the GPU and isn't ideal unless all work stays on the GPU, which with Frigate it currently doesn't.

The other setting NVIDIA explicitly recommends when decoding with ffmpeg is -vsync 0 before the hwaccel args, to prevent NVDEC from accidentally duplicating frames when it shouldn't. I have not really seen much of a difference either way with that setting, but it is stated that it should always be used when decoding if possible.

Thank you for such a detailed explanation. I'm going to give it a try here soon (I have to disassemble the case to get to the SD card). I'll start from scratch with the Jetson Nano SDK image and try to add from there. Mine is the 4 gig Nano that was out prior to the big chip shortage, and I know it has a buttload of CUDA cores. But I'm not sure about DLA cores; I'll have to check. But I don't remember it showing multiple GPUs when running jtop.

Currently I've been having pretty good success just utilizing my 40-core PowerEdge R620 and the TPU attached to a passed-through USB adapter. I'm then leveraging DoubleTake and CompreFace on my Home Assistant install for recognition and final verifications. So all I'm looking for Frigate to do is the heavy NVR load, RTMP, go2rtc, or decoding/encoding of stream feeds to Home Assistant and devices, and then utilize the TPU for recognition of Person, Face, or Car and send those to DoubleTake via MQTT for processing.

Ultimately I would love to see an added ability, probably from DoubleTake, to take URLs and "scrape" images with the corresponding information and present that as well. Places such as the Sex Offenders List, Department of Corrections, and social media should allow us to detect who is at our door with as much information as possible before we open the door. I know networking and IT like the back of my hand, but I wouldn't consider myself a programmer by any means. Although I'm trying to learn.

LordNex avatar Nov 13 '22 22:11 LordNex

Personally, I would much prefer this solution; unix timestamps are annoying to work with as a human.

This will probably be just as easy, but why do you need to interact with the segment data? I think of them as internal binary blob storage for the database, not intended to be used directly.

As per some of my feature requests, I GREATLY prefer self-documenting files that don't depend on the application OR the DB to function. Having a logical / human-readable / searchable file system means I could back up a day's worth of videos without needing Frigate to view them or know when they were made, and if the DB corrupts or something, I can still go back through historic videos with a file system search and a video player.

bagobones avatar Nov 20 '22 03:11 bagobones

Thank you for such a detailed explanation. I'm going to give it a try here soon (I have to disassemble the case to get to the SD card). I'll start from scratch with the Jetson Nano SDK image and try to add from there. Mine is the 4 gig Nano that was out prior to the big chip shortage, and I know it has a buttload of CUDA cores. But I'm not sure about DLA cores; I'll have to check. But I don't remember it showing multiple GPUs when running jtop.

Replied to you in another thread; please share the outcome. I also have a 4GB Nano sitting around waiting for a good use, and GPU-accelerated object detection with HA would be the perfect use of it.

GCV-Sleeper-Service avatar Nov 22 '22 04:11 GCV-Sleeper-Service

With the new presets option, perhaps the following can also be squeezed into 0.12?

  • https://github.com/blakeblackshear/frigate/issues/4369

felipecrs avatar Nov 30 '22 02:11 felipecrs

With the new presets option, perhaps the following can also be squeezed into 0.12?

  • https://github.com/blakeblackshear/frigate/issues/4369

Feel free to make a PR

NickM-27 avatar Nov 30 '22 02:11 NickM-27

@NickM-27 man you are on fire. Thank you very very much for fixing and implementing one feature after another. I really look forward every day to peeking into the commits to see what you and Blake did this time 😄

Thank you ❤️‍🔥

herostrat avatar Nov 30 '22 07:11 herostrat

@NickM-27 man you are on fire.

Thank you very very much for fixing and implementing one feature after another. I really look forward every day to peeking into the commits to see what you and Blake did this time 😄

Thank you ❤️‍🔥

Ditto here. I can't wait to beta test this out!

LordNex avatar Nov 30 '22 15:11 LordNex