
Feature Request - Configurable Retention By Size

Open XtremeOwnageDotCom opened this issue 1 year ago • 11 comments

This is nearly identical to #994. However, since my other ticket is completely locked and inaccessible to me, we cannot properly collaborate there.

The request is simple: the ability to specify retention by total space used.

Example-

record:
  # Optional: Enable recording (default: shown below)
  # WARNING: If recording is disabled in the config, turning it on via
  #          the UI or MQTT later will have no effect.
  enabled: False
  # Optional: Number of minutes to wait between cleanup runs (default: shown below)
  # This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
  expire_interval: 60
  # Optional: Retention settings for recording
  retain:
    # Optional: Number of days to retain recordings regardless of events (default: shown below)
    # NOTE: This should be set to 0 and retention should be defined in events section below
    #       if you only want to retain recordings of events.
    days: 0
    # Set maximum size of recordings.
    max_size: 64G

It should also be possible to specify such a setting for individual cameras, events, etc.

The reason?

Because when you are only recording EVENTS, and events occur randomly at unknown intervals, it becomes extremely difficult to predict the amount of disk space required.

If you are recording everything, sure, you can estimate the amount of storage required for a given number of days. However, when you are only interested in recording events, that is not really the case.
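The size-based retention being requested boils down to a pruning loop: given the recorded segments ordered oldest-first and a byte cap like `max_size: 64G`, delete from the oldest end until the total fits. A minimal sketch of that idea (the function and segment names are illustrative assumptions, not Frigate's actual code or API):

```python
from collections import deque

def prune_to_cap(segments, max_bytes):
    """Given (name, size_bytes) segments ordered oldest-first,
    return (kept, deleted) so that kept totals <= max_bytes."""
    kept = deque(segments)
    deleted = []
    total = sum(size for _, size in kept)
    while total > max_bytes and kept:
        name, size = kept.popleft()  # always drop the oldest segment first
        deleted.append(name)
        total -= size
    return list(kept), deleted

# Example: a 64 GB cap against three 30 GB segments.
GB = 1024 ** 3
kept, deleted = prune_to_cap(
    [("mon.mp4", 30 * GB), ("tue.mp4", 30 * GB), ("wed.mp4", 30 * GB)],
    64 * GB,
)
```

Only the oldest segment needs to go here, since the remaining 60 GB fits under the 64 GB cap.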

XtremeOwnageDotCom avatar Jan 09 '24 19:01 XtremeOwnageDotCom

How would you expect it to work on the camera level?

Imagine a scenario where you have:

  • camera 1 has 10GB specified
  • camera 2 has 20GB specified

camera 2 is fully utilized and camera 1 has not had any events in a while, so it has no storage used. Does camera 2 get cleaned up even though the total pool would be 30GB?

Furthermore, I think having a global option further confuses things, because some users may think the global option is a total for all cameras, while a global value typically means that value applies to each camera individually.

Personally, I think this would be a lot simpler to implement as a global option only where all cameras share the pool and the cameras with the most activity will naturally take more space.
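The shared-pool approach suggested here can be sketched by deleting the globally-oldest segment across all cameras until the pool fits, so busier cameras naturally end up keeping more recordings. This is an illustrative sketch under that assumption, not Frigate's implementation:

```python
import heapq

def prune_shared_pool(segments, pool_bytes):
    """segments: (timestamp, camera, size_bytes) tuples across all cameras.
    Deletes globally-oldest segments until the shared pool fits under
    pool_bytes; returns (deleted, remaining_bytes)."""
    heap = list(segments)
    heapq.heapify(heap)  # min-heap ordered by timestamp
    total = sum(size for _, _, size in heap)
    deleted = []
    while total > pool_bytes and heap:
        ts, cam, size = heapq.heappop(heap)
        deleted.append((cam, ts))
        total -= size
    return deleted, total

# cam1 has one old segment; cam2 has three newer ones.
# With a 25-byte pool, the two oldest segments go, regardless of camera.
deleted, remaining = prune_shared_pool(
    [(1, "cam1", 10), (2, "cam2", 10), (3, "cam2", 10), (4, "cam2", 10)],
    pool_bytes=25,
)
```

Note how cam1's only segment is evicted first simply because it is oldest, which is exactly the "most activity naturally takes more space" behavior described above.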

NickM-27 avatar Jan 09 '24 19:01 NickM-27

A global option with a shared pool for all cameras seems more useful and understandable

mabed-fr avatar Jan 09 '24 20:01 mabed-fr

Personally, I think this would be a lot simpler to implement as a global option only where all cameras share the pool and the cameras with the most activity will naturally take more space.

A global option would be just fine, IMO.

The route Blue Iris takes is pretty simple to manage: you have "locations", and each location has a size / retention / action specified. You can point different cameras/events/etc. at whichever location makes the most sense. You can also use this to keep newer / more important events on faster storage, and use slower storage for archiving. It's not a perfect solution, but it works quite well.

Also, on another note, you closed #994 with:

closing as the original request for frigate cleaning up recordings when storage is full has been implemented since 0.12, subsequent more specific feature requests can be made.

Any details on this? I literally just fixed my Frigate instance (running a newer 0.13 beta) because it was completely dead: its data storage location had run out of disk space. Since it was unable to write to that location, it just went into a crash loop.

XtremeOwnageDotCom avatar Jan 09 '24 20:01 XtremeOwnageDotCom

Makes sense. I think time-based retention will still be the primary metric, but that may change over time.

Any details on this? I literally just fixed my Frigate instance (running a newer 0.13 beta) because it was completely dead: its data storage location had run out of disk space. Since it was unable to write to that location, it just went into a crash loop.

Without logs there is no way to know what specifically happened. The storage cleanup doesn't trigger until 5 minutes after Frigate has started, and every 5 minutes thereafter. We have many users running this setup (filling storage and only having Frigate clear it when storage is full), and many of these users have confirmed this feature works in 0.13 as well.
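The cadence described above, with the first cleanup pass firing one interval after startup and repeating on a fixed interval, can be sketched as follows (a simplified illustration, not Frigate's scheduler):

```python
def cleanup_check_times(start_ts, interval_s=300, count=3):
    """Yield the timestamps (in seconds) at which a storage cleanup
    check would run: the first check fires one interval after start,
    then the check repeats every interval thereafter."""
    t = start_ts + interval_s
    for _ in range(count):
        yield t
        t += interval_s

# Starting at t=0 with the default 5-minute interval,
# checks land at 300, 600, and 900 seconds.
checks = list(cleanup_check_times(0))
```

One consequence of this design is the window the reporter hit: if the disk fills completely within the first 5 minutes, or fills faster than one interval's worth of cleanup can free, writes can still fail before the next check runs.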

NickM-27 avatar Jan 09 '24 20:01 NickM-27

A user also provided this image, which is pretty cool to see:

https://github.com/blakeblackshear/frigate/discussions/8366#discussioncomment-7512464


NickM-27 avatar Jan 09 '24 20:01 NickM-27

Has this been implemented? Before 0.14 my countless "retain" settings seemed to work; now my recordings overlap/overflow and Frigate dies. Nice-to-haves:

  1. A single place to set a global storage size; this would prevent the total failure that 0.14 currently exhibits when storage runs out.
  2. Documentation.
  3. Near as I can tell there are dozens of places to set retention, all by days, none by size.

doug62 avatar Sep 23 '24 22:09 doug62

If the drive Frigate records to is running out of storage, it will delete recordings to ensure it has at least 2 hours of recording time.
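Keeping enough free space for roughly 2 hours of recording can be sketched by estimating the write rate and computing the shortfall to free. The function name and the rate figures are illustrative assumptions, not Frigate's actual logic:

```python
def bytes_to_free(free_bytes, bytes_per_hour, headroom_hours=2):
    """Return how many bytes must be deleted so that free space
    covers `headroom_hours` of recording at the observed write rate."""
    needed = bytes_per_hour * headroom_hours
    return max(0, needed - free_bytes)

# Example: writing 3 GB/hour with 4 GB free needs 6 GB of headroom,
# so 2 GB of old recordings would have to be deleted.
GB = 1024 ** 3
shortfall = bytes_to_free(free_bytes=4 * GB, bytes_per_hour=3 * GB)
```

The amount returned would then feed an oldest-first pruning pass like the one sketched earlier in the thread.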

NickM-27 avatar Sep 23 '24 22:09 NickM-27

@NickM-27 - I believe it used to work that way until the 0.14 release. I upgraded versions without changing any other config and it no longer deletes. Is there a link to the current docs for "retain"? It looks like it can be set in several places.

doug62 avatar Sep 23 '24 23:09 doug62

It does work; we have had a number of users confirm it, and a number of users rely on it working. If you have a specific case where it does not work, you should create a bug report so we can look into your config, logs, etc.

Recording retention is documented at https://docs.frigate.video/configuration/record

NickM-27 avatar Sep 23 '24 23:09 NickM-27

@NickM-27 It no longer works. It might be best to mention that I use Kubernetes volumes; your code may not be detecting the volume size correctly. Could you point me to the source file where you apply this logic?

doug62 avatar Sep 23 '24 23:09 doug62

If you want help figuring this out, you should create a bug report or support discussion; this is not the right place to debug something unrelated to this request.

NickM-27 avatar Sep 23 '24 23:09 NickM-27