When the mounted network device fails, the Frigate addon starts to store data locally (filling up the local drive)
Describe the issue you are experiencing
I'm using the Frigate addon for CCTV and have a network storage device added under homeassistant.local:8123/config/storage as frigate. This creates a /media/frigate folder, as described in the Frigate manual. This works as intended and stores all data on the network device. However, the mounted storage apparently sometimes encounters a hiccup of some sort, which results in the storage device no longer being mounted correctly. I still have to figure out why that happens.
The problem is that when this happens, the /media/frigate folder is created locally and the Frigate addon starts to store data locally, filling up the SSD that HA is installed on, which is far too small for CCTV data.
I have to stop the addon, remove the network device, delete the local /media/frigate folder, and restart everything to fix this.
What type of installation are you running?
Home Assistant OS
Which operating system are you running on?
Home Assistant Operating System
Steps to reproduce the issue
See issue above
Anything in the Supervisor logs that might be useful for us?
Nothing regarding this issue currently, but I'll update this when it happens again.
23-10-21 15:33:21 INFO (MainThread) [supervisor.api.middleware.security] /backups access from cebe7a76_hassio_google_drive_backup
23-10-21 15:33:21 INFO (MainThread) [supervisor.api.middleware.security] /supervisor/info access from cebe7a76_hassio_google_drive_backup
23-10-21 15:33:21 INFO (MainThread) [supervisor.api.middleware.security] /backups access from cebe7a76_hassio_google_drive_backup
23-10-21 15:33:45 INFO (MainThread) [supervisor.homeassistant.api] Updated Home Assistant API token
23-10-21 15:34:17 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 15:35:48 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 15:39:27 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 15:44:33 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 15:49:39 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 15:54:45 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 15:59:51 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 16:03:38 INFO (MainThread) [supervisor.resolution.check] Starting system checks with state running
23-10-21 16:03:38 INFO (MainThread) [supervisor.resolution.checks.base] Run check for dns_server_ipv6_error/dns_server
23-10-21 16:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for security/core
23-10-21 16:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for docker_config/system
23-10-21 16:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for free_space/system
23-10-21 16:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for dns_server_failed/dns_server
23-10-21 16:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for multiple_data_disks/system
23-10-21 16:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for trust/supervisor
23-10-21 16:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for pwned/addon
23-10-21 16:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for ipv4_connection_problem/system
23-10-21 16:03:39 INFO (MainThread) [supervisor.resolution.check] System checks complete
23-10-21 16:03:39 INFO (MainThread) [supervisor.resolution.evaluate] Starting system evaluation with state running
23-10-21 16:03:39 INFO (MainThread) [supervisor.resolution.evaluate] System evaluation complete
23-10-21 16:03:39 INFO (MainThread) [supervisor.resolution.fixup] Starting system autofix at state running
23-10-21 16:03:39 INFO (MainThread) [supervisor.resolution.fixup] System autofix complete
23-10-21 16:03:53 INFO (MainThread) [supervisor.homeassistant.api] Updated Home Assistant API token
23-10-21 16:04:57 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 16:10:03 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 16:15:09 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 16:20:15 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 16:25:21 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 16:30:27 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 16:33:59 INFO (MainThread) [supervisor.homeassistant.api] Updated Home Assistant API token
23-10-21 16:35:33 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 16:40:39 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 16:45:45 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 16:50:51 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 16:55:57 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 17:01:03 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.check] Starting system checks with state running
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for dns_server_ipv6_error/dns_server
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for security/core
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for docker_config/system
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for free_space/system
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for dns_server_failed/dns_server
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for multiple_data_disks/system
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for trust/supervisor
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for pwned/addon
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for no_current_backup/system
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.checks.base] Run check for ipv4_connection_problem/system
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.check] System checks complete
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.evaluate] Starting system evaluation with state running
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.evaluate] System evaluation complete
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.fixup] Starting system autofix at state running
23-10-21 17:03:39 INFO (MainThread) [supervisor.resolution.fixup] System autofix complete
23-10-21 17:04:00 INFO (MainThread) [supervisor.homeassistant.api] Updated Home Assistant API token
23-10-21 17:04:02 INFO (MainThread) [supervisor.updater] Fetching update data from https://version.home-assistant.io/stable.json
23-10-21 17:06:09 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 17:10:42 INFO (MainThread) [supervisor.host.info] Updating local host information
23-10-21 17:10:43 INFO (MainThread) [supervisor.host.services] Updating service information
23-10-21 17:10:43 INFO (MainThread) [supervisor.host.network] Updating local network information
23-10-21 17:10:43 INFO (MainThread) [supervisor.host.sound] Updating PulseAudio information
23-10-21 17:10:43 INFO (MainThread) [supervisor.host.manager] Host information reload completed
23-10-21 17:11:15 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 17:16:21 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 17:21:27 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 17:26:33 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 17:31:39 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 17:34:00 INFO (MainThread) [supervisor.homeassistant.api] Updated Home Assistant API token
23-10-21 17:36:45 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 17:41:51 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
23-10-21 17:46:57 WARNING (MainThread) [supervisor.addons.options] Unknown option 'logins' for Mosquitto broker (core_mosquitto)
System Health information
System Information
| version | core-2023.10.4 |
|---|---|
| installation_type | Home Assistant OS |
| dev | false |
| hassio | true |
| docker | true |
| user | root |
| virtualenv | false |
| python_version | 3.11.5 |
| os_name | Linux |
| os_version | 6.1.56 |
| arch | x86_64 |
| timezone | Europe/Amsterdam |
| config_dir | /config |
Home Assistant Community Store
| GitHub API | ok |
|---|---|
| GitHub Content | ok |
| GitHub Web | ok |
| GitHub API Calls Remaining | 5000 |
| Installed Version | 1.33.0 |
| Stage | running |
| Available Repositories | 1377 |
| Downloaded Repositories | 32 |
Home Assistant Cloud
| logged_in | true |
|---|---|
| subscription_expiration | 18 November 2023 at 01:00 |
| relayer_connected | false |
| relayer_region | null |
| remote_enabled | true |
| remote_connected | false |
| alexa_enabled | false |
| google_enabled | true |
| remote_server | null |
| certificate_status | null |
| can_reach_cert_server | ok |
| can_reach_cloud_auth | failed to load: timeout |
| can_reach_cloud | ok |
Home Assistant Supervisor
| host_os | Home Assistant OS 11.0 |
|---|---|
| update_channel | stable |
| supervisor_version | supervisor-2023.10.0 |
| agent_version | 1.6.0 |
| docker_version | 24.0.6 |
| disk_total | 78.7 GB |
| disk_used | 18.1 GB |
| healthy | true |
| supported | true |
| board | ova |
| supervisor_api | ok |
| version_api | ok |
| installed_addons | Home Assistant Google Drive Backup (0.111.1), AppDaemon (0.13.6), Duck DNS (1.15.0), Mosquitto broker (6.3.1), Terminal & SSH (9.7.1), Frigate (0.12.1), MQTT Explorer (browser-1.0.3), Spotweb (1.5.4-9), MariaDB (2.6.1), Tailscale (0.12.0), NGINX Home Assistant SSL proxy (3.5.0), phpMyAdmin (0.8.9) |
Dashboards
| dashboards | 2 |
|---|---|
| resources | 18 |
| views | 12 |
| mode | storage |
Recorder
| oldest_recorder_run | 11 October 2023 at 04:28 |
|---|---|
| current_recorder_run | 21 October 2023 at 15:18 |
| estimated_db_size | 358.31 MiB |
| database_engine | sqlite |
| database_version | 3.41.2 |
Supervisor diagnostics
config_entry-hassio-ad4f49c452b7318a023530f2873844c3.json.txt
Additional information
No response
That is confusing. Supervisor has logic in place to handle that. When the mount fails it should try to bind mount a read-only folder in the spot where the network share should be specifically to prevent what you're seeing from happening: https://github.com/home-assistant/supervisor/blob/6d021c1659b6d6139f6d0991cb141d76f8ca0ba3/supervisor/mounts/manager.py#L282-L291
So please share the logs next time you encounter this situation. I need to see how that read-only block is failing.
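For context, that fallback is essentially an empty, read-only bind mount placed over the share's directory so writes fail instead of silently landing on the local disk. Conceptually it looks like the following; this is only an illustration of the mechanism, not the actual Supervisor code, which goes through systemd mount units, and the paths are placeholders:
# illustration only: overlay an empty read-only directory on the share's path
mkdir -p /tmp/emergency_frigate
mount --bind /tmp/emergency_frigate /media/frigate
mount -o remount,ro,bind /media/frigate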
I will. I made an automation that notifies me as soon as the SSD begins to fill up again.
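A quick way to eyeball it from the Terminal & SSH addon is something like this (the 90% threshold is arbitrary, and it assumes /media lives on the same data disk as the rest of the installation):
# report when the disk holding /media passes 90% usage
usage=$(df -P /media | awk 'NR==2 {gsub("%", ""); print $5}')
[ "$usage" -gt 90 ] && echo "Data disk is ${usage}% full"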
I think I have the same issue, just slightly different, so I will chime in here. If you want me to open a new issue, please tell me.
I have the same issue with mounting a network share for backups (in my case the SMB mount is called Backups). If the network share goes offline, the local folder /mnt/data/supervisor/mounts/Backups is filled with the subsequent backups (those that start while the share is offline), thus taking up local drive space.
I'm not sure if this only occurs if the share goes offline during a backup or if the share is offline when a backup starts or both. I'd need to do some testing.
When the network share is back online, the Supervisor is unable to mount the share again, since the path already exists.
Currently my only fix is to delete the /mnt/data/supervisor/mounts/Backups folder and its contents via the console; after that, remounting via the Supervisor (or the UI) works again.
Is it possible to write backups to a temporary local folder (until, e.g., the disk is 80% full) and, as soon as the default backup share is back online, move those temporary local backups to the share?
Thanks a lot for looking into this. In principle I'm loving the network mounts feature!
Here is my system information:
System Information
| version | core-2023.10.2 |
|---|---|
| installation_type | Home Assistant OS |
| dev | false |
| hassio | true |
| docker | true |
| user | root |
| virtualenv | false |
| python_version | 3.11.5 |
| os_name | Linux |
| os_version | 6.1.56 |
| arch | x86_64 |
| timezone | Europe/Berlin |
| config_dir | /config |
Home Assistant Community Store
| GitHub API | ok |
|---|---|
| GitHub Content | ok |
| GitHub Web | ok |
| GitHub API Calls Remaining | 4998 |
| Installed Version | 1.32.1 |
| Stage | running |
| Available Repositories | 1390 |
| Downloaded Repositories | 27 |
Home Assistant Cloud
| logged_in | false |
|---|---|
| can_reach_cert_server | ok |
| can_reach_cloud_auth | ok |
| can_reach_cloud | ok |
Home Assistant Supervisor
| host_os | Home Assistant OS 11.0 |
|---|---|
| update_channel | stable |
| supervisor_version | supervisor-2023.10.1 |
| agent_version | 1.6.0 |
| docker_version | 24.0.6 |
| disk_total | 93.8 GB |
| disk_used | 39.4 GB |
| healthy | true |
| supported | true |
| board | ova |
| supervisor_api | ok |
| version_api | ok |
| installed_addons | Samba share (10.0.2), AppDaemon (0.13.4), File editor (5.6.0), Advanced SSH & Web Terminal (15.1.0), deCONZ (6.20.0), Studio Code Server (5.13.0), NGINX Home Assistant SSL proxy (3.1.1), Promtail (2.2.0), ESPHome (2023.10.3), Piper (1.4.0), Music Assistant BETA (2.0.0b74), openWakeWord (1.8.2), Whisper (1.0.0), Silicon Labs Multiprotocol (2.3.2), Matter Server (4.10.0) |
Dashboards
| dashboards | 4 |
|---|---|
| resources | 14 |
| views | 18 |
| mode | storage |
Recorder
| oldest_recorder_run | September 3, 2023 at 11:01 |
|---|---|
| current_recorder_run | October 31, 2023 at 02:35 |
| estimated_db_size | 11810.61 MiB |
| database_engine | sqlite |
| database_version | 3.41.2 |
Yeah, that looks like the same problem I have, and the same steps to fix it. It has not happened again since last time.
(I'm currently struggling with the database getting corrupted every one or two days, but I've narrowed that down to probably bad RAM, if that can even cause the database to become corrupt...)
@mdegat01 It just happened again. I stopped the Frigate add-on and tried to remount the network device without deleting it. That resulted in the following error:
23-11-01 17:53:31 INFO (MainThread) [supervisor.mounts.manager] Removing mount: frigate
23-11-01 17:53:31 INFO (MainThread) [supervisor.mounts.manager] Creating or updating mount: frigate
23-11-01 17:53:31 INFO (MainThread) [supervisor.mounts.mount] Mount frigate is not mounted, mounting instead of reloading
23-11-01 17:53:31 INFO (MainThread) [supervisor.mounts.mount] Mount frigate still activating, waiting up to 30 seconds to complete
23-11-01 17:53:42 ERROR (MainThread) [supervisor.mounts.mount] Cannot mount bind_frigate at /data/media/frigate because it is not empty
Let me know if you need more info.
This is what the log showed for the mount yesterday:
2023-10-31 21:06:42.681 ERROR (MainThread) [homeassistant.components.hassio] Failed to to call /mounts/frigate - Could not reload mount frigate due to: Transaction for mnt-data-supervisor-mounts-frigate.mount/start is destructive (mnt-data-supervisor-mounts-frigate.mount has 'stop' job queued, but 'start' is included in transaction).
There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates. Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by adding a comment 👍 This issue has now been marked as stale and will be closed if no further activity occurs. Thank you for your contributions.
It occurred again today. Here are two screenshots.
The system is now:
Core 2023.11.0
Supervisor 2023.11.6
Operating System 11.2
Frontend 20231030.1
I'd be glad to provide more info, if that'd help figuring this out.
I'm having the same issue.
There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates. Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by adding a comment 👍 This issue has now been marked as stale and will be closed if no further activity occurs. Thank you for your contributions.
It's still happening.
This has caught me out a few times in the past! It's an issue for me again now because, rather than a separate NAS server, I'm now using the SambaNAS add-on in HA with an external USB drive plugged into my HAOS machine. I think the problem is that the Frigate add-on starts before the SambaNAS add-on, so it cannot access the external drive and instead creates its folders on the local drive as mentioned above (which, also as mentioned, is an issue because you cannot then subsequently mount the external drive). I think the easiest solution for me is simply NOT to start Frigate automatically on boot, but instead to wait a few minutes, check that the external drive is accessible, and then start it via an automation.
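Something like this rough, untested sketch is what I have in mind, e.g. run from a shell script on boot; the addon slug, the path, and the timings are placeholders for my setup, and it assumes the ha CLI is available:
# wait up to ~5 minutes for the share, then start Frigate (placeholder slug)
FRIGATE_SLUG="ccab4aaf_frigate"
for i in $(seq 1 30); do
  if mountpoint -q /media/frigate; then
    ha addons start "$FRIGATE_SLUG"
    exit 0
  fi
  sleep 10
done
echo "Share never came up, not starting Frigate" >&2
exit 1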
Your idea made me think, and I believe you might be right. What I did now is mount a USB storage device with the Samba NAS addon. This mounts the USB drive as /media/FRIGATE, and I created a /media/frigate symlink pointing to /media/FRIGATE.
That's interesting, does that get around this issue? Would you mind explaining the process please? A symlink isn't something I've encountered before!
Use SSH to log in to the HA CLI (or use the Terminal & SSH addon) and type:
ln -s /media/FRIGATE /media/frigate
To check, type:
cd /media
and then:
ls
You should now see two folders, FRIGATE and frigate.
The lowercase folder is just a symbolic (soft) link to the uppercase one.
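If you want to look at the link itself rather than the folder it points to, ls -ld shows the target:
ls -ld /media/frigate
# prints something like: /media/frigate -> /media/FRIGATE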
Architecture discussion on a proposed fix to the Network Storage issues: https://github.com/home-assistant/architecture/discussions/1033
It just happened again, but this time I noticed something else: it only happens after a cold reboot/shutdown. When I shut down/restart HA via HA itself, it reboots and mounts the USB storage without any problem.
But when, for example, I reboot Proxmox or the machine itself, or after a power outage, HA starts with a failed mount, resorts to creating the folder locally, and fills up the local SSD.
I then stop the Samba and Frigate addons, delete the locally created folders in /media/frigate, and then start the Samba and Frigate addons again (in that chronological order).
Then it works again like it's supposed to.
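For reference, the manual recovery boils down to roughly this; the addon slugs are placeholders for my setup, and deleting the folder of course throws away whatever recordings landed there locally:
ha addons stop ccab4aaf_frigate                         # placeholder slug for the Frigate addon
ha addons stop a0d7b954_sambanas                        # placeholder slug for the Samba NAS addon
mountpoint -q /media/frigate || rm -rf /media/frigate   # only delete if the share is NOT mounted
# re-add or reload the mount under Settings > System > Storage, then:
ha addons start a0d7b954_sambanas
ha addons start ccab4aaf_frigate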
Yeah, I hadn't noticed what causes it, but sometimes HA doesn't mount the shares on boot and I have to go to Settings > System > Storage and "refresh" each share to get them to mount.
There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates. Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by adding a comment 👍 This issue has now been marked as stale and will be closed if no further activity occurs. Thank you for your contributions.
Still happening regularly...
Hm, from what I understand, this should largely be addressed with #4882, which is part of Supervisor 2024.02.0 and newer.
Today, do you get a Repair issue on the frontend when this is happening?
There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates. Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by adding a comment 👍 This issue has now been marked as stale and will be closed if no further activity occurs. Thank you for your contributions.