Re-adding Network Storage with same name fails
Describe the issue you are experiencing
I previously had a NAS set up to save my Frigate data to. The name MUST be “frigate” in order for the add-on to function properly. I had this set up and working properly, but recently upgraded to a different NAS. After deleting the original network storage, rebooting, and trying to re-add it with the new server and share, it fails every time. Changing the name to anything other than a previously used name works flawlessly.
What type of installation are you running?
Home Assistant OS
Which operating system are you running on?
Home Assistant Operating System
Steps to reproduce the issue
- Add a network storage
- Delete the network storage
- Reboot host
- Add a network storage with the same name as in step 1, but a different server/share (should fail to add; a scripted equivalent is sketched below)
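For anyone who prefers to script the reproduction instead of clicking through the UI, here is a rough sketch against the Supervisor REST API. Treat the `/mounts` endpoint and payload fields as assumptions from memory of the mounts API docs, and the server/share names (`old-nas.local`, `new-nas.local`) as placeholders; it needs to run somewhere with Supervisor API access (i.e. `SUPERVISOR_TOKEN` set, as inside an add-on).

```sh
# Sketch only: endpoint and payload fields assumed from the Supervisor mounts API docs.
API=http://supervisor
AUTH="Authorization: Bearer ${SUPERVISOR_TOKEN}"

# 1. Add a network storage named "frigate" pointing at the old NAS
curl -sf -X POST "$API/mounts" -H "$AUTH" -H "Content-Type: application/json" \
  -d '{"name":"frigate","usage":"media","type":"cifs","server":"old-nas.local","share":"frigate"}'

# 2. Delete it again
curl -sf -X DELETE "$API/mounts/frigate" -H "$AUTH"

# 3. Reboot the host (ha host reboot), then re-add with a different server/share
curl -sf -X POST "$API/mounts" -H "$AUTH" -H "Content-Type: application/json" \
  -d '{"name":"frigate","usage":"media","type":"cifs","server":"new-nas.local","share":"frigate"}'
# On affected versions this last call fails, because the old mount point directory was left behind.
```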
Anything in the Supervisor logs that might be useful for us?
23-09-24 13:06:34 ERROR (MainThread) [supervisor.mounts.mount] Reloading frigate did not succeed. Check host logs for errors from mount or systemd unit mnt-data-supervisor-mounts-frigate.mount for details.
23-09-24 13:06:34 ERROR (MainThread) [supervisor.mounts.mount] Could not unmount frigate due to: Transaction for mnt-data-supervisor-mounts-frigate.mount/stop is destructive (mnt-data-supervisor-mounts-frigate.mount has 'start' job queued, but 'stop' is included in transaction).
System Health information
System Information
version | core-2023.9.2 |
---|---|
installation_type | Home Assistant OS |
dev | false |
hassio | true |
docker | true |
user | root |
virtualenv | false |
python_version | 3.11.5 |
os_name | Linux |
os_version | 6.1.45 |
arch | x86_64 |
timezone | America/Chicago |
config_dir | /config |
Home Assistant Community Store
GitHub API | ok |
---|---|
GitHub Content | ok |
GitHub Web | ok |
GitHub API Calls Remaining | 4966 |
Installed Version | 1.32.1 |
Stage | running |
Available Repositories | 1363 |
Downloaded Repositories | 36 |
Home Assistant Cloud
logged_in | true |
---|---|
subscription_expiration | May 6, 2024 at 7:00 PM |
relayer_connected | true |
relayer_region | us-east-1 |
remote_enabled | true |
remote_connected | true |
alexa_enabled | false |
google_enabled | false |
remote_server | us-east-1-2.ui.nabu.casa |
certificate_status | ready |
can_reach_cert_server | ok |
can_reach_cloud_auth | ok |
can_reach_cloud | ok |
Home Assistant Supervisor
host_os | Home Assistant OS 10.5 |
---|---|
update_channel | stable |
supervisor_version | supervisor-2023.09.2 |
agent_version | 1.5.1 |
docker_version | 23.0.6 |
disk_total | 78.0 GB |
disk_used | 37.7 GB |
healthy | true |
supported | true |
board | ova |
supervisor_api | ok |
version_api | ok |
installed_addons | Mosquitto broker (6.3.1), Ring-MQTT with Video Streaming (5.6.2), File editor (5.6.0), Terminal & SSH (9.7.1), Node-RED (14.5.0), Z-Wave JS UI (1.16.0), PS5 MQTT (1.3.1), Network UPS Tools (0.12.1), Crowdsec (1.5.2-ha1), ESPHome (2023.8.3), Double Take (1.13.1), Exadel CompreFace (1.1.0), Frigate Beta (0.13.0) (0.13.0-beta1) |
Dashboards
dashboards | 2 |
---|---|
resources | 20 |
views | 11 |
mode | storage |
Recorder
oldest_recorder_run | July 26, 2023 at 3:11 PM |
---|---|
current_recorder_run | September 24, 2023 at 1:12 AM |
estimated_db_size | 2585.71 MiB |
database_engine | sqlite |
database_version | 3.41.2 |
Supervisor diagnostics
No response
Additional information
This is not isolated to Frigate; I did the same thing with my backups. I removed the old network storage and added a new one with the same name, and received the exact same errors. Simply changing the name from “Backups” to “Backups_2” resolved the issue in that instance. As mentioned previously, this cannot be done for Frigate, since the add-on requires the mount to be named “frigate”.
Temporary workaround:
```
ha> login
# cd /mnt/data/supervisor/mounts
# ls -lh frigate | wc -l
0
# rmdir frigate
```
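A slightly more defensive version of that workaround, for anyone copying it: this is only a sketch, assumes the default HAOS data path, and only removes the leftover directory if nothing is mounted on it (`rmdir` will refuse to delete a non-empty directory in any case).

```sh
# Run "login" at the ha> prompt first to get a host shell, then:
MP=/mnt/data/supervisor/mounts/frigate
if grep -qs " $MP " /proc/mounts; then
  echo "$MP is still mounted, not touching it"
else
  rmdir "$MP"    # fails harmlessly if the directory is not actually empty
fi
```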
I think a possible solution would be to add a random suffix (a UUID or an incrementing number) to the end of the named mount (say, `frigate` would become `frigate.123`) and then bind mount `frigate.123` at `frigate`. (Or any other way of decoupling the named mount from the filesystem path to avoid conflicts.)
Some background info on bind mounts vs symlinks: https://unix.stackexchange.com/questions/49623/are-there-any-drawbacks-from-using-mount-bind-as-a-substitute-for-symbolic-lin
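To make the idea concrete, here is a rough illustration of that decoupling; the suffix scheme, hostnames, and CIFS options are invented for the example and are not what Supervisor actually does today.

```sh
# Illustration only: a uniquely named backing mount plus a stable bind mount on top.
BASE=/mnt/data/supervisor/mounts
UNIQUE=frigate.123                      # hypothetical suffixed name for the real mount

mkdir -p "$BASE/$UNIQUE" "$BASE/frigate"
mount -t cifs //new-nas.local/frigate "$BASE/$UNIQUE" -o username=ha,password=secret

# The stable path the add-on sees never changes, even if the backing mount is
# recreated under a new suffix after the NAS is swapped out.
mount --bind "$BASE/$UNIQUE" "$BASE/frigate"
```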
As a follow-up: a warning about stragglers left behind in old mount directories (created by HA) would be nice.
I now get a similar error after restoring HA from a backup on the previous NAS. The new NAS has a different IP address, and that might have confused Home Assistant. The error comes up when trying to add a network storage location. It looks like this:
Could not unmount SynologyBackup due to: Transaction for mnt-data-supervisor-mounts-SynologyBackup.mount/stop is destructive (mnt-data-supervisor-mounts-SynologyBackup.mount has 'start' job queued, but 'stop' is included in transaction).
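For reference, the systemd unit named in that error can be inspected directly from a host shell (`login` at the `ha>` prompt); these are standard systemd commands, so the exact output will vary by setup:

```sh
# Inspect the stuck mount unit named in the error above.
systemctl status mnt-data-supervisor-mounts-SynologyBackup.mount
systemctl list-jobs       # shows the queued 'start' job that makes the 'stop' destructive
journalctl -u mnt-data-supervisor-mounts-SynologyBackup.mount -n 50 --no-pager
```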
Unfortunately, changing the name doesn't solve the issue for me. I have also tried multiple times to restore from the old backup, but the error is still there. I really don't know how to fix it if even restoring is not an option. Any suggestions on what I could try next would be appreciated.
I would like to add that I have already tried to exec into the Supervisor container and delete the folders at /data/mounts/ as suggested by this comment: https://github.com/home-assistant/supervisor/issues/4358#issuecomment-1624160038, but that didn't help.
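Roughly, that approach looks like this from a host shell; `hassio_supervisor` is the default container name on HAOS, the `/data/mounts` path is taken from the linked comment, and `SynologyBackup` is just my mount name:

```sh
# Remove the leftover mount entry inside the Supervisor container.
docker exec -it hassio_supervisor ls -la /data/mounts
docker exec -it hassio_supervisor rmdir /data/mounts/SynologyBackup
```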
Should I create a new issue, since I don't have a problem with adding a drive with the same name?
> Unfortunately, changing the name doesn't solve the issue for me.
Do you get the same error in that case?
Did you try deleting it, restarting, and then adding it again?
> Should I create a new issue, since I don't have a problem with adding a drive with the same name?
Yeah, a separate issue along with the logs is probably worth it here. Also add the host logs from right after an attempt to add the network storage.
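One way to capture those host logs, as a sketch (the plain journalctl route works from any host shell; the `ha host logs` command depends on your CLI version):

```sh
# From a host shell (login at the ha> prompt), right after a failed attempt to add the storage:
journalctl -b --no-pager | tail -n 200 > /tmp/host-logs.txt
# Or, if your CLI supports it:
# ha host logs
```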
I tried multiple different names, and each time a new mounts folder was created even though there was no entry in the UI. I then simply restored from a very old backup from before I had the network storage, and copied the data back manually and via partial backup restores.
Now everything works apart from one minor bug: the UI shows an error when creating a backup. However, the backup still finishes in the background without any error in the log.
Thank you again @agners , for the fast reply!
There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates. Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by adding a comment 👍 This issue has now been marked as stale and will be closed if no further activity occurs. Thank you for your contributions.
This is still an issue. Please re-open.
Architecture discussion on a proposed fix for the Network Storage issues: https://github.com/home-assistant/architecture/discussions/1033
@xAcrosonicx did you find a fix for this? I have the exact same issue.
@DT00689 if you follow the remediation steps and upgrade your HA, this shouldn't happen again.
@jmealo I must be dense. What remediation steps? And I am on the latest version of HA.
@DT00689 Depending on how you run HA, you'll need to remove a mount directory; once you do that, you can re-add the network storage and it shouldn't fail. If you don't want to dig through tickets or figure that out, you can name the mount something different and the issue shouldn't happen again.
@jmealo I am running HA OS on a bare metal install. I tried the commands below, but it said that the directory in `# cd /mnt/data/supervisor/mounts` does not exist... Sorry, I'm ignorant when it comes to running commands. Is there a guide you could point me to for deleting a mount?
```
ha> login
# cd /mnt/data/supervisor/mounts
# ls -lh frigate | wc -l
0
# rmdir frigate
```
@DT00689 I would just name the mount something other than `frigate`, if that's an option. Sorry, I run HAOS in a KVM, so the steps for me were a bit different. Let me know if that works for you.
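If the default path doesn't exist on your install, one way to find where the Supervisor data directory actually lives is to ask Docker from a host shell; this is a generic Docker command rather than anything HA-specific, and it assumes the default container name `hassio_supervisor`:

```sh
# List the bind mounts of the Supervisor container to locate its data directory on the host.
docker inspect hassio_supervisor \
  --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'
```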
@DT00689: Sorry to send you digging, but if you look at the linked issues on this meta-issue, there are various fixes and commands proposed for different deployment scenarios: https://github.com/home-assistant/supervisor/issues/4866
The underlying bug that creates the condition you're facing has been fixed, but the fix won't remove the directory that's causing the "same name" issue now.
@jmealo No worries. I figured out how to delete the directory. For it to take, I had to reboot the host, not just HA.
There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates. Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by adding a comment 👍 This issue has now been marked as stale and will be closed if no further activity occurs. Thank you for your contributions.