
Reload Network storage doesn't work

Open crug80 opened this issue 3 months ago • 19 comments

Describe the issue you are experiencing

Scenario: I have network storage configured as CIFS to expand HA storage for backups. Everything works fine until I restart the NAS. A notification correctly appears telling me there is a problem with the network storage, but if I click "Reload" to retry the connection (I know the NAS is ready), it fails immediately and suggests checking the Supervisor logs.

But nothing is logged there, except the first error from when the NAS was rebooting; those are the only logs I can attach to this issue.

After 2 hours I'm still trying to reload it manually, but it won't work. It fails and nothing is logged.

Probably the network storage was already reconnected, but the notification remained active, and that is why Reload doesn't work. To remove the notification I have to restart HA.

What type of installation are you running?

Home Assistant OS

Which operating system are you running on?

Home Assistant Operating System

Steps to reproduce the issue

  1. Connect a CIFS network storage
  2. Unplug the network storage so HA will find it disconnected
  3. Try to repair/reload the network storage after plugging it back in. ...

Anything in the Supervisor logs that might be useful for us?

2025-09-16 21:36:38.856 INFO (MainThread) [supervisor.mounts.manager] Reloading mount: HABackup
2025-09-16 21:36:38.880 ERROR (MainThread) [supervisor.mounts.mount] Reloading HABackup did not succeed. Check host logs for errors from mount or systemd unit mnt-data-supervisor-mounts-HABackup.mount for details.
2025-09-16 21:36:38.882 INFO (MainThread) [supervisor.resolution.module] Create new suggestion execute_reload - mount / HABackup
2025-09-16 21:36:38.882 INFO (MainThread) [supervisor.resolution.module] Create new suggestion execute_remove - mount / HABackup
2025-09-16 21:36:38.882 INFO (MainThread) [supervisor.resolution.module] Create new issue mount_failed - mount / HABackup

System information

System Information

version core-2025.9.3
installation_type Home Assistant OS
dev false
hassio true
docker true
container_arch aarch64
user root
virtualenv false
python_version 3.13.7
os_name Linux
os_version 6.12.34-haos-raspi
arch aarch64
timezone Europe/Rome
config_dir /config
Home Assistant Community Store
GitHub API ok
GitHub Content ok
GitHub Web ok
HACS Data ok
GitHub API Calls Remaining 4987
Installed Version 2.0.5
Stage running
Available Repositories 2235
Downloaded Repositories 8
Home Assistant Cloud
logged_in false
can_reach_cert_server ok
can_reach_cloud_auth ok
can_reach_cloud ok
Home Assistant Supervisor
host_os Home Assistant OS 16.1
update_channel stable
supervisor_version supervisor-2025.09.0
agent_version 1.7.2
docker_version 28.3.3
disk_total 228.5 GB
disk_used 24.3 GB
nameservers 192.168.80.1
healthy true
supported true
host_connectivity true
supervisor_connectivity true
ntp_synchronized true
virtualization
board rpi5-64
supervisor_api ok
version_api ok
installed_addons Studio Code Server (5.19.3), lxp-bridge (v0.13.0), InfluxDB (5.0.2), Advanced SSH & Web Terminal (21.0.3), Log Viewer (0.17.1), Mosquitto broker (6.5.2), Grafana (11.0.0), DSS VoIP Notifier for ARM (4.1.0), SQLite Web (4.4.0), Piper (1.6.4), Whisper (2.6.0), Music Assistant Server (2.5.8)
Dashboards
dashboards 3
resources 2
views 12
mode storage
Network Configuration
adapters lo (disabled), end0 (enabled, default, auto), docker0 (disabled), hassio (disabled), veth00fe7ac (disabled), veth9d5c14d (disabled), vethe16c9f8 (disabled), vethc01d6e6 (disabled), veth629f4d1 (disabled), veth742ce46 (disabled), veth774ef82 (disabled), vethb5ccf80 (disabled), veth4eb381d (disabled), veth839d767 (disabled), veth450bd63 (disabled), veth0e715ff (disabled), veth08ec2a9 (disabled)
ipv4_addresses lo (127.0.0.1/8), end0 (192.168.80.10/24), docker0 (172.30.232.1/23), hassio (172.30.32.1/23), veth00fe7ac (), veth9d5c14d (), vethe16c9f8 (), vethc01d6e6 (), veth629f4d1 (), veth742ce46 (), veth774ef82 (), vethb5ccf80 (), veth4eb381d (), veth839d767 (), veth450bd63 (), veth0e715ff (), veth08ec2a9 ()
ipv6_addresses lo (::1/128), end0 (fe80::3ccc:8ec4:e3eb:4b10/64), docker0 (fe80::a81c:feff:fedb:deb4/64), hassio (fe80::f0c6:e3ff:fef8:d01a/64), veth00fe7ac (fe80::9c5e:65ff:fee6:f953/64), veth9d5c14d (fe80::dc7b:69ff:fe7c:6db3/64), vethe16c9f8 (fe80::30ef:76ff:fece:74da/64), vethc01d6e6 (fe80::6cd8:3fff:fedc:ed55/64), veth629f4d1 (fe80::dc77:35ff:fe32:f981/64), veth742ce46 (fe80::e463:1bff:fedd:5027/64), veth774ef82 (fe80::b84c:27ff:fe24:42ec/64), vethb5ccf80 (fe80::f02d:93ff:fec5:a4/64), veth4eb381d (fe80::8cc5:d9ff:fecd:b161/64), veth839d767 (fe80::e4ad:5bff:fe23:8f88/64), veth450bd63 (fe80::3c47:7bff:fe4f:3f86/64), veth0e715ff (fe80::74e1:b8ff:fe9b:4df4/64), veth08ec2a9 (fe80::8cc:caff:fe84:4808/64)
announce_addresses 192.168.80.10, fe80::3ccc:8ec4:e3eb:4b10
Recorder
oldest_recorder_run 15 agosto 2025 alle ore 20:08
current_recorder_run 13 settembre 2025 alle ore 16:36
estimated_db_size 1658.51 MiB
database_engine sqlite
database_version 3.48.0

Supervisor diagnostics

config_entry-hassio-3f7684dded3f8b47ce6dc9b4dd040e14.json

Additional information

No response

crug80 avatar Sep 16 '25 20:09 crug80

Probably related to issue #6034

crug80 avatar Sep 16 '25 21:09 crug80

Can you retest this with https://github.com/home-assistant/supervisor/pull/6197 included (part of Supervisor 2025.09.3, on stable channel later today). This might fix your issue.

agners avatar Sep 29 '25 12:09 agners

There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates. Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by adding a comment šŸ‘ This issue has now been marked as stale and will be closed if no further activity occurs. Thank you for your contributions.

github-actions[bot] avatar Oct 29 '25 12:10 github-actions[bot]

Can you retest this with #6197 included (part of Supervisor 2025.09.3, on stable channel later today). This might fix your issue.

Sorry for my late answer. I tested the same scenario and now no errors or warnings are logged when the NAS restarts. So can I assume that the network storage is automatically re-attached without raising warnings?

crug80 avatar Oct 29 '25 13:10 crug80

Just to add that #6034 went stale, but this same issue is still causing a lot of chat on that ticket.

I have to reboot HA to clear this problem; since yesterday's reboot, I now have 6 messages I can't remove:

Image

If #6034 isn't being followed any more, I'll hop over to this ticket

I took a look at #6197 but couldn't follow what to do or where, but I can try something if that'll help. I'm on 2025.11.2 now.

WarmChocolateCake avatar Nov 17 '25 22:11 WarmChocolateCake

Similarly, using Reload doesn't work and I have to reboot HA to clear this problem. When I press Reload I see the following: Image

I'm on core 2025.11.2, with OS 16.3 and frontend 20251105.0.

I have my NAS drive scheduled to power down around midnight to save energy (and noise), so I get this Network storage device failed message every day, and they accumulate with one new message every day, until I reboot HA. Would it also be possible to reopen #6034 please as there are quite a few people reporting the same issue?

Aside from seeing this warning, the network drive is working and the backups do complete successfully, as I have set the NAS drive to wake up about 15 minutes before HA runs its backup. However, I still get the Network storage device failed message every day when the drive goes to sleep at night and, as above, there is no way to clear them other than a reboot.

jam3sward avatar Nov 18 '25 17:11 jam3sward

I can confirm that the issue is back again (2025.11.1)... Reload doesn't work and you have to restart HA to make the repair suggestion disappear. There are no logs related to network storage under Supervisor; the only line is from the time my NAS was restarting:

2025-11-18 20:31:32.055 ERROR (MainThread) [supervisor.mounts.mount] Restarting HABackup did not succeed. Check host logs for errors from mount or systemd unit mnt-data-supervisor-mounts-HABackup.mount for details.

In the same time the host logged:

2025-11-18 19:31:21.818 jarvis systemd[1]: Reloading Supervisor cifs mount: HABackup...
2025-11-18 19:31:21.847 jarvis systemd[1]: mnt-data-supervisor-mounts-HABackup.mount: Mount process exited, code=killed, status=15/TERM
2025-11-18 19:31:21.848 jarvis systemd[1]: Failed unmounting Supervisor cifs mount: HABackup.
2025-11-18 19:36:51.546 jarvis kernel: CIFS: VFS: \\192.168.80.201 has not responded in 180 seconds. Reconnecting...
2025-11-18 19:41:33.658 jarvis kernel: CIFS: VFS: \\192.168.80.200 has not responded in 180 seconds. Reconnecting..

crug80 avatar Nov 18 '25 21:11 crug80

Following as I have the same use case and issue.

Thanks!

EPiSKiNG avatar Nov 23 '25 18:11 EPiSKiNG

I have the same issue in 2025.11.1

Thank you Rhys

evenmoregarlic avatar Nov 25 '25 22:11 evenmoregarlic

Same issue here (also related to #6034). I'm using NFS storage over a WireGuard tunnel. After a reboot the error clears, but an even faster workaround is running `ha supervisor restart`; the message disappears within a minute.

Cheers, Mark

CommanderTux avatar Nov 27 '25 19:11 CommanderTux

Many thanks @CommanderTux, it seems to work. So I created an automation that triggers every day when my NAS turns on.

fernbrun avatar Nov 28 '25 07:11 fernbrun

Many thanks @CommanderTux, it seems to work. So I created an automation that triggers every day when my NAS turns on.

Interested in the trick, what is your automation? A command line one?

Skuair avatar Nov 28 '25 07:11 Skuair

Add this to configuration.yaml:

shell_command:
  restart_supervisor: "ha supervisor restart"

then in an automation, add this action:

  • action: shell_command.restart_supervisor

fernbrun avatar Nov 28 '25 07:11 fernbrun

Add this to configuration.yaml:

shell_command:
  restart_supervisor: "ha supervisor restart"

then in an automation, add this action:

  • action: shell_command.restart_supervisor

From my side, the action does not work; it returns:

stdout: ""
stderr: "/bin/sh: ha: not found"
returncode: 127

Seems like a permissions problem (it works inside the SSH terminal add-on). I have HA OS, all up to date.

Skuair avatar Nov 28 '25 10:11 Skuair

I'm also finding #6034 annoying. And it's getting worse: I'm now up to 12 of these notifications I can't get rid of.

TimWardCam avatar Nov 30 '25 17:11 TimWardCam

Good news! I got it to work! While I would much prefer the system to recover the NAS connection automatically, this workaround does prevent the endless accumulation of these notifications.

First of all, see the following threads: "Shell Command" and "Execute hassio commands over ssh".

Based on the first thread linked, in the terminal (Advanced SSH & Web Terminal) I created a new SSH key pair and added the public key to ~/.ssh/authorized_keys.

Then I created the file /config/.ssh/known_hosts. This can be done:

  • either with ssh-keyscan,
  • or, if that command fails as it did in my case, by simply attempting to ssh to my HA IP address from the internal terminal.

Then I configured the authorized keys, for both my SSH user and the public SSH key created above, in Advanced SSH & Web Terminal (Settings > Add-ons) to allow key-based authentication on login.

Next, I looked up the HASSIO_TOKEN with the command set | grep HASSIO_TOKEN. It is required to invoke a hassio command.

Finally, in the terminal, I was able to execute:

ssh -i /config/.ssh/id_ed25519 -o UserKnownHostsFile=/config/.ssh/known_hosts -p <my_ssh_port> <my_ssh_user>@<my_ha_ip> 'ha --api-token "<my_hassio_token>" su restart'

The same command can be used in @fernbrun's scenario, i.e., as a shell command configured in configuration.yaml:

shell_command:
  restart_supervisor: "/config/shell/su_restart.sh"

The only difference in my solution is that I put that long ssh command into the shell script su_restart.sh and referenced it in the YAML.
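For reference, such a wrapper script could look like the sketch below. This is only an illustration of the approach described above: the port, user, IP, and token placeholders must be replaced with your own values, and the file needs to be executable (chmod +x).

```shell
#!/bin/sh
# Sketch of /config/shell/su_restart.sh: wraps the long ssh command
# so the shell_command entry in configuration.yaml stays readable.
# Replace every <...> placeholder with your own values.
exec ssh -i /config/.ssh/id_ed25519 \
    -o UserKnownHostsFile=/config/.ssh/known_hosts \
    -p <my_ssh_port> <my_ssh_user>@<my_ha_ip> \
    'ha --api-token "<my_hassio_token>" su restart'
```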

jm314159 avatar Nov 30 '25 18:11 jm314159

Since the error messages are piling up on my system as well, I was searching for a workaround. My NAS for backups shuts down for the night and only comes back at 12.

The shell command workaround provided above does not work for me with 2025.11 on HAOS. As an alternative, you can simply use curl to POST to the restart endpoint of the Supervisor. The token is already set in the executing environment.

Tested with Home Assistant 2025.11.3, HAOS 16.3 and Supervisor 2025.11.5:

shell_command:
  restart_supervisor: 'curl -X POST -H "Authorization: Bearer $SUPERVISOR_TOKEN" http://supervisor/supervisor/restart'
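Put together with an automation, a minimal sketch could look like the following. The trigger time is purely illustrative (it assumes a NAS that is back online shortly after noon, as in the scenario above) and the alias is a made-up name; adjust both to your setup.

```yaml
shell_command:
  restart_supervisor: 'curl -X POST -H "Authorization: Bearer $SUPERVISOR_TOKEN" http://supervisor/supervisor/restart'

automation:
  - alias: "Restart Supervisor once the NAS is back"
    triggers:
      - trigger: time
        at: "12:05:00"   # illustrative: shortly after the NAS wakes up
    actions:
      - action: shell_command.restart_supervisor
```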

Dazzel avatar Dec 03 '25 13:12 Dazzel

@Dazzel , this works for me as well !

fernbrun avatar Dec 03 '25 14:12 fernbrun

Yes, i confirm @Dazzel 's command works.

Skuair avatar Dec 03 '25 14:12 Skuair