
Console log lines after upgrading to 12.0 - CIFS: VFS: server does not advertise interfaces

Open vturekhanov opened this issue 11 months ago • 8 comments

Describe the issue you are experiencing

I just upgraded the HA OS from 11.5 to 12.0 and found the following log lines in the HA console (one line approximately per minute). There were no issues on 11.5. I didn't make any configuration changes. Network shares work fine, files are accessible.

What operating system image do you use?

ova (for Virtual Machines)

What version of Home Assistant Operating System is installed?

12.0

Did you upgrade the Operating System?

Yes

Steps to reproduce the issue

  1. Upgrade HA OS from 11.5 to 12.0 with a network storage connection to macOS configured using CIFS.

Anything in the Supervisor logs that might be useful for us?

Nothing related to this issue.

Anything in the Host logs that might be useful for us?

Feb 27 04:03:50 hass kernel: CIFS: VFS: server 192.168.1.12 does not advertise interfaces
Feb 27 04:04:51 hass kernel: CIFS: VFS: server 192.168.1.12 does not advertise interfaces
Feb 27 04:05:52 hass kernel: CIFS: VFS: server 192.168.1.12 does not advertise interfaces
Feb 27 04:06:54 hass kernel: CIFS: VFS: server 192.168.1.12 does not advertise interfaces
Feb 27 04:07:55 hass kernel: CIFS: VFS: server 192.168.1.12 does not advertise interfaces
Feb 27 04:08:57 hass kernel: CIFS: VFS: server 192.168.1.12 does not advertise interfaces
Feb 27 04:09:58 hass kernel: CIFS: VFS: server 192.168.1.12 does not advertise interfaces

System information

No response

Additional information

No response

vturekhanov avatar Feb 27 '24 04:02 vturekhanov

Although these lines were not shown with the previous kernel version, they reportedly should not appear when multichannel or max_channels are not specified as mount options: https://lore.kernel.org/all/CANT5p=p4+7uiWFBa6KBsqB1z1xW2fQwYD8gbnZviCA8crFqeQw@mail.gmail.com/

That should not be the case for mounts created from HA. Can you check what mount | grep cifs shows? You will need to execute it directly in the root shell of the VM (use login in the HA CLI to enter it), as not all mounts are visible in the core-ssh container.
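
For reference, a minimal sketch of that check on the OVA install (both login and mount are available in the stock HAOS shell):

# from the HA CLI on the VM console, drop into the host's root shell
login
# list the CIFS mounts together with their mount options
mount | grep cifs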

sairon avatar Feb 27 '24 18:02 sairon

Below is the output of the mount | grep cifs command in the root shell of the VM.

//192.168.1.12/folder on /mnt/data/supervisor/mounts/folder type cifs (rw,relatime,vers=default,cache=strict,username=user,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.1.12,file_mode=0755,dir_mode=0755,soft,nounix,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=1)
//192.168.1.12/folder on /mnt/data/supervisor/media/folder type cifs (rw,relatime,vers=default,cache=strict,username=user,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.1.12,file_mode=0755,dir_mode=0755,soft,nounix,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=1)
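
(For completeness, a one-liner that should print nothing when neither multichannel nor max_channels appears among these mount options - run in the same root shell:)

mount | grep cifs | grep -E 'multichannel|max_channels'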

vturekhanov avatar Feb 27 '24 18:02 vturekhanov

+1 having the same issue. Happy to provide logs if necessary.

ripple7511 avatar Feb 28 '24 04:02 ripple7511

@ripple7511 what is your server device?

agners avatar Feb 29 '24 15:02 agners

@agners, running in an LXC on Proxmox.

ripple7511 avatar Feb 29 '24 18:02 ripple7511

I have the same error on the console - CIFS: VFS: server 10.10.10.10 does not advertise interfaces - on Home Assistant OS 12.0. The server is a NAS, but the share is mounted and readable.

mounts info:

default_backup_mount: null
mounts:
  • name: Streaming
    read_only: false
    server: 10.10.10.10
    share: Recordings
    state: active
    type: cifs
    usage: media
    version: null

beralios avatar Feb 29 '24 20:02 beralios

@ripple7511 @beralios Can you provide any more details about the server side? What version of Samba is it running, ideally also share the config?

The messages should be fairly harmless but they should not appear in case the multichannel options are not specified, so I'd like to pinpoint the cause and share it with upstream maintainers. Unfortunately I'm so far unable to reproduce the issue.
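
If it helps with gathering that, a rough sketch of the usual commands on the server side (assuming smbd and testparm are reachable from the NAS shell; tool names and paths can differ between firmware versions):

# report the running Samba version
smbd --version
# dump the effective configuration (global section plus shares)
testparm -s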

sairon avatar Mar 01 '24 10:03 sairon

Server is a QNAP on QTS 5.1.5. The connection is 2 NICs in bonding. The server is in a workgroup with no SMB multichannel (screenshot attached). Here is the extract of smb.conf:

[global]
passdb backend = smbpasswd
workgroup = WORKGROUP
security = USER
server string=NAS Server
encrypt passwords = Yes
username level = 0
map to guest = Never
max log size = 10
socket options = TCP_NODELAY SO_KEEPALIVE
os level = 20
preferred master = no
dns proxy = No
smb passwd file=/etc/config/smbpasswd
username map = /etc/config/smbusers
guest account = guest
directory mask = 0777
create mask = 0777
oplocks = yes
locking = yes
disable spoolss = yes
load printers = no
veto files = /.AppleDB/.AppleDouble/.AppleDesktop/:2eDS_Store/Network Trash Folder/Temporary Items/TheVolumeSettingsFolder/.@__thumb/.@__desc/:2e*/.@__qini/.Qsync/.@upload_cache/.qsync/.qsync_sn/.@qsys/.streams/.digest/
delete veto files = yes
map archive = no
map system = no
map hidden = no
map read only = no
deadtime = 10
restrict anonymous = 2
server role = auto
use sendfile = yes
unix extensions = no
store dos attributes = yes
client ntlmv2 auth = yes
dos filetime resolution = no
follow symlinks = yes
wide links = yes
force unknown acl user = yes
template homedir = /share/homes/DOMAIN=%D/%U
inherit acls = no
domain logons = no
min receivefile size = 256
case sensitive = auto
domain master = auto
local master = no
enhance acl v1 = yes
remove everyone = no
conn log = no
kernel oplocks = no
min protocol = SMB2_10
smb2 leases = yes
durable handles = yes
kernel share modes = no
posix locking = no
lock directory = /share/CACHEDEV1_DATA/.samba/lock
state directory = /share/CACHEDEV1_DATA/.samba/state
cache directory = /share/CACHEDEV1_DATA/.samba/cache
printcap cache time = 0
acl allow execute always = yes
server signing = disabled
aio read size = 1
aio write size = 0
streams_depot:delete_lost = yes
streams_depot:check_valid = no
fruit:nfs_aces = no
fruit:veto_appledouble = no
winbind expand groups = 1
winbind scan trusted domains = no
pid directory = /var/lock
printcap name = /dev/null
printing = bsd
show add printer wizard = no
invalid users = guest
wins support = no
host msdfs = yes
winbind max clients = 2000
winbind max domain connections = 2
kerberos method = secrets only
server schannel = yes
server kernel smbd support = no
client ipc min protocol = CORE
server multi channel support = no
winbind enum groups = Yes
winbind enum users = Yes
vfs objects = shadow_copy2 widelinks catia fruit qnap_macea streams_depot aio_pthread
wsp backend = elasticsearch
rpc_daemon:wspd = fork
elasticsearch:address = localhost
elasticsearch:port = 5028
elasticsearch:mappings = /usr/local/samba/share/samba/wsp/wsp_for_qsirch_API_backend_mapping_v2.json
wsp = yes

Edit: I also enabled "SMB multichannel" as a test - but same result.
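
(One way to confirm whether that QTS toggle actually changed the effective Samba setting - a sketch assuming testparm is available on the NAS:)

# prints the effective value of the multichannel setting (Yes/No)
testparm -s --parameter-name="server multi channel support"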

Thanks

beralios avatar Mar 01 '24 17:03 beralios

I have the same problem. After some time HA can't be reached anymore, neither via its IP, homeassistant.local, nor via cloudflared. I get the same error messages in VirtualBox (running on my NAS). After shutting down and restarting the VM, it works again for some time. The whole time HA seems to run without issues, as the sensor history still gets tracked.

angeeinstein avatar Mar 04 '24 19:03 angeeinstein

I've also observed this message on a QNAP NAS immediately after applying the latest HA update.

hargcore avatar Mar 07 '24 17:03 hargcore

I see the same messages repeated over and over with my QNAP TS-853A. These are the settings in QNAP for that share (screenshot attached).

fribse avatar Mar 09 '24 09:03 fribse

I started having this problem with my QNAP TS-851 (QTS 4.5.4.2627 (20231225), Samba 4.10.18), accessed from a Fedora Server 39 client, after upgrading the client's Samba packages:

2024-02-11T17:17:36-0500 SUBDEBUG Upgrade: samba-common-2:4.19.4-3.fc39.noarch
2024-02-11T17:18:03-0500 SUBDEBUG Upgrade: samba-client-libs-2:4.19.4-3.fc39.x86_64
2024-02-11T17:18:03-0500 SUBDEBUG Upgrade: samba-common-libs-2:4.19.4-3.fc39.x86_64
2024-02-11T17:18:04-0500 SUBDEBUG Upgrade: samba-libs-2:4.19.4-3.fc39.x86_64
2024-02-11T17:18:04-0500 SUBDEBUG Upgrade: samba-dcerpc-2:4.19.4-3.fc39.x86_64
2024-02-11T17:18:04-0500 SUBDEBUG Upgrade: samba-winbind-modules-2:4.19.4-3.fc39.x86_64
2024-02-11T17:18:04-0500 SUBDEBUG Upgrade: samba-ldb-ldap-modules-2:4.19.4-3.fc39.x86_64
2024-02-11T17:18:04-0500 SUBDEBUG Upgrade: samba-common-tools-2:4.19.4-3.fc39.x86_64
2024-02-11T17:18:52-0500 SUBDEBUG Upgrade: samba-winbind-2:4.19.4-3.fc39.x86_64

2024-02-11T17:19:33-0500 SUBDEBUG Upgraded: samba-winbind-2:4.19.3-1.fc39.x86_64
2024-02-11T17:19:34-0500 SUBDEBUG Upgraded: samba-dcerpc-2:4.19.3-1.fc39.x86_64
2024-02-11T17:19:34-0500 SUBDEBUG Upgraded: samba-common-tools-2:4.19.3-1.fc39.x86_64
2024-02-11T17:19:34-0500 SUBDEBUG Upgraded: samba-ldb-ldap-modules-2:4.19.3-1.fc39.x86_64
2024-02-11T17:19:36-0500 SUBDEBUG Upgraded: samba-winbind-modules-2:4.19.3-1.fc39.x86_64
2024-02-11T17:19:36-0500 SUBDEBUG Upgraded: samba-libs-2:4.19.3-1.fc39.x86_64
2024-02-11T17:19:38-0500 SUBDEBUG Upgraded: samba-client-libs-2:4.19.3-1.fc39.x86_64
2024-02-11T17:19:38-0500 SUBDEBUG Upgraded: samba-common-libs-2:4.19.3-1.fc39.x86_64
2024-02-11T17:19:39-0500 SUBDEBUG Upgraded: samba-common-2:4.19.3-1.fc39.noarch

Hope this helps.

pstimmons avatar Mar 09 '24 19:03 pstimmons

After rebooting my QNAP today I noticed the same console messages (screenshot attached).

I run HAOS 12.0 in a virtual machine (Virtualization Station); the NAS is a TS-451.

klu16 avatar Mar 12 '24 13:03 klu16

Just upgraded to 12.1. Same thing.

vturekhanov avatar Mar 14 '24 10:03 vturekhanov

Same for me. Also VM in Virtualization Station on QNAP NAS.

mglaabde avatar Mar 15 '24 10:03 mglaabde

Same here: HA running on Win10 Pro with VMware Workstation 15 Player, connected to a QNAP with QTS 4.3.3.2644.

antennus avatar Mar 16 '24 07:03 antennus

Just upgraded to 12.1. Same thing.

Same here. I run HA in VirtualBox on a MacBook. Have you tried restoring to 11.5 to double check? Although the existing network share still works, no new network storage can be added, and if you remove the old network share it can't be added back. Looks like the connection has failed.

js4jiang5 avatar Mar 18 '24 00:03 js4jiang5

Same problem. QNAP NAS, HAOS OVA in Virtualization Station. Also have some Samba shares being accessed on the NAS (screenshot attached).

V4ler1an avatar Mar 18 '24 17:03 V4ler1an

I have created a build with a patch that should resolve the issue. I (and Shyam, one of the linux-cifs developers I consulted about the issue) would appreciate it if anyone who has this issue could give it a shot and report back whether it's resolved - though it's a bit trickier to do so. Here's a step-by-step guide:

  1. Download the appropriate .raucb update from the build page (there are ova and generic-x86-64 builds available) - unfortunately you must be logged in on GH, so this can't be done using curl on the device directly.
  2. Since the build is a ZIP archive, unzip it using your preferred utility.
  3. Copy the obtained .raucb file to a place that is reachable from the device. In this case reusing one of the SMB shares is probably the most straightforward option.
  4. Log in to the root shell (this can only be done by connecting directly to the device - for bare-metal generic-x86-64 - or to the VM console, it can't be done over SSH): in the HA CLI type login.
  5. Install the update - e.g. from a "folder" media share: rauc install /mnt/data/supervisor/media/folder/haos_ova-12.1.dev1710853904.raucb (see the command sketch after this list).
  6. Reboot the host.
  7. Check the logs.
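
Condensed into commands, steps 4-7 look roughly like this (a sketch assuming the bundle was copied to the "folder" media share used in the earlier examples and keeps the filename above):

# in the HA CLI on the device/VM console, enter the host's root shell
login
# install the test bundle, then reboot into it
rauc install /mnt/data/supervisor/media/folder/haos_ova-12.1.dev1710853904.raucb
reboot
# after the reboot, check whether the kernel still logs the CIFS messages
dmesg | grep -i cifs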

Otherwise the build is identical to 12.1, so there shouldn't be any surprises. After testing you can revert to the previous version by going into the root shell and running rauc status mark-active other && reboot.

sairon avatar Mar 19 '24 16:03 sairon

Hi all, I updated to Core 2024.3.3 and the error on the console has disappeared. FYI

beralios avatar Mar 24 '24 13:03 beralios

Hi all, I updated to Core 2024.3.3 and the error on the console has disappeared. FYI

Strange. I've also upgraded to Core 2024.3.3, but the error is still there.

js4jiang5 avatar Mar 24 '24 13:03 js4jiang5

No change here either.

V4ler1an avatar Mar 24 '24 16:03 V4ler1an

@beralios @js4jiang5 @V4ler1an Core has no influence on these log messages whatsoever. Please only report back in this issue if you tried the special OS version I linked above; anything else is currently not expected to remedy it.

sairon avatar Mar 25 '24 08:03 sairon

@beralios @js4jiang5 @V4ler1an Core has no influence on these log messages whatsoever. Please only report back in this issue if you tried the special OS version I linked above; anything else is currently not expected to remedy it.

I've followed your instructions to install 12.1.dev1710853904. Unfortunately, the issue is not solved. I also found something else: after logging in to the HA CLI, I used to be able to access Home Assistant with the command "docker exec -t homeassistant /bin/bash". But now when I run that command, it hangs as soon as I type "ls" (see the attached screenshot). The problem is more severe than I thought.

js4jiang5 avatar Mar 25 '24 11:03 js4jiang5

@js4jiang5 Thank you very much for the feedback, I'll report it back in the mailing list.

About the other issue, it's simply a case of a missing -i parameter for docker exec; if you add it, it should work as you expect.
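
For example, based on the command from your screenshot:

# -i keeps stdin open and -t allocates a TTY, so the shell stays interactive
docker exec -it homeassistant /bin/bash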

sairon avatar Mar 25 '24 12:03 sairon

@js4jiang5 I forgot about this in the instructions, but could you please also send the output of cat /proc/fs/cifs/DebugData? If you reverted back to the previous version, you should be able to quickly switch back using rauc status mark-active other again. Sorry for that :pray:
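
Put together, roughly (the first command is only needed if you already reverted to the previous slot):

# switch back to the patched slot and reboot into it
rauc status mark-active other && reboot
# after boot, dump the CIFS debug info
cat /proc/fs/cifs/DebugData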

sairon avatar Mar 25 '24 12:03 sairon

About the other issue, it's simply a case of a missing -i parameter for docker exec; if you add it, it should work as you expect.

Is that also applicable if HA is not installed in a docker container?

angeeinstein avatar Mar 25 '24 12:03 angeeinstein

I've tried testing this. I'm running it as a VM under PVE, but I can't remember which image was used. I tried the bare-metal one - wrong. I tried the qcow2 one, and that gave me "signature size exceeds bundle size". Any ideas on how to test it?

fribse avatar Mar 25 '24 12:03 fribse

Now I've tried creating a new test VM and still get the signature size error: Invalid bundle format: Signature size (9223372036855060826) exceeds bundle size

fribse avatar Mar 25 '24 15:03 fribse

@js4jiang5 I forgot about this in the instructions, but could you please also send the output of cat /proc/fs/cifs/DebugData? If you reverted back to the previous version, you should be able to quickly switch back using rauc status mark-active other again. Sorry for that 🙏

The DebugData output is shown in the attached screenshot.

js4jiang5 avatar Mar 26 '24 04:03 js4jiang5