
When mount dies, it is not remounted

ibotty opened this issue 4 years ago • 26 comments

What happened:

At one point in time the mount died (most likely due to an unrelated server issue). Every pod using the smb-pv then failed to start with an error like the following:

MountVolume.MountDevice failed for volume "pvc-cc41658a-8d11-49e1-8536-d2f73cfe829a" : stat /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cc41658a-8d11-49e1-8536-d2f73cfe829a/globalmount: host is down

Unmounting the CIFS mounts by hand allowed new pods to be deployed again.
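
A rough sketch of that manual workaround as it might be run on the affected node (the force/lazy unmount flags are an assumption, not confirmed in this report; the path is the globalmount from the error above):

mount -t cifs    # list the CIFS mounts on the node
umount -f -l /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cc41658a-8d11-49e1-8536-d2f73cfe829a/globalmount    # force + lazy unmount of the stale globalmount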

What you expected to happen: Detecting that the mount died and remounting without having to restart pods.

How to reproduce it: I have not tried to reproduce it yet.

Anything else we need to know?:

Environment:

  • CSI Driver version: mcr.microsoft.com/k8s/csi/smb-csi:v0.4.0
  • Kubernetes version (use kubectl version):
Server Version: 4.5.0-0.okd-2020-10-15-235428
Kubernetes Version: v1.18.3
  • OS (e.g. from /etc/os-release):
NAME=Fedora
VERSION="32.20200629.3.0 (CoreOS)"
  • Kernel (e.g. uname -a): 5.6.19-300.fc32.x86_64

ibotty avatar Nov 23 '20 15:11 ibotty

Pasted from Steven French: Remount is going to be possible with changes Ronnie at Red Hat is working on (for the new mount API support for cifs), but a remount should not be needed in the case where a server goes down.

SMB3 has very cool features there, and many of them have been implemented in cifs.ko for a very long time. Some specific features beyond support for SMB3 ‘persistent handles’:

  1. The 5.0 kernel added reconnect support for cases where the server IP address changed, along with some important reconnect bug fixes relating to crediting.
  2. The 4.20 kernel added dynamic tracing for various events relating to reconnects and why they were triggered.
  3. SMB3 has a feature called ‘persistent handles’ that allows state (locks etc.) to be re-established more safely during reconnect; the 5.1 kernel made the persistent handle timeout configurable (new mount parameter “handletimeout=”).

An easy way to think about this is that if the network connection goes down – the Linux SMB3 client reopens the files, reacquires byte range locks, and since the Azure server supports persistent handles, there are more guarantees about reconnect surviving races with other clients.
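
As an illustration only, the “handletimeout=” parameter mentioned above would be passed like any other cifs mount option; the share path, credentials file, and timeout value below are made up, and the value is assumed to be in milliseconds:

mount -t cifs //server/share /mnt/share -o credentials=/root/.smbcredentials,vers=3.0,handletimeout=60000

With csi-driver-smb the same option should be passable through the StorageClass mountOptions, though that has not been verified here.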

andyzhangx avatar Nov 24 '20 09:11 andyzhangx

Unfortunately I don't control the server and I don't know what happened.

I am running a 5.6.19 kernel though, which ought to be new enough for these features. The bug still happened: for three straight days, no pods could be created on the affected nodes. Unmounting all CIFS mounts by hand allowed the pods to run again.

ibotty avatar Nov 24 '20 13:11 ibotty

cc @smfrench

andyzhangx avatar Nov 24 '20 14:11 andyzhangx

It happened again. I get the following log lines (a lot of them):

Status code returned 0xc000006d STATUS_LOGON_FAILURE
CIFS VFS: \\XXX Send error in SessSetup = -13

I am pretty sure that it is induced by an unreliable server. But the problem is that csi-driver-smb does not recover. This is on 5.6.19-300.fc32.x86_64

ibotty avatar Nov 30 '20 13:11 ibotty

In the past I have only seen that in the case where the userid or password is misconfigured (e.g. the password was changed on the server).


smfrench avatar Nov 30 '20 15:11 smfrench

Well, it mounted without any problems after a manual umount.

ibotty avatar Nov 30 '20 16:11 ibotty

The New-SmbGlobalMapping -RemotePath command must include "-RequirePrivacy $true", otherwise the SMB channel will be reset after 15 minutes and you'll lose access.

New-SmbGlobalMapping -RemotePath '\\FQDN\share\Directory' -Credential $credential -LocalPath G: -RequirePrivacy $true -ErrorAction Stop

marciogmorales avatar Feb 26 '21 22:02 marciogmorales

This seems related to https://github.com/MicrosoftDocs/Virtualization-Documentation/issues/1353 and https://github.com/moby/moby/issues/37863; I will fix it in csi-proxy first, thanks! @marciogmorales

andyzhangx avatar Feb 28 '21 13:02 andyzhangx

BTW, the original issue is on a Linux node and this one is on Windows, so they are two different issues.

andyzhangx avatar Feb 28 '21 13:02 andyzhangx

Worked out a PR to fix this in k/k first: https://github.com/kubernetes/kubernetes/pull/99550

andyzhangx avatar Feb 28 '21 13:02 andyzhangx

Yes, I am having these issues (regularly!) on a linux node.

ibotty avatar Feb 28 '21 14:02 ibotty

About the Host is down issue, which could leave a pod stuck in Terminating status forever: there is already a PR to address it: https://github.com/kubernetes/utils/pull/203#issuecomment-823211671

andyzhangx avatar Apr 20 '21 12:04 andyzhangx

would be fixed by this PR: https://github.com/kubernetes/kubernetes/pull/101305

andyzhangx avatar Apr 21 '21 03:04 andyzhangx

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar Jul 20 '21 04:07 fejta-bot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 19 '21 04:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Sep 18 '21 05:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 18 '21 05:09 k8s-ci-robot

@andyzhangx should this be reopened for follow-up on Linux nodes?

faandg avatar Jan 25 '22 13:01 faandg

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Feb 24 '22 14:02 k8s-triage-robot

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Feb 24 '22 14:02 k8s-ci-robot

This is still an issue. @andyzhangx

NotANormalNerd avatar Mar 20 '22 13:03 NotANormalNerd

I am also seeing the Host is down issue whenever one of the following occurs:

  • Upgrade of the csi-driver-smb deployment or restart of a csi-smb-node pod
  • Network connection between NAS and cluster is interrupted temporarily

The biggest problem here is that this failure mode is completely silent; the PV/PVCs/drivers all report healthy, and the pod only crashes if it tries to read/write the mount and isn't robust enough to catch the filesystem error.

The only fix seems to be to delete the PV/PVC, then delete the pod, wait for the PV to close, then recreate everything, which is really awful. Is there a way to force the CSI driver to recreate everything?

Alternatively, a workaround might be to deploy a sidecar to the smb-node drivers and either force remount the cifs shares or at the very least change the health status to unhealthy in order to help detect this problem.
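
A very rough sketch of what such a sidecar or node-level check could look like (the interval is omitted, the 5-second timeout is arbitrary, and the force-unmount step is only a suggestion, not something tested against this driver):

for m in $(findmnt -t cifs -n -o TARGET); do          # every CIFS mount on the node
    if ! timeout 5 stat "$m" > /dev/null 2>&1; then   # stat hangs or errors on a dead mount
        echo "stale SMB mount: $m"
        # a remediation step could force-unmount here so the next attach triggers a fresh mount:
        # umount -f -l "$m"
    fi
done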

@andyzhangx can you please reopen this?

MiddleMan5 avatar Jul 20 '22 19:07 MiddleMan5

Could you share the Linux kernel version and k8s version where you hit host is down? There is auto-reconnect in the SMB kernel driver.

andyzhangx avatar Jul 21 '22 00:07 andyzhangx

Ubuntu 18.04.4 LTS
Linux version 4.15.0-189-generic
Kubernetes 1.23.0

The network storage that we are using supports smb version <= 2.0

MiddleMan5 avatar Jul 21 '22 16:07 MiddleMan5

The 4.15 kernel is more than four and a half years old; are you able to upgrade to a 5.x kernel? The CSI driver relies on the SMB kernel driver to do the reconnect.

andyzhangx avatar Jul 22 '22 02:07 andyzhangx

@andyzhangx unfortunately no, we have 20+ nodes running Ubuntu 18.04, and migrating to a different distro or kernel version is not currently feasible.

Automatically reconnecting is not the biggest issue in my eyes; it's the fact that the failure is completely silent.

Is there any process or section of the driver that periodically checks the mounts that could be improved? I'd be interested in opening a PR, but not entirely sure where to start.

Documenting the absolute minimum kernel version would be a good idea here, but it's still kind of lame that there isn't a way to make a request to the CSI driver to force-remount the volumes.

I'll try to find the minimum kernel version tomorrow unless you know it off the top of your head.

MiddleMan5 avatar Jul 22 '22 02:07 MiddleMan5

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 20 '22 02:10 k8s-triage-robot

This is still a problem, and is a massive pain anytime the csi drivers are redeployed or upgraded.

We now have 20 nodes that can't be upgraded, and we currently have no alternative solutions.

MiddleMan5 avatar Oct 20 '22 02:10 MiddleMan5

/remove-lifecycle stale

MiddleMan5 avatar Oct 20 '22 02:10 MiddleMan5

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 18 '23 03:01 k8s-triage-robot