csi-driver-smb
Possible old bug: sporadic reports that the globalmount directory does not exist.
What happened: Trying to mount a known drive occasionally fails with the message
I0122 16:14:54.715136 1 nodeserver.go:79] NodePublishVolume: mounting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/
What you expected to happen: I expected the volume to be mounted - and the pod to start.
How to reproduce it: Unknown - sometimes it works, and sometimes it doesn't
Anything else we need to know?:
- Other pods on the same node - seem to be able to access this pv - ( sometimes )
- K8s Version 1.21
- Baremetal K8s
- Driver info:

I1119 13:12:35.354327 1 smb.go:93] DRIVER INFORMATION:
Build Date: "2023-09-11T23:25:57Z"
Compiler: gc
Driver Name: smb.csi.k8s.io
Driver Version: v1.13.0
Git Commit: ""
Go Version: go1.20.5
Platform: linux/amd64

Streaming logs below:

I1119 13:12:35.401233 1 mount_linux.go:284] Detected umount with safe 'not mounted' behavior
I1119 13:12:35.401925 1 driver.go:93] Enabling controller service capability: CREATE_DELETE_VOLUME
I1119 13:12:35.401945 1 driver.go:93] Enabling controller service capability: SINGLE_NODE_MULTI_WRITER
I1119 13:12:35.401949 1 driver.go:93] Enabling controller service capability: CLONE_VOLUME
I1119 13:12:35.401968 1 driver.go:112] Enabling volume access mode: SINGLE_NODE_WRITER
I1119 13:12:35.401973 1 driver.go:112] Enabling volume access mode: SINGLE_NODE_READER_ONLY
I1119 13:12:35.401976 1 driver.go:112] Enabling volume access mode: SINGLE_NODE_SINGLE_WRITER
I1119 13:12:35.401979 1 driver.go:112] Enabling volume access mode: SINGLE_NODE_MULTI_WRITER
I1119 13:12:35.401982 1 driver.go:112] Enabling volume access mode: MULTI_NODE_READER_ONLY
I1119 13:12:35.401987 1 driver.go:112] Enabling volume access mode: MULTI_NODE_SINGLE_WRITER
I1119 13:12:35.401990 1 driver.go:112] Enabling volume access mode: MULTI_NODE_MULTI_WRITER
I1119 13:12:35.401994 1 driver.go:103] Enabling node service capability: STAGE_UNSTAGE_VOLUME
I1119 13:12:35.401999 1 driver.go:103] Enabling node service capability: SINGLE_NODE_MULTI_WRITER
I1119 13:12:35.402001 1 driver.go:103] Enabling node service capability: VOLUME_MOUNT_GROUP
I1119 13:12:35.402005 1 driver.go:103] Enabling node service capability: GET_VOLUME_STATS
I1119 13:12:35.403111 1 server.go:118] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
I1119 13:12:35.969227 1 utils.go:76] GRPC call: /csi.v1.Identity/GetPluginInfo
I1119 13:12:35.969256 1 utils.go:77] GRPC request: {}
I1119 13:12:35.975036 1 utils.go:83] GRPC response: {"name":"smb.csi.k8s.io","vendor_version":"v1.13.0"}
I1119 13:12:36.063644 1 utils.go:76] GRPC call: /csi.v1.Identity/GetPluginInfo
I1119 13:12:36.063671 1 utils.go:77] GRPC request: {}
I1119 13:12:36.063755 1 utils.go:83] GRPC response: {"name":"smb.csi.k8s.io","vendor_version":"v1.13.0"}
I1119 13:12:36.599218 1 utils.go:76] GRPC call: /csi.v1.Node/NodeGetInfo
I1119 13:12:36.599236 1 utils.go:77] GRPC request: {}
I1119 13:12:36.599285 1 utils.go:83] GRPC response: {"node_id":"xxxxx"}
I1119 13:21:23.734807 1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
Please note: if I create the globalmount directory on the node manually, the mount continues and the pod works:
kubectl exec -n kube-system -it csi-smb-node-v557q -c smb -- mkdir -p /var/lib/kubelet/plugins/kubernetes.io/csi/pv/<myPVvolumeName>/globalmount
I found this solution here: https://github.com/kubernetes-csi/csi-driver-smb/issues/302
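The workaround above can be wrapped in a small helper; this is only a sketch (the node pod name `csi-smb-node-xxxxx` and the PV name are placeholders, and the path layout matches the one in the log line quoted above):

```shell
#!/bin/sh
# Build the kubelet staging ("globalmount") path for a PV, matching the
# /var/lib/kubelet/plugins/kubernetes.io/csi/pv/... layout seen in the logs.
globalmount_path() {
    pv_name="$1"
    echo "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/${pv_name}/globalmount"
}

# Example: recreate the directory through the node plugin pod
# (pick the csi-smb-node pod running on the affected node):
#   kubectl exec -n kube-system -it csi-smb-node-xxxxx -c smb -- \
#       mkdir -p "$(globalmount_path <myPVvolumeName>)"
globalmount_path example-pv
```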
I'm also getting that error sporadically.
As @maprager mentioned, sometimes it works and sometimes it doesn't.
What's the volumeHandle value of your PVs? Could you make sure there is no conflict among the volumeHandle values across all of your PVs?
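One way to check for such conflicts is to list every volumeHandle and look for repeats; a sketch (the kubectl jsonpath is standard, but the handle names below are made up for illustration):

```shell
#!/bin/sh
# On a live cluster, collect the handles with:
#   kubectl get pv -o jsonpath='{range .items[*]}{.spec.csi.volumeHandle}{"\n"}{end}'
# Here the duplicate check runs on a hypothetical sample list:
handles="pv-handle-a
pv-handle-b
pv-handle-a"

# sort + uniq -d prints only values that occur more than once
duplicates=$(printf '%s\n' "$handles" | sort | uniq -d)
echo "$duplicates"
```

Any value printed by `uniq -d` is shared by at least two PVs and must be made unique.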
Same here. Seems like the directory is not created.
same issue here.
I created the folder manually as in the workaround mentioned, but the plugin deletes it and I can't start my pod.
same issue here.
Warning FailedMount 40m (x205 over 19h) kubelet MountVolume.SetUp failed for volume "aicp-local-pv-ae308314-e8c4-443d-a9a1-dee92cc53759" : rpc error: code = Internal desc = Could not mount "/var/lib/kubelet/plugins/kubernetes.io/csi/smb.csi.k8s.io/6eede1929b708423990deaef8efd0ddc79452b5dd7012f2b551c5ca5772e71d5/globalmount" at "/var/lib/kubelet/pods/737fd7af-d844-491e-8b4d-21227a7cb697/volumes/kubernetes.io~csi/aicp-local-pv-ae308314-e8c4-443d-a9a1-dee92cc53759/mount": mount failed: exit status 32
Mounting command: mount
Mounting arguments: -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/smb.csi.k8s.io/6eede1929b708423990deaef8efd0ddc79452b5dd7012f2b551c5ca5772e71d5/globalmount /var/lib/kubelet/pods/737fd7af-d844-491e-8b4d-21227a7cb697/volumes/kubernetes.io~csi/aicp-local-pv-ae308314-e8c4-443d-a9a1-dee92cc53759/mount
Output: mount: /var/lib/kubelet/pods/737fd7af-d844-491e-8b4d-21227a7cb697/volumes/kubernetes.io~csi/aicp-local-pv-ae308314-e8c4-443d-a9a1-dee92cc53759/mount: special device /var/lib/kubelet/plugins/kubernetes.io/csi/smb.csi.k8s.io/6eede1929b708423990deaef8efd0ddc79452b5dd7012f2b551c5ca5772e71d5/globalmount does not exist.
dmesg(1) may have more information after failed mount system call.
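Before kubelet retries the bind mount, it may help to confirm on the affected node whether the staging directory from the error actually exists; a sketch (the path is taken from the error message above):

```shell
#!/bin/sh
# Report whether the staging directory exists. If it is missing, the
# bind mount fails with exit status 32 ("special device ... does not
# exist"), exactly as in the kubelet event shown.
check_staging() {
    if [ -d "$1" ]; then
        echo "exists"
    else
        echo "missing"
    fi
}

check_staging "/var/lib/kubelet/plugins/kubernetes.io/csi/smb.csi.k8s.io/6eede1929b708423990deaef8efd0ddc79452b5dd7012f2b551c5ca5772e71d5/globalmount"
```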
Hello all,
In fact, you need to double-check your PV's csi configuration and mountOptions:
...
spec:
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: <UNIQUE_ID_PV_IN_CLUSTER>
    volumeAttributes:
      source: <SMB_FULL_PATH>
    nodeStageSecretRef:
      name: <SECRET_NAME>
      namespace: <SECRET_NAMESPACE>
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - vers=3.0
    - uid=<UID>
    - gid=<GID>
    - forceuid
    - forcegid
    - nosharesock
    - mfsymlinks
    - cache=strict
    - noserverino
  volumeMode: Filesystem
- UNIQUE_ID_PV_IN_CLUSTER: must be a unique string ID for your PV within the cluster
- SMB_FULL_PATH: //<FQDN or IP>/<SMB_PATH> (try with an IP, because the cluster may not be able to resolve the FQDN)
- Credentials: a secret SECRET_NAME in SECRET_NAMESPACE with username and password fields
- UID/GID: should correspond to the container process owner
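For the nodeStageSecretRef, the credentials secret can be created with kubectl; a sketch (name, namespace, and credentials below are placeholders, and the command is built as a string so it can be inspected without a cluster):

```shell
#!/bin/sh
# Placeholder names; the secret must carry the "username" and
# "password" keys referenced by nodeStageSecretRef.
SECRET_NAME="smbcreds"
SECRET_NAMESPACE="default"

CREATE_CMD="kubectl create secret generic ${SECRET_NAME} --namespace ${SECRET_NAMESPACE} --from-literal=username=<USERNAME> --from-literal=password=<PASSWORD>"
echo "$CREATE_CMD"
```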
Hope this helps!