csi-driver-smb

MountVolume.NodeExpandVolume failed error for volume declared as read-only file system

Zombro opened this issue 1 year ago • 3 comments

What happened:

Mounting an SMB filesystem declared as read-only in .spec.template.spec.volumes[*] triggers an error in the kubelet logs. Scheduling, deployment, and the filesystem itself all appear to work, but this event fires:

MountVolume.NodeExpandVolume failed for volume "smb-config" requested read-only file system

This error / event does not fire if the .spec.template.spec.volumes[*].readOnly is omitted.
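
For reference, the event can be observed with kubectl (the pod name below is a hypothetical placeholder):

# Show recent events recorded for the pod, including the NodeExpandVolume failure
kubectl describe pod smb-ro-mount-<hash> | grep -A 2 NodeExpandVolume

# Or list recent events across the namespace
kubectl get events --sort-by=.lastTimestamp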

What you expected to happen:

No errors reported.

How to reproduce it:

Deploy a simple test workload like the one below. As presented, it works without errors, and the mounted filesystem is read-only as expected.

Note that .spec.template.spec.volumes[0].persistentVolumeClaim.readOnly is commented out. When it is enabled, the error / event mentioned above fires, but the workload still functions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: smb-ro-mount
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smb-ro-mount
  template:
    metadata:
      labels:
        app: smb-ro-mount
    spec:
      volumes:
        - name: smb-config
          persistentVolumeClaim:
            claimName: smb-config
            # readOnly: true
      containers:
        - name: smb-ro-mount-example
          image: nginx
          volumeMounts:
            - name: smb-config
              readOnly: true
              mountPath: /config
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: smb-config
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 10Mi
  volumeName: smb-config
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: smb-config
spec:
  capacity:
    storage: 10Mi
  csi:
    driver: smb2.csi.k8s.io
    volumeHandle: smb-config-a1b2c3
    fsType: ext4
    volumeAttributes:
      createSubDir: "true"
      source: \\smbtest.x.net\K8S\config-demo
    nodeStageSecretRef:
      name: SMB-DEMO-CREDS
      namespace: default
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - dir_mode=0555
    - file_mode=0444
    - vers=3.0
  volumeMode: Filesystem

Environment:

  • CSI Driver version: Helm chart v1.13.0, image: registry.k8s.io/sig-storage/smbplugin:v1.13.0
  • Kubernetes version: 1.28
  • OS: Windows Server 2022 & Ubuntu 22.04.3
  • Kernel(s): 10.0.20348.2159 & 5.15.0-75-generic
  • Install tools: helm

Parting Thoughts

Maybe this isn't an issue with csi-driver-smb directly, but rather with the coupling between kubelet and CSI volume operations. It would be nice if the documentation pointed out this behavior somewhere.

Zombro • Jan 15 '24

Why is MountVolume.NodeExpandVolume triggered? Have you expanded a PVC or PV?

andyzhangx • Jan 16 '24

No, I have not expanded anything.

Zombro • Jan 16 '24

I am also seeing this in some of my filestore logs. The only behavior I notice is that the filestore works fine, but not all the time: sporadically I have had mounting issues that cause pods to get stuck in an init stage, though not consistently. I am wondering if these are connected (it doesn't seem so), but I am curious why these logs pop up.

JYlag • Mar 01 '24

I think I figured out the ultimate solution here:

  • wrap this csi-driver-smb chart in a parent Helm chart that declares at least one StorageClass whose provisioner references the smb.csi.k8s.io driver (a sketch follows this list)
  • ensure the StorageClass declares allowVolumeExpansion: false
  • ensure any of your dependent SMB CSI PVs & PVCs reference that StorageClass
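
A minimal sketch of such a StorageClass, assuming the default driver name and a hypothetical class name (adjust the provisioner if your install uses a custom driver name, as the PV above does):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb-readonly          # hypothetical name
provisioner: smb.csi.k8s.io   # must match the installed driver name
allowVolumeExpansion: false   # declare that expansion is not supported
reclaimPolicy: Retain

The statically provisioned PV and the PVC would then both set storageClassName: smb-readonly; per the blog post linked below, allowVolumeExpansion: false tells Kubernetes the class does not support volume expansion.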

I dug through a lot of storage API and CSI code to reach this conclusion... then noticed the documentation: https://kubernetes.io/blog/2022/05/05/volume-expansion-ga/#storage-driver-support

Maintainers, could you add a StorageClass template and values interface to the Helm chart to make our lives easier?

Zombro • May 24 '24

It may be a kubelet issue that was fixed in 1.30. See https://github.com/kubernetes/kubernetes/pull/122508

tydra-wang • Jul 02 '24