
Released PVs Cannot Be Claimed (referencing PVCs stuck in Pending) Until `/spec/claimRef/uid` is Removed


Preface

Apologies if this is an upstream bug; feel free to close this if so.

What happened:

  1. Have a PVC in a helm chart; no pre-existing PV for whatever I am installing.
  2. Have a StorageClass for SMB (below); storageClassName is set to smb in values.yaml for the chart above.
  3. A PV is automatically created by the CSI controller.
  4. Note that the reclaimPolicy below is set to Retain.
  5. I DELETE the entire namespace where this helm chart is installed. The PV goes into state=Released and the files on the SMB server persist 👍
  6. I UPDATE the PVC in the values.yaml file to reference the created PV via volumeName.
  7. Re-install the chart with the new values.
  8. ERROR: the PVC is stuck in Pending. Events for the PVC indicate that the PV I reference is still bound to another claim.
  9. THE FIX for any other poor soul who runs into this I found here; tl;dr is to kubectl edit pv <your pv> and remove the /spec/claimRef/uid field (a non-interactive one-liner sketch follows this list).
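
For reference, a non-interactive equivalent of that fix (a sketch; the PV name here is the one from my values file below, substitute your own):

# drop the stale uid from the Released PV's claimRef so a new PVC can bind
kubectl patch pv pvc-c463743b-6437-4f45-8e6d-4fd49974ca03 \
  --type=json \
  -p='[{"op": "remove", "path": "/spec/claimRef/uid"}]'
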
# SMB storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    meta.helm.sh/release-name: smb-csi
    meta.helm.sh/release-namespace: kube-system
  labels:
    app.kubernetes.io/managed-by: Helm
  name: smb
  resourceVersion: "10701819"
  uid: ........
mountOptions:
- dir_mode=0777
- file_mode=0777
- nobrl
- cache=none
- noperm
- mfsymlinks
- uid=1001
- gid=1001
parameters:
  csi.storage.k8s.io/node-stage-secret-name: smbcreds
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/provisioner-secret-name: smbcreds
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  source: //nas.lan/k8s.smb-csi-share
  subDir: ns-${pvc.metadata.namespace}--${pv.metadata.name}
provisioner: smb.csi.k8s.io
reclaimPolicy: Retain
volumeBindingMode: Immediate

What you expected to happen:

I expect the PVC to automatically bind to the old PV.
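
Instead, the Released PV keeps a claimRef pointing at the uid of the PVC that was deleted along with the namespace, and the binder will not bind a new PVC whose uid differs. Roughly what the stuck PV looks like (a sketch; the uid value is hypothetical):

# excerpt of kubectl get pv <your pv> -o yaml on the Released PV
spec:
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: longhorn-docker-registry-pvc
    namespace: docker-registry
    uid: 0f0e0d0c-...  # stale; points at the deleted PVC and blocks rebinding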

How to reproduce it:

Follow steps above.

This happened while playing around with the docker-registry-ui helm chart from https://helm.joxit.dev. I wrap this chart and have a separate PVC definition (below) in templates/. Values file:

persistence:
  class: smb
  # generated by SMB storage CSI controller
  volumeName: pvc-c463743b-6437-4f45-8e6d-4fd49974ca03
  size: 128Gi

docker-registry-ui:
  registry:
    enabled: true
    dataVolume:
      persistentVolumeClaim:
        claimName: longhorn-docker-registry-pvc

    ingress:
      enabled: true
      annotations:
        nginx.ingress.kubernetes.io/proxy-body-size: "0"
        nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
        nginx.ingress.kubernetes.io/proxy-send-timeout: "600"

  ui:
    ingress:
      enabled: true
And the PVC template in templates/:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-docker-registry-pvc
  namespace: docker-registry
spec:
  accessModes:
    - ReadWriteMany

  {{- if (.Values.persistence).class }}
  storageClassName: {{ .Values.persistence.class }}
  {{- end }}

  resources:
    requests:
      storage: {{ (.Values.persistence).size | default "128Gi" }}

  {{- if (.Values.persistence).volumeName }}
  volumeName: {{ .Values.persistence.volumeName }}
  {{- end -}}
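
Before re-installing, one way to check whether the PV still carries the stale claimRef (a sketch using the PV name from the values above):

# prints the claimRef; a uid still present means the new PVC will stay Pending
kubectl get pv pvc-c463743b-6437-4f45-8e6d-4fd49974ca03 \
  -o jsonpath='{.spec.claimRef}'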

Environment:

  • CSI Driver version: 1.9.0
  • Kubernetes version (use kubectl version): 1.30.3
  • OS (e.g. from /etc/os-release): Talos 1.7.6
  • Kernel (e.g. uname -a): linux 6.x
  • Install tools: helm via argocd
  • Others: n/a
