
No pod for nfs provisioner running after Helm install

Open · crazyelectron-io opened this issue 3 years ago • 0 comments

I have a K3s cluster (3 control-plane/etcd nodes and 6 worker nodes). I deployed the NFS provisioner using the provided Helm chart and changed the naming so I can run multiple instances:

helm install nfs-mmedia-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=10.100.2.245 \
    --set nfs.path=/mmedia \
    --set storageClass.name=nfs-mmedia-client \
    --set storageClass.provisionerName=k8s-sigs.io/nfs-mmedia-provisioner
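(For reference, the chart repo was added first. A minimal sketch, assuming the repo URL from the project README and that the release went into the default namespace:)

$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm repo update
# confirm the release installed and reports a "deployed" status
$ helm status nfs-mmedia-provisioner
$ helm list -A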

The test claim:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-mmedia-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

and the pod:

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:stable
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
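Both manifests were applied with kubectl (the filenames below are just placeholders for how I saved them):

$ kubectl apply -f test-claim.yaml   # the PVC above
$ kubectl apply -f test-pod.yaml     # the test pod above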

The storage class is created:

$ kubectl get sc
NAME                 PROVISIONER                          RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
...
nfs-mmedia-client    k8s-sigs.io/nfs-mmedia-provisioner   Delete          Immediate              true                   57m

the PVC states:

$ kubectl describe pvc test-claim
Name:          test-claim
Namespace:     default
StorageClass:  nfs-mmedia-client
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-mmedia-provisioner
               volume.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-mmedia-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       test-pod
Events:
  Type    Reason                Age                  From                         Message
  ----    ------                ----                 ----                         -------
  Normal  ExternalProvisioning  78s (x103 over 26m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "k8s-sigs.io/nfs-mmedia-provisioner" or manually created by system administrator

and finally:

$ kubectl get events
28m         Warning   FailedScheduling       pod/test-pod                                                                   0/9 nodes are available: 9 pod has unbound immediate PersistentVolumeClaims. preemption: 0/9 nodes are available: 9 Preemption is not helpful for scheduling.

I noticed there is no NFS provisioner pod running: `kubectl get pods -A | grep -i nfs` gives no result. How can I troubleshoot this? Looks like an annotation issue, perhaps...
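These are the checks I plan to run next to narrow it down. The Deployment name below is what I'd expect from the chart's default naming for my release, so treat it as an assumption:

$ helm status nfs-mmedia-provisioner
# does the provisioner Deployment exist at all, in any namespace?
$ kubectl get deploy -A | grep -i nfs
# if the Deployment exists but has 0 ready replicas, this should say why
$ kubectl describe deploy nfs-mmedia-provisioner-nfs-subdir-external-provisioner
# recent cluster events (image pull, scheduling, RBAC errors, ...)
$ kubectl get events -n default --sort-by=.lastTimestamp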

crazyelectron-io · Aug 23 '22 09:08