aws-efs-csi-driver
Volumes can get mounted multiple times, preventing pod deletion
/kind bug
What happened? Trying to delete a pod that uses multiple EFS volumes fails very frequently because the EFS volumes cannot be unmounted. This appears to happen because some of the volumes get mounted multiple times (with different stunnel ports) and kubelet will not attempt to unmount them.
What you expected to happen? If I ask for 10 EFS volumes, the driver mounts 10 volumes, kubelet unmounts them all, and pod deletion does not hang.
How to reproduce it (as minimally and precisely as possible)? Using a StorageClass for dynamic EFS provisioning:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
mountOptions:
  - tls
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-deadbeef
  directoryPerms: "700"
  basePath: "/dynamic_provisioning"
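For completeness, the class can be applied and checked like this (the file name efs-sc.yaml is only an example):
# Apply the StorageClass above and verify it is registered
kubectl apply -f efs-sc.yaml
kubectl get storageclass efs-sc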
Create 10 EFS volumes:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-vol-1
  annotations:
    "volume.beta.kubernetes.io/storage-class": "efs-sc"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-vol-2
  annotations:
    "volume.beta.kubernetes.io/storage-class": "efs-sc"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
< 7 more ...>
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-vol-10
  annotations:
    "volume.beta.kubernetes.io/storage-class": "efs-sc"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
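For convenience, the same ten claims can be generated with a small loop instead of being written out by hand (a sketch that produces exactly the manifests shown above):
# Generate and apply efs-vol-1 .. efs-vol-10
for i in $(seq 1 10); do
cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-vol-$i
  annotations:
    "volume.beta.kubernetes.io/storage-class": "efs-sc"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
done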
Create a pod that uses the volumes:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-test
  labels:
    name: busybox-test
spec:
  restartPolicy: Never
  containers:
    - resources:
        limits:
          cpu: 0.5
      image: gcr.io/google_containers/busybox
      command:
        - "/bin/sh"
        - "-c"
        - "while true; do date; sleep 1; done"
      name: busybox
      volumeMounts:
        - name: vol-1
          mountPath: /mnt/test-1
        - name: vol-2
          mountPath: /mnt/test-2
        - name: vol-3
          mountPath: /mnt/test-3
        - name: vol-4
          mountPath: /mnt/test-4
        - name: vol-5
          mountPath: /mnt/test-5
        - name: vol-6
          mountPath: /mnt/test-6
        - name: vol-7
          mountPath: /mnt/test-7
        - name: vol-8
          mountPath: /mnt/test-8
        - name: vol-9
          mountPath: /mnt/test-9
        - name: vol-10
          mountPath: /mnt/test-10
  volumes:
    - name: vol-1
      persistentVolumeClaim:
        claimName: efs-vol-1
    - name: vol-2
      persistentVolumeClaim:
        claimName: efs-vol-2
    - name: vol-3
      persistentVolumeClaim:
        claimName: efs-vol-3
    - name: vol-4
      persistentVolumeClaim:
        claimName: efs-vol-4
    - name: vol-5
      persistentVolumeClaim:
        claimName: efs-vol-5
    - name: vol-6
      persistentVolumeClaim:
        claimName: efs-vol-6
    - name: vol-7
      persistentVolumeClaim:
        claimName: efs-vol-7
    - name: vol-8
      persistentVolumeClaim:
        claimName: efs-vol-8
    - name: vol-9
      persistentVolumeClaim:
        claimName: efs-vol-9
    - name: vol-10
      persistentVolumeClaim:
        claimName: efs-vol-10
Wait for the pod to start (it may not necessarily succeed due to a bug in efs-utils), then try to delete it with kubectl delete pod busybox-test. Depending on the machine load, there is a good chance the pod deletion will hang indefinitely.
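While the deletion hangs, the pod stays in Terminating and the node plugin keeps logging mount attempts for the same PV. Roughly (the label and namespace below assume the default installation of the driver in kube-system; adjust for your deployment):
# Pod is stuck in Terminating
kubectl get pod busybox-test
# Tail the node plugin logs to see the repeated stunnel/mount attempts
kubectl logs -n kube-system -l app=efs-csi-node -c efs-plugin --tail=200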
Looking at the CSI node driver container, the logs contain something like this (parts unrelated to the problematic volume have been cut out, and EFS endpoints have been redacted):
2022-05-11 10:08:07,961 - INFO - Starting TLS tunnel: "/usr/bin/stunnel /var/run/efs/stunnel-config.fs-0123456789.var.lib.kubelet.pods.20b88005-3579-4b4a-a6ce-1b158c326154.volumes.kubernetes.io~csi.pvc-acbe1d27-ed03-4b52-b5e3-bc861cfa7413.mount.20068"
...
2022-05-11 10:08:08,019 - INFO - Executing: "/sbin/mount.nfs4 127.0.0.1:/ /var/lib/kubelet/pods/20b88005-3579-4b4a-a6ce-1b158c326154/volumes/kubernetes.io~csi/pvc-acbe1d27-ed03-4b52-b5e3-bc861cfa7413/mount -o rw,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,port=20068"
...
2022-05-11 10:10:07,882 - INFO - Starting TLS tunnel: "/usr/bin/stunnel /var/run/efs/stunnel-config.fs-0123456789.var.lib.kubelet.pods.20b88005-3579-4b4a-a6ce-1b158c326154.volumes.kubernetes.io~csi.pvc-acbe1d27-ed03-4b52-b5e3-bc861cfa7413.mount.20309"
...
2022-05-11 10:10:07,936 - INFO - Executing: "/sbin/mount.nfs4 127.0.0.1:/ /var/lib/kubelet/pods/20b88005-3579-4b4a-a6ce-1b158c326154/volumes/kubernetes.io~csi/pvc-acbe1d27-ed03-4b52-b5e3-bc861cfa7413/mount -o rw,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,port=20309"
...
2022-05-11 10:10:12,457 - INFO - Successfully mounted fs-0123456789.efs.us-east-1.amazonaws.com at /var/lib/kubelet/pods/20b88005-3579-4b4a-a6ce-1b158c326154/volumes/kubernetes.io~csi/pvc-acbe1d27-ed03-4b52-b5e3-bc861cfa7413/mount
2022-05-11 10:10:12,518 - INFO - Successfully mounted fs-0123456789.efs.us-east-1.amazonaws.com at /var/lib/kubelet/pods/20b88005-3579-4b4a-a6ce-1b158c326154/volumes/kubernetes.io~csi/pvc-acbe1d27-ed03-4b52-b5e3-bc861cfa7413/mount
So the pvc-acbe1d27-ed03-4b52-b5e3-bc861cfa7413 PV is mounted twice, with different stunnel ports. It seems that kubelet is aware of only one of the mounts and will not attempt to unmount the other one when deleting the pod. It is possible to rsh into the node driver container, run umount manually, and get the pod deleted cleanly.
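The manual workaround looks roughly like this (the efs-csi-node pod name is a placeholder; the paths come from the logs above):
# Exec into the efs-plugin container running on the affected node
kubectl exec -n kube-system -it efs-csi-node-xxxxx -c efs-plugin -- /bin/sh
# The target path appears twice in the mount table, once per stunnel port
mount | grep pvc-acbe1d27-ed03-4b52-b5e3-bc861cfa7413
# Unmount the extra mount manually; after that the pending pod deletion completes cleanly
umount /var/lib/kubelet/pods/20b88005-3579-4b4a-a6ce-1b158c326154/volumes/kubernetes.io~csi/pvc-acbe1d27-ed03-4b52-b5e3-bc861cfa7413/mount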
Anything else we need to know? The problem seems to appear when a volume takes too long to mount: kubelet has a reconciliation loop, and if a mount attempt times out it will attempt to call mount again. After some time both attempts succeed, but only one of them is then taken into account. Here are some snippets from the kubelet logs (from a different test, so timestamps and PV names won't match the logs above); a rough sketch of the missing idempotency check follows the snippets:
May 11 11:54:20.484074 ip-10-0-179-42 hyperkube[1456]: I0511 11:54:20.484038 1456 reconciler.go:258] "operationExecutor.MountVolume started for volume \"pvc-33aaac42-ba38-4600-ac1a-971bc1c052f4\" (UniqueName: \"kubernetes.io/csi/efs.csi.aws.com^fs-00deadbeef::fsap-0123456789\") pod \"busybox-test\" (UID: \"e2cd3eca-d99c-40f3-a7ee-07e8309d2e0b\") " pod="default/busybox-test"
...
May 11 11:56:20.490533 ip-10-0-179-42 hyperkube[1456]: E0511 11:56:20.490330 1456 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/efs.csi.aws.com^fs-00deadbeef::fsap-00123456789 podName: nodeName:}" failed. No retries permitted until 2022-05-11 11:56:20.990284249 +0000 UTC m=+1087.245447193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "pvc-33aaac42-ba38-4600-ac1a-971bc1c052f4" (UniqueName: "kubernetes.io/csi/efs.csi.aws.com^fs-00deadbeef::fsap-0123456789") pod "busybox-test" (UID: "e2cd3eca-d99c-40f3-a7ee-07e8309d2e0b") : rpc error: code = DeadlineExceeded desc = context deadline exceeded
...
May 11 11:56:21.019922 ip-10-0-179-42 hyperkube[1456]: I0511 11:56:21.019782 1456 reconciler.go:258] "operationExecutor.MountVolume started for volume \"pvc-33aaac42-ba38-4600-ac1a-971bc1c052f4\" (UniqueName: \"kubernetes.io/csi/efs.csi.aws.com^fs-00deadbeef::fsap-0123456789\") pod \"busybox-test\" (UID: \"e2cd3eca-d99c-40f3-a7ee-07e8309d2e0b\") " pod="default/busybox-test"
...
May 11 11:56:23.424606 ip-10-0-179-42 hyperkube[1456]: I0511 11:56:23.424581 1456 operation_generator.go:756] "MountVolume.SetUp succeeded for volume \"pvc-33aaac42-ba38-4600-ac1a-971bc1c052f4\" (UniqueName: \"kubernetes.io/csi/efs.csi.aws.com^fs-00deadbeef::fsap-0123456789\") pod \"busybox-test\" (UID: \"e2cd3eca-d99c-40f3-a7ee-07e8309d2e0b\") " pod="default/busybox-test
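In other words, the retried mount call ends up mounting over a target that the first (still in-flight) attempt later mounts as well. For illustration only, the basic idempotency check looks like this in shell terms; since the first attempt may still be in flight when the retry arrives, a real fix would also have to serialize or track in-progress mounts per target (this is not the driver's actual code; FS_ID and TARGET are example values):
# Illustration: skip the mount if the target is already a mount point
FS_ID=fs-deadbeef
TARGET=/var/lib/kubelet/pods/example-pod-uid/volumes/kubernetes.io~csi/example-pv/mount
if mountpoint -q "$TARGET"; then
  echo "already mounted: $TARGET"   # a retried call after a timeout would stop here
else
  mount -t efs -o tls "$FS_ID:/" "$TARGET"
fi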
Environment
- Kubernetes version (use kubectl version): v1.23.5+9ce5071
- Driver version: Reproduced with v1.1.1 and v1.3.7
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
This is still plaguing the driver and causes us quite a few problems.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.