Timed out waiting for external-attacher of ch.ctrox.csi.s3-driver CSI driver to attach volume
- Configuration files are the same as the examples
- The AWS S3 bucket is created after the configuration files are applied
- Only the test pod's mountPath was changed to /var/www/html
pod.yaml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test-nginx
  namespace: default
spec:
  containers:
    - name: csi-s3-test-nginx
      image: nginx
      volumeMounts:
        - mountPath: /var/www/html
          name: webroot
  volumes:
    - name: webroot
      persistentVolumeClaim:
        claimName: csi-s3-pvc
        readOnly: false
```
I get the error `timed out waiting for external-attacher of ch.ctrox.csi.s3-driver CSI driver to attach volume` while creating the pod:
```
$ kubectl get events | tail
6m12s Normal Pulled pod/csi-s3-95nz6 Successfully pulled image "ctrox/csi-s3:v1.2.0-rc.2" in 59.670915168s
6m12s Normal Created pod/csi-s3-95nz6 Created container csi-s3
6m12s Normal Started pod/csi-s3-95nz6 Started container csi-s3
6m9s Normal ExternalProvisioning persistentvolumeclaim/csi-s3-pvc waiting for a volume to be created, either by external provisioner "ch.ctrox.csi.s3-driver" or manually created by system administrator
6m7s Normal Provisioning persistentvolumeclaim/csi-s3-pvc External provisioner is provisioning volume for claim "prod/csi-s3-pvc"
6m5s Normal ProvisioningSucceeded persistentvolumeclaim/csi-s3-pvc Successfully provisioned volume pvc-53f12ea9-9398-49dd-b16c-0454b145b746
2m35s Normal Scheduled pod/csi-s3-test-nginx Successfully assigned prod/csi-s3-test-nginx to minikube
35s Warning FailedAttachVolume pod/csi-s3-test-nginx AttachVolume.Attach failed for volume "pvc-53f12ea9-9398-49dd-b16c-0454b145b746" : timed out waiting for external-attacher of ch.ctrox.csi.s3-driver CSI driver to attach volume pvc-53f12ea9-9398-49dd-b16c-0454b145b746
32s Warning FailedMount pod/csi-s3-test-nginx Unable to attach or mount volumes: unmounted volumes=[webroot], unattached volumes=[webroot kube-api-access-m66ll]: timed out waiting for the condition
7m22s Normal SuccessfulCreate daemonset/csi-s3 Created pod: csi-s3-95nz6
```
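When debugging a `FailedAttachVolume` timeout like this, the external-attacher's own logs usually point at the root cause. A couple of commands worth running (the names assume the stock csi-s3 deploy manifests, where the attacher runs as the `csi-attacher-s3` StatefulSet in `kube-system`):

```shell
# Tail the external-attacher logs for RBAC or API-version errors
kubectl -n kube-system logs statefulset/csi-attacher-s3 --tail=50

# Check whether a VolumeAttachment object was ever created for the PV
kubectl get volumeattachments
```

If no VolumeAttachment object exists at all, the attacher never saw the request, which typically means an API-version or RBAC problem like the ones described below.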
Is it a network issue, or some kind of misconfiguration? Thanks.
Environment:

- Docker version 20.10.18, build b40c2f6
- minikube v1.26.1 on Ubuntu 20.04
- kubectl Client Version: v1.25.1
- Kustomize Version: v4.5.7
- Server Version: v1.24.3
I had the same problem. Looking at the logs of the csi-attacher-s3 pod, I first saw `Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource`. I figured it was a Kubernetes version issue, so I updated the container image of the csi-attacher StatefulSet from v2.2.1 to canary (the latest):
```shell
kubectl -n kube-system set image statefulset/csi-attacher-s3 csi-attacher=quay.io/k8scsi/csi-attacher:canary
```
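If you go this route, it's worth confirming the new image actually rolled out before retesting; this sketch assumes the default StatefulSet and container names from the deploy manifests:

```shell
# Wait for the attacher pod to be recreated with the new image
kubectl -n kube-system rollout status statefulset/csi-attacher-s3

# Print the image the StatefulSet is now using
kubectl -n kube-system get statefulset csi-attacher-s3 \
  -o jsonpath='{.spec.template.spec.containers[*].image}'
```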
Next I got a permission error: `v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:csi-attacher-sa" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope`.
I tried to modify the role bindings, but I couldn't find the right combination, so I ended up giving the csi-attacher-sa service account cluster-admin privileges as shown below:
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-all
subjects:
  - kind: ServiceAccount
    name: csi-attacher-sa
    namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
```
@fallmo I had the same problem, but granting a narrower set of permissions worked for me:
```yaml
- apiGroups: ["storage.k8s.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "update", "patch"]
```

or

```yaml
- apiGroups: ["storage.k8s.io"]
  resources: ["volumeattachments", "storageclass"]
  verbs: ["get", "list", "watch", "update", "patch"]
```
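For reference, folded into the stock `external-attacher-runner` ClusterRole from the deploy manifests, the narrower variant would look roughly like the sketch below. Note that the built-in resource name is the plural `storageclasses` (a singular `storageclass` entry simply won't match anything); the `persistentvolumes` rule is an assumption based on what the CSI external-attacher typically needs:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-attacher-runner
rules:
  # Assumed: the attacher reads/updates PVs during attach/detach
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments", "volumeattachments/status", "storageclasses"]
    verbs: ["get", "list", "watch", "update", "patch"]
```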
Related to https://github.com/ctrox/csi-s3/issues/72#issuecomment-1149634988
Mine was fixed by using this: https://github.com/ctrox/csi-s3/pull/70/files
> Related to #72 (comment)
> Mine fixed by using this https://github.com/ctrox/csi-s3/pull/70/files
I did this and it seemed to work (I hadn't checked it right afterwards), but then I decided to pull the latest from the repo and ran:
```shell
cd deploy/kubernetes
kubectl apply -f provisioner.yaml
kubectl apply -f attacher.yaml
kubectl apply -f csi-s3.yaml
```
which made it work 👍
So for anyone else who came here after realising that since a Kubernetes upgrade they couldn't create/mount new S3 volumes anymore, I'll save you some time:

- Apply the latest `provisioner`, `attacher` and `csi-s3` files from https://github.com/ctrox/csi-s3/tree/master/deploy/kubernetes
- Change the `external-attacher-runner` ClusterRole to go from `resources: ["volumeattachments"]` to `resources: ["volumeattachments", "volumeattachments/status", "storageclass"]`
- Find `quay.io/k8scsi/csi-attacher:v2.2.0` on the `csi-attacher-s3` StatefulSet and bump it up a major version to `quay.io/k8scsi/csi-attacher:v3.1.0`
- Apply

That should have it working again until the next (seemingly inevitable) breaking change :)
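The steps above can be sketched as commands; this assumes a fresh checkout of the ctrox/csi-s3 repo and the default resource names from its manifests:

```shell
# 1. Apply the latest deploy manifests
kubectl apply -f deploy/kubernetes/provisioner.yaml
kubectl apply -f deploy/kubernetes/attacher.yaml
kubectl apply -f deploy/kubernetes/csi-s3.yaml

# 2. Extend the ClusterRole (opens an editor; add the extra resources by hand)
kubectl edit clusterrole external-attacher-runner

# 3. Bump the attacher image a major version
kubectl -n kube-system set image statefulset/csi-attacher-s3 \
  csi-attacher=quay.io/k8scsi/csi-attacher:v3.1.0
```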
After making the changes above, the pod went to the Running state, but now we are getting this issue: `root@csi-s3-test-nginx:/var/lib/www/html# ls` returns `ls: reading directory '.': Input/output error`. Any help on this?
@Venkatesh7591 you saved my day, it worked. Thank you!
@Venkatesh7591 Hello, I have encountered the same problem. Have you solved it yet?
Follow @RobinJ1995's instructions.
> Related to #72 (comment)
> Mine fixed by using this https://github.com/ctrox/csi-s3/pull/70/files
Many thanks!
issue https://github.com/ctrox/csi-s3/issues/94
This:

```yaml
- apiGroups: ["storage.k8s.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "update", "patch"]
```

worked for us.