csi-s3
DigitalOcean example?
Hi, does anyone have a DigitalOcean example? I can't seem to get it working.
Thanks, Jamie
I do not have any experience with DigitalOcean myself, but it should just depend on which Kubernetes/CSI version is supported. If they already support Kubernetes 1.13, I recommend trying the new release (v1.1.0) that I just uploaded, which should make things a bit easier to deploy.
I'm not running the CSI on DigitalOcean, but rather trying to connect the CSI to DigitalOcean's S3-compatible storage service (Spaces).
Also, my cluster is running 1.14.
Ah right, their storage should be interoperable with the S3 API, so it should "just work". Have you tried just setting the access/secret keys and the endpoint to e.g. nyc3.digitaloceanspaces.com while setting region: ""?
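Something along these lines in the secret, I mean (the key values are placeholders, and nyc3 is just an example region endpoint):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
stringData:
  accessKeyID: "YOUR_SPACES_KEY"        # placeholder, use your Spaces key
  secretAccessKey: "YOUR_SPACES_SECRET" # placeholder, use your Spaces secret
  endpoint: "https://nyc3.digitaloceanspaces.com"
  region: ""
```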
I think it is just me. I tried setting it up with AWS in the end, and I am also having issues there:
I0516 02:08:11.573841 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"csi-s3-pvc", UID:"3adf8529-777d-11e9-9cb4-00163ca95b8f", APIVersion:"v1", ResourceVersion:"8313468", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "csi-s3": rpc error: code = Unknown desc = failed to create volume pvc-3adf8529-777d-11e9-9cb4-00163ca95b8f: The provided 'x-amz-content-sha256' header does not match what was computed.
@jbonnett92 I recently started using and testing it with DO and followed this post https://medium.com/asl19-developers/create-readwritemany-persistentvolumeclaims-on-your-kubernetes-cluster-3a8db51f98e3
Using the following credentials:
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
stringData:
  accessKeyID: "XXXXX"
  secretAccessKey: "XXXX"
  endpoint: "https://ams3.digitaloceanspaces.com"
  region: ""
I had a similar problem to yours, which went away after removing this from the secret:
encryptionKey: ""
Also, I didn't need to perform step 4; things were just working without issues.
The only mounter that seems to work is mounter: goofys
Just a short notice: beware that the storage class reclaim policy defaults to "Delete". While using Helm to deploy a chart that contains the PVC, it ended up deleting the PVC and PV, which means the DO object Space is deleted as well.
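If that Delete behaviour is a concern, a StorageClass sketch with Retain instead (same provisioner as this repo; the csi-s3-retain name is my own placeholder, and the secret parameters are omitted for brevity):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3-retain        # placeholder name
provisioner: ch.ctrox.csi.s3-driver
reclaimPolicy: Retain        # keep the PV (and the DO Space) when the PVC is deleted
parameters:
  mounter: goofys
  # csi.storage.k8s.io/* secret parameters omitted for brevity
```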
@pjanuario I found this article too, it didn't work for me.
@jbonnett92 if you followed the article, be careful: their example contains the encryption key. Your symptoms are similar to mine, and removing that key from the secret resolved the problem.
@pjanuario I did do that and it still didn't work.
@jbonnett92 not sure if it helps but I am using the yaml from this repo and the tag v1.1.0
and I run k8s 1.13 (which should not have an impact, I think)
I'm using this with DO Spaces and the claim gets through blazingly fast, so that works.
Only problem is I get a Permission denied from every container that is using it.
Anyone experienced the same problem? How did you solve it?
@Matzu89 I never solved my problem.
@jbonnett92 and @Matzu89 the setup is working for me without any issues so far. I have several Spaces mounted as volumes and use them with different applications, mostly to store uploaded images.
Tried the example of @pjanuario today: DO K8s and DO spaces. Worked like a charm!
@Berndinox @pjanuario I am running bare metal, and I find this affects the setup of everything for some reason. Would it affect this CSI? Because it 100% doesn't work for me.
@jbonnett92 I can't answer that with certainty, but I would say it should not, since this is just mapping volumes and data fetching is done with API calls. But maybe there are some details there... Sorry I can't help much more.
@jbonnett92 maybe fire up a DO-managed K8s cluster and try it there. Maybe you are missing some prerequisites that DO already configures for you?
OK, I tried setting it up on DO and the PVC just gets stuck in Pending. Here is my command-line output:
NAME READY STATUS RESTARTS AGE
csi-s3-6rxc7 2/2 Running 0 22m
Jamies-MBP:kubernetes Jbonnett$ kubectl -n kube-system logs csi-s3-6rxc7 -c csi-s3
I0723 00:15:03.161255 1 s3-driver.go:80] Driver: ch.ctrox.csi.s3-driver
I0723 00:15:03.161360 1 s3-driver.go:81] Version: v1.1.1
I0723 00:15:03.161371 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I0723 00:15:03.161378 1 driver.go:93] Enabling volume access mode: SINGLE_NODE_WRITER
I0723 00:15:03.162145 1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
I0723 00:15:03.317947 1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginInfo
I0723 00:15:04.177154 1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetInfo
Jamies-MBP:kubernetes Jbonnett$ kubectl get pvc csi-s3-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-s3-pvc Pending csi-s3 14m
Jamies-MBP:kubernetes Jbonnett$ kubectl logs -l app=csi-provisioner-s3 -c csi-s3
Jamies-MBP:kubernetes Jbonnett$ kubectl logs -l app=csi-s3 -c csi-s3
Jamies-MBP:kubernetes Jbonnett$
Seems like the DO Space cannot be created, can you verify this? If there is no Space called pvc-blahblah123, I would look at the endpoint and API key config.
Yes, you're right. The config is literally a copy-and-paste job, so I can't see that being wrong, especially since I have tried implementing this more than twice.
OK, I finally have it working, although ReadWriteMany fails. Any ideas? I am using the goofys mounter as stated by @pjanuario, and the article posted says that ReadWriteMany works, although if multiple pods try accessing the PVC it just fails to attach.
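Worth noting: the csi-s3 driver log earlier in this thread shows only SINGLE_NODE_WRITER being enabled as a volume access mode, which could explain attach failures once a second pod on another node tries to use the volume. For reference, this is roughly the PVC shape I understand the article describes (the size is a placeholder, not from this thread):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-pvc
spec:
  accessModes:
    - ReadWriteMany   # the article claims this works with goofys
  resources:
    requests:
      storage: 5Gi    # placeholder size
  storageClassName: csi-s3
```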
Weirdly enough, deleting and applying the configs again worked.
Sorry to be a pain, I keep getting this error with different pods:
pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
Back-off restarting failed container
This happens with many different deployments. I did test with the pod example in the readme; it worked, but then when I try adding a new deployment it fails.
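For reference, a minimal Deployment sketch mounting the claim the way I'd expect it to work (the name and image are placeholders, not from this thread):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: csi-s3-test          # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: csi-s3-test
  template:
    metadata:
      labels:
        app: csi-s3-test
    spec:
      containers:
        - name: app
          image: nginx       # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: csi-s3-pvc   # the PVC used earlier in this thread
```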
Why can't I see files in my DO Spaces?
As far as I remember, the issue that I had was that I was including the region in the endpoint. I will double check the config later and I will let you know!
But you’ve not really explained in your comment how you set yours up, so how can we determine where to start helping you?
This is my secret file (deployed to kube-system and default namespaces)
apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
stringData:
  accessKeyID: "MySuperSecretKey"
  secretAccessKey: "MySuperSecretAccessKey"
  endpoint: https://fra1.digitaloceanspaces.com
  region: ""
  encryptionKey: ""
This is my storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3
provisioner: ch.ctrox.csi.s3-driver
parameters:
  # specify which mounter to use
  # can be set to rclone, s3fs, goofys or s3backer
  mounter: goofys
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
And these commands show nothing:
$ kubectl exec -ti csi-s3-test-nginx bash
$ mount | grep fuse
$ mount
Try removing the encryptionKey entry. Also make sure you are not using the personal access token and key for this. Also make sure that the secret is in the kube-system namespace.
Apart from that, I can't see anything wrong.
Try removing the encryptionKey entry.
Yes, I tried; the result is the same.
Apart from that, I can't see anything wrong.
:(
What about the other two?
Also make sure that the secret is in the kube-system namespace
csi-attacher-sa-token-kwmk4 kubernetes.io/service-account-token 3 4h38m
csi-do-controller-sa-token-j6tqb kubernetes.io/service-account-token 3 44d
csi-do-node-sa-token-mv89r kubernetes.io/service-account-token 3 44d
csi-provisioner-sa-token-sdkgt kubernetes.io/service-account-token 3 4h38m
csi-s3-secret Opaque 5 4h39m
csi-s3-token-rnnk4 kubernetes.io/service-account-token 3 4h37m
That isn't showing me a namespace?