k8s-csi-s3
how to upgrade the mount driver?
Hello, the README.md is clear on how to install the CSI driver, but there are no tips on how to upgrade it.
Should I cordon the Kubernetes nodes and delete all pods that are using the S3 PVs before I update the DaemonSet?
The easy way is to use Helm. You do not need to delete any pods; just go to deploy/helm, update values.yaml, and run:
helm upgrade --namespace [namespace] -f values.yaml csi-s3 .
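For concreteness, the Helm-based flow could look like the sketch below. The namespace, the values.yaml key, and the DaemonSet name are assumptions; check deploy/helm in your checkout for the real names.

```shell
# Sketch of the Helm-based upgrade flow (namespace is an assumption)
cd deploy/helm
# Edit values.yaml to pin the new image tag, e.g. (key name assumed):
#   images:
#     csi: cr.yandex/crp9ftr22d26age3hulg/csi-s3:0.34.7
helm upgrade --namespace kube-system -f values.yaml csi-s3 .
# Verify the DaemonSet rolled out the new image (DaemonSet name assumed)
kubectl -n kube-system rollout status daemonset/csi-s3
```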
Thanks for your reply.
I am not very familiar with the mount mechanism of CSI plugins. Will upgrading the driver DaemonSet cause the mounted PV mount points to become invalid or be remounted? If the upgrade makes our tasks fail because they can no longer access the PVs, we cannot accept that side effect. Instead, we would rather stop the jobs first and pick the time window with the least impact to do the upgrade.
Hi! In previous versions, geesefs processes were started inside the csi-s3 pod, so they actually died when you restarted the pod (for example, by upgrading it). But the latest versions (>= 0.34.7) now use systemd by default to start geesefs, and it's started on the host :), outside of the container. This makes the upgrade process very simple: you can just reapply the manifests and the mounts will stay alive.

Escaping the container is a funny approach, of course. Maybe I'll rework it in the future to spawn other pods instead of using systemd, but for now the implementation is based on systemd :).

This behaviour can be turned off by adding --no-systemd to the mount options; in that case geesefs will again be started inside the container (and you again won't be able to update it without crashing the mounts). For more details see #29.
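For reference, the mount options live in the StorageClass, so opting out of systemd could look like the sketch below. The field layout follows the project's example StorageClass, but the exact parameter names and the placement of --no-systemd inside `options` are assumptions; check the examples in the repo.

```yaml
# StorageClass sketch: adding --no-systemd makes geesefs run inside the
# csi-s3 container again (mounts then die when the pod restarts).
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3-no-systemd
provisioner: ru.yandex.s3.csi
parameters:
  mounter: geesefs
  # --no-systemd opts out of host-side systemd units for geesefs
  options: "--memory-limit 1000 --dir-mode 0777 --file-mode 0666 --no-systemd"
```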
It's sad that my current version is cr.yandex/crp9ftr22d26age3hulg/csi-s3:0.34.4.
BTW, will the geesefs process's resources be limited by its systemd cgroup?
It works fine in our production environment, but we are still concerned about its stability, e.g. the geesefs process getting OOM-killed or exhausting the CPU resources of the host machine.
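Since geesefs runs as a systemd unit on the host, it lands in its own cgroup, so standard systemd resource controls should apply in principle. A hedged sketch follows; the unit name pattern is an assumption, so list the actual units first. Note that geesefs itself also accepts a --memory-limit mount option that caps its cache.

```shell
# Sketch: inspect and cap geesefs units via systemd resource control.
# The 'geesefs-*' unit name pattern is an assumption; list units first.
systemctl list-units 'geesefs*' --no-pager
# MemoryMax and CPUQuota are standard systemd.resource-control
# properties; <unit> is a placeholder for a unit found above.
systemctl set-property <unit> MemoryMax=1G CPUQuota=100%
```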