gcp-compute-persistent-disk-csi-driver
How to update from an earlier version?
Hi devs, I can only find information about how to install or delete the driver.
I remember that the last time I updated, I deleted the old deployment before installing the new one. Is this still the way to go? If so, should I do this using delete-driver.sh, or is there a preferred method?
It would be nice if this process could be added to the documentation. Thanks.
Any update on that?
Hi @rgarcia89, sorry for missing this issue.
Using delete-driver and then redeploying should work. It may also work to just rerun deploy-driver, which should update everything appropriately, but that would have to be tested. So delete-driver may be safer, even if it's overkill.
Have you tested either method? We'd be interested in hearing about any problems you ran into.
Thx
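For anyone landing here later, a minimal update sequence along those lines might look like the sketch below. The deploy/kubernetes/ paths and the GCE_PD_SA_DIR / GCE_PD_DRIVER_VERSION variables are assumptions based on the repo's deploy scripts and are not confirmed in this thread, so double-check them against the docs for your release:

```sh
# Sketch only -- script paths and environment variables are assumptions;
# verify them against the deploy docs for the version you are installing.

# 1. Remove the currently deployed driver (controller, node DaemonSet,
#    RBAC objects, etc.).
./deploy/kubernetes/delete-driver.sh

# 2. Deploy the new version. GCE_PD_SA_DIR points at the directory holding
#    the service account key created during setup; GCE_PD_DRIVER_VERSION
#    selects the deployment overlay (e.g. "stable").
GCE_PD_SA_DIR=/path/to/credentials \
GCE_PD_DRIVER_VERSION=stable \
./deploy/kubernetes/deploy-driver.sh
```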
Hi @mattcary, thanks for the update. I haven't tested it so far, as I wasn't sure whether running delete-driver might cause issues. However, if this is the way to go, I'll give it a try.
It would still be nice to add this to the official documentation ;) Thanks
Thanks!
Yes, I'll keep this issue open so we can update the docs.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@mattcary reminder 😉
Fair play :-)
/lifecycle frozen
/assign @mattcary