cluster-api-provider-azure

e2e test for k8s v1.22 --> v1.23 upgrade to use out-of-tree CSI driver

Open sonasingh46 opened this issue 3 years ago • 6 comments

/kind bug

What steps did you take and what happened: Starting with Kubernetes v1.23, CSI migration for Azure Disk is enabled by default, so clusters are expected to use the out-of-tree AzureDisk CSI driver.

An e2e test with the following validations should be added:

  • Create a k8s cluster with v1.22.
  • Create a deployment/sts that uses a PVC provisioned via the in-tree Azure Disk provisioner.
  • Upgrade the k8s cluster to v1.23 and install the out-of-tree AzureDisk CSI driver.
  • Validate that the existing pod using the PVC provisioned via the in-tree provisioner keeps functioning.
  • Validate that a new pod using a PVC provisioned via the out-of-tree CSI driver works.
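The two provisioning paths above can be sketched with StorageClass manifests. This is a hypothetical sketch for illustration (the class names and SKU parameters are assumptions, not from this issue); the provisioner names are the documented in-tree and CSI identifiers for Azure Disk:

```yaml
# Hypothetical StorageClass for the pre-upgrade (v1.22) PVC,
# backed by the in-tree Azure Disk provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-intree
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: StandardSSD_LRS
  kind: Managed
---
# Hypothetical StorageClass for the post-upgrade (v1.23) PVC,
# backed by the out-of-tree AzureDisk CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS
```

With CSI migration enabled, PVCs bound through the in-tree `kubernetes.io/azure-disk` provisioner should continue to work because the kubelet translates in-tree volume operations to the CSI driver, which is exactly what the upgrade test needs to verify.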

What did you expect to happen:

Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]

Environment:

  • cluster-api-provider-azure version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

sonasingh46 avatar Apr 20 '22 09:04 sonasingh46

/assign sonasingh46

sonasingh46 avatar Apr 20 '22 09:04 sonasingh46

cc @CecileRobertMichon @shysank

sonasingh46 avatar Apr 20 '22 09:04 sonasingh46

@sonasingh46 thanks for picking this up.

Can we leverage Helm to install AzureDisk and AzureFile on the cluster similar to what we did in #2209 to install external cloud provider?

  • https://github.com/kubernetes-sigs/azurefile-csi-driver/tree/master/charts#install-csi-driver-with-helm-3
  • https://github.com/kubernetes-sigs/azuredisk-csi-driver/tree/master/charts#install-csi-driver-with-helm-3

We should also add user-facing docs about installing CSI drivers post cluster create.
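Following the Helm instructions linked above, the installs would look roughly like this (a sketch; the chart repo URLs and install flags should be checked against the current chart READMEs, and a `--version` pin would likely be wanted in the e2e test):

```shell
# Install the AzureDisk CSI driver from its published Helm chart repo.
helm repo add azuredisk-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/charts
helm install azuredisk-csi-driver azuredisk-csi-driver/azuredisk-csi-driver --namespace kube-system

# Install the AzureFile CSI driver the same way.
helm repo add azurefile-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/charts
helm install azurefile-csi-driver azurefile-csi-driver/azurefile-csi-driver --namespace kube-system
```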

CecileRobertMichon avatar Apr 22 '22 19:04 CecileRobertMichon

@CecileRobertMichon -- Yes. I am planning to use Helm, like we did for the external cloud provider.

sonasingh46 avatar Apr 25 '22 10:04 sonasingh46

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 24 '22 10:07 k8s-triage-robot

/remove-lifecycle stale

jackfrancis avatar Jul 25 '22 08:07 jackfrancis