Azure CSI disk provisioner fails with Kubernetes 1.22 / kubespray 2.18
The Azure CSI disk provisioner does not work with kubespray 2.18 on Azure. This may be due to Kubernetes 1.22 API deprecations and removals, but I'm not sure.
root@master-node-0:~# kubectl logs csi-azuredisk-controller-6f97f69dbf-4rdpf -n kube-system csi-provisioner | tail -2
I0111 14:14:10.649832 1 reflector.go:188] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:135
E0111 14:14:10.653439 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.CSINode: the server could not find the requested resource
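For context (a quick sanity check, not part of the original report): the "Failed to list *v1beta1.CSINode" error is consistent with Kubernetes 1.22 removing the storage.k8s.io/v1beta1 API versions that the old csi-provisioner v1.5.0 still watches. Assuming kubectl access to the affected cluster, you can confirm which CSINode API versions are still served:

```sh
# List which storage.k8s.io API versions serve CSINode on this cluster.
kubectl api-resources --api-group=storage.k8s.io

# On Kubernetes 1.22 the v1beta1 version has been removed, so this is expected to fail...
kubectl get csinodes.v1beta1.storage.k8s.io

# ...while the v1 version still works.
kubectl get csinodes.v1.storage.k8s.io
```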
Environment:
- Azure Virtual Machines
- OS: Linux 5.11.0-1025-azure x86_64, NAME="Ubuntu", VERSION="20.04.3 LTS (Focal Fossa)", ID=ubuntu, ID_LIKE=debian, PRETTY_NAME="Ubuntu 20.04.3 LTS"
- Version of Ansible: ansible==3.4.0, ansible-base==2.10.15
- Version of Python: 3.8.10
- Kubespray version (commit): 92f25bf2 (v2.18.0)
- Network plugin used: calico
- Full inventory with variables: (gist)
- Anything else we need to know:
Everything was working perfectly fine on exactly the same inventory and environment (including exactly the same ansible-playbook command) with the previous kubespray release (deployed from the release-2.17 branch). No changes other than the kubespray version bump. Fresh deployment. Fully repeatable.
Still present with kubespray v2.18.1, k8s v1.22.8 on Azure. Has anybody else experienced this?
The csi-provisioner container image has been updated from v1.5.0 to v2.2.2 in commit 9fce9ca42a47bef5d57ebca4c66c0bd8f94ca2cf since this issue was submitted. Could you try the latest master branch if possible?
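To verify what is actually deployed (a sketch, assuming the controller Deployment is named csi-azuredisk-controller as the pod name in the log snippet above suggests), the running csi-provisioner image can be checked with:

```sh
# Print the image used by the csi-provisioner sidecar of the Azure Disk CSI controller.
# Deployment name assumed from the pod name in the log snippet above.
kubectl -n kube-system get deployment csi-azuredisk-controller \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="csi-provisioner")].image}'
```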
/cc @oomichi
Currently I'm indeed running the v2.18.1 release with the https://github.com/kubernetes-sigs/kubespray/commit/9fce9ca42a47bef5d57ebca4c66c0bd8f94ca2cf commit added, plus some extra custom patches (for example, for the missing csi-azuredisk-node-sa service account, which is probably fixed somewhere later), and I can confirm that it works fine. There's a good chance that master will work then, and I'll test that as soon as I get a bit of time for this.
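For anyone wanting to reproduce this setup, here is a rough sketch of the workaround described above (tag and commit taken from this thread, not verified against current branches; the cherry-pick may need manual conflict resolution):

```sh
# Run the v2.18.1 release with the csi-provisioner image bump cherry-picked on top.
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
git checkout v2.18.1
git cherry-pick 9fce9ca42a47bef5d57ebca4c66c0bd8f94ca2cf   # may require conflict resolution
```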
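The service-account patch mentioned above could look roughly like this (a hypothetical sketch; the kube-system namespace is assumed because that is where the other Azure CSI components run in this report):

```sh
# Recreate the ServiceAccount the azuredisk CSI node pods expect, then verify it exists.
kubectl -n kube-system create serviceaccount csi-azuredisk-node-sa
kubectl -n kube-system get serviceaccount csi-azuredisk-node-sa
```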
Thank you for your reply. We would be happy to see your test results here.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/sig network
/assign
@oomichi You may find the steps I needed to get everything set up successfully in the MR I opened against current master today: https://github.com/kubernetes-sigs/kubespray/pull/9153. Still valid.
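For reference, re-testing against current master would look roughly like this (the inventory path and flags are placeholders, not taken from the original report):

```sh
# Re-deploy from the master branch to confirm the fix; adjust the inventory path
# to your own environment.
cd kubespray
git checkout master
ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml
```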