azuredisk-csi-driver
azuredisk node liveness failure when hostnetwork is false
What happened:
Since the last version of the azuredisk CSI driver, the liveness probe for the azuredisk node component fails when host networking (linux.hostnetwork in the chart values) is set to false.
A fix has been made, but it seems to target only the controller deployment.
What you expected to happen:
Setting linux.hostnetwork to false should be supported.
How to reproduce it:
Deploy the azuredisk CSI driver using the Helm chart at version 1.30.1 or 1.30.2 (probably some 1.29.x as well) with linux.hostnetwork set to false, and observe that the azuredisk-node DaemonSet pods go into CrashLoopBackOff due to liveness probe failures.
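For reference, a minimal reproduction sketch (the chart repo URL and install command follow the project's install docs; the release name, namespace, and the exact value key casing such as linux.hostNetwork should be checked against the chart's values.yaml):

```sh
# Add the driver's chart repo and install an affected version with host
# networking disabled for the linux node DaemonSet.
helm repo add azuredisk-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/charts
helm repo update
helm install azuredisk-csi-driver azuredisk-csi-driver/azuredisk-csi-driver \
  --namespace kube-system \
  --version v1.30.2 \
  --set linux.hostNetwork=false   # verify the exact key name in the chart's values.yaml
```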
Environment:
- CSI Driver version: 1.30.1 or 1.30.2 (probably some 1.29.x also)
- Kubernetes version (use kubectl version): 1.25.16
- OS (e.g. from /etc/os-release): Ubuntu 22.04
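With an affected version deployed as above, the symptom can be confirmed with something like the following (the pod label app=csi-azuredisk-node is assumed, matching the DaemonSet name):

```sh
# Node plugin pods should be restarting / in CrashLoopBackOff
kubectl get pods -n kube-system -l app=csi-azuredisk-node

# Pod events should show repeated "Liveness probe failed" messages
kubectl describe pods -n kube-system -l app=csi-azuredisk-node
```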
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
fixed by https://github.com/kubernetes-sigs/azuredisk-csi-driver/pull/2521