autoscaler
Local PV prevents node scale-down
Which component are you using?:
cluster-autoscaler
What version of the component are you using?:
Component version: v1.26.3
What k8s version are you using (kubectl version)?:
Output:
$ kubectl version
Client Version: v1.28.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.26.11-eks-8cb36c9
What environment is this in?:
EKS
What did you expect to happen?:
Under-utilized nodes with local PV should be scaled down
What happened instead?:
Under-utilized nodes with local PV cannot be scaled down
How to reproduce it (as minimally and precisely as possible):
If a pod has a local PV mounted (provisioned by local-volume-provisioner in our use case), CA refuses to evict the pod because the local PV has a NodeAffinity for a specific node. There is no way for us to bypass this restriction to scale down those under-utilized nodes.
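For context, a local PV created by local-volume-provisioner is pinned to a single node via required node affinity, roughly as in the sketch below (the PV name, storage class, host path, and node name are illustrative, not taken from our cluster); the nodeAffinity block is what makes CA treat the consuming pod as unmovable:

```yaml
# Illustrative local PersistentVolume pinned to one node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example               # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage      # assumed storage class
  local:
    path: /mnt/disks/vol1              # assumed host path
  nodeAffinity:                        # pins the PV to a specific node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - ip-10-0-1-23.ec2.internal   # hypothetical node name
```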
Anything else we need to know?:
Specifying safe-to-evict-local-volumes and safe-to-evict does not help, because the pod still eventually reaches the bound-PV check, where it fails. I think we should provide an option to exclude certain volumes from consideration before the bound-PV check.
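For reference, this is roughly what we tried on the pod (pod, container, and volume names are illustrative); even with both annotations set, scale-down is still blocked by the bound-PV check:

```yaml
# Illustrative pod showing the two cluster-autoscaler annotations that were tried.
apiVersion: v1
kind: Pod
metadata:
  name: example-workload                      # hypothetical name
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
    # Comma-separated names of volumes CA should treat as safe when checking local storage.
    cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes: "scratch-data"
spec:
  containers:
    - name: app
      image: busybox
      volumeMounts:
        - name: scratch-data
          mountPath: /data
  volumes:
    - name: scratch-data
      persistentVolumeClaim:
        claimName: scratch-data-pvc           # PVC backed by the local PV above
```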
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I also encountered this problem. Have you solved it? @jewelzqiu
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle stale