cluster-autoscaler scales down nodes before pod informer cache has synced
Which component are you using?: cluster-autoscaler
What version of the component are you using?: 1.28.2, 1.31.0
What k8s version are you using (kubectl version)?:
kubectl version Output
# 1.28
$ kubectl version
Client Version: v1.29.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.14-dd.1

# 1.31
$ kubectl version
Client Version: v1.29.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.31.1-dd.2
What environment is this in?:
Observed in an AWS environment, but can occur in other environments.
What did you expect to happen?:
When the autoscaler has not yet synced its caches (notably its pod cache), it should not take autoscaling actions.
What happened instead?:
The autoscaler's pod cache was empty; as a result, it wrongly identified nodes as empty and scaled them in, resulting in many workloads being unexpectedly deleted.
How to reproduce it (as minimally and precisely as possible):
This is not consistently reproducible. In our scenario, the control plane was under stress and returning many 429s for various API calls. The cluster had a large number of pods (25k+) and nodes (2k+); the Node cache synced after a few retries, but the Pod cache repeatedly hit timeouts for another 20 minutes:
k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:172: Failed to watch *v1.Pod: failed to list *v1.Pod: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
During this time, the autoscaler incorrectly identified 300+ nodes as empty and improperly scaled in 200+ of them before the pod cache synced, at which point it realized those nodes were not actually empty:
Scale-down: couldn't delete empty node, node is not empty, status error: failed to delete empty node "<node>", new pods scheduled
Once the Pod cache synced, the newly visible pods triggered scale-ups and the cluster recovered, but not before all workloads on those nodes had been interrupted.
Anything else we need to know?:
In a local build of the autoscaler from master, I injected a call to AllPodsLister.List() immediately after the call to informerFactory.Start() and confirmed that this will return an empty slice and no error when the cache is not yet populated.
My initial proposal would be to add a call to informerFactory.WaitForCacheSync() after the informerFactory is started in buildAutoscaler; this would block the autoscaler's startup until all of its caches have synced. However, the cluster-autoscaler creates a lot of caches (I counted 17 in the logs), so I wonder if there would be interest in making this more granular and waiting only for the most vital caches (pods + nodes, probably?).
/area cluster-autoscaler
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.