cluster-autoscaler does not scale out on pending pod
Which component are you using?: cluster-autoscaler
What version of the component are you using?: 1.28.5
What k8s version are you using (kubectl version)?:
❯ kubectl version
Client Version: v1.28.11
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.9-eks-036c24b
What environment is this in?: aws
What did you expect to happen?: cluster-autoscaler should add a node in response to a pending pod
What happened instead?: We see this in the autoscaler logs:
cluster-autoscaler-77df88fd9f-6fpkl cluster-autoscaler I0625 15:25:38.877228 1 klogx.go:87] Pod mynamespaces/mypod can be moved to template-node-for-eks-aws-mynodegroup-e0c6e084-ce62-07e1-eda2-da7ea039e099-8855753709771781732-upcoming-0
cluster-autoscaler-77df88fd9f-6fpkl cluster-autoscaler I0625 15:25:38.877243 1 filter_out_schedulable.go:120] 1 pods marked as unschedulable can be scheduled.
cluster-autoscaler-77df88fd9f-6fpkl cluster-autoscaler I0625 15:25:38.877251 1 filter_out_schedulable.go:75] Schedulable pods present
The autoscaler doesn't even seem to attempt to add a node; instead it states that the pending pod can fit on "template-node-for-eks..." (which of course isn't an actual node). The pod in question has no special tolerations, taints, or node placement constraints, and if I manually scale up the underlying autoscaling group, the pod starts.
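For reference, a quick way to double-check both of those claims (a hedged sketch: the namespace/pod names are the placeholders from the log line above, and the ASG name and desired capacity are illustrative, so substitute your own):

```bash
# Confirm the pending pod really has no nodeSelector/affinity/tolerations:
kubectl -n mynamespaces get pod mypod \
  -o jsonpath='{.spec.nodeSelector}{"\n"}{.spec.affinity}{"\n"}{.spec.tolerations}{"\n"}'

# The manual workaround described above: bump the node group's ASG directly.
# The ASG name is only inferred from the "template-node-for-eks-aws-mynodegroup-..." log line; replace it.
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name eks-aws-mynodegroup-e0c6e084-ce62-07e1-eda2-da7ea039e099 \
  --desired-capacity 3
```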
How to reproduce it (as minimally and precisely as possible): I'm not sure how to repro, since I don't fully understand what's happening.
Anything else we need to know?:
Not sure if it's related, but we're also seeing this:
1 klogx.go:87] failed to find place for other-ns/other-pod-848fc6b869-g5tl4: cannot put pod other-pod-848fc6b869-g5tl4 on any node
This pod is actually running, so it's very strange that the autoscaler reports it like this.
Not sure if it's related to https://github.com/kubernetes/autoscaler/issues/6128. I tried downgrading to CA 1.27.3, but the problems seem to persist.
We are also facing the same issue on EKS 1.25, with CA image registry.k8s.io/autoscaling/cluster-autoscaler-amd64:v1.25.3.
CA logs: we checked the cluster-autoscaler logs and saw the errors below:
I0626 06:25:58.643699 1 csi.go:99] "Could not get a CSINode object for the node" node="template-node-for-eks-multiarch-worker-nodes-OD-v3-a0c59cf7-3413-f3e4-f0d6-cdf848afeecb-7336713706992112614-upcoming-0" err="csinode.storage.k8s.io \"template-node-for-eks-multiarch-worker-nodes-OD-v3-a0c59cf7-3413-f3e4-f0d6-cdf848afeecb-7336713706992112614-upcoming-0\" not found"
I0626 06:25:58.643718 1 filter_out_schedulable.go:162] Pod env035.swiggy-test-executor-75dbb94c7-rp8qk marked as unschedulable can be scheduled on node template-node-for-eks-multiarch-worker-nodes-OD-v3-a0c59cf7-3413-f3e4-f0d6-cdf848afeecb-7336713706992112614-upcoming-0. Ignoring in scale up.
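(Not from the original reports, but possibly useful for diagnosing both cases: the CA publishes what it thinks each node group looks like, including nodes it still counts as upcoming, in a status ConfigMap. The default name is cluster-autoscaler-status; kube-system as the namespace is an assumption, so use whatever namespace your CA runs in.)

```bash
# Inspect the CA's own view of node group state, including upcoming/unregistered nodes:
kubectl -n kube-system get configmap cluster-autoscaler-status -o yaml
```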
@trondhindenes that suggests the CA thinks there is an upcoming node that the unschedulable pod will be able to be scheduled onto.
Have you tried using the debugging snapshotter feature to understand what the CA thought the state of the cluster was at the time?
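A rough sketch of turning that on, for anyone following along. The flag and endpoint are written from memory of the CA docs, and the kube-system deployment name is an assumption, so verify both against the version you run:

```bash
# Add to the cluster-autoscaler container args and restart the deployment:
#   --debugging-snapshot-enabled=true
# Then pull a snapshot from the CA's HTTP port (8085 by default):
kubectl -n kube-system port-forward deploy/cluster-autoscaler 8085:8085 &
curl -s http://localhost:8085/snapshotz -o ca-snapshot.json
```

The snapshot should show the node list (including any template/upcoming nodes) the CA was reasoning about at that point.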
/area cluster-autoscaler
Experienced this on version v1.28.2 running on a 1.28 cluster. There was an EC2 instance created that never joined the cluster, causing the autoscaler to think unschedulable pods could be scheduled to it and leaving them stuck "pending". Updating to version v1.30.1 caused the autoscaler to resolve the issue automatically by scaling up new nodes and removing the "template-node" that never joined. Waiting for the fix to be backported to version v1.28.x.
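If anyone else hits this, one way to confirm the "instance launched but never joined" scenario is to compare the ASG's instance IDs with the provider IDs of registered nodes; any instance ID missing from the node list never registered with the API server. A hedged sketch, with the node group name taken from the logs earlier in this thread as a stand-in for your own:

```bash
# Instance IDs the ASG thinks it has:
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names eks-multiarch-worker-nodes-OD-v3-a0c59cf7-3413-f3e4-f0d6-cdf848afeecb \
  --query 'AutoScalingGroups[].Instances[].InstanceId' --output text

# Provider IDs of nodes that actually registered (the instance ID is the last path segment):
kubectl get nodes -o jsonpath='{range .items[*]}{.spec.providerID}{"\n"}{end}'
```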
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Do you know if that fix was ever backported to v1.28.x?