
[EKS] safely evict pods on scale down

aaadipop opened this issue 2 years ago

Which component are you using?: cluster-autoscaler

What version of the component are you using?: v1.27.2

Component version:

What k8s version are you using (kubectl version)?: v1.22.12

kubectl version Output
$ kubectl version

What environment is this in?: AWS EKS

What did you expect to happen?: safely evict all pods from node before scaling down

What happened instead?: on scale down, the pods are killed rather than safely evicted from the node, which results in downtime until the replacement pods become available

How to reproduce it (as minimally and precisely as possible): trigger a scale down on an EKS node pool. I have set the extraArgs.cordon-node-before-terminating flag to true, as shown below.
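For context, this is roughly how that flag ends up being set when the cluster-autoscaler is deployed through its Helm chart, assuming the chart's extraArgs map (each key becomes a --key=value flag on the container command); a sketch of the relevant values only:

  extraArgs:
    cordon-node-before-terminating: true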

Anything else we need to know?: I saw the FAQ about graceful termination during scale down and also this issue :)
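For completeness, this is the kind of PodDisruptionBudget I would expect the autoscaler to honour while draining a node; a minimal sketch where the name and the app: my-app selector are placeholders for the real workload:

  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: my-app-pdb            # placeholder name
  spec:
    minAvailable: 1             # keep at least one replica up during voluntary disruptions
    selector:
      matchLabels:
        app: my-app             # placeholder label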

aaadipop avatar Sep 26 '23 10:09 aaadipop

/area provider/aws

Shubham82 avatar Nov 28 '23 12:11 Shubham82

Has this issue been solved yet? I am experiencing the same behavior with a single Jenkins controller replica. Whenever CA scales down nodes, it scales down the node where the Jenkins controller pod is running, which takes the application down until Kubernetes restarts it on another node.

The autoscaler log shows the following for the same node running the Jenkins controller pod:

  1 klogx.go:87] Node ip-123-45-67-89.ec2.internal - cpu utilization 0.049087
  1 cluster.go:178] ip-123-45-67-89.ec2.internal may be removed
  I1219 20:35:13.918724 1 nodes.go:84] ip-123-45-67-89.ec2.internal is unneeded since 2023-12-19 20:35:13.91132046 +0000 UTC m=+19353.207298775 duration 0s
  I1219 20:35:13.919046 1 nodes.go:126] ip-123-45-67-89.ec2.internal was unneeded for 0s

Here is a snippet of the CA deployment:

command:

  • ./cluster-autoscaler
  • --v=4
  • --stderrthreshold=info
  • --cloud-provider=aws
  • --skip-nodes-with-local-storage=false
  • --expander=least-waste
  • --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/cluster-name
  • --skip-nodes-with-system-pods=false
  • --scale-down-unready-time=20m
  • --skip-nodes-with-custom-controller-pods=true

Of course, the first thing I did was add the annotation cluster-autoscaler.kubernetes.io/safe-to-evict: "false", roughly as sketched below.
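For reference, a minimal sketch of where that annotation has to live: on the pod template, so it ends up on the pod itself rather than on the Deployment metadata. The Deployment name, labels, and image are placeholders:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: jenkins-controller          # placeholder name
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: jenkins-controller
    template:
      metadata:
        labels:
          app: jenkins-controller
        annotations:
          # tells cluster-autoscaler not to evict this pod, so the node it runs on
          # is not considered for scale down
          cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
      spec:
        containers:
          - name: jenkins
            image: jenkins/jenkins:lts   # placeholder image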

I would be more than happy for any advice or workaround to fix this behavior.

P.S. I use CA version 1.28.

tumaf33 avatar Dec 20 '23 09:12 tumaf33

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Mar 19 '24 09:03 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Apr 18 '24 10:04 k8s-triage-robot

/remove-lifecycle rotten

jjmerri avatar May 13 '24 11:05 jjmerri

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 11 '24 12:08 k8s-triage-robot