
Custom drain flow

Open dorsegal opened this issue 3 years ago • 13 comments

Tell us about your request

Add a rollout flag to the drain flow. It would be used once consolidation and the native termination handler (https://github.com/aws/karpenter/pull/2546) are ready. The custom drain flow would look like this (a rough sketch follows the list):

  1. Cordon the node
  2. Do a rolling restart of the deployments that have pods running on the node.
  3. Drain the node.
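
A minimal sketch of that flow with kubectl, assuming the node name is known and that each pod's owning Deployment can be recovered by stripping the hash from its ReplicaSet name (both simplifying assumptions, not part of the request):

```bash
#!/usr/bin/env bash
# Sketch only: cordon, rolling-restart affected Deployments, then drain.
# Assumes every pod on the node is owned by a ReplicaSet of a Deployment.
set -euo pipefail
NODE="ip-10-0-0-1.ec2.internal"   # hypothetical node name

# 1. Cordon the node so nothing new is scheduled onto it.
kubectl cordon "$NODE"

# 2. Rolling-restart every Deployment that has pods on the node, so
#    replacements come up elsewhere before anything is evicted.
kubectl get pods --all-namespaces --field-selector spec.nodeName="$NODE" \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.ownerReferences[0].name}{"\n"}{end}' \
  | sort -u | while read -r ns rs; do
      deploy="${rs%-*}"   # ReplicaSet name minus the trailing hash (simplification)
      kubectl -n "$ns" rollout restart deployment "$deploy" || true
      kubectl -n "$ns" rollout status deployment "$deploy" --timeout=5m || true
    done

# 3. Drain the node once the restarts have rolled out.
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data
```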

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?

Currently, when using the consolidation feature or aws-node-termination-handler, we can end up with downtime or heavy performance degradation because of the current implementation of kubectl drain.

The current drain terminates all workloads on a node, and the scheduler then tries to recreate those workloads on available nodes; if none fit, Karpenter provisions a new node. Even with a PDB there is some level of degradation.

Are you currently working around this issue?

Having a custom bash script that implements an alternative to kubectl drain:

https://gist.github.com/juliohm1978/1f24f9259399e1e1edf092f1e2c7b089

Additional Context

kubectl drain leads to downtime even with a PodDisruptionBudget https://github.com/kubernetes/kubernetes/issues/48307

Attachments

No response

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

dorsegal avatar Oct 20 '22 12:10 dorsegal

Sorry, I'm not following with respect to consolidation: it always pre-spins a replacement node, so you should never need to wait for a node to provision.

Regarding PDBs, why are they not sufficient? They will slow the rate at which the pods are evicted.

tzneal avatar Oct 20 '22 18:10 tzneal

There are cases where an application takes time to load, so even if you pre-spin a node the application takes time to become available. PDBs have the same problem: Kubernetes will first terminate a pod (or pods) and then schedule a new one. If PDBs are defined with 99% availability, or only allow a small number of pod disruptions, that just slows the rate at which the pods are evicted as well.

We want to achieve as close to 100% uptime as possible while using spot instances, and currently the drain behavior is what is holding us back.

dorsegal avatar Oct 21 '22 06:10 dorsegal

It sounds like you're using max surge on the restart to temporarily launch more pods. Instead, you can permanently scale the deployment to your desired baseline plus whatever surge you want, then use a PDB to limit maxUnavailable for that deployment to the surge amount. This ensures you always have your desired baseline capacity without incurring extra restarts.
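
For illustration, a sketch of that setup (names and numbers are made up): scale the Deployment to baseline + surge, and cap voluntary disruptions at the surge size with a PDB.

```bash
# Hypothetical Deployment "web" with a baseline of 10 pods and a surge of 2.
kubectl scale deployment web --replicas=12

kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb                 # hypothetical name
spec:
  maxUnavailable: 2             # only the "surge" pods may be evicted at once
  selector:
    matchLabels:
      app: web                  # hypothetical selector label
EOF
```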

tzneal avatar Oct 21 '22 12:10 tzneal

You could also try catching SIGTERM within your pod and keep it from shutting down immediately so that the new pod has time to initialize if they are spinning up while the other pod is terminating.
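
As an illustration of that suggestion, a container entrypoint could trap SIGTERM and delay shutdown; the 60-second delay and /app/server path below are placeholders.

```bash
#!/usr/bin/env bash
# Sketch of an entrypoint that traps SIGTERM and delays shutdown so a
# replacement pod has time to become ready before this one exits.
child=0

on_term() {
  echo "SIGTERM received; delaying shutdown for 60s"
  sleep 60                          # placeholder grace window
  kill -TERM "$child" 2>/dev/null || true
  wait "$child"
  exit 0
}

trap on_term TERM

/app/server &                       # placeholder application command
child=$!
wait "$child"
```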

bwagner5 avatar Oct 24 '22 22:10 bwagner5

You could also try catching SIGTERM within your pod and keep it from shutting down immediately so that the new pod has time to initialize if they are spinning up while the other pod is terminating.

We thought about it. The problem is that when using 3rd-party images it would require changing the source code of every application we use. Plus, it is recommended to handle SIGTERM as a graceful shutdown, not to suspend your application until k8s kills it.

This request would make it a general solution for all pods.

We had a new idea for a custom flow that does not use rollouts: change labels to detach pods from their controllers (ReplicaSets) by adding/removing a label on all pods on the node.

So the new drain flow would look like this:

  1. cordon node
  2. change labels for all pods inside that node
  3. wait 90 seconds (when a spot instance is terminated we need to handle this in no more than 120 seconds)
  4. drain node.

It's not perfect, but it would reduce the impact of draining nodes; a rough sketch follows.
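
A rough sketch of that flow, assuming the ReplicaSets select on an app label that can simply be overwritten (label key/value and node name are hypothetical):

```bash
#!/usr/bin/env bash
# Sketch only: detach pods from their ReplicaSets by relabeling, wait, drain.
set -euo pipefail
NODE="ip-10-0-0-1.ec2.internal"     # hypothetical node name

# 1. Cordon the node.
kubectl cordon "$NODE"

# 2. Overwrite the selector label on every pod on the node; the owning
#    ReplicaSets stop matching them and create replacements elsewhere,
#    while the old pods keep running.
kubectl get pods --all-namespaces --field-selector spec.nodeName="$NODE" \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
  | while read -r ns pod; do
      kubectl -n "$ns" label pod "$pod" app=draining --overwrite   # hypothetical label key
    done

# 3. Wait for replacements (a spot interruption leaves ~120s in total).
sleep 90

# 4. Drain the node.
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data
```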

dorsegal avatar Oct 25 '22 06:10 dorsegal

What about a pre-stop command? https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
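
For reference, a preStop hook that simply sleeps would look roughly like this (pod name, image, and delay are placeholders):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo              # hypothetical
spec:
  containers:
  - name: app
    image: nginx                  # placeholder image
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 60"]   # hold off TERM for 60s
EOF
```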

ellistarn avatar Oct 26 '22 05:10 ellistarn

What about a pre-stop command? https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/

Since pre-stop does not put the container into a terminating state, the k8s scheduler does not know to spin up a new pod.

dorsegal avatar Oct 26 '22 06:10 dorsegal

IIUC, it should go into terminating, which will trigger the pod's replicaset to create a new one.

PreStop hooks are not executed asynchronously from the signal to stop the Container; the hook must complete its execution before the TERM signal can be sent. If a PreStop hook hangs during execution, the Pod's phase will be Terminating and remain there until the Pod is killed after its terminationGracePeriodSeconds expires.

ellistarn avatar Oct 31 '22 21:10 ellistarn

IIUC, it should go into terminating, which will trigger the pod's replicaset to create a new one.

PreStop hooks are not executed asynchronously from the signal to stop the Container; the hook must complete its execution before the TERM signal can be sent. If a PreStop hook hangs during execution, the Pod's phase will be Terminating and remain there until the Pod is killed after its terminationGracePeriodSeconds expires.

It actually makes it worse :) Since the pod is terminating, requests no longer reach that pod, which means we see degradation until the new pods are available.

The idea is to not terminate pods until new pods are available, just like a rollout restart.

dorsegal avatar Nov 04 '22 06:11 dorsegal

This is a similar issue we're also running into, where the node(s) terminate before the rescheduled pods are in a running state on the new node(s).

tath81 avatar Mar 14 '23 22:03 tath81

I actually think a better approach here is to move https://www.medik8s.io/maintenance-node/ to be an official (out of tree, but official) Kubernetes API, and then use that when it's available in a cluster.

You could customize behavior by using your own controller rather than the default one, and keep the API the same for other parties such as kubectl and Karpenter.

Yes, it's a big change. However, it's easier than solving the n-to-m relationship between all the things that might either drain a node or watch a drain happen.
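
For context, requesting maintenance with the medik8s operator looks roughly like this today (group/version and field names are from memory of the medik8s docs and should be treated as illustrative):

```bash
kubectl apply -f - <<'EOF'
apiVersion: nodemaintenance.medik8s.io/v1beta1
kind: NodeMaintenance
metadata:
  name: drain-worker-1            # hypothetical
spec:
  nodeName: worker-1              # hypothetical node name
  reason: "Karpenter consolidation"
EOF
```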

sftim avatar Jun 05 '23 17:06 sftim

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Mar 18 '24 19:03 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Apr 17 '24 20:04 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar May 17 '24 20:05 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar May 17 '24 20:05 k8s-ci-robot

I'm also facing the same problem. Please reopen this issue.

Bharath509 avatar Jun 04 '24 11:06 Bharath509

@jsamuel1: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Jun 17 '24 09:06 k8s-ci-robot

Would be nice to see a proper solution

vainkop avatar Sep 13 '24 22:09 vainkop