
Cluster Autoscaler: align workload-level APIs with Karpenter

Open towca opened this issue 1 year ago • 17 comments

Which component are you using?: Cluster Autoscaler

Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:

Recently, Karpenter officially joined sig-autoscaling, so we now have two Node autoscalers officially supported by Kubernetes. Both autoscalers provide workload-level APIs that a workload owner can use to influence autoscaling behavior for their workloads. Some of these APIs have identical semantics but different naming. Because of this, workloads taking advantage of such APIs aren't portable between clusters using different autoscalers (e.g. in a multi-cloud setting).

Cluster Autoscaler provides the following workload-level APIs:

  • Configure a pod not to be disrupted by scale-down: cluster-autoscaler.kubernetes.io/safe-to-evict: false (see the example manifest after this list)
  • Configure a pod to never block scale-down (even if it normally would): cluster-autoscaler.kubernetes.io/safe-to-evict: true
  • Configure a pod to not block scale-down because of specific local volumes (while other blocking conditions still apply): cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes: "volume-1,volume-2,.."
  • Configure a pod to delay triggering scale-up by some duration (e.g. to allow scheduler more time to schedule the pod): cluster-autoscaler.kubernetes.io/pod-scale-up-delay: <duration>
  • Configure a DaemonSet pod to be/not be evicted during scale-down (regardless of the global CA setting controlling this behavior): cluster-autoscaler.kubernetes.io/enable-ds-eviction: true/false
  • Configure a non-DaemonSet pod to be treated like a DaemonSet pod by CA: cluster-autoscaler.kubernetes.io/daemonset-pod: true
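
For illustration, here's roughly what the first annotation looks like on a pod manifest (the pod name, image, and volume names below are made up for the example; annotation values must be strings, hence the quotes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: important-workload   # hypothetical name, just for the example
  annotations:
    # Tell Cluster Autoscaler not to evict this pod during scale-down.
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
    # Or, to keep scale-down working despite specific local volumes
    # (the volume names here are hypothetical):
    # cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes: "scratch,cache"
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```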

To my knowledge, right now Karpenter only provides the following workload-level API:

  • Configure a pod not to be disrupted by consolidation (i.e. scale-down): karpenter.sh/do-not-disrupt: true (a minimal snippet follows below)
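
A minimal sketch of the Karpenter equivalent on a pod's metadata (same caveats as in the example above):

```yaml
metadata:
  annotations:
    # Opt this pod out of disruption by Karpenter consolidation.
    karpenter.sh/do-not-disrupt: "true"
```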

Describe the solution you'd like.:

  1. Introduce a new API prefix for concepts related specifically to Node autoscaling: node-autoscaling.kubernetes.io. Going forward, any new APIs using this prefix would have to be approved by both CA and Karpenter owners. Note that this doesn't prevent the autoscalers from adding new autoscaler-specific APIs, but the goal should be to use the common prefix if possible.
  2. Add support for node-autoscaling.kubernetes.io/do-not-disrupt: true to CA and Karpenter, while still honoring cluster-autoscaler.kubernetes.io/safe-to-evict: false and karpenter.sh/do-not-disrupt: true for backwards compatibility (see the sketch after this list).
  3. Ideally we'd also add support for node-autoscaling.kubernetes.io/do-not-disrupt: false, mapping to safe-to-evict: true in CA. I'm not sure what the semantics would be in Karpenter (we'd need to check whether it has any consolidation-blocking conditions triggered by a pod).
  4. Align with Karpenter on whether they're interested in implementing any of the other workload-level APIs that CA uses, and if so, migrate them to the common API prefix as well.
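
To make points 2-3 concrete, during a transition period a portable workload could carry both the proposed common annotation and the autoscaler-specific ones. Note that node-autoscaling.kubernetes.io/do-not-disrupt is only the name proposed in this issue, not an API that exists today:

```yaml
metadata:
  annotations:
    # Proposed common annotation (from point 2), to be honored by both autoscalers.
    node-autoscaling.kubernetes.io/do-not-disrupt: "true"
    # Existing autoscaler-specific annotations, kept for backwards compatibility.
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
    karpenter.sh/do-not-disrupt: "true"
```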

Describe any alternative solutions you've considered.:

The CA/Karpenter alignment AEP also mentions aligning on Node-level APIs related to scale-down/consolidation. However, the scope of those APIs will likely be Node lifecycle altogether, not just Node autoscaling. IMO we shouldn't mix the two API prefixes together, and the Node-level API migration should be handled separately. Taking do-not-disrupt: true as an example: if we put it in a node-autoscaling.kubernetes.io prefix, all we need to guarantee is that the 2 supported autoscalers handle it correctly. If we were to put it into a broader node-lifecycle.kubernetes.io prefix, every component interacting with node lifecycle through this API would have to honor it going forward, or break those expectations. Honoring do-not-disrupt: true might not be an option for certain components (e.g. a component upgrading nodes under strict FedRAMP requirements has to violate it at some point), which would limit the usefulness of that broader node-lifecycle API.

Additional context.:

  • Doc describing the alignment between CA and Karpenter: CA/Karpenter alignment AEP
  • Existing CA issue for renaming the Node-level APIs: https://github.com/kubernetes/autoscaler/issues/5433
  • Discussion on the need to define node lifecycle: https://github.com/kubernetes/website/issues/45074
  • I want to bring this up for discussion during the sig-autoscaling meeting on ~~2024-03-25~~ TBD.

towca avatar Mar 21 '24 15:03 towca

@MaciekPytel @gjtempleton @jonathan-innis I want to discuss this during the next sig meeting if possible, could you take a look?

towca avatar Mar 21 '24 15:03 towca

BTW, if you're looking at the key prefix for annotations and/or labels, these things aren't called “API groups”. We use the term API group purely for resource kinds that you find within Kubernetes' HTTP API.

sftim avatar Mar 25 '24 15:03 sftim

This feels accurate: /retitle Align annotations and labels between Cluster Autoscaler and Karpenter

sftim avatar Mar 25 '24 15:03 sftim

Good point about "API group" being a precisely defined term; I've changed it to "API prefix". Unless we have a name for that concept as well?

I'm struggling to understand how the new title is accurate. "Aligning labels and annotations" could mean many things in the Cluster Autoscaler/Karpenter context, since labels and annotations are important for various parts of the logic (e.g. something around node templates would probably be my first guess for something related to "aligning labels and annotations"). "Workload-level APIs", on the other hand, should be pretty clear in Cluster Autoscaler/Karpenter context.

/retitle Cluster Autoscaler: align workload-level APIs with Karpenter

towca avatar Mar 25 '24 15:03 towca

If you mean the cluster-autoscaler.kubernetes.io in cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes @towca, I tend to call that a label name prefix or annotation name prefix. See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set for some more detail.

sftim avatar Mar 25 '24 16:03 sftim

thanks for bringing this up @towca, i think that if we make an api around this it will be beneficial to the wider community. i don't have super strong opinions on the naming part, but i think de-emphasizing the "autoscaling" part of it would be nice. that said, i like the distinction you call out between what would be expected from something with node-lifecycle as opposed to node-autoscaling in its prefix. i would have thought node-lifecycle would be a little better, but i like your point about other lifecycle tooling having to then obey them.

elmiko avatar Mar 25 '24 19:03 elmiko

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jun 23 '24 20:06 k8s-triage-robot

The PR defining the first common annotation (https://github.com/kubernetes/kubernetes/pull/124800) is in review. The review has stalled a bit, so I bumped it for the reviewers during the sig-autoscaling meeting today.

towca avatar Jun 24 '24 16:06 towca

/remove-lifecycle stale

towca avatar Jun 24 '24 16:06 towca

/lifecycle stale

k8s-triage-robot avatar Sep 22 '24 17:09 k8s-triage-robot

/lifecycle rotten

k8s-triage-robot avatar Oct 22 '24 17:10 k8s-triage-robot

/remove-lifecycle stale

towca avatar Oct 29 '24 00:10 towca

/remove-lifecycle rotten

towca avatar Oct 29 '24 00:10 towca

/lifecycle stale

k8s-triage-robot avatar Jan 27 '25 01:01 k8s-triage-robot

/remove-lifecycle stale

Shubham82 avatar Feb 18 '25 06:02 Shubham82

/lifecycle stale

k8s-triage-robot avatar May 19 '25 07:05 k8s-triage-robot

/remove-lifecycle stale

towca avatar May 27 '25 17:05 towca

/lifecycle stale

k8s-triage-robot avatar Aug 25 '25 17:08 k8s-triage-robot

/remove-lifecycle stale

towca avatar Sep 18 '25 11:09 towca