Unable to configure disruption controls for karpenter
I can't figure out how to add a disruption consolidationPolicy and expireAfter to my Karpenter node pools with kOps. Where do I configure this?
The karpenter docs discuss this here.
https://karpenter.sh/v0.32/concepts/nodepools/#specdisruption
I'm not even able to see a CRD for karpenter NodePools, so I'm guessing kops has another way of managing the disruption controls?
disruption:
consolidationPolicy: WhenUnderutilized
expireAfter: 720h # 30 * 24h = 720h
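For reference, in upstream Karpenter v0.32 that block sits under spec.disruption of a NodePool, roughly like this (the names and values below are only illustrative, not something kOps renders today):
# illustrative upstream NodePool (Karpenter v1beta1 API)
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        name: default          # references an EC2NodeClass
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
  disruption:
    consolidationPolicy: WhenUnderutilized
    expireAfter: 720h          # 30 * 24h = 720h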
From what I can tell, kOps currently installs Karpenter v0.31.3 by default, which doesn't yet support the NodePool concept according to the docs (I hope I'm not wrong there), ref: https://github.com/kubernetes/kops/blob/d489024714013523bb1df74a58eaa9b99f6805b2/pkg/model/components/karpenter.go#L38-L40.
This leads me to believe it isn't supported in kOps right now, and that we might need to put in some effort to add it.
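For context, Karpenter in kOps today is driven entirely from the Cluster and InstanceGroup specs, with no place to express disruption settings. A rough sketch based on the kOps Karpenter docs (field values are illustrative):
# Cluster spec (sketch) - enables the Karpenter addon
spec:
  karpenter:
    enabled: true
---
# InstanceGroup handed over to Karpenter (sketch)
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: karpenter-nodes
spec:
  manager: Karpenter
  role: Node
  machineType: m5.large
  minSize: 1
  maxSize: 10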
I don't mind taking a stab at this one, wdyt @hakman @rifelpet @olemarkus ?
My impression is that, if we want to move Karpenter support to a newer version, we would need to move from providing the LaunchTemplates to doing everything via Karpenter objects.
https://github.com/kubernetes/kops/blob/d489024714013523bb1df74a58eaa9b99f6805b2/upup/models/cloudup/resources/addons/karpenter.sh/k8s-1.19.yaml.template#L1796-L1874
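Concretely, instead of the LaunchTemplate in that template, kOps would presumably render the upstream objects itself, e.g. an EC2NodeClass per instance group along these lines (a sketch; the role, AMI family, and selector tags below are only illustrative):
# illustrative EC2NodeClass (Karpenter v1beta1 API)
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: nodes
spec:
  amiFamily: AL2
  role: nodes.example.com        # hypothetical node IAM role name
  subnetSelectorTerms:
    - tags:
        kubernetes.io/cluster/example.com: "*"
  securityGroupSelectorTerms:
    - tags:
        kubernetes.io/cluster/example.com: "*"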
Yeah, that makes sense to me. So would that (theoretically) be a process similar to any other cloudup add-on such as aws-cni, where we update the template (and potentially supporting resources such as template functions) according to the vendor chart?
Yes. The good part is that we have a Karpenter e2e test, so it should be easy to test via a WIP PR.
Sounds good! I'll give that a try. Thanks!
/assign
From my understanding it's unlikely to be possible, but it doesn't hurt to ask: is there any workaround for getting upstream Karpenter to manage the InstanceGroups of the current kOps release?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.