kops
How to pass user-defined kube scheduler config to the existing cluster
/kind bug
1. What kops version are you running? The command kops version, will display
this information.
1.26.4
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
1.24.15
3. What cloud provider are you using?
aws
The PR provided a way to add a user-defined kube-scheduler config, but there is no way to add or pass a user-defined kube-scheduler config to an existing cluster. This behavior breaks existing functionality.
I would like to be able to set this in the Cluster YAML file; otherwise the feature is not usable for us.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
Hi guys, what's the status of this issue? We're trying to define our own KubeSchedulerConfiguration (as documented in https://kubernetes.io/docs/reference/scheduling/config/) and I'm trying to figure out how to do it via the kops cluster manifest file.
You can add your custom KubeSchedulerConfiguration to an existing cluster using the `fileAssets` field of the kops Cluster resource:
```shell
$ kops edit cluster
```

```yaml
kind: Cluster
spec:
  fileAssets:
  - content: |
      apiVersion: kubescheduler.config.k8s.io/v1
      clientConnection:
        kubeconfig: /var/lib/kube-scheduler/kubeconfig
      kind: KubeSchedulerConfiguration
      profiles:
      - pluginConfig:
        - args:
            apiVersion: kubescheduler.config.k8s.io/v1
            kind: NodeResourcesFitArgs
            scoringStrategy:
              resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
              type: MostAllocated
          name: NodeResourcesFit
    name: ksc
    path: /var/lib/kube-scheduler/config.yaml
    roles:
    - ControlPlane
```
Then run `kops update cluster` and `kops rolling-update cluster`, and voila: you can check in the kube-scheduler startup logs (at verbosity `--v=3`) that it uses your custom scheduling strategy.
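Concretely, the rollout might look like the sketch below. The cluster name is a placeholder, the `k8s-app` label is an assumption about how the static pod is labeled, and these commands need a live kops cluster, so treat this as a sketch rather than a verified recipe:

```shell
# Placeholder cluster name; substitute your own.
CLUSTER=example.k8s.local

# Push the updated cluster spec (including the new fileAsset) to the state store.
kops update cluster --name "$CLUSTER" --yes

# Replace the control-plane nodes so the kube-scheduler static pod picks up
# the new /var/lib/kube-scheduler/config.yaml.
kops rolling-update cluster --name "$CLUSTER" --yes

# Inspect the scheduler startup logs; the k8s-app label selector is an
# assumption -- adjust it to match the labels on your scheduler pods.
kubectl -n kube-system logs -l k8s-app=kube-scheduler --tail=100
```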
Yep, the above worked for me. I used the following configuration for tighter Pod packing on each node:

```yaml
---
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /var/lib/kube-scheduler/kubeconfig
profiles:
- pluginConfig:
  - args:
      scoringStrategy:
        resources:
        - name: cpu
          weight: 1
        - name: memory
          weight: 1
        type: MostAllocated
    name: NodeResourcesFit
```
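If you want a quick sanity check before pasting the config into `kops edit cluster`, you can write it to a local file and grep for the scoring strategy (the file path here is arbitrary; this only checks the text is intact, not that the schema is valid):

```shell
# Write the scheduler config to a scratch file (path is arbitrary).
cat > /tmp/ksc-check.yaml <<'EOF'
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /var/lib/kube-scheduler/kubeconfig
profiles:
- pluginConfig:
  - args:
      scoringStrategy:
        resources:
        - name: cpu
          weight: 1
        - name: memory
          weight: 1
        type: MostAllocated
    name: NodeResourcesFit
EOF

# Confirm the MostAllocated strategy survived copy/paste.
grep -q 'type: MostAllocated' /tmp/ksc-check.yaml && echo "MostAllocated strategy present"
```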