kubespray
Kubespray does not manage the kubelet configmap file
Hi!
I found that Kubespray does not manage the kubelet ConfigMap (actually it does manage it, but with default settings only): see "kubelet configmap". Here you can find the right kubelet config for the nodes: see "kubelet node config". So the kubelet ConfigMap differs from the kubelet config on the nodes, and in my opinion that is not the right way.
Why this is useful for us: in our cluster installation the master nodes are managed by Kubespray, but the other (worker) nodes are added to the cluster manually and are not managed by Kubespray. Since k8s v1.26 the kubelet config is managed with the --config option; the default file is /var/lib/kubelet/config.yaml, and it is created when a node is added to the cluster (or with the kubeadm upgrade node phase kubelet-config command).
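For context, here is a minimal sketch of the kubeadm-managed ConfigMap that /var/lib/kubelet/config.yaml is generated from. The names follow the kubeadm defaults for recent releases as I understand them (kubelet-config in kube-system; older releases use a version-suffixed name), and the values are placeholders rather than anything taken from this issue:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubelet-config          # assumption: recent kubeadm default name
  namespace: kube-system
data:
  kubelet: |
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # whatever settings live here are what joining or upgraded nodes
    # receive as /var/lib/kubelet/config.yaml

This is why, if the ConfigMap only holds defaults while Kubespray renders a customized config directly onto its own nodes, manually joined nodes end up with different kubelet settings.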
Anyway, in my opinion the right way is for the kubelet ConfigMap to be the same as the kubelet config on all nodes. Unfortunately I have no permission to create a pull request. Here is my fix for the ./roles/kubernetes/control-plane/tasks/kubeadm-setup.yml file:
# - name: kubeadm | Create kubeadm config
#   template:
#     src: "kubeadm-config.{{ kubeadmConfig_api_version }}.yaml.j2"
#     dest: "{{ kube_config_dir }}/kubeadm-config.yaml"
#     mode: 0640

- name: kubeadm | Render kubeadm template to a variable
  set_fact:
    # concatenate with ~ instead of nesting {{ }} inside the expression
    kubeadmvar: "{{ lookup('template', 'kubeadm-config.' ~ kubeadmConfig_api_version ~ '.yaml.j2') }}"

- name: kubeadm | Render kubelet template to a variable
  set_fact:
    kubeletvar: "{{ lookup('template', '../node/templates/kubelet-config.' ~ kubeletConfig_api_version ~ '.yaml.j2') }}"

- name: kubeadm | Print rendered kubelet config
  debug:
    msg: "Rendered kubelet config: {{ kubeletvar }}"

- name: kubeadm | Save the kubeadm and kubelet configs to one file
  copy:
    content: "{{ kubeadmvar + kubeletvar }}"
    dest: "{{ kube_config_dir }}/kubeadm-config.yaml"
    mode: 0640
I found that the address field would then be applied to the kubelet ConfigMap, and that would be a breaking change.
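To make the concern concrete: the kubelet template renders node-specific settings, and pushing them into the shared ConfigMap would distribute one node's values to all nodes. The field below is from the kubelet API; the value is invented for illustration.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 192.0.2.11   # bind address of a single node; breaking if applied cluster-wide

A node that re-reads its config from that ConfigMap would then try to bind its kubelet to another node's address.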
Do you mean that the kubelet config is different on control-plane nodes (vs. worker nodes)? Your description is not very clear.
/triage needs-information
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.