kubespray
client-certificate and client-key attributes keep getting added to kubelet.conf
Every time you run Ansible, duplicate lines are added to the kubelet config and the kubelet service is reloaded.
The duplicated lines are "client-certificate:" and "client-key:".
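For illustration, after a couple of runs the tail of /etc/kubernetes/kubelet.conf ends up looking roughly like this (illustrative only; the user entry name is the kubeadm default and is an assumption here, while the duplicated lines match the diff output shown below):

```yaml
users:
- name: default-auth   # kubeadm's default user entry name; may differ in your cluster
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem   # duplicate appended on a later run
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem           # duplicate appended on a later run
```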
Environment:
- Cloud provider or hardware configuration: on-premises
- OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"): CentOS 7
- Version of Ansible (ansible --version): ansible 2.8.5
- Version of Python (python --version): 2.7.5
- Kubespray version (commit) (git rev-parse --short HEAD): master
- Network plugin used: calico
- Full inventory with variables (ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"):
- Command used to invoke ansible:
**Output of ansible run**:
TASK [kubernetes/control-plane : Fixup kubelet client cert rotation 1/2] *********************************************************************************************************************************************************
--- before: /etc/kubernetes/kubelet.conf (content)
+++ after: /etc/kubernetes/kubelet.conf (content)
@@ -23,3 +23,4 @@
client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
+ client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
... and so on, with a similar diff for each node.
Anything else we need to know: The problem looks related to https://github.com/kubernetes-sigs/kubespray/pull/7347. Either "backrefs: yes" needs to be added or the regex modified, or perhaps it would be better to just remove this altogether if it has been fixed since 1.17, since I believe kubespray no longer supports anything older than 1.21.
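For reference, a minimal sketch of what the fixup tasks could look like with the suggested "backrefs: yes" guard added. The path, regexps, and task names are approximated from the output above and may not match the actual kubespray tasks exactly:

```yaml
- name: Fixup kubelet client cert rotation 1/2
  lineinfile:
    path: /etc/kubernetes/kubelet.conf
    regexp: '^    client-certificate-data:'
    line: '    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem'
    # With backrefs enabled, lineinfile leaves the file untouched when the
    # regexp no longer matches, instead of appending the line at end of file.
    backrefs: yes

- name: Fixup kubelet client cert rotation 2/2
  lineinfile:
    path: /etc/kubernetes/kubelet.conf
    regexp: '^    client-key-data:'
    line: '    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem'
    backrefs: yes
```

The other option mentioned above would be to broaden the regexp (for example '^    client-certificate(-data)?:') so that re-runs replace the already-rewritten line in place rather than appending a new one.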
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.