
client-certificate and client-key attributes keep getting added to kubelet.conf

timgriffiths opened this issue 2 years ago

Every time you run the Ansible playbook, a duplicate line is appended to the kubelet config and the kubelet service is reloaded.

The duplicated lines are "client-certificate:" and "client-key:".
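For comparison, the user entry in a healthy /etc/kubernetes/kubelet.conf should carry each of these attributes exactly once. A sketch of the expected state (the user name below is illustrative; the pem path matches the one in the diff output further down):

```yaml
# Expected idempotent state: one client-certificate and one client-key line.
users:
- name: system:node:node-1   # illustrative; real entry varies per node
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
```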

Environment:

  • Cloud provider or hardware configuration: On-premises

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"): CentOS 7

  • Version of Ansible (ansible --version):

ansible 2.8.5

  • Version of Python (python --version):

python version = 2.7.5

  • Kubespray version (commit) (git rev-parse --short HEAD): master

Network plugin used: calico

Full inventory with variables (ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"):

Command used to invoke ansible:

**Output of ansible run**:
TASK [kubernetes/control-plane : Fixup kubelet client cert rotation 1/2] *********************************************************************************************************************************************************
--- before: /etc/kubernetes/kubelet.conf (content)
+++ after: /etc/kubernetes/kubelet.conf (content)
@@ -23,3 +23,4 @@
     client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
     client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
     client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
+    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
.... etc for each node

Anything else we need to know: The problem looks related to https://github.com/kubernetes-sigs/kubespray/pull/7347. Either the task needs "backrefs: yes" added, or its regex needs to be modified. Better still, the task could perhaps be removed altogether, since the underlying issue has been fixed upstream since Kubernetes 1.17 and, I believe, kubespray no longer supports anything before 1.21.
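A minimal sketch of the "backrefs: yes" variant suggested above, assuming the fixup tasks use the lineinfile module (the task name matches the play output, but the exact regexp used in kubespray may differ). With backrefs enabled, lineinfile only rewrites lines the regexp already matches and never appends a new line, which keeps repeated runs idempotent:

```yaml
# Hypothetical rewrite of the cert-rotation fixup tasks. With backrefs: yes,
# lineinfile leaves the file untouched when the regexp does not match,
# instead of appending the line on every run.
- name: Fixup kubelet client cert rotation 1/2
  lineinfile:
    path: /etc/kubernetes/kubelet.conf
    regexp: '^(\s*)client-certificate(-data)?:'
    line: '\1client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem'
    backrefs: yes

- name: Fixup kubelet client cert rotation 2/2
  lineinfile:
    path: /etc/kubernetes/kubelet.conf
    regexp: '^(\s*)client-key(-data)?:'
    line: '\1client-key: /var/lib/kubelet/pki/kubelet-client-current.pem'
    backrefs: yes
```

The backreference `\1` preserves the original indentation, so the replacement stays valid YAML regardless of nesting depth.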

timgriffiths commented on May 30 '22

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented on Aug 28 '22

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented on Sep 27 '22

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot commented on Oct 27 '22

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot commented on Oct 27 '22