cilium_kube_proxy_replacement: strict doesn't delete kube-proxy daemonset
I changed the Cilium mode to cilium_kube_proxy_replacement: strict and ran the playbook:
ansible-playbook cluster.yml -t cilium -b
My Cilium pods show KubeProxyReplacement: Strict in their status, and NodePort and HostPort entries appear in cilium service list.
But the kube-proxy DaemonSet is not deleted, and all kube-proxy pods are still running.
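For illustration, the mismatch can be checked roughly as follows (a sketch; these kubectl and cilium invocations are assumed, not quoted from the report):
# Cilium agent reports that it has taken over kube-proxy's role
$ kubectl -n kube-system exec ds/cilium -- cilium status | grep KubeProxyReplacement
# ...yet the kubeadm-managed kube-proxy DaemonSet and its pods are still there
$ kubectl -n kube-system get daemonset kube-proxy
$ kubectl -n kube-system get pods -l k8s-app=kube-proxy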
This is the latest version of kubespray:
$ git rev-parse --short HEAD
e1558d2
If I run the playbook with this value set when I initially create the cluster, kube-proxy is not created, as expected.
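For context, a minimal sketch of where this value lives, assuming the sample inventory layout shipped with kubespray (the exact inventory path and file name may differ in your setup):
# inventory/mycluster/group_vars/k8s_cluster/k8s-net-cilium.yml (path assumed)
cilium_kube_proxy_replacement: strict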
Same situation here; the kube-proxy DaemonSet needs to be deleted manually. Are there further operations needed?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I am seeing the same thing as I am trying to replace kube-proxy in my cluster.
Looks like in kubespray, kube-proxy is created by kubeadm. And when cilium_kube_proxy_replacement is set, it simply skips the kube-proxy phase in kubeadm.
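A rough sketch of what that skip looks like at the kubeadm level; --skip-phases is a standard kubeadm init flag, but the exact invocation and config path kubespray generates are assumptions here:
# Bootstrap the control plane without installing the kube-proxy addon
$ kubeadm init --config /etc/kubernetes/kubeadm-config.yaml --skip-phases=addon/kube-proxy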
I guess it might be possible to run kubeadm reset after init to clean up components that are no longer needed, as determined by the current playbook run. However, I personally feel this operation is better captured in a separate playbook rather than in cluster.yml.
For now, to manually clean up the kube-proxy resources created by kubeadm, you can check the kubeadm code to see what it creates and delete those objects by hand. Apart from the DaemonSet, there are a ConfigMap, a ServiceAccount, a Role, and a couple of (Cluster)RoleBindings:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/addons/proxy/proxy.go
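A minimal sketch of that manual cleanup, assuming the default object names kubeadm uses for kube-proxy (verify them against the proxy.go file linked above for your kubeadm version):
# Workload and configuration
$ kubectl -n kube-system delete daemonset kube-proxy
$ kubectl -n kube-system delete configmap kube-proxy
# RBAC objects created for kube-proxy (names assumed from kubeadm defaults)
$ kubectl -n kube-system delete serviceaccount kube-proxy
$ kubectl -n kube-system delete role kube-proxy
$ kubectl -n kube-system delete rolebinding kube-proxy
$ kubectl delete clusterrolebinding kubeadm:node-proxier
# Note: iptables rules left behind by kube-proxy may also remain on each node until flushed or the node is rebooted.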