cluster-api-provider-vsphere
:seedling: Skip kube-vip-prepare for 1.31+ k8s since CAPI won't depend on super-admin.conf
What this PR does / why we need it: Skip kube-vip-prepare for 1.31+ k8s since CAPI won't depend on super-admin.conf
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/issues/2596
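The gate in the PR title could be sketched as a small shell helper. This is a hypothetical illustration only (the actual CAPV change is Go code), and the version bounds are an assumption based on this thread: the kube-vip-prepare workaround exists for the super-admin.conf era (v1.29–v1.30), and from v1.31 CAPI no longer depends on super-admin.conf.

```shell
#!/bin/sh
# Hypothetical sketch of the version gate: kube-vip-prepare is only
# needed on v1.29/v1.30, where kubeadm grants admin.conf its RBAC late
# in bootstrap; v1.28 and earlier had a fully privileged admin.conf,
# and from v1.31 CAPI no longer depends on super-admin.conf.
needs_kube_vip_prepare() {
  v=${1#v}                       # strip leading "v", e.g. v1.30.2 -> 1.30.2
  major=${v%%.*}
  minor=$(echo "$v" | cut -d. -f2)
  [ "$major" -eq 1 ] && [ "$minor" -ge 29 ] && [ "$minor" -le 30 ]
}

for ver in v1.28.3 v1.29.0 v1.30.2 v1.31.0; do
  if needs_kube_vip_prepare "$ver"; then
    echo "$ver: run kube-vip-prepare"
  else
    echo "$ver: skip kube-vip-prepare"
  fi
done
```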
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: (no approvers yet). Once this PR has been reviewed and has the lgtm label, please assign fabriziopandini for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
I manually modified the file. Looks like I shouldn't have:

```diff
localhost.localdomain localhost4 localhost4.localdomain4" >>/etc/hosts
- mkdir -p /etc/pre-kubeadm-commands
- for script in $(find /etc/pre-kubeadm-commands/ -name '*.sh' -type f | sort);
- do echo "Running script $script"; "$script"; done
+ do echo "Running script $script"; "$script"; done
\ No newline at end of file
```
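For reference, the runner loop in that diff can be exercised on its own. The sketch below recreates it against a temp directory instead of the real /etc/pre-kubeadm-commands path, with a made-up sample script, so it runs unprivileged:

```shell
#!/bin/sh
# Runnable adaptation of the pre-kubeadm-commands runner loop from the
# diff above. A temp dir stands in for /etc/pre-kubeadm-commands so this
# runs unprivileged; the sample script 10-sample.sh is made up.
set -e
dir=$(mktemp -d)
mkdir -p "$dir/pre-kubeadm-commands"
printf '#!/bin/sh\necho hello-from-10\n' > "$dir/pre-kubeadm-commands/10-sample.sh"
chmod +x "$dir/pre-kubeadm-commands/10-sample.sh"

# Same loop as in the cloud-init fragment: run every *.sh in sorted order.
out=""
for script in $(find "$dir/pre-kubeadm-commands/" -name '*.sh' -type f | sort); do
  echo "Running script $script"
  out="$out$("$script") "
done
echo "$out"
rm -rf "$dir"
```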
cc @sbueringer @chrischdi
Looks like this does not work; there seems to still be some dependency on the loadbalancer IP in this case:

```
E0826 09:31:30.638835 1 leaderelection.go:332] error retrieving resource lock kube-system/plndr-cp-lock: leases.coordination.k8s.io "plndr-cp-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
```
More details:
https://github.com/kube-vip/kube-vip/issues/684#issuecomment-2309781000
cc @sbueringer
Dumb question: how did it work before, in 1.28? kube-vip also uses the admin.conf generated by kubeadm; if kubeadm applies the RBAC afterwards, then I assume kube-vip couldn't bootstrap either on old k8s versions?
v1.29 introduced the super-admin.conf. In v1.28 and before, the admin.conf immediately had the required permissions.
I see. So the kubelet is able to talk to the API server through the local IP, but kubeadm is not; kubeadm still talks to the API server through the control plane IP. Is there a use case where kubeadm init is run somewhere other than the control plane node? Should kubeadm also support this mode of talking to the API server through localhost?
With ControlPlaneKubeletLocalMode and when referencing admin.conf for kube-vip:

1. The kubelet gets started.
2. The kubelet bootstraps itself using the local control-plane IP (not depending on kube-vip being up).
3. The admin.conf gets created.
4. The kubelet should be able to start kube-vip now.
> IC. so kubelet is able to talk to api through local ip, but not kubeadm. Kubeadm still talks to api through control plane ip. Is there use case that kubeadm init will be run not on the control plane node? Should it also support this mode to talk to api through localhost?

> With ControlPlaneKubeletLocalMode and when referencing admin.conf for kube-vip: the kubelet gets started; the kubelet bootstraps itself using the local control-plane IP (not depending on kube-vip being up); the admin.conf gets created; the kubelet should be able to start kube-vip now.
@neolit123 what do you think
@lubronzhan @neolit123 @chrischdi bump :)
missed the prior ping.
> IC. so kubelet is able to talk to api through local ip, but not kubeadm. Kubeadm still talks to api through control plane ip. Is there use case that kubeadm init will be run not on the control plane node?
not in CAPI (automated) workflows. the user can do it out of band though, e.g. ssh onto a CP machine and call `kubeadm init phase ...something` as a utility.
> Should it also support this mode to talk to api through localhost?
the admin.conf and super-admin.conf always talk to the CPE (control plane endpoint), that is because we want the user to be able to reach any API server.
> With ControlPlaneKubeletLocalMode and when referencing admin.conf for kube-vip: the kubelet gets started; the kubelet bootstraps itself using the local control-plane IP (not depending on kube-vip being up); the admin.conf gets created; the kubelet should be able to start kube-vip now.
IIUC, at that point admin.conf still doesn't have the permissions. thus the initial kube-vip bootstrap must use super-admin.conf and then move to admin.conf, which is what @chrischdi's workaround proposal does.
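The two-phase swap described here could look roughly like the sketch below. It is exercised against a temp copy of a made-up manifest fragment; on a real node the target would be the kube-vip static pod manifest (conventionally under /etc/kubernetes/manifests/), and the exact file layout is an assumption:

```shell
#!/bin/sh
# Sketch of the two-phase kubeconfig swap discussed above, run against a
# temp copy of a (made-up) kube-vip static pod manifest fragment.
set -e
m=$(mktemp)
cat > "$m" <<'EOF'
volumes:
- hostPath:
    path: /etc/kubernetes/admin.conf
EOF

# Phase 1 (during bootstrap): point kube-vip at super-admin.conf, which
# has full permissions immediately on v1.29+.
sed -i 's#/etc/kubernetes/admin.conf#/etc/kubernetes/super-admin.conf#' "$m"
grep -q 'super-admin.conf' "$m" && echo "phase 1: kube-vip uses super-admin.conf"

# Phase 2 (after kubeadm has bound RBAC for admin.conf's user): switch
# the manifest back so kube-vip runs with admin.conf again.
sed -i 's#/etc/kubernetes/super-admin.conf#/etc/kubernetes/admin.conf#' "$m"
grep -q 'super-admin.conf' "$m" || echo "phase 2: kube-vip back on admin.conf"
```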
these are points i brought up before, but ideally kube-vip should stop using the *admin.conf (because it's not an admin) and stop requiring any RBAC on bootstrap. it could have delayed RBAC requirements IIUC.
speaking of RBAC and using the incorrect kubeconfigs, do the kube-controller-manager.conf or kube-scheduler.conf clients have the required permissions? i think it was related to leader election.
The implemented approach does not work (or maybe the idea does not completely work out in general); it does not seem to work right away:

https://github.com/team-cluster-api/cluster-api-provider-vsphere/pull/18#issuecomment-2309742544
/close
Because this won't work
@chrischdi: Closed this PR.
In response to this:
/close
Because this won't work
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.