annotate the kube-system namespace to allow kubeadm-managed static Pod labels
Update for the 1.23 cycle: per https://github.com/kubernetes/enhancements/issues/1314#issuecomment-902256245, it looks like the design is going in a different direction. I have closed the kubeadm PR that followed it, but we should keep this issue open until a KEP update lands for https://github.com/kubernetes/enhancements/issues/1314.
Annotate the kube-system namespace to allow kubeadm-managed static Pod labels, such as "tier" and "component".
This change is landing as alpha in 1.17, and by 1.19 it will be on by default (beta).
see: https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/20190916-noderestriction-pods.md
tracking issue in k/e:
- https://github.com/kubernetes/enhancements/issues/1314
tracking issue for k/k:
- https://github.com/kubernetes/kubernetes/issues/83977
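For illustration only, the namespace annotation could be applied roughly like the sketch below. The annotation key and value format here are hypothetical placeholders; the real key depends on the final design in kubernetes/enhancements#1314.

```sh
# Hypothetical sketch: the annotation key and value are placeholders,
# pending the final design in the KEP. This would mark kube-system as
# allowing the labels kubeadm sets on its static Pods.
kubectl annotate namespace kube-system \
  example.kubernetes.io/allowed-static-pod-labels="tier,component"
```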
The k8s-app label is used to match controllers for system components, and therefore should be explicitly disallowed.
It looks like we also use the k8s-app label in the upgrade process, which should be revisited:
https://github.com/kubernetes/kubernetes/blob/3758426884e3c82cbd99c72e8015f4396f21fde2/cmd/kubeadm/app/phases/upgrade/prepull.go#L83
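To illustrate why the k8s-app label is sensitive: system Services and controllers select Pods by it, so a node that could set k8s-app on a static Pod could make that Pod a target of, for example, the kube-dns Service. A simplified fragment of that Service's selector (not the full manifest):

```yaml
# Simplified fragment of the kube-dns Service in kube-system.
# A static Pod labeled k8s-app: kube-dns would be matched by this
# selector and start receiving cluster DNS traffic.
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  selector:
    k8s-app: kube-dns
```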
It's not a high priority for this cycle, but I have a WIP PR for this. One decision we have to make is in which "kubeadm init" phase this annotation should happen. ~My vote is the "control-plane" phase, before writing the static Pods.~ EDIT: my mistake, this needs to happen after the "wait-control-plane" phase.
/cc
The work is on hold for 1.18 (https://github.com/kubernetes/enhancements/issues/1314#issuecomment-575805238); moving to 1.19.
@neolit123 is this something we should work on for v1.19?
Depends on whether https://github.com/kubernetes/enhancements/issues/1314 is worked on for 1.19.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
I think SIG Auth is looking for alternatives to this, but I need to check.