re-evaluate kubeadm-config download and dynamic defaults
Here we saw a few problems with how kubeadm handles downloading the configuration from the cluster and then defaulting it: https://github.com/kubernetes/kubeadm/issues/2323
A couple of tasks here are:
- [ ] do not download the kubeadm-config for worker nodes because they don't need it
- [ ] (related to the above) do not grant bootstrap tokens access to the kube-proxy config map https://github.com/kubernetes/kubeadm/issues/2305
- [ ] don't apply dynamic defaults on commands that don't need them (such as `kubeadm config images ...`); see the sketch after this list for what "dynamic defaults" means in practice
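For context on that last item: "dynamic defaults" are values kubeadm fills in by probing the live host (for example the advertise address from the default route, or the node name from the hostname), while static defaults are plain constants. A rough, self-contained sketch of the difference, using hypothetical helper and field names rather than kubeadm's actual code:

```go
package main

import (
	"fmt"
	"net"
	"os"
)

// initConfig loosely mirrors the shape of kubeadm's InitConfiguration;
// the field names here are illustrative, not the real API types.
type initConfig struct {
	AdvertiseAddress string
	BindPort         int32
	NodeName         string
}

// applyStaticDefaults fills in values that do not depend on the host at all.
func applyStaticDefaults(cfg *initConfig) {
	if cfg.BindPort == 0 {
		cfg.BindPort = 6443
	}
}

// applyDynamicDefaults fills in values by probing the machine it runs on,
// which is exactly the work that commands like `kubeadm config images ...` do not need.
func applyDynamicDefaults(cfg *initConfig) error {
	if cfg.NodeName == "" {
		hostname, err := os.Hostname()
		if err != nil {
			return err
		}
		cfg.NodeName = hostname
	}
	if cfg.AdvertiseAddress == "" {
		// Ask the kernel which source IP it would use for the default route;
		// a UDP "dial" does not send any packets.
		conn, err := net.Dial("udp", "8.8.8.8:53")
		if err != nil {
			return err
		}
		defer conn.Close()
		cfg.AdvertiseAddress = conn.LocalAddr().(*net.UDPAddr).IP.String()
	}
	return nil
}

func main() {
	cfg := &initConfig{}
	applyStaticDefaults(cfg)
	if err := applyDynamicDefaults(cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%+v\n", *cfg)
}
```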
I'm still making up my mind on this topic, but what about rephrasing the second goal as:
- "don't apply dynamic defaults when downloading the config"
My assumption is that defaults should be applied only the first time a config is processed, during init or join; all the other commands should then rely on the values stored in the ConfigMap. NB: since we are not storing node-specific configuration, there could be exceptions to this rule, e.g. for upgrades or certificate renewal, but this requires further investigation.
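To make that concrete: once init/join has stored the defaulted configuration, the other commands could read it back verbatim from the `kubeadm-config` ConfigMap in `kube-system` instead of re-running host-probing defaults. A minimal client-go sketch of that read (the kubeconfig path is an assumption for illustration; this is not kubeadm's actual config-loading code):

```go
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes an admin kubeconfig is available on the node.
	restCfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(restCfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// The cluster-wide configuration lives in the kubeadm-config ConfigMap
	// in kube-system, under the ClusterConfiguration key.
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(
		context.TODO(), "kubeadm-config", metav1.GetOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Use the stored YAML as-is: no dynamic (host-probing) defaulting on read.
	fmt.Println(cm.Data["ClusterConfiguration"])
}
```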
"don't apply dynamic defaults when downloading the config"
would that work for join --control-plane too?
"don't apply dynamic defaults when downloading the config"
would that work for join --control-plane too?
My assumption is that defaults should be applied only the first time a config is processed, during init or join (or join control-plane)
This PR reduced the unit test overhead by using static defaults (instead of dynamic ones) in most places: https://github.com/kubernetes/kubernetes/pull/98638
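The win for unit tests is that with static defaults the expected values are constants, so tests need no network or hostname fixtures. A small sketch of the pattern, reusing the illustrative helpers from above (not kubeadm's real types or defaulting functions):

```go
package config_test

import "testing"

// initConfig and applyStaticDefaults stand in for kubeadm's real types and
// defaulting functions; they are hypothetical, for illustration only.
type initConfig struct {
	BindPort int32
}

func applyStaticDefaults(cfg *initConfig) {
	if cfg.BindPort == 0 {
		cfg.BindPort = 6443
	}
}

// With static defaults only, the expected value is a constant, so the test
// never probes the host's network or hostname.
func TestStaticDefaults(t *testing.T) {
	cfg := &initConfig{}
	applyStaticDefaults(cfg)
	if cfg.BindPort != 6443 {
		t.Fatalf("expected default bind port 6443, got %d", cfg.BindPort)
	}
}
```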
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten