
re-evaluate kubeadm-config download and dynamic defaults

Open neolit123 opened this issue 5 years ago • 10 comments

here we saw a few problems with how kubeadm handles downloading configuration from the cluster and then defaulting it: https://github.com/kubernetes/kubeadm/issues/2323

a couple of tasks here are:

  • [ ] do not download the kubeadm-config for worker nodes because they don't need it
  • [ ] (related to the above) do not grant bootstrap tokens access to the kube-proxy config map https://github.com/kubernetes/kubeadm/issues/2305
  • [ ] don't apply dynamic defaults on commands that don't need them (such as kubeadm config images...)
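The last point hinges on the difference between static and dynamic defaults: static defaults are compile-time constants, while dynamic defaults probe the running host (e.g. picking an advertise address from the local network interfaces), which is slow and environment-dependent. A minimal, hypothetical Go sketch of that split — the type and function names here are illustrative, not kubeadm's actual API:

```go
package main

import (
	"fmt"
	"net"
)

// ClusterConfig is an illustrative stand-in for a kubeadm-style config;
// it is not the real kubeadm API type.
type ClusterConfig struct {
	AdvertiseAddress string
	ImageRepository  string
}

// applyStaticDefaults fills in compile-time constants only. Read-only
// commands (e.g. ones that just list image names) could stop here.
func applyStaticDefaults(c *ClusterConfig) {
	if c.ImageRepository == "" {
		c.ImageRepository = "registry.k8s.io" // fixed value, no runtime probing
	}
}

// applyDynamicDefaults additionally inspects the running host -- exactly
// the environment-dependent work that commands like `kubeadm config
// images ...` should not need.
func applyDynamicDefaults(c *ClusterConfig) error {
	applyStaticDefaults(c)
	if c.AdvertiseAddress == "" {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return err
		}
		for _, a := range addrs {
			// pick the first non-loopback IPv4 address as the default
			if ipn, ok := a.(*net.IPNet); ok && !ipn.IP.IsLoopback() && ipn.IP.To4() != nil {
				c.AdvertiseAddress = ipn.IP.String()
				break
			}
		}
	}
	return nil
}

func main() {
	c := &ClusterConfig{}
	applyStaticDefaults(c)
	fmt.Println("repo:", c.ImageRepository, "addr:", c.AdvertiseAddress)
}
```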

neolit123 avatar Oct 16 '20 23:10 neolit123

I'm still making up my mind around this topic, but what about rephrasing the second goal in

  • "don't apply dynamic defaults when downloading the config"

My assumption is that defaults should be applied only the first time a config is processed, during init or join; all the other commands should then rely on the values stored in the ConfigMap. N.B. because we are not storing node-specific configuration, there could be some exceptions to this rule, e.g. for upgrades or renew-certs, but this requires further investigation.
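Under that proposal, the download path would treat the stored values as final and perform no defaulting pass at all. A rough Go sketch of what such a loader could look like — the function and type names are hypothetical, and JSON stands in for the real YAML payload so the sketch stays stdlib-only:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ClusterConfig is an illustrative stand-in for a kubeadm-style config;
// it is not the real kubeadm API type.
type ClusterConfig struct {
	ImageRepository   string `json:"imageRepository"`
	KubernetesVersion string `json:"kubernetesVersion"`
}

// loadFromClusterData decodes configuration previously stored in the
// cluster by init/join. Per the proposal above, it deliberately applies
// NO defaults: whatever was written at init/join time is treated as final.
func loadFromClusterData(raw []byte) (*ClusterConfig, error) {
	var c ClusterConfig
	if err := json.Unmarshal(raw, &c); err != nil {
		return nil, err
	}
	return &c, nil // no defaulting pass on the download path
}

func main() {
	raw := []byte(`{"imageRepository":"registry.k8s.io","kubernetesVersion":"v1.20.0"}`)
	c, err := loadFromClusterData(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(c.KubernetesVersion)
}
```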

fabriziopandini avatar Oct 19 '20 07:10 fabriziopandini

"don't apply dynamic defaults when downloading the config"

would that work for join --control-plane too?

neolit123 avatar Oct 19 '20 12:10 neolit123

"don't apply dynamic defaults when downloading the config"

would that work for join --control-plane too?

My assumption is that defaults should be applied only the first time a config is processed, during init or join (or join control-plane)

fabriziopandini avatar Oct 19 '20 14:10 fabriziopandini

this PR reduced the unit test overhead by using static defaults (instead of dynamic) in most places: https://github.com/kubernetes/kubernetes/pull/98638
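One common pattern for making that switch cheap in tests is to inject the defaulting step as a function value, so production code can pass a host-probing defaulter while unit tests pass a deterministic static stub. A hypothetical sketch of the idea (not the actual code from that PR):

```go
package main

import "fmt"

// Config is an illustrative stand-in for a kubeadm config type.
type Config struct {
	AdvertiseAddress string
}

// Defaulter lets the caller decide how much defaulting happens.
// Production code can pass a function that probes the host;
// unit tests can pass a cheap static stub with no network lookups.
type Defaulter func(*Config)

// prepareConfig runs the injected defaulting step before use.
func prepareConfig(c *Config, d Defaulter) *Config {
	d(c)
	return c
}

// staticDefaulter is the kind of stub tests can use: deterministic and
// fast, filling a placeholder instead of a detected address.
func staticDefaulter(c *Config) {
	if c.AdvertiseAddress == "" {
		c.AdvertiseAddress = "1.2.3.4" // placeholder, never probed from the host
	}
}

func main() {
	c := prepareConfig(&Config{}, staticDefaulter)
	fmt.Println(c.AdvertiseAddress)
}
```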

neolit123 avatar Feb 02 '21 15:02 neolit123

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar Jun 07 '21 15:06 fejta-bot

/remove-lifecycle stale

neolit123 avatar Jul 26 '21 18:07 neolit123

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 24 '21 18:10 k8s-triage-robot

/remove-lifecycle stale

neolit123 avatar Oct 24 '21 20:10 neolit123

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 09 '22 16:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Sep 08 '22 17:09 k8s-triage-robot