
During KCP rollout, etcd churn can prevent renewal of the bootstrap token causing KCP node registration to fail

randomvariable opened this issue • 21 comments

/kind bug

Follow up from #2770

What steps did you take and what happened:

Deploy a 3-node control plane cluster using CAPV with kube-vip on PhotonOS (for some reason, the issue is more likely to occur there), then set kcp.spec.upgradeAfter to trigger a rollout.

Due to a mixture of https://github.com/kube-vip/kube-vip/issues/214 and the fact that we haven't yet implemented etcd learner mode in kubeadm (nor is it fully supported in Kubernetes), the etcd leader switches many times, with kube-vip leader election also rotating. During this time, CAPI controllers are unable to fully reconcile, and the kubelet cannot register nodes. Importantly, CABPK is also unable to renew the bootstrap token. etcd replication eventually completes, but only after the 15-minute bootstrap token timeout. kubelet node registration ultimately fails, and we end up with an orphaned control plane machine that is still a valid member of the etcd cluster, complicating cleanup.

What did you expect to happen:

Rollout succeeds.

Anything else you would like to add:

It might be worth having a configurable bootstrap token TTL for these cases. We can't account for every type of environment with a single longer TTL, so we should make it configurable and let vendors and cluster operators decide, since there is a security trade-off.

Furthermore, VMware would like to request a backport to the v0.3.x branch.

Environment:

  • Cluster API version: v0.3.22
  • Kubernetes version (use kubectl version): v1.21
  • OS (e.g. from /etc/os-release): Photon 3

randomvariable avatar Oct 22 '21 12:10 randomvariable
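For context on the 15-minute window mentioned above: bootstrap tokens live as Secrets of type `bootstrap.kubernetes.io/token` in kube-system, and kubeadm join can only authenticate with one until the expiration stamped into the Secret passes. Below is a minimal client-go sketch of how such a token is created; the function name and details are illustrative, not CABPK's actual code.

```go
package bootstraptoken

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createBootstrapToken writes the Secret that kubeadm join authenticates with.
// Once the time in the "expiration" field passes, the token is rejected and
// node registration can no longer complete with it.
func createBootstrapToken(ctx context.Context, cs kubernetes.Interface, tokenID, tokenSecret string, ttl time.Duration) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "bootstrap-token-" + tokenID,
			Namespace: metav1.NamespaceSystem, // bootstrap tokens must live in kube-system
		},
		Type: corev1.SecretTypeBootstrapToken, // "bootstrap.kubernetes.io/token"
		StringData: map[string]string{
			"token-id":                       tokenID,
			"token-secret":                   tokenSecret,
			"expiration":                     time.Now().Add(ttl).UTC().Format(time.RFC3339),
			"usage-bootstrap-authentication": "true",
			"usage-bootstrap-signing":        "true",
		},
	}
	_, err := cs.CoreV1().Secrets(metav1.NamespaceSystem).Create(ctx, secret, metav1.CreateOptions{})
	return err
}
```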

+1 for a configurable timeout

sbueringer avatar Oct 22 '21 13:10 sbueringer

It seems to me too many things are happening at once?

Would it make sense for upgradeAfter to trigger a more step-by-step operation, with bootstrap token renewal being the first task, then the rest, etc.?


neolit123 avatar Oct 22 '21 13:10 neolit123

FYI CABPK controller manager has a --bootstrap-token-ttl flag

fabriziopandini avatar Oct 22 '21 13:10 fabriziopandini
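For anyone looking for it: the flag lives on the CABPK controller manager binary, so it can be set as an extra argument on that Deployment (e.g. something like `--bootstrap-token-ttl=30m`). A minimal sketch of how such a duration flag is typically wired up with pflag follows; only the flag name comes from the comment above, the variable name, default, and help text are illustrative.

```go
package main

import (
	"fmt"
	"time"

	"github.com/spf13/pflag"
)

func main() {
	// Sketch of how a controller manager typically exposes such a knob.
	var tokenTTL time.Duration
	pflag.DurationVar(&tokenTTL, "bootstrap-token-ttl", 15*time.Minute,
		"TTL of the bootstrap tokens generated for machines joining the cluster")
	pflag.Parse()

	// The reconciler would use tokenTTL when stamping the "expiration"
	// field of the bootstrap token Secret it creates for each machine.
	fmt.Printf("bootstrap tokens will expire after %s\n", tokenTTL)
}
```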

the fact that we haven't yet implemented etcd learner mode in kubeadm (nor is it fully supported in Kubernetes), the etcd leader switches many times

Hm, would learner mode really solve this? I wonder if this is more of a problem of pinning the leader programmatically. This is something that k8s wishes to add in the future; there is a KEP about it.

neolit123 avatar Oct 22 '21 13:10 neolit123

Also, AFAIK KCP moves leadership only if the current leader is being deleted, so we are already very conservative in doing this operation, and I don't see a direct connection between the current process and the etcd leader switching around many times.

fabriziopandini avatar Oct 22 '21 13:10 fabriziopandini

Would it make sense for upgradeAfter to trigger a more step-by-step operation, with bootstrap token renewal being the first task

We generate a token per machine, based on a Machine resource being created, so KCP isn't in the loop at that stage, just CABPK.

FYI CABPK controller manager has a --bootstrap-token-ttl flag

I think this means we won't need the backport, and I've filed https://github.com/vmware-tanzu/tanzu-framework/issues/954

Hm, would learner mode really solve this? I wonder if this is more of a problem of pinning the leader programmatically. This is something that k8s wishes to add in the future; there is a KEP about it.

Any pointers on that would be handy.

randomvariable avatar Oct 22 '21 13:10 randomvariable

Also, KCP moves leadership only if the current leader is being deleted, so we are already very conservative in doing this operation, and I don't see a direct connection between the current process and the etcd leader switching around many times.

There's an interaction between the etcd leader moving and kube-vip, which uses Kubernetes resource locking for leader election; that makes things especially problematic.

randomvariable avatar Oct 22 '21 13:10 randomvariable
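To make that interaction concrete: kube-vip's control plane leader election goes through client-go's resource-lock machinery, so every lease renewal is an apiserver write backed by etcd. The generic sketch below shows that pattern; the lock name, namespace, and timings are illustrative, not kube-vip's actual configuration. While etcd membership is churning, renewals fail, leadership bounces, and the VIP follows it.

```go
package kubevipsketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// run demonstrates client-go resource-lock leader election. Each renewal is a
// write against the apiserver (and therefore etcd); if those writes stall
// during etcd membership changes, the lease expires and leadership moves to
// another instance.
func run(ctx context.Context, cs kubernetes.Interface, id string, onLead func(context.Context)) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "cp-vip-lock", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // how long a stale lease stays valid
		RenewDeadline: 10 * time.Second, // give up leadership if renewals fail this long
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: onLead,
			OnStoppedLeading: func() { /* release / move the VIP here */ },
		},
	})
}
```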

Interestingly, @voor reports similar behaviour on AWS when cross-zone load balancing is enabled.

randomvariable avatar Oct 22 '21 13:10 randomvariable

Any pointers on that would be handy.

Here is the KEP / issue. Still WIP, and note it is for k8s leader election: https://github.com/kubernetes/enhancements/issues/2835

neolit123 avatar Oct 22 '21 13:10 neolit123

/priority awaiting-more-evidence
/assign @randomvariable
/milestone Next

vincepri avatar Oct 22 '21 17:10 vincepri

Been speaking to @gab-satchi about looking into some of these thorny etcd thingies.
/assign @gab-satchi

randomvariable avatar Oct 27 '21 16:10 randomvariable

/area control-plane

randomvariable avatar Nov 02 '21 14:11 randomvariable

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 31 '22 14:01 k8s-triage-robot

/remove-lifecycle stale

voor avatar Jan 31 '22 14:01 voor

/lifecycle frozen

vincepri avatar Jan 31 '22 16:01 vincepri

/triage accepted
/unassign @randomvariable
/unassign @gab-satchi

This should be re-assessed now that we have improved how token renewal is managed (tokens are now renewed until the machine joins); also, in kubeadm we recently discussed reviving the work on etcd learner mode.

fabriziopandini avatar Oct 03 '22 19:10 fabriziopandini
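For readers landing here later, the renewal behaviour referred to above amounts to something like the sketch below: keep pushing the token's expiration forward on every reconcile until the corresponding Node exists. This is an illustration of the idea only, not CABPK's actual reconciler; names and error handling are simplified.

```go
package bootstraptoken

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// refreshUntilJoined pushes a bootstrap token's expiration forward as long as
// the machine's Node has not appeared, so a slow rollout no longer races a
// fixed 15-minute window.
func refreshUntilJoined(ctx context.Context, cs kubernetes.Interface, tokenID, nodeName string, ttl time.Duration) error {
	if _, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{}); err == nil {
		return nil // the node registered; stop renewing and let the token expire
	}
	secret, err := cs.CoreV1().Secrets("kube-system").Get(ctx, "bootstrap-token-"+tokenID, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if secret.Data == nil {
		secret.Data = map[string][]byte{}
	}
	secret.Data["expiration"] = []byte(time.Now().Add(ttl).UTC().Format(time.RFC3339))
	_, err = cs.CoreV1().Secrets("kube-system").Update(ctx, secret, metav1.UpdateOptions{})
	return err
}
```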

/help

fabriziopandini avatar Oct 03 '22 19:10 fabriziopandini

@fabriziopandini: This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.

In response to this:

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Oct 03 '22 19:10 k8s-ci-robot

Quick update: kubeadm learner mode RFE:

  • https://github.com/kubernetes/kubeadm/issues/1793#issuecomment-1268139489
  • https://hackmd.io/@DAKGcrh_RpC5vlt8w5bf8A/r1qoLh9zj

sbueringer avatar Oct 05 '22 10:10 sbueringer

Also related: we had to increase the timeout for self-hosted upgrade tests. From my analysis, the reason here is also etcd churn and its side effects (e.g. leader election failing, which makes webhooks unavailable).

  • code-change: https://github.com/kubernetes-sigs/cluster-api/blob/02f42f2ffc4114d00a300ae3d8cd39a4c934fc5c/test/framework/cluster_proxy.go#L49-L53
  • xref PR discussion: https://github.com/kubernetes-sigs/cluster-api/pull/7239#discussion_r987787229

chrischdi avatar Oct 05 '22 17:10 chrischdi

We should better understand the sequence of what's happening here. KCP moves leadership to another node only before deleting the current leader, so we are already minimizing the number of forced leader changes / doing what is possible for them to happen in a controlled way. Everything else is periodic raft leader election, which should be pretty stable, assuming the current leader answers in a timely manner. 🤔

fabriziopandini avatar Oct 06 '22 10:10 fabriziopandini
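For reference, that controlled hand-off goes through etcd's maintenance API rather than killing the leader and letting raft sort it out. A rough sketch with the etcd clientv3 API follows; member selection and endpoint handling are simplified, and this is not KCP's actual code.

```go
package etcdsketch

import (
	"context"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// forwardLeadership hands etcd leadership to a surviving voting member before
// the machine holding leadership is deleted, so the deletion does not force an
// uncontrolled election.
func forwardLeadership(ctx context.Context, c *clientv3.Client, currentLeaderID uint64) error {
	resp, err := c.MemberList(ctx)
	if err != nil {
		return err
	}
	for _, m := range resp.Members {
		if m.ID != currentLeaderID && !m.IsLearner {
			// MoveLeader must be sent via the current leader's endpoint.
			_, err := c.MoveLeader(ctx, m.ID)
			return err
		}
	}
	return nil
}
```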

Just fyi regarding etcd learner mode:

The alpha-level code is merged for v1.27 in https://github.com/kubernetes/kubernetes/pull/113318 (see https://github.com/kubernetes/kubeadm/issues/1793#issuecomment-1377722007).

sbueringer avatar Jan 12 '23 14:01 sbueringer
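For context, learner mode is built on two etcd cluster API calls: add the new member as a non-voting learner, wait for it to catch up on the raft log, then promote it. A minimal clientv3 sketch is below; the catch-up check is a placeholder, and how kubeadm drives this during join is not shown here.

```go
package etcdsketch

import (
	"context"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// joinAsLearner adds a new member as a non-voting learner and only promotes it
// to a voting member once it has replicated enough of the log, avoiding quorum
// disruption while the member catches up.
func joinAsLearner(ctx context.Context, c *clientv3.Client, peerURL string, caughtUp func(id uint64) bool) error {
	addResp, err := c.MemberAddAsLearner(ctx, []string{peerURL})
	if err != nil {
		return err
	}
	for !caughtUp(addResp.Member.ID) {
		time.Sleep(time.Second) // wait for the learner to replicate before giving it a vote
	}
	_, err = c.MemberPromote(ctx, addResp.Member.ID)
	return err
}
```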

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

k8s-triage-robot avatar Jan 20 '24 06:01 k8s-triage-robot

/priority important-longterm

fabriziopandini avatar Apr 12 '24 14:04 fabriziopandini

We now have a flag in CABPK for setting the TTL, plus we made a lot of improvements in the last two years. Also, learner mode in kubeadm is making progress, even if it is not yet used by CAPI.

/close

fabriziopandini avatar Apr 23 '24 12:04 fabriziopandini

@fabriziopandini: Closing this issue.

In response to this:

We now have a flag in CABPK for setting the TTL, plus we made a lot of improvements in the last two years. Also, learner mode in kubeadm is making progress, even if it is not yet used by CAPI.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Apr 23 '24 12:04 k8s-ci-robot