
Control plane failure modes for high-availability documentation

Open royalsflush opened this issue 2 years ago • 27 comments

We likely need some brief documentation on what customers can expect in terms of the reliability of the control plane. We discussed the "majority" vs "less than majority" buckets of problems; it would be great to have documentation we can point to in order to justify our reliability stance.

royalsflush avatar Nov 07 '23 17:11 royalsflush

There are no sig labels on this issue. Please add an appropriate label by using one of the following commands:

  • /sig <group-name>
  • /wg <group-name>
  • /committee <group-name>

Please see the group list for a listing of the SIGs, working groups, and committees available.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Nov 07 '23 17:11 k8s-ci-robot

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.


k8s-ci-robot avatar Nov 07 '23 17:11 k8s-ci-robot

/transfer website

where the k8s documentation is located.

neolit123 avatar Nov 07 '23 17:11 neolit123

We likely need some brief documentation on what customers can expect in terms of the reliability of the control plane. We discussed the "majority" vs "less than majority" buckets of problems; it would be great to have documentation we can point to in order to justify our reliability stance.

when speaking about "majority", is this about etcd's raft algorithm? k8s core doesn't have this requirement directly. also, when/where was this discussed?

neolit123 avatar Nov 07 '23 17:11 neolit123

/kind feature
/triage needs-information
/sig docs

(tagging with docs until owner is established, if ever)

neolit123 avatar Nov 07 '23 17:11 neolit123

It'd be good to understand the gaps: what should https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/ cover that it doesn't?

sftim avatar Nov 07 '23 18:11 sftim

/close

the ticket has missing information; questions were not answered. please update and re-open.

neolit123 avatar Nov 14 '23 10:11 neolit123

@neolit123: Closing this issue.

In response to this:

/close

the ticket has missing information; questions were not answered. please update and re-open.


k8s-ci-robot avatar Nov 14 '23 10:11 k8s-ci-robot

Hi all, really sorry for the delay on my elaboration of this issue!

The context is that my team is working on Kubernetes reliability (as part of a product) and we want to understand the failure modes of the control plane. I had a chat with Han Kang about this offline, and I wanted to amend the details of this issue with our conversation of what I think is missing, but I wanted to review the links you all sent first to see if I was missing something. @sftim thank you very much for sending it over!

The part I wanted the most is the expectations of restrictions when one or more nodes of the control plane are down. We're currently working with a setup that considers HA as three control plane nodes, so we were trying to understand what were the consequences of:

  1. A single node being down
  2. The majority of nodes
  3. All of them (we assume cluster down, but just for completeness)
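The three cases above map directly onto etcd's raft quorum arithmetic (assuming a stacked cluster where the etcd member count equals the control plane node count). A minimal sketch of that arithmetic, purely for illustration:

```python
def etcd_quorum(members: int) -> int:
    """Raft needs a strict majority of voting members to commit writes."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """How many members can fail while the cluster stays writable."""
    return members - etcd_quorum(members)

# Three-member cluster, the HA setup discussed above:
members = 3
for down in range(members + 1):
    writable = (members - down) >= etcd_quorum(members)
    status = "writes still possible" if writable else "quorum lost"
    print(f"{down} of {members} members down -> {status}")
```

With three members this reports that one failure is tolerated (quorum is 2), while two or three failures lose quorum, matching the "single node" vs "majority" buckets. This only models etcd; API server availability is a separate question.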

So what I was asking was "what Kubernetes customers can expect in case of failure of their control plane nodes".

Let me know if this makes sense, and sorry again for the delay

royalsflush avatar Nov 15 '23 09:11 royalsflush

what you are talking about makes sense, @royalsflush

please include more detail in the OP post: https://github.com/kubernetes/website/issues/43849#issue-1981863546

i don't mind us including more documentation about failures and recovery of the CP, as the documentation is lacking. let's see what is actionable here.

/reopen

neolit123 avatar Nov 15 '23 09:11 neolit123

@neolit123: Reopened this issue.

In response to this:

what you are talking about makes sense, @royalsflush

please include more detail in the OP post: https://github.com/kubernetes/website/issues/43849#issue-1981863546

i don't mind us including more documentation about failures and recovery of the CP, as the documentation is lacking. let's see what is actionable here.

/reopen


k8s-ci-robot avatar Nov 15 '23 09:11 k8s-ci-robot

/sig architecture
/sig api-machinery
/remove-triage needs-information

Please revise (edit) the original issue description @royalsflush to explain what you want added to the documentation. You could write this as a user story or as a definition of done.

sftim avatar Nov 15 '23 12:11 sftim

/assign

(I can take this, if y'all don't mind)

logicalhan avatar Nov 15 '23 14:11 logicalhan

Thanks @logicalhan. These things are important.

sftim avatar Nov 15 '23 14:11 sftim

/triage accepted
/priority important-longterm

sftim avatar Nov 15 '23 14:11 sftim

I would add that we ideally ought to cover some of the less common situations too; I'll outline some below. What I hope is that someone carefully reading the docs can answer what the expected outcome is, without actually setting up a cluster or reading any source code. "Answer" here means working out whether the expected behavior as seen by a client is: API usable; API unavailable / degraded; or undefined behavior.

Eg:

  • three control plane nodes (1 per zone); separate etcd hosts (1 per zone); full failure in exactly one zone; “perfect” client-side load balancing and retries
  • three control plane nodes (1 per zone); separate etcd hosts (1 per zone); etcd healthy but full API server failure in exactly one zone; “perfect” client-side load balancing and retries
  • even number of control plane nodes, of which all are healthy; separate etcd cluster has odd number of nodes and some (but fewer than half) have failed; “perfect” client-side load balancing and retries
  • even number of control plane nodes, only half of which are healthy; separate etcd cluster has odd number of nodes and some (but fewer than half) have failed; “perfect” client-side load balancing and retries
  • stacked 3-node control plane; each API server only speaks to local etcd; one etcd fully unavailable; “dumb” round-robin style load balancing without health checks

I'm sure we could think up more; maybe we even have a list already?


We can produce - and publish - docs without meeting this ideal; I've mentioned it so we understand where we'd like to end up.
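As an illustration of the kind of "unit test" these scenarios imply, each one could be reduced to a small decision function. The parameter names and the three-way outcome below are my own framing, not an established API, and the logic is a deliberate simplification:

```python
def expected_outcome(etcd_total, etcd_failed, apiservers_total,
                     apiservers_healthy, lb_health_checks):
    """Rough classification of client-visible behavior; illustrative only."""
    quorum = etcd_total // 2 + 1
    if etcd_total - etcd_failed < quorum:
        return "API unavailable (etcd has lost quorum)"
    if apiservers_healthy == 0:
        return "API unavailable (no healthy API server)"
    if apiservers_healthy < apiservers_total and not lb_health_checks:
        return "API degraded (some requests hit unhealthy endpoints)"
    return "API usable"

# First scenario above: 3 zones, one zone fully down, "perfect" client-side LB:
print(expected_outcome(3, 1, 3, 2, lb_health_checks=True))  # -> API usable
```

A reviewer picking any scenario from the list should be able to derive the same answer from the docs alone; the function is just a compact way to state the expected answers.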

sftim avatar Nov 15 '23 15:11 sftim

(quoting @sftim's scenario list and note above)

Additional scenarios:

  • stacked 3-node control plane; each API server only speaks to local etcd; two or more etcd fully unavailable; “dumb” round-robin style load balancing without health checks
  • stacked 5-node control plane; each API server only speaks to local etcd; one etcd fully unavailable; “dumb” round-robin style load balancing without health checks
  • stacked 5-node control plane; each API server only speaks to local etcd; two etcd fully unavailable; “dumb” round-robin style load balancing without health checks
  • stacked 5-node control plane; each API server only speaks to local etcd; three or more etcd fully unavailable; “dumb” round-robin style load balancing without health checks
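A rough way to reason about the stacked/local-etcd cases above (my own simplification, ignoring retries and partial failure modes): with "dumb" round-robin, roughly the fraction of requests landing on a node whose local etcd member is down will fail, and once the failed members reach a majority, writes fail regardless of which node is hit:

```python
def stacked_failure_fraction(nodes: int, etcd_down: int) -> float:
    """Approximate fraction of round-robin requests that fail when each
    API server talks only to its local etcd member (illustrative)."""
    quorum = nodes // 2 + 1
    if nodes - etcd_down < quorum:
        return 1.0  # quorum lost: no member can serve writes
    return etcd_down / nodes  # only requests routed to affected nodes fail

for nodes, down in [(3, 1), (3, 2), (5, 1), (5, 2), (5, 3)]:
    print(f"{nodes}-node stacked, {down} etcd down: "
          f"~{stacked_failure_fraction(nodes, down):.0%} of requests fail")
```

Under this model the 5-node cases degrade gracefully (~20% then ~40% of requests failing) until three members are down, at which point quorum is lost and everything fails, which is what the scenario list is probing for.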

logicalhan avatar Nov 15 '23 15:11 logicalhan

I may group answers based on local or remote etcd hosts, since the answers are likely skewed to that distinction anyway.

logicalhan avatar Nov 15 '23 15:11 logicalhan

These questions need not appear in the page; you could think of them as like unit tests for the docs. In other words, if a reviewer picks a question, can they - just by reading what's in the page - work out what the answer must be?

(we could even ask a large language model to help us check)

sftim avatar Nov 15 '23 15:11 sftim

(quoting @sftim's "unit tests for the docs" comment above)

I dig the framing.

logicalhan avatar Nov 15 '23 15:11 logicalhan

https://github.com/kubernetes/website/pull/43903 feels slightly relevant (only slightly, though). I don't know how much we want to also cover upgrades and how they impact failure modes.

sftim avatar Nov 15 '23 15:11 sftim

+1 to cover upgrades and rollback.

in KEP PRRs we require "downgradability" of k8s features, but etcd by design does not support downgrade well, yet: https://github.com/etcd-io/etcd/issues/15878#issuecomment-1567986308

kubeadm as a whole also does not support downgrades. it supports rollback in case of component failure, but that may or may not work, depending on:

  • if it was a k8s component, hopefully all features, skews, etc properly guarantee downgrade
  • if it was etcd, who knows

#43903 feels slightly relevant (only slightly, though). I don't know how much we want to also cover upgrades and how they impact failure modes.

it's a bug in kubeadm's api-machinery usage, and the etcd upgrade failure will trigger a rollback unless the user works around it. but since the rollback will restore an etcd with the same version, it will act as a component restart.

neolit123 avatar Nov 15 '23 15:11 neolit123

+1 @sftim , Can you reshare the docs for gaps

justankiit avatar Nov 18 '23 06:11 justankiit

+1 @sftim , Can you reshare the docs for gaps

I don't understand what you'd like me to do here @kumarankit999. How would you know when I'd done what you're asking (can you frame it as a definition of done)?

If you mean https://github.com/kubernetes/website/issues/43849#issuecomment-1799444308, I was the person who asked the question, and I do not have the answer to it.

sftim avatar Nov 28 '23 11:11 sftim

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

k8s-triage-robot avatar Nov 27 '24 11:11 k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 25 '25 12:02 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Mar 27 '25 12:03 k8s-triage-robot