
HA clusters don't reboot properly

Open BenTheElder opened this issue 5 years ago • 40 comments

First reported in https://github.com/kubernetes-sigs/kind/issues/1685; tracking in an updated bug here.

reproduce with:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane

+ restart docker.
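
A minimal reproduction sketch (assuming the config above is saved as ha.yaml and Docker is managed by systemd; on Docker Desktop, restart Docker from the UI instead):

# create the three-control-plane cluster from the config above
kind create cluster --config ha.yaml

# sanity check: the API server is reachable through the load balancer
kubectl --context kind-kind cluster-info

# restart the Docker daemon
sudo systemctl restart docker

# after the restart the control plane no longer comes back up
kubectl --context kind-kind get nodes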

BenTheElder avatar Jun 25 '20 08:06 BenTheElder

Still needs root-causing, but there are multiple user reports. We should fix this.

BenTheElder avatar Jun 25 '20 08:06 BenTheElder

yes, I hit the same issue today.

Is there any workaround so I can fix it manually? I spent quite a while setting up this test KIND environment and have been using it for a while; I don't want to recreate it.

Is there any way to restore it?

Another thing, which I'm not sure is related to this problem:

Yesterday I upgraded KIND from 0.7 to 0.8.1. My old nodes used to run v1.17.0, but today, after I restarted the Docker service, they became kindest/node:v1.18.2.

ozbillwang avatar Jul 03 '20 01:07 ozbillwang

I haven't looked into this issue yet.

Regarding the node versions, please read the release notes about the changes, and see the usage and user guide for how to change it.
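
For example, to keep a specific Kubernetes version across kind upgrades you can pin the node image explicitly (a rough sketch; the exact image tag/digest to use with each kind release is listed in the release notes):

# pin the node image instead of relying on the default for the kind release
kind create cluster --image kindest/node:v1.17.0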

BenTheElder avatar Jul 03 '20 04:07 BenTheElder

This has never worked. 0.7 and down did not survive reboots for any configuration. 0.8+ apparently doesn't survive reboots for "HA" clusters.


BenTheElder avatar Jul 06 '20 14:07 BenTheElder

cc @aojea, you were recently looking at the load balancer networking

BenTheElder avatar Aug 27 '20 07:08 BenTheElder

/assign

aojea avatar Aug 27 '20 07:08 aojea

It is more complicated than the load balancer: the control-plane nodes have different IPs after the restart, and the cluster does not come up.

2020-08-27 07:58:58.054356 E | etcdserver: publish error: etcdserver: request timed out
2020-08-27 07:58:58.061687 W | rafthttp: health check for peer 6dd029603bf5e797 could not connect: x509: certificate is valid for 172.18.0.7, 127.0.0.1, ::1, not 172.18.0.5
2020-08-27 07:58:58.061717 W | rafthttp: health check for peer 6dd029603bf5e797 could not connect: x509: certificate is valid for 172.18.0.7, 127.0.0.1, ::1, not 172.18.0.5
2020-08-27 07:58:58.063416 W | rafthttp: health check for peer 2b4992c658e42934 could not connect: dial tcp 172.18.0.7:2380: connect: no route to host
2020-08-27 07:58:58.063454 W | rafthttp: health check for peer 2b4992c658e42934 could not connect: dial tcp 172.18.0.7:2380: connect: no route to host

It seems we should use hostnames in the certificates to avoid this.
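
A quick way to confirm the mismatch (a rough sketch, assuming openssl is available inside the node image and the default kind-control-plane container name):

# SANs baked into the etcd peer certificate at cluster creation time
docker exec kind-control-plane sh -c \
  "openssl x509 -in /etc/kubernetes/pki/etcd/peer.crt -noout -text | grep -A1 'Subject Alternative Name'"

# address the container actually has after the Docker restart
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kind-control-plane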

aojea avatar Aug 27 '20 08:08 aojea

We do where we can already. IIRC etcd won't use hostnames.


BenTheElder avatar Aug 27 '20 16:08 BenTheElder

Same issue here.

my machine environment:

macOS High Sierra v10.13.6, Docker 2.3.0.4 (engine 19.03.12), Kubernetes v1.16.5

kind environment:

kind-control-plane, kind-control-plane2, kind-control-plane3, kind-worker, kind-worker2, kind-external-load-balancer (haproxy)

After restarting Docker, kubectl get pods outputs:

Unable to connect to the server: EOF

Is there any workaround for this issue?

BTW, when running with only one kind-control-plane node, the reboot succeeds.

shlomibendavid avatar Aug 30 '20 18:08 shlomibendavid

There's no workaround; rebooting HA (multiple control-plane) clusters has never been supported and does not appear to be trivial to fix. https://github.com/kubernetes-sigs/kind/issues/1689#issuecomment-654278411

BenTheElder avatar Aug 30 '20 19:08 BenTheElder

Hi @BenTheElder, I guess this issue is caused by the nodes' IPs changing when Docker restarts. One possible solution is to assign a fixed IP to each node. That requires two steps, sketched below:

  1. Create a network with a subnet: docker network create --subnet=172.18.0.0/24 kind
  2. Start the node with a fixed IP: docker run --network kind --ip 172.18.0.6 -d nginx
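
As a concrete sketch of that idea (not something kind does itself today; the subnet, address, and container name here are just placeholder examples):

# 1. pre-create a user-defined network with a known subnet
docker network create --subnet=172.18.0.0/24 kind

# 2. run a container with a static address from that subnet
docker run -d --name demo --network kind --ip 172.18.0.6 nginx

# the requested address is stored in the endpoint's IPAM config,
# so the container comes back with the same IP after a daemon restart
docker inspect -f '{{(index .NetworkSettings.Networks "kind").IPAddress}}' demo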

RolandMa1986 avatar Oct 30 '20 09:10 RolandMa1986

@RolandMa1986 thanks for the suggestion, but we discarded that idea before because we'd need to implement an IPAM in KIND.

Also, we'd need to keep state for all the KIND clusters to handle reboots and avoid conflicts with new clusters or new containers that may be created on the bridge.

aojea avatar Oct 30 '20 10:10 aojea

Thanks, @aojea I want to know more details about the IPAM approach. Will it be a CNM plugin or CNI plugin?

RolandMa1986 avatar Nov 02 '20 10:11 RolandMa1986

Thanks, @aojea I want to know more details about the IPAM approach. Will it be a CNM plugin or CNI plugin?

It depends on the provider. Currently KIND uses Docker by default, which means CNM ... but Podman, which uses CNI, is on the roadmap: https://github.com/kubernetes-sigs/kind/issues/1778

As you can see, this is an area that would require a lot of effort to support; honestly, I don't see us wanting to invest much in this ... Ben can correct me if I'm wrong

aojea avatar Nov 02 '20 10:11 aojea

I don't think that's a good approach. If we create non-standard IPAM this will create a headache for users vs their existing ability to configure docker today.

Additionally, this approach still does not guarantee an address, and you have concurrency issues with clusters using a remote docker (where will you store and lock the IPAM data?), which otherwise works fine for users today. EDIT: please search for past discussion, I'd rather not re-hash that entire discussion here. EDIT2: thanks for the suggestion though 🙃

We can probably instead re-roll the etcd peer configuration and necessary certs on restart, but this is very low priority.

The main reason to support clusters through reboot is long lived development clusters for users building applications, which should not be using "HA" clusters. Otherwise for testing / disposable clusters, this is a non-issue.

BenTheElder avatar Nov 02 '20 17:11 BenTheElder

see more here on why the "ipam" approach is not super tenable: https://github.com/kubernetes-sigs/kind/issues/2045#issuecomment-772296375

BenTheElder avatar Feb 06 '21 02:02 BenTheElder

I faced a somewhat different issue: after I restarted the machine, ReplicaSets were not creating pods when a pod was deleted, and Deployments were not creating ReplicaSets.

velcrine avatar Jul 09 '21 11:07 velcrine

@velcrine that's actually a variation on the issues in https://github.com/kubernetes-sigs/kind/issues/2045

HA has a different, additional problem: the load balancer causes issues with the API being reachable after restart, in which case you wouldn't even be able to query for those problems.

FWIW, regarding HA: nobody is working on or using this feature much, and it's simplistic / not fully designed. This issue is unlikely to see work anytime soon. (priority/backlog)

The other issue (https://github.com/kubernetes-sigs/kind/issues/2045) is one I'm sure someone would work on except nobody has posited a good solution we can agree on yet or root caused the issues.

BenTheElder avatar Jul 09 '21 18:07 BenTheElder

Hi @BenTheElder I know that using DNS names is the cleanest solution for issue #2045

However, I am using this script as a workaround to use static IPs for the nodes' communication.

I have restarted my cluster several times and it has worked fine so far
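
For reference, the general shape of such a workaround (only a rough sketch of the idea, not the script itself) is to pin each node container to the address it currently has, so Docker hands it the same IP on the next start:

# pin every node of the default "kind" cluster to its current address;
# run this while the cluster is still healthy, before stopping Docker
for node in $(kind get nodes); do
  ip=$(docker inspect -f '{{(index .NetworkSettings.Networks "kind").IPAddress}}' "$node")
  docker network disconnect kind "$node"
  docker network connect --ip "$ip" kind "$node"
done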

seguidor777 avatar Jul 29 '21 05:07 seguidor777

I have restarted my cluster several times and it has worked fine so far

Users may have multiple clusters, and that is hard to support. However, your script is great; I think it can also solve the problems of snapshotting HA clusters.

aojea avatar Jul 29 '21 05:07 aojea

That's a neat script!

It's unfortunately not super workable as an approach to a built-in solution though. Users creating clusters concurrently in CI (and potentially with a "remote" daemon due to containerized CI) are very important to us and this approach is not safe there.

BenTheElder avatar Jul 29 '21 17:07 BenTheElder

Can't we extend the kind config to take a node-to-IP mapping as an optional parameter, aimed only at those who know what they are doing? It could then replace every other solution and fit all needs.

velcrine avatar Jul 29 '21 17:07 velcrine

Can't we extend the kind config to take a node-to-IP mapping as an optional parameter, aimed only at those who know what they are doing? It could then replace every other solution and fit all needs.

This is not without its own drawbacks.

  1. Not necessarily portable across node backends (e.g. of our current options, podman cannot do at least IPv6 this way).
  2. Does not solve the need to set a reserved IP range in the kind network (so you will still need to do hacks outside of the kind tool...)
  3. Adds infrequently used and untested codepath(s). (We are not going to add yet another CI job to exercise this, we have too many as-is and we have no need for this upstream https://kind.sigs.k8s.io/docs/contributing/project-scope/).

Multi-node clusters are a necessity for testing Kubernetes itself (where we expect clusters to be disposable over the course of developing some change to Kubernetes). For development of applications, we expect single node clusters to be most reasonable (and this is the case where it may make sense to persist them, though we'd still encourage regularly testing from a clean state).

The case of:

  1. Requirement for multi-node
  2. Requirement for persistence
  3. Frequent reboots

Seems rather rare, and I'm not sure it outweighs adding a broken partial solution that people will then depend on in the future, even if we later find some better design.

I'm not saying we definitely couldn't do this, but I wouldn't jump to doing it today.

BenTheElder avatar Jul 29 '21 18:07 BenTheElder

k3d seems to have done something about this here: https://github.com/rancher/k3d/issues/550#issuecomment-819436109 which links back to this issue in our repo.

With --subnet auto, k3d will create a fake docker network to get a subnet auto-assigned by docker that it can use.

This looks to me like a broken approach to identifying an available subnet (there is, at minimum, a race between acquiring/deleting the "fake" network and creating the real one when two clusters are created concurrently), but I'm also unclear as yet whether the IP range is used on a per-cluster network or whether IPs outside another network's range are used on that network.
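
For concreteness, the trick roughly amounts to the following (a sketch only; the network names are placeholders, and the race above sits between steps 2 and 3):

# 1. create a throwaway network so docker's IPAM picks a free subnet
docker network create tmp-probe

# 2. read back the subnet docker auto-assigned
subnet=$(docker network inspect -f '{{(index .IPAM.Config 0).Subnet}}' tmp-probe)

# 3. delete the probe and recreate the real network with that subnet
#    (another cluster creating a network in between can grab the same range)
docker network rm tmp-probe
docker network create --subnet "$subnet" my-cluster-net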

It may be worth digging into the approach there more.

BenTheElder avatar Jul 29 '21 18:07 BenTheElder

This is where plugins would help! Anyway, it should be mentioned in the quick start or other docs that multi-node clusters will not survive a restart. When I first faced the issue, I had a hard time debugging it.

velcrine avatar Jul 30 '21 03:07 velcrine

We document this sort of thing at https://kind.sigs.k8s.io/docs/user/known-issues/, which the quick start links to prominently, but it seems this issue hasn't made it there yet. Earlier versions did not support host restart at all; it wasn't in scope early in the project.

BenTheElder avatar Jul 30 '21 03:07 BenTheElder

This is where plugins would help! Anyway, it should be mentioned in the quick start or other docs that multi-node clusters will not survive a restart. When I first faced the issue, I had a hard time debugging it.

Can I second this? I just spent several days building a multi-node cluster, then on the first reboot effectively lost the lot. Not best pleased, especially after researching my problem and finding this is a known issue. For the sake of the sanity of others, can someone please put a simple warning about this in the known issues section of the kind documentation.

MarkLFT avatar Apr 26 '22 02:04 MarkLFT

FWIW:

  • most use cases should be using single-node clusters; there is no resource isolation between nodes, and unless you have very particular needs involving multi-node behavior you will be better served by single-node clusters
  • most use cases shouldn't be using permanent, long-lived clusters
  • no use case should keep its only copy of state in the kind cluster (!), this is not a durable approach
  • because of (1), (2), and (3), originally we didn't test or support reboot at all; these are meant to be local, ephemeral test clusters
  • the multi-node issue is https://github.com/kubernetes-sigs/kind/issues/2045, this one is meant for discussion specific to problems with multiple control-plane nodes
  • recent discussion in https://github.com/kubernetes-sigs/kind/pull/2671 on how we might fix it

For the sake of the sanity of others, can someone please put a simple warning about this in the known issues section of the Kind documentation.

We have a detailed contributing guide, including how to contribute to the docs; the known-issues page is written in markdown in this repo. No tools other than git / GitHub / markdown text are required.

BenTheElder avatar May 03 '22 19:05 BenTheElder

most use cases should be using single-node clusters

@BenTheElder what do you mean by single- vs multi-node clusters in the context of kind? Multiple control-plane nodes or multiple worker nodes? I'd never need multiple control-plane nodes in kind, but I sometimes do need multiple worker nodes to test tolerations, affinities, etc. And I would like clusters with multiple worker nodes to survive a reboot.

victor-sudakov avatar May 04 '22 16:05 victor-sudakov

Yes, affinities and tolerations are a case for using multi-node, and that issue is #2045.

BenTheElder avatar May 04 '22 16:05 BenTheElder