HA clusters don't reboot properly
First reported in https://github.com/kubernetes-sigs/kind/issues/1685; tracking it here in an updated bug.
reproduce with:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
+ restart docker.
Still needs root-causing, but there are multiple user reports. We should fix this.
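The reproduction above can be sketched as a short command sequence (assumes the config is saved as `ha-cluster.yaml`; the restart command is for systemd hosts, other setups differ):

```shell
# Create an HA cluster from the config above (file name is an assumption).
kind create cluster --config ha-cluster.yaml

# Sanity check: all three control-plane nodes should be Ready.
kubectl get nodes

# Restart the Docker daemon (systemd example; Docker Desktop restarts differ).
sudo systemctl restart docker

# After the restart, the API server is typically no longer reachable.
kubectl get nodes
```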
yes, I hit the same issue today.
Is there any workaround so I can fix it manually? I spent quite a long time setting up my test KIND environment and have been using it for a while. I don't want to recreate it.
Any way to restore it back?
Another thing, which I'm not sure is related to this problem:
yesterday I upgraded KIND from 0.7 to 0.8.1. My old nodes used to run 1.17.0, but today, after I restarted the Docker service, they became kindest/node:v1.18.2.
I haven't looked into this issue yet.
Regarding the node versions, please read the release notes about the changes, and see the usage and user guide for how to change it.
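For reference, the node image can be pinned explicitly at cluster creation rather than relying on the release default; a sketch (the tag below is illustrative, and the release notes list the images built for each kind release):

```shell
# Pin the node image instead of taking the default for this kind release.
# The tag is an example; use an image from the release notes matching
# your kind version, since node images are built per kind release.
kind create cluster --image kindest/node:v1.17.0
```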
This has never worked. 0.7 and down did not survive reboots for any configuration. 0.8+ apparently doesn't survive reboots for "HA" clusters.
cc @aojea you were recently looking at the loadbalancer networking
/assign
It is more complicated than the load balancer: the control-plane nodes have different IPs and the cluster does not come up.
2020-08-27 07:58:58.054356 E | etcdserver: publish error: etcdserver: request timed out
2020-08-27 07:58:58.061687 W | rafthttp: health check for peer 6dd029603bf5e797 could not connect: x509: certificate is valid for 172.18.0.7, 127.0.0.1, ::1, not 172.18.0.5
2020-08-27 07:58:58.061717 W | rafthttp: health check for peer 6dd029603bf5e797 could not connect: x509: certificate is valid for 172.18.0.7, 127.0.0.1, ::1, not 172.18.0.5
2020-08-27 07:58:58.063416 W | rafthttp: health check for peer 2b4992c658e42934 could not connect: dial tcp 172.18.0.7:2380: connect: no route to host
2020-08-27 07:58:58.063454 W | rafthttp: health check for peer 2b4992c658e42934 could not connect: dial tcp 172.18.0.7:2380: connect: no route to host
seems we should use hostnames on the certificates to avoid this
We do where we can already. IIRC etcd won't use hostnames.
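To confirm the SAN mismatch from the etcd logs above, one can compare the IPs baked into the peer certificate against the node's current address; a sketch, assuming the standard kubeadm certificate path and the default node/network names:

```shell
# IPs embedded in the etcd peer certificate at cluster-creation time.
# /etc/kubernetes/pki/etcd/peer.crt is the standard kubeadm location.
docker exec kind-control-plane sh -c \
  "openssl x509 -in /etc/kubernetes/pki/etcd/peer.crt -noout -text" \
  | grep -A1 'Subject Alternative Name'

# The node's current IP on the "kind" network, for comparison.
docker inspect -f '{{.NetworkSettings.Networks.kind.IPAddress}}' kind-control-plane
```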
Same issue here.
my machine environment:
macOS High Sierra v10.13.6; Docker 2.3.0.4 (engine 19.03.12); Kubernetes v1.16.5
kind environment:
kind-control-plane, kind-control-plane2, kind-control-plane3, kind-worker, kind-worker2, kind-external-load-balancer (haproxy)
after docker reboot: kubectl get pods
output: Unable to connect to the server: EOF
Is there any workaround for this issue?
BTW - when running with only one kind-control-plane the reboot passed successfully.
There's no work around, rebooting HA (multiple control plane) clusters has never been supported and does not appear to be trivial to fix. https://github.com/kubernetes-sigs/kind/issues/1689#issuecomment-654278411
Hi @BenTheElder, I guess this issue is caused by the nodes' IPs changing when Docker restarts. One possible solution is to assign a fixed IP to each node. That requires two steps:
- Create a network with a subnet: `docker network create --subnet=172.18.0.0/24 kind`
- Start the node with a fixed IP: `docker run --network kind --ip 172.18.0.6 -d nginx`
@RolandMa1986 thanks for the suggestion, but we discarded that idea before because we'd need to implement an IPAM in KIND.
We'd also need to keep state for all the KIND clusters to handle reboots and to avoid conflicts with new clusters or new containers created on the bridge.
Thanks, @aojea I want to know more details about the IPAM approach. Will it be a CNM plugin or CNI plugin?
It depends on the provider. Currently KIND uses Docker by default, which means CNM ... but podman, which uses CNI, is on the roadmap: https://github.com/kubernetes-sigs/kind/issues/1778
As you can see, this is an area that will require a lot of effort to support. Honestly, I don't see us wanting to invest much in this ... Ben can correct me if I'm wrong.
I don't think that's a good approach. If we create non-standard IPAM this will create a headache for users vs their existing ability to configure docker today.
Additionally, this approach still does not guarantee an address, and you have concurrency issues with clusters using a remote docker (where will you store and lock the IPAM data?), which otherwise works fine for users today. EDIT: please search for past discussion, I'd rather not re-hash that entire discussion here. EDIT2: thanks for the suggestion though 🙃
We can probably instead re-roll the etcd peer configuration and necessary certs on restart, but this is very low priority.
The main reason to support clusters through reboot is long lived development clusters for users building applications, which should not be using "HA" clusters. Otherwise for testing / disposable clusters, this is a non-issue.
see more here on why the "ipam" approach is not super tenable: https://github.com/kubernetes-sigs/kind/issues/2045#issuecomment-772296375
I faced a somewhat different issue after restarting the machine: ReplicaSets were not creating pods when a pod was deleted, and Deployments were not creating ReplicaSets.
@velcrine that's actually a variation on the issues in https://github.com/kubernetes-sigs/kind/issues/2045
HA has a different additional problem in that the loadbalancer causes issues with the API being reachable after restart, in which case you wouldn't even be able to query for those problems.
FWIW regarding HA Nobody is working on or using this feature much and it's simplistic / not fully designed. This issue is unlikely to see work anytime soon. (priority/backlog)
The other issue (https://github.com/kubernetes-sigs/kind/issues/2045) is one I'm sure someone would work on except nobody has posited a good solution we can agree on yet or root caused the issues.
Hi @BenTheElder I know that using DNS names is the cleanest solution for issue #2045
However I am using this script as a workaround to use static IPs for the nodes communication
I have restarted my cluster several times and it has worked fine so far
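For readers without access to the linked script, the general idea can be sketched as follows (this is not the script itself, just a hedged illustration: record each node's current IP and reconnect it to the `kind` network with that IP pinned, so Docker reserves the address across restarts):

```shell
# Hedged sketch (not the linked script): pin each node's current IP on the
# "kind" network so Docker re-assigns the same address after a restart.
for node in $(kind get nodes); do
  ip=$(docker inspect -f '{{.NetworkSettings.Networks.kind.IPAddress}}' "$node")
  docker network disconnect kind "$node"
  # Reconnecting with --ip records a static address in the endpoint config.
  docker network connect --ip "$ip" kind "$node"
done
```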
Users may have multiple clusters, and that is hard to support. However, your script is great; I think it can also solve the problems of snapshotting HA clusters.
That's a neat script!
It's unfortunately not super workable as an approach to a built-in solution though. Users creating clusters concurrently in CI (and potentially with a "remote" daemon due to containerized CI) are very important to us and this approach is not safe there.
Can't we extend the kind config to take an IP-to-node mapping as an optional parameter, aimed only at those who know what they are doing? It could then replace every other workaround and fit all needs.
This is not without its own drawbacks.
- Not necessarily portable across node backends (e.g. out of our current options podman cannot do at least ipv6 this way).
- Does not solve the need to set a reserved IP range in the kind network (so you will still need to do hacks outside of the kind tool...)
- Adds infrequently used and untested codepath(s). (We are not going to add yet another CI job to exercise this, we have too many as-is and we have no need for this upstream https://kind.sigs.k8s.io/docs/contributing/project-scope/).
Multi-node clusters are a necessity for testing Kubernetes itself (where we expect clusters to be disposable over the course of developing some change to Kubernetes). For development of applications, we expect single node clusters to be most reasonable (and this is the case where it may make sense to persist them, though we'd still encourage regularly testing from a clean state).
The case of:
- Requirement for multi-node
- Requirement for persistence
- Frequent reboots
Seems rather rare and I'm not sure it outweighs adding a broken partial solution that people will then depend on in the future even if we find some better design.
I'm not saying we definitely couldn't do this, but I wouldn't jump to doing it today.
k3d seems to have done something about this here: https://github.com/rancher/k3d/issues/550#issuecomment-819436109 which links back to this issue in our repo.
With --subnet auto, k3d will create a fake docker network to get a subnet auto-assigned by docker that it can use.
This looks to me like a broken approach to identifying an available subnet (there is at minimum a race between acquiring / deleting the "fake" network and creating the real one with two clusters), but I'm also unclear as of yet if the IP range is used on a per-cluster network or IPs outside of another network's range are used on that network.
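As described, the k3d trick amounts to something like the following sketch (network names are illustrative); the gap between removing the probe network and creating the real one is exactly the race noted above:

```shell
# Sketch of the "fake network" trick: let Docker auto-assign a free subnet,
# record it, then reuse it for the real network. Another process can grab
# the subnet between "network rm" and the second "network create".
docker network create subnet-probe
subnet=$(docker network inspect -f '{{(index .IPAM.Config 0).Subnet}}' subnet-probe)
docker network rm subnet-probe

echo "auto-assigned subnet: ${subnet}"
docker network create --subnet "${subnet}" my-cluster-net
```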
It may be worth digging into the approach there more.
This is where plugins help!! Anyway, it must be mentioned in the quick start or other docs that multi-node clusters will not survive a restart. When I first faced the issue, I had a hard time debugging.
We document this sort of thing at https://kind.sigs.k8s.io/docs/user/known-issues/ which the quick start links to prominently, but it seems this issue hasn't made it there yet. Earlier versions did not support host restart at all, it wasn't in scope early in the project.
Can I second this? I just spent several days building a multi-node cluster, then on the first reboot effectively lost the lot. Not best pleased, especially after researching my problem and finding this is a known issue. For the sake of the sanity of others, can someone please put a simple warning about this in the known-issues section of the kind documentation?
FWIW:
- most use cases should be using single node clusters, there is no resource isolation and unless you have very particular needs involving multi-node behavior you will be better served with single node clusters
- most use cases shouldn't be using permanent long lived clusters
- no use cases should only have state in the kind cluster (!), this is not a durable approach
- because (1) (2) (3) originally we didn't test or support reboot at all, these are meant to be local ephemeral test clusters
- the multi-node issue is https://github.com/kubernetes-sigs/kind/issues/2045, this one is meant for discussion specific to problems with multiple control-plane nodes
- recent discussion in https://github.com/kubernetes-sigs/kind/pull/2671 on how we might fix it
For the sake of the sanity of others, can someone please put a simple warning about this in the known issues section of the Kind documentation.
We have a detailed contributing guide including how to contribute to the docs, the known issues page is written in markdown in this repo. No tools other than git / github / markdown text are required.
most use cases should be using single node clusters
@BenTheElder what do you imply by single vs multi-node clusters referring to kind? You mean multiple control-plane nodes or multiple worker ones? I'd never need multiple control-plane nodes in kind, but I sometimes do need multiple worker nodes to test tolerations, affinities, etc. And I would like the clusters with multiple worker nodes to survive a reboot.
Yes, affinities and tolerations are a case for using multi-node, and that issue is #2405