kcarson77
Also seeing this on 1.29.4 deployments with 3 nodes. As above, I can provide config and logs, or test potential fixes.
thanks - seeing this also on VMware VMs on fairly well spec'd hosts across multiple environments. These were fine on 1.29. Will there be a patch to 1.32, which is our...
OK, thanks. Do you mean the next release of 1.33, or the next release as in 1.34 at the end of August?
Other scenario (different cluster with just mK8s installed): Host 1 offline, all nodes reporting Ready.
labuser@dailybuild4-host2:~$ kubectl get no
NAME                STATUS   ROLES   AGE     VERSION
dailybuild4-host1   Ready            5h13m   v1.34.1
dailybuild4-host2   Ready...
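In case it helps with triage, this is a quick way I've been checking whether the API server is still recording kubelet heartbeats from the "offline" host (plain kubectl, nothing MicroK8s-specific assumed):

# Ready condition plus the last heartbeat time recorded for each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\t"}{.status.conditions[?(@.type=="Ready")].lastHeartbeatTime}{"\n"}{end}'

# Kubelet heartbeat leases; a renewTime that keeps advancing for the
# downed host would point at something other than missed heartbeats
kubectl get leases -n kube-node-lease -o yaml | grep -E 'holderIdentity|renewTime'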
thanks - I've done some testing. If I manually add those parameters to the files, I get API server instability and flapping, leading to cyclic connection failures. I noticed the...
Hi - are there any updates here? If I add the suggested config I get API server instability, in that it continually restarts. We've had to roll back to 1.32, but...
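In case it's useful while this is being looked at, this is roughly how I've been capturing the restart loop on a stock snap install (the paths and the kubelite service name assume the default MicroK8s layout on my hosts):

# Arguments the API server is actually started with (changes here only
# take effect after a microk8s stop/start)
cat /var/snap/microk8s/current/args/kube-apiserver

# Recent MicroK8s runs apiserver/scheduler/controller-manager/kubelet in
# the single kubelite daemon - following it shows the crash/restart cycle
sudo journalctl -u snap.microk8s.daemon-kubelite -f

# Collects logs and config into a tarball suitable for attaching here
microk8s inspect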
I did take 1 node down to trigger this. Sometimes all 3 nodes are "Ready", sometimes 2 are NotReady when I take a node down. Seeing this across different systems. SQA...
I re-installed 1.34. I have 3 hosts: dmhost1, dmhost2 and dmhost3. I downed the link on host3, so that host1 cannot ping it. MicroK8s still has all three nodes "Ready"...
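For anyone else trying to reproduce it, the steps are roughly the following (the interface name is just an example from my setup, and kubectl is run from a host that can still reach the API server):

# On dmhost3: drop the link the other hosts use to reach it
# (eth0 is a placeholder - substitute the real interface)
sudo ip link set eth0 down

# On dmhost1: watch whether the downed node ever flips to NotReady;
# with default grace periods I'd expect that within a minute or so
watch -n 5 kubectl get nodes -o wide

# Restore the link when done
sudo ip link set eth0 up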