tuxicorn
**EDIT**: This morning, running the exact same code (used `git status` to verify), 2 worker pools registered successfully and 1 got stuck with the following status:

```sh
Container runtime...
```
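A minimal debugging sketch for a node stuck at this stage (assumes a Docker-based node; the `rancher-agent` container name is an assumption about the setup, adjust for your environment):

```sh
# Hedged sketch: check the Rancher agent on the stuck worker node.
# "rancher-agent" is an assumed container name; adjust for your setup.
docker ps -a --filter "name=rancher-agent"
docker logs --tail 50 rancher-agent
```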
**EDIT**: **I added the stuck worker nodes to the load balancer backend pool manually, and all of them got registered.** To me, this looks like...
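For reference, that manual step can also be scripted with the Azure CLI; a sketch with placeholder resource names:

```sh
# Sketch: add a node's NIC IP config to the load balancer backend pool.
# Every name below is a placeholder for your own resources.
az network nic ip-config address-pool add \
  --resource-group my-rg \
  --nic-name worker-1-nic \
  --ip-config-name ipconfig1 \
  --lb-name my-lb \
  --address-pool my-backend-pool
```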
> hostname_override

Hi, thanks for your answer! I already looked at this in the docs previously, but that argument is not present in the rancher2 Terraform provider for the `node_template`...
> The `hostname_override` is in the cluster resource https://registry.terraform.io/providers/rancher/rancher2/latest/docs/resources/cluster#hostname_override. Basically, that value would have been configured in the cluster config file if you were provisioning the RKE cluster with the...
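For context, a sketch of where `hostname_override` would go in an RKE CLI `cluster.yml` (placeholder values; this is the RKE config file, not the rancher2 provider schema):

```sh
# Sketch: hostname_override is set per node in an RKE CLI cluster.yml.
# Address, user, and hostname below are placeholders.
cat > cluster.yml <<'EOF'
nodes:
  - address: 203.0.113.10
    user: rancher
    role: [worker]
    hostname_override: worker-1
EOF
```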
**EDIT**: If I deploy the external load balancer only after all nodes are registered and the cluster is active, it adds all of them to the backend pool, but...
Hitting the same issue randomly
**EDIT**: So the issue was that when a VM is part of an Availability Set, if another VM in the same Availability Set is in a public...
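A quick sketch for checking which NICs in a resource group are already in an LB backend pool, to spot which Availability Set members are (or are not) behind the load balancer (the JMESPath query is an assumption about the CLI output shape):

```sh
# Sketch: list each NIC and the backend pools its IP configs belong to.
# The resource group name is a placeholder.
az network nic list --resource-group my-rg \
  --query "[].{nic:name, pools:ipConfigurations[].loadBalancerBackendAddressPools[].id}" \
  --output table
```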
Same issue here using the Loki Helm chart 4.10.0.
I was able to use the example repo by doing the following:

```sh
.
├── apps
│   ├── base
│   │   └── podinfo
│   │       ├── kustomization.yaml
│   │       ├── ...
```
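For completeness, a sketch of what the base kustomization in that layout typically looks like (the listed file names follow the flux2 example repos and are assumptions here):

```sh
# Sketch: the base kustomization that per-cluster overlays point at.
# resources entries are assumed from the flux2 example repo layout.
cat > apps/base/podinfo/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - repository.yaml
  - release.yaml
EOF
```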
> > Does this doc help with solving your issue?
>
> @kingdonb it doesn't, you can't use any of the example repos for remote clusters, a different structure is...