Björn Pedersen
See https://github.com/kubernetes-sigs/external-dns/pull/3976, which introduced the multiple-zone handling. It seems like handling of the trailing dot at the end of the zone got broken by the reordering of the `dns.fqdn` calls. Zone...
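For context, the trailing-dot problem comes down to comparing zone and record names in a consistent fully-qualified form. A minimal Go sketch of the idea (not the actual external-dns code; `ensureFQDN` and `findZone` are hypothetical helpers that mirror what the `dns.fqdn` normalization is supposed to guarantee):

```go
package main

import (
	"fmt"
	"strings"
)

// ensureFQDN appends a trailing dot if it is missing, so that zones
// configured with or without the dot compare equal.
func ensureFQDN(name string) string {
	if strings.HasSuffix(name, ".") {
		return name
	}
	return name + "."
}

// findZone returns the longest zone that is a suffix of the record,
// normalizing both sides to FQDN form before comparing. If the record
// is normalized but the zone is not (or vice versa), the suffix match
// silently fails -- which is the kind of breakage a reordering of the
// fqdn calls can introduce.
func findZone(record string, zones []string) string {
	record = ensureFQDN(record)
	best := ""
	for _, z := range zones {
		zf := ensureFQDN(z)
		// Require a label boundary so "notexample.org." never
		// matches zone "example.org.".
		if (record == zf || strings.HasSuffix(record, "."+zf)) && len(zf) > len(best) {
			best = zf
		}
	}
	return best
}

func main() {
	zones := []string{"example.org", "sub.example.org."}
	fmt.Println(findZone("www.sub.example.org", zones)) // prints "sub.example.org."
}
```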
#9594
Maybe related to #44929?
Seems to occur even after the fix for #44929, both on scaling and when creating a new cluster.
And I am on Rancher v2.8.2.
Looking at the created job (for a worker node scale-up):

```
"args": [
  "--driver-download-url=https:///assets/docker-machine-driver-harvester",
  "--driver-hash=a9c2847eff3234df6262973cf611a91c3926f3e558118fcd3f4197172eda3434",
  "--secret-namespace=fleet-default",
  "--secret-name=staging-pool-worker-bbfc2798-d5jsj-machine-state",
  "rm",
  "-y",
  "--update-config",
  "staging-pool-worker-bbfc2798-d5jsj"
]
```

the first thing the driver tries...
I could manually fix it:

1) go to the Harvester embedded Rancher and get the kube config
2) update the kubeconfig in the harvester credential in the cattle-global-data namespace in...
> @bpedersen2 do you have rancher running inside a nested VM or in the same kubernetes cluster of Harvester itself?

No, it is running standalone.
What I observe is that the token in Harvester changes. Rancher is configured to use OIDC, and in the Rancher logs I get

```
Error refreshing token principals, skipping: oauth2:...
```
I re-registered the Harvester cluster using a non-OIDC admin account, and now the connection seems to be stable again. It looks like a token-expiration problem to me.