Christian Schlotter
The fixes are merged; let's check in a week or so whether the error occurs again.
The merged fix [did not help](https://storage.googleapis.com/k8s-triage/index.html?text=No%20Control%20Plane%20machines%20came%20into%20existence.&job=.*-cluster-api-.*&xjob=.*-provider-.*).
For reference, I did hit the same issue (CAPD load balancer config not active) as described in [this comment](https://github.com/kubernetes-sigs/cluster-api/issues/10356#issuecomment-2061588870) on a `0.4 => 1.6 => current` upgrade test but with...
Query to find the latest [failures](https://storage.googleapis.com/k8s-triage/index.html?text=%5C%2B*DockerMachinePool%2F&job=.*-cluster-api-.*&xjob=.*-provider-.*).
This still seems to happen (although the message changed): https://storage.googleapis.com/k8s-triage/index.html?text=Resource%20versions%20didn%27t%20stay%20stable&job=.*-cluster-api-.*&test=When%20upgrading%20a%20workload%20cluster%20using%20ClusterClass%20with%20RuntimeSDK&xjob=.*-provider-.*%7C.*-cluster-api-operator-.*
Sounds good 🎉 xref:

```
https://storage.googleapis.com/k8s-triage/index.html?text=Resource%20version&job=.*-cluster-api-.*main.*&test=.*RuntimeSDK.*&xjob=.*-provider-.*%7C.*-cluster-api-operator-.*
```

The RuntimeSDK test still has some other flakes, though. But maybe they are already tracked separately.
/assign @sbueringer
> Thanks for trying to tackle this! The issue with this PR is it brings us back to a situation where we could be rate-limited by github. Using goproxy was...
👍 I also like the change, last nit. I'm not a big fan of having that env variable deep down in the code, but passing it through everywhere also seems...
Also please document the new env var in `docs/book/src/clusterctl/overview.md`