Matthew Hembree
IMO, on startup, the app should block until the telemetry question has been answered. Before I can even answer the question, the app apparently spawns `kubectl` and `Python`, querying (all...
> @matthewhembree, Regarding the `~700 processes`, how many contexts do you have?

Hi @mortada-codes. I have 20 contexts. Some have a decently large number of namespaces. The largest has ~130...
> It can take time for the ACM certificate validation to complete (sometimes even hours). Should be taken into consideration.

I've never seen this. I would say that **hours** is...
Additional information for people who stumble upon this issue thread: I use GitHub OIDC via Dex, so I also had to add the proxy env vars to the `argocd-dex-server` deployment. Additionally, I...
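For anyone who needs the concrete shape of that change, here is a sketch of the kind of kustomize strategic-merge patch I mean. The proxy endpoint and `NO_PROXY` entries are placeholders for your own environment; the container name `dex` is what the upstream Argo CD manifests use at the time of writing.

```yaml
# Sketch of a strategic-merge patch for argocd-dex-server.
# The proxy URL and NO_PROXY list below are placeholders, not real values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-dex-server
spec:
  template:
    spec:
      containers:
        - name: dex
          env:
            - name: HTTP_PROXY
              value: http://proxy.example.com:3128
            - name: HTTPS_PROXY
              value: http://proxy.example.com:3128
            - name: NO_PROXY
              value: argocd-repo-server,argocd-redis,.svc,.cluster.local
```

Without something like this, the Dex GitHub connector has no way to reach github.com through the proxy.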
> I am assuming you mean the Cloudflare tunnel deployment and not the operator itself?

No, the operator. This is the snippet from my kustomization.yaml:

```yaml
patches:
  - path: patches/cloudflare-operator-controller-manager-resources.json...
```
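(The actual contents of that patch file were cut off above. For readers: a patch named like that typically just overrides the manager container's resources. An illustrative YAML equivalent, with placeholder values and names following the kubebuilder layout the operator uses, would look roughly like this.)

```yaml
# Illustrative only: the kind of override a *-controller-manager-resources
# patch typically carries. Deployment/namespace/container names follow the
# kubebuilder convention (controller-manager / system / manager); adjust to
# your install, and treat the resource values as placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      containers:
        - name: manager
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 256Mi
```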
I'll get a log capture. Not at the system right now.
Okay. I see the confusion. I referenced the tunnel deployment code in the original post. This is what I meant to reference: https://github.com/adyanth/cloudflare-operator/blob/c38e0cc14dceef41729f8f9852c5e3743d392bff/config/manager/manager.yaml#L51

Pod logs:

```
manager I0531 06:34:20.420863...
```
Well, this is interesting. Two EKS clusters. Different versions. Both AL2.

- 1.24.13: 5.4.241-150.347.amzn2.x86_64 (lower mem)
- 1.23.17: 5.10.178-162.673.amzn2.x86_64 (higher mem)
I wonder if the kube client discovery cache is bloating the memory. I don't have an excessive number of CRDs in either. I cleaned up the 1.24 cluster before the...
I did get an alloc flame graph with the krew flame plugin. GitHub does a static rendering, so the 15-minute one is sort of useless when posted here. 1m: ...