Pawel Kopiczko
@gonzalob https://github.com/keybase/keybase-issues/issues/4241#issuecomment-1948128383
> We can't do the change in CABPK because we want the same bootstrap process to be usable on distros which may use Ignition but don't use containerd. I see....
@lukasmrtvy I won't have time to work on that one in the near future, I'm afraid. I may revisit it at some point, but there are no plans right now.
I should have mentioned that this is an HA cluster.
I managed to capture a 300s pprof while the issue was happening (it was a pretty mild case):
- [pprof-cpu-cilium-bzfnk-during.zip](https://github.com/cilium/cilium/files/14742807/pprof-cpu-cilium-bzfnk-during.zip)
- [pprof-cpu-cilium-bzfnk-after.zip](https://github.com/cilium/cilium/files/14742795/pprof-cpu-cilium-bzfnk-after.zip)

During the incident (300s): [pprof flamegraph]

After the incident (300s): [pprof flamegraph]
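For reference, a minimal sketch of how such a 300s CPU profile can be grabbed from a running agent, assuming pprof is enabled and listening on the default localhost:6060 inside the pod, and that curl is available in the cilium-agent image (both are assumptions to verify against your setup):

```sh
# Capture a 300s CPU profile from the affected cilium-agent pod.
POD=cilium-bzfnk   # hypothetical pod name; replace with the affected pod
kubectl -n kube-system exec "$POD" -c cilium-agent -- \
  curl -s -o /tmp/pprof-cpu-during.out \
  "http://localhost:6060/debug/pprof/profile?seconds=300"

# Copy the profile off the pod; inspect it locally with `go tool pprof`.
kubectl -n kube-system cp -c cilium-agent \
  "$POD":/tmp/pprof-cpu-during.out "./pprof-cpu-$POD-during.out"
```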
> Do you have any CPU limits applied on the Cilium daemonset that could be causing K8s to pin Cilium to one CPU? Only requests, as in your chart (and...
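A quick way to confirm that only requests (and no limits) are set on the agent container, assuming the default DaemonSet name `cilium` in `kube-system` and the container name `cilium-agent`:

```sh
# Print the resources block of the cilium-agent container in the DaemonSet.
kubectl -n kube-system get ds cilium \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="cilium-agent")].resources}'
```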
> I would expect these messages and the spike of CPU to occur at Cilium startup time. Is it persistent? It happens randomly. And it's the other way around, restarting...
Some logs from this pod:
```
cilium-rrl5g cilium-agent 2024-03-27T11:57:04.793372471Z level=info msg="hubble events queue is processing messages again: 2786 messages were lost" subsys=hubble
cilium-rrl5g cilium-agent 2024-03-27T11:57:06.590436158Z level=info msg="hubble events queue is...
```
Unfortunately I couldn't capture a pprof because pprof isn't enabled on this cluster, and restarting the pod fixes the issue, so I can't enable it without making the problem disappear.
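For completeness, a sketch of how pprof could be turned on via Helm; the value names are assumed from the Cilium chart and should be checked against the installed chart version. As noted above, the rollout restarts the agent pods, which is exactly what clears the issue under investigation:

```sh
# Enable the agent pprof endpoint (value name assumed; verify with `helm show values cilium/cilium`).
helm upgrade cilium cilium/cilium -n kube-system \
  --reuse-values \
  --set pprof.enabled=true

# Caveat: this rolls the cilium DaemonSet, restarting the agent pods.
```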
Yes, this is 1.13; I noted that in the description, but I'll copy the information below for convenience. In my synthetic tests I noticed some improvements in terms of performance...