Parca server pod evicted on Kubernetes
I have deployed Parca on AKS and the pod keeps getting evicted after exceeding its memory request, which looks like a possible memory leak. These are the pod events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26m default-scheduler Successfully assigned parca/parca-85bd8c4cd7-tj79g to aks-agentpool-15819346-vmss000000
Normal Pulled 26m kubelet Container image "ghcr.io/parca-dev/parca:v0.14.0" already present on machine
Normal Created 26m kubelet Created container parca
Normal Started 26m kubelet Started container parca
Warning Evicted 20m kubelet The node was low on resource: memory. Container parca was using 3090924Ki, which exceeds its request of 128Mi.
Normal Killing 20m kubelet Stopping container parca
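For what it's worth, a Kubernetes-side mitigation while the memory usage is being worked on is to give the container a memory request (and limit) that reflects what Parca actually consumes; with a 128Mi request and roughly 3Gi of usage, the kubelet ranks this pod first for node-pressure eviction. A minimal sketch of the container spec, with illustrative values you would size to your own ingest volume:

```yaml
# Sketch of the parca container's resources, not the full Deployment.
# The memory values are illustrative; size them to your actual ingest volume.
containers:
  - name: parca
    image: ghcr.io/parca-dev/parca:v0.14.0
    resources:
      requests:
        memory: "2Gi"   # keep the request above real usage so node-pressure eviction doesn't target this pod first
      limits:
        memory: "4Gi"   # hard cap: exceeding this OOM-kills the container rather than evicting the pod
```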
This is probably to be expected when running in-memory only. Over time, the meta store keeps growing and never shrinks. On top of that comes the amount of profiling data being ingested, so it is quite possible the pod will eventually be evicted.
We've already made some big improvements to Parca's memory usage in #2220. That being said, the same principles as above still apply. Ultimately, we want to run Parca with persistence, so that the data (especially the meta store) is no longer held entirely in memory.
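Until persistence is available, a possible stopgap is to bound how much data Parca keeps in active memory and give the pod headroom above that bound. The flag below is an assumption on my part; check `parca --help` for your release to confirm it exists, and note that it bounds the profile storage, not the meta store growth described above:

```yaml
# Hypothetical excerpt of the parca container args; --storage-active-memory
# is assumed to be available in this release (verify with `parca --help`).
containers:
  - name: parca
    image: ghcr.io/parca-dev/parca:v0.14.0
    args:
      - --config-path=/etc/parca/parca.yaml
      - --storage-active-memory=536870912   # ~512 MiB of active profile storage
```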
@metalmatze I think you meant to link https://github.com/parca-dev/parca/pull/2202