[BUG] OOM on ip-control-loop
Describe the bug
With k8s 1.22 I'm seeing more and more errors like:

Memory cgroup out of memory: Killed process 643039 (ip-control-loop) total-vm:742792kB, anon-rss:43452kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:240kB oom_score_adj:-997

This error does not appear all the time, and I'm also not sure about its impact on the pods. Is there something I can check to understand where it comes from?
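For reference, one thing that can be checked is whether the whereabouts pod itself is the one being OOM-killed; a minimal sketch, assuming the default install in the kube-system namespace with the app=whereabouts label (adjust both to your deployment):

```bash
# Show the last termination reason per whereabouts pod; "OOMKilled" would confirm the cgroup kill
kubectl get pods -n kube-system -l app=whereabouts \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[0].lastState.terminated.reason}{"\n"}{end}'

# A rising RESTARTS count on the same pods is another hint of repeated OOM kills
kubectl get pods -n kube-system -l app=whereabouts -o wide
```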
Environment:
Limits:
cpu: 100m
memory: 50Mi
Requests:
cpu: 100m
memory: 50Mi
- Whereabouts version: Image: ghcr.io/k8snetworkplumbingwg/whereabouts:latest-amd64 (can I get the specific version somehow?)
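One way to pin down which build is actually running behind the :latest-amd64 tag is to read the resolved image digest from the pod status (namespace and label selector are assumptions, as above):

```bash
# Prints the image digest (ghcr.io/...@sha256:...) that the kubelet actually pulled
kubectl get pods -n kube-system -l app=whereabouts \
  -o jsonpath='{.items[*].status.containerStatuses[*].imageID}'
```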
This is from one of the nodes where I got the OOM. What can increase the memory usage like that?
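If metrics-server is installed, the live memory usage of the whereabouts containers can be compared against the 50Mi limit (again assuming the kube-system namespace and the app=whereabouts label):

```bash
# Per-container CPU/memory usage; values approaching 50Mi make OOM kills likely
kubectl top pod -n kube-system -l app=whereabouts --containers
```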
I'm unsure, but we're still using the requests/limits from before whereabouts had a dedicated controller. Since the whereabouts pods now run a controller that reconciles the IP addresses based on a cron expression, and also makes sure the IPs are released when pods are deleted, it is bound to require more memory.
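As a sketch of what raising the limit could look like (the daemonset name, namespace, container name, and the 256Mi value are assumptions here, not a recommended sizing):

```bash
# Strategic merge patch: only the memory limit of the named container is changed
kubectl -n kube-system patch daemonset whereabouts --type strategic \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"whereabouts","resources":{"limits":{"memory":"256Mi"}}}]}}}}'
```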
It's possible that the spike is from the reconciliation being scheduled: https://github.com/k8snetworkplumbingwg/whereabouts/pull/238