
Memory requirements

Open vinzent opened this issue 3 years ago • 9 comments

We have deployed Reloader 0.0.114 cluster-wide on an OpenShift 4.8 cluster with about 190 namespaces, 5,900 Secrets, and 3,200 ConfigMaps. The raw size of the exported Secrets and ConfigMaps (`kubectl get secrets,configmaps -A -o yaml`) is 213 MB. Only 15 of 520 Deployments carry Reloader annotations.

Memory usage (container_memory_working_set_bytes metric) on monday:

(screenshot: memory usage graph)

Until 11 o'clock it used roughly 450 MB, then dropped to 368 MB, spiked above 1 GB (the configured container limit is 1 GB), and after stabilizing it has been sitting at 560 MB since Monday.

https://github.com/stakater/Reloader/issues/174 suggests 200 MB as a common setting, since Reloader is not supposed to be a resource hog.

What is consuming memory in Reloader? 560 MB is about 2.6 times the raw size of all the Secrets and ConfigMaps. How should I calculate the memory requirements?

Fun fact: CPU usage always stays below 0.1 cores (~0.02).

vinzent avatar Jun 22 '22 12:06 vinzent

^ Similar concern: we also have a larger OCP cluster, and the memory consumption is about 5 times the size of the Secrets/ConfigMaps.

  • 600 namespaces
  • 20k secrets
  • 15k configmaps

I tried to limit the scope by setting a namespaceSelector, but I suspect Reloader just ignores the selector when processing the data, so it still consumes the memory. The total raw size of the Secrets and ConfigMaps is 800 MB, and the controller uses up to 4.75x that during startup. After a few minutes it drops by about 1.5 GB and settles at 2.2 GB.

(screenshot: memory usage graph)

ctml91 avatar Mar 24 '23 15:03 ctml91

@ctml91 how does the CPU consumption look during this timeframe?

rasheedamir avatar Mar 24 '23 15:03 rasheedamir

Setting a namespaceSelector still gives Reloader access to the whole cluster. When Reloader runs at cluster level, it receives events from the entire cluster, which in turn consumes more resources. At the moment there is no mechanism to restrict its access to certain namespaces so that it stops watching cluster-level resources. In this case it is recommended to use the namespace scope and run one Reloader per namespace instead of a single Reloader for the whole cluster.
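For anyone hitting this, a per-namespace deployment can be sketched with the Helm chart roughly like below. The `watchGlobally` key is an assumption based on the chart's documented values; verify the exact key name against your chart version before relying on it.

```yaml
# values.yaml for the stakater/reloader Helm chart (key name is an
# assumption; check your chart version's values for the exact spelling)
reloader:
  watchGlobally: false   # watch only the release namespace, not the whole cluster
```

Deployed this way, Reloader's informer caches only hold the Secrets and ConfigMaps of its own namespace, which bounds memory by the size of that namespace rather than the whole cluster.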

faizanahmad055 avatar Mar 24 '23 15:03 faizanahmad055

@faizanahmad055 perhaps there is some code optimization we can do? As you can see, it consumes too much on startup and then normalizes afterwards.

rasheedamir avatar Mar 24 '23 18:03 rasheedamir

When Reloader starts, it receives a large number of events from the Kubernetes API and tries to reconcile them, hence the initial load. The number of events is directly proportional to the Secrets/ConfigMaps Reloader has access to. @ctml91 Can you share your Reloader config? Are you using reloadOnCreate=true?
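For context, reloadOnCreate makes Reloader trigger rollouts when watched resources are created, not just updated, which multiplies the work done during the initial sync. It is typically enabled via a container flag, and workloads opt in via annotations like the one below (the `auto` annotation is documented in Reloader's README; verify the flag name for your version):

```yaml
# Reloader container args (flag name per Reloader docs; verify for your version)
args:
  - "--reload-on-create=true"
---
# A Deployment opting in to automatic reloads
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
```

With reloadOnCreate enabled on a large cluster, every pre-existing Secret/ConfigMap seen during the startup listing can look like a "create" event, which matches the startup spikes reported above.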

faizanahmad055 avatar Mar 25 '23 11:03 faizanahmad055

Maybe we can batch the namespaces on startup? Instead of fetching everything at once, we fetch one batch at a time. Reloader might then take a minute or two to become fully ready, but it won't need to consume loads of memory on startup. @faizanahmad055 would that be possible?
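The batching idea above can be sketched as follows. This is a hypothetical helper, not Reloader's actual code: it chunks the namespace list so the controller could start informers for one group at a time, letting each group's caches sync (and initial events drain) before moving on.

```go
package main

import "fmt"

// batch splits the full namespace list into fixed-size chunks so a
// controller could warm its caches incrementally instead of listing
// everything at once. Hypothetical helper for illustration only.
func batch(namespaces []string, size int) [][]string {
	var out [][]string
	for start := 0; start < len(namespaces); start += size {
		end := start + size
		if end > len(namespaces) {
			end = len(namespaces)
		}
		out = append(out, namespaces[start:end])
	}
	return out
}

func main() {
	ns := []string{"ns-1", "ns-2", "ns-3", "ns-4", "ns-5"}
	for i, group := range batch(ns, 2) {
		fmt.Printf("batch %d: %v\n", i, group)
		// here the controller would start informers for `group`,
		// wait for their caches to sync, then continue to the next batch
	}
}
```

Note this only smooths the startup spike; steady-state memory is still bounded by the total size of the cached objects, so the peak moves but the floor does not.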

rasheedamir avatar Mar 26 '23 11:03 rasheedamir

@rasheedamir (screenshot: memory usage graph)

After running for some time, it consumed up to 8 GB (the memory limit) before it was OOMKilled (yellow line). When I delete the pod to start a fresh copy, it uses up to 4 GB during initial startup and then drops.

With CPU included: (screenshot: memory and CPU usage graphs)

  • 600 namespaces
  • 20k secrets
  • 15k configmaps

So 8 GB is not enough for the cluster above, I guess. I'll have to try 16 GB, and I may need to increase it further from there. I will report back.

ctml91 avatar Mar 29 '23 19:03 ctml91