André Bauer
In our largest cluster we have 1051 pods, which is also the one with the highest kyverno load. Before kyverno and the other operators started crashlooping, only around ~700Mi were...
Seems setting --clientRateLimitQPS=20 & --clientRateLimitBurst=50 helped mitigate the problem. Background scanning is enabled again. Also set failurePolicy to "ignore" in all rules, to be on the safe side should the...
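For anyone else hitting this, a rough sketch of both changes, assuming Kyverno runs as the standard Deployment in the kyverno namespace (only the relevant args are shown; the policy below is just an illustrative example, not our exact one):
```
# Relevant fragment of the Kyverno Deployment: raise the client-side
# rate limits via container args (other args omitted).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kyverno
  namespace: kyverno
spec:
  template:
    spec:
      containers:
        - name: kyverno
          args:
            - --clientRateLimitQPS=20
            - --clientRateLimitBurst=50
---
# Illustrative ClusterPolicy with failurePolicy set to Ignore, so the
# admission webhook fails open if Kyverno itself is unavailable.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  failurePolicy: Ignore
  validationFailureAction: audit
  background: true
  rules:
    - name: privileged-containers
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              - =(securityContext):
                  =(privileged): "false"
```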
Yes, I can :) But I've deleted most of the existing CRs in our dev cluster, as suggested in: https://kubernetes.slack.com/archives/CLGR9BJU9/p1665666355699159 Hope it helps anyway :)
Running v1.8.0-54-g392d2bcba now on dev with ~700 pods. Still 2.4GB RAM usage for the kyverno leader pod :O As I understood it, something around 500MB should be normal?
Unfortunately not :( I've tried to replicate my setup in a kind cluster which runs about 20 pods. Kyverno needs about 80MB there :shrug: If you want to try it...
I'll try to add some fake workloads tomorrow, to see if it can be replicated. In our clusters the RAM usage rises slowly over the days. So it looks a bit...
Sorry, wording might be misleading. I meant "as it already works for".
Updated to 1.8.0 now. Still have the same problem, but the metric changed:
```
kyverno_policy_rule_info_total{policy_background_mode="true",policy_name="disallow-privileged-containers",policy_namespace="-",policy_type="cluster",policy_validation_mode="audit",rule_name="privileged-containers",rule_type="validate",service_name="kyverno-svc-metrics",service_namespace="kyverno",status_ready="false"} 1
kyverno_policy_rule_info_total{policy_background_mode="true",policy_name="disallow-privileged-containers",policy_namespace="-",policy_type="cluster",policy_validation_mode="audit",rule_name="privileged-containers",rule_type="validate",service_name="kyverno-svc-metrics",service_namespace="kyverno",status_ready="true"} 1
```
Seems to work for me now too :) I've used this query for alerting:
```
sum(kyverno_policy_rule_info_total{status_ready!="true"}) by (policy_name) > 0
```
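If you're on the Prometheus Operator, roughly this kind of rule would wire that query into an alert (resource name, namespace, labels and the 15m window are just placeholders):
```
# Hypothetical PrometheusRule wrapping the query above.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kyverno-policy-readiness
  namespace: monitoring
spec:
  groups:
    - name: kyverno
      rules:
        - alert: KyvernoPolicyRuleNotReady
          expr: sum(kyverno_policy_rule_info_total{status_ready!="true"}) by (policy_name) > 0
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "Kyverno policy {{ $labels.policy_name }} has rules that are not ready"
```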
Yes, the tests in the same directory still work:
```
kyverno apply require_probes.yaml --resource resource.yaml
Applying 1 policy rule to 5 resources...

policy require-pod-probes -> resource default/Pod/badpod01 failed:
1. validate-livenessProbe-readinessProbe:...
```