Jānis
```yaml
resources:
  limits:
    memory: 256Mi
```

Looks like the pod now requires more memory. 256Mi solved the issue in our case; CPU was not the issue.
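For context, a memory limit like the one above sits under the container spec of the Deployment. A minimal sketch, assuming a hypothetical `keel` Deployment (all names and the image tag are illustrative, not from this thread):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keel
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keel
  template:
    metadata:
      labels:
        app: keel
    spec:
      containers:
        - name: keel
          image: keelhq/keel:latest   # illustrative tag
          resources:
            requests:
              memory: 128Mi           # assumption: a lower request than the limit
            limits:
              memory: 256Mi           # the limit that resolved the OOM here
```

Note that when only a limit is set, Kubernetes defaults the request to the same value; an explicit lower request keeps scheduling more flexible.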
> There is no way this should require that much memory. I'd still consider it a bug if it actually required that much.

Yes, 256Mi was enough in our case....
I have found the issue with keel versions >= 0.18.0 (CPU and memory leak). The root cause of the problem is:

```yaml
helmProvider:
  version: "v3"
```
Hello, I don't get this line in my logs 😞 The same issue with tag: 0.0.4. Here is my deployment config:

```yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: gw-opnsense-exporter...
```
For now I will just use a 5-second timeout on the liveness and readiness checks (as a poor workaround). Note the field is `timeoutSeconds`, a sibling of `httpGet`:

```yaml
livenessProbe:
  httpGet:
    path: /metrics
    port: metrics-http
  timeoutSeconds: 5
readinessProbe:
  httpGet:
    path: /metrics
    port: metrics-http...
```
> Sorry for pointing to your garden 😀. It's not related to k8s or your environment.

I found the problem. It only appears when more than one disable flag is...
> This shouldn't be a problem anymore since the v0.0.5 release

Thanks, it works now!
Hello! I have the same problem upgrading from the 1.3.2 to the 1.4.0 helm chart with terraform. I have never had any issues before.

```
Error: rendered manifests contain a resource that already...
```
> I had the same issue upgrading from nginx-ingress 1.3.2 to the 1.4.1 chart version.

In my case, the `nginx-ingress-leader-election` lease pointed to a deleted controller pod. Manually deleting the...
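A stale leader-election lease like the one above can usually be inspected and removed with kubectl. A minimal sketch, assuming the controller runs in an `ingress-nginx` namespace and the lease is named `nginx-ingress-leader-election` (adjust both to your install):

```shell
# Inspect the lease to confirm its holder points at a pod that no longer exists
kubectl -n ingress-nginx get lease nginx-ingress-leader-election -o yaml

# Delete the stale lease; a running controller will recreate it on its next
# leader-election cycle
kubectl -n ingress-nginx delete lease nginx-ingress-leader-election
```

After the lease is gone, re-running the helm/terraform upgrade should no longer hit the "resource already exists" error.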
Any updates on this?