application-gateway-kubernetes-ingress
OOM Killed ingress-appgw-deployment
Describe the bug
We are experiencing OOMKilled restarts of the ingress-appgw-deployment pod every time we restart the Azure Kubernetes cluster.
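A memory-related kill can be confirmed from the container's last termination state. A minimal check, assuming the AKS add-on defaults (namespace kube-system, label app=ingress-appgw; adjust for a Helm-based install):

```sh
# List the AGIC pods; a climbing RESTARTS count after a cluster restart
# is the symptom described above.
kubectl get pods -n kube-system -l app=ingress-appgw

# "Last State: Terminated / Reason: OOMKilled" confirms the container was
# killed for exceeding its memory limit.
kubectl describe pod -n kube-system -l app=ingress-appgw | grep -A 4 'Last State'
```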
To Reproduce
Steps to reproduce the behavior:

1. Stop the Application Gateway.
2. Stop the Kubernetes cluster.
3. Start both in parallel (see the sketch below).

System node pool (2 nodes): Standard F8s v2 (8 vCPUs, 16 GiB memory), node image AKSUbuntu-2204gen2containerd-202404.16.0.
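The stop/start sequence looks roughly like this with the Azure CLI (a sketch; the resource group and resource names are placeholders, and `--no-wait` is what lets the two start operations run in parallel):

```sh
# Stop the Application Gateway and the AKS cluster.
az network application-gateway stop --resource-group myRG --name myAppGw
az aks stop --resource-group myRG --name myCluster

# Start both without waiting for either operation to finish.
az network application-gateway start --resource-group myRG --name myAppGw --no-wait
az aks start --resource-group myRG --name myCluster --no-wait
```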
Ingress Controller details

- Output of `kubectl describe pod <ingress controller>`. The pod name can be obtained by running `helm list`.

  Normal  Pulled  2m22s (x6 over 7m24s)  kubelet  Container image "mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:1.7.4" already present on machine
- Output of `kubectl logs <ingress controller>`:

  I0505 09:39:43.639902 1 context.go:171] k8s context run started
  I0505 09:39:43.639931 1 context.go:238] Waiting for initial cache sync
  I0505 09:39:43.640042 1 reflector.go:219] Starting reflector *v1.Secret (30s) from pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:167
  I0505 09:39:43.640062 1 reflector.go:255] Listing and watching *v1.Secret from pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:167
  I0505 09:39:43.640089 1 reflector.go:219] Starting reflector *v1.Pod (30s) from pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:167
  I0505 09:39:43.640103 1 reflector.go:255] Listing and watching *v1.Pod from pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:167
  I0505 09:39:43.640044 1 reflector.go:219] Starting reflector *v1.Service (30s) from pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:167
  I0505 09:39:43.640144 1 reflector.go:219] Starting reflector *v1beta1.AzureApplicationGatewayRewrite (30s) from pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:167
  I0505 09:39:43.640163 1 reflector.go:255] Listing and watching *v1beta1.AzureApplicationGatewayRewrite from pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:167
  I0505 09:39:43.640180 1 reflector.go:255] Listing and watching *v1.Service from pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:167
  I0505 09:39:43.640244 1 reflector.go:219] Starting reflector *v1.Ingress (30s) from pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:167
  I0505 09:39:43.640264 1 reflector.go:255] Listing and watching *v1.Ingress from pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:167
  I0505 09:39:43.640322 1 reflector.go:219] Starting reflector *v1.IngressClass (30s) from pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:167
  I0505 09:39:43.640336 1 reflector.go:255] Listing and watching *v1.IngressClass from pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:167
  I0505 09:39:43.640044 1 reflector.go:219] Starting reflector *v1.Endpoints (30s) from pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:167
  I0505 09:39:43.640351 1 reflector.go:255] Listing and watching *v1.Endpoints from pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:167

- Any Azure support tickets associated with this issue: 2405050050000002
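Because the pod is crash-looping, logs like the above come from a freshly started container; the instance that was actually OOM-killed can be inspected with `--previous`. A minimal sketch, assuming the AKS add-on's deployment name and namespace:

```sh
# Logs of the currently running AGIC container.
kubectl logs -n kube-system deploy/ingress-appgw-deployment

# Logs of the previous (OOM-killed) container of a specific pod;
# <pod name> is a placeholder.
kubectl logs -n kube-system <pod name> --previous
```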
In addition, when I temporarily change the memory limit to 1Gi (from 600Mi), it works, but it is again rolled back to the initial state. In the logs of the pod created with the 1Gi limit, I see the following error before it terminates:
  httpserver.go:59] Failed to start API serverhttp: Server closed
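The `http: Server closed` fragment is the standard Go net/http shutdown error, so it is most likely a symptom of the pod being torn down rather than the cause. The rollback itself is consistent with AKS reconciling managed add-ons: manual edits to add-on resources such as ingress-appgw-deployment are reverted, so a limit bump only survives until the next reconciliation. A minimal sketch of the temporary change, for testing only:

```sh
# Temporarily raise the AGIC memory limit from 600Mi to 1Gi.
# On the AKS-managed add-on the reconciler restores the original spec,
# which matches the rollback described above.
kubectl set resources deployment ingress-appgw-deployment \
  -n kube-system --limits=memory=1Gi
```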
Did you ever get this sorted? I'm running into the same problem and haven't found a way to provide configuration for the ingress-appgw add-on.