Sporadic 502 responses
What happened:
Elastic Heartbeat running inside the cluster performs HEAD requests to https://app.corp.com/context/ every 15 seconds. The application runs in the same cluster. Roughly once every 2-3 hours one of these requests gets a 502 response.
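For reference, the probe is an Elastic Heartbeat HTTP monitor roughly like the following sketch; the monitor id and status check are assumptions, only the URL, method, and 15-second schedule come from the description above:

heartbeat.monitors:
  - type: http
    id: app-context-head          # assumed monitor id
    schedule: '@every 15s'        # matches the 15-second interval described above
    urls: ["https://app.corp.com/context/"]
    check.request.method: HEAD    # HEAD probe, as described
    check.response.status: [200]  # assumed expected status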
What you expected to happen:
A successful response to every probe.
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.): 1.9.4
Kubernetes version (use kubectl version): 1.26.6
Environment:
- Cloud provider or hardware configuration: AKS
- OS (e.g. from /etc/os-release): AKSCBLMariner
- Kernel (e.g. uname -a): V1-202309.06.0
- Install tools:
- Basic cluster related info:
Custom CoreDNS configuration:
corp.com:53 {
    errors
    cache 30
    forward . %ONPREM_DNS1% %ONPREM_DNS2%
}
app.corp.com resolves to the Ingress NGINX controller's external IP.
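For completeness, on AKS a custom server block like the one above is normally applied through the coredns-custom ConfigMap in kube-system; a minimal sketch, where the data key name is an assumption:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  # the key only needs to end in ".server"; "corp.server" is an assumed name
  corp.server: |
    corp.com:53 {
        errors
        cache 30
        forward . %ONPREM_DNS1% %ONPREM_DNS2%
    }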
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "JSESSIONID"
spec:
  tls:
    - hosts:
        - app.corp.com
      secretName: server-tls
  rules:
    - host: app.corp.com
      http:
        paths:
          - backend:
              service:
                name: app
                port:
                  number: 443
            path: /
            pathType: Prefix
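Side note: with controller 1.9.x the ingress class is usually selected via spec.ingressClassName rather than the deprecated kubernetes.io/ingress.class annotation; a minimal sketch of that variant, with behaviour otherwise unchanged:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
spec:
  ingressClassName: nginx
  # tls and rules as in the manifest above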
- How was the ingress-nginx-controller installed: Helm chart 4.8.3
- Current State of the controller:
- Current state of ingress object, if applicable:
- Others:
[error] 30#30: *19211632 SSL_do_handshake() failed (SSL: error:0A00010B:SSL routines::wrong version number) while SSL handshaking to upstream, client: 127.0.0.1, server: app.corp.com, request: "HEAD /context/ HTTP/1.1", upstream: "https://0.0.0.1:80/context/", host: "app.corp.com"
[warn] 30#30: *19211632 upstream server temporarily disabled while SSL handshaking to upstream, client: 127.0.0.1, server: app.corp.com, request: "HEAD /context/ HTTP/1.1", upstream: "https://0.0.0.1:80/context/", host: "app.corp.com"
[warn] 30#30: *19211632 [lua] sticky.lua:193: balance(): failed to get new upstream using upstream nil while connecting to upstream, client: 127.0.0.1, server: app.corp.com, request: "HEAD /context/ HTTP/1.1", upstream: "https://0.0.0.1:80/context/", host: "app.corp.com"
[warn] 30#30: *19211632 [lua] balancer.lua:335: balance(): no peer was returned, balancer: sticky_balanced while connecting to upstream, client: 127.0.0.1, server: app.corp.com, request: "HEAD /context/ HTTP/1.1", upstream: "https://0.0.0.1:80/context/", host: "app.corp.com"
[error] 30#30: *19211632 no live upstreams while connecting to upstream, client: 127.0.0.1, server: app.corp.com, request: "HEAD /context/ HTTP/1.1", upstream: "https://upstream_balancer/context/", host: "app.corp.com"
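For context, the log lines show the controller dialing the upstream as "https://0.0.0.1:80/context/", while the Ingress above targets the Service app on port 443 with backend-protocol "HTTPS". A minimal sketch of the Service shape that Ingress assumes (selector, port name, and targetPort are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app            # assumed pod selector
  ports:
    - name: https
      port: 443         # port referenced by the Ingress above
      targetPort: 8443  # assumed container TLS port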
How to reproduce this issue:
Anything else we need to know: