ingress-nginx
Specifying ingress resource defaultBackend no longer works and endpoints return 404
What happened: I use a defaultBackend specified on an ingress resource as a "fallback" option for the LB endpoint (and anything CNAME'd to it), which is just a service that redirects a user elsewhere. After upgrading to the helm chart v4.2.5, the default backend specified in the ingress resource no longer works, and instead I see an nginx 404 page
What you expected to happen: Continue forwarding traffic to the default backend if no path is matched
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version):
NGINX Ingress controller
Release: v1.3.0
Build: 2b7b74854d90ad9b4b96a5011b9e8b67d20bfb8f
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.10
Kubernetes version (use kubectl version):
Server Version: v1.22.11-eks-18ef993
Environment:
- Cloud provider or hardware configuration: AWS EKS
- OS (e.g. from /etc/os-release): -
- Kernel (e.g. uname -a): -
- Install tools: -
- How/where was the cluster created (kubeadm/kops/minikube/kind etc.): -
- Basic cluster related info (kubectl version, kubectl get nodes -o wide): -
How was the ingress-nginx-controller installed:
- Output of helm ls -A | grep -i ingress:
  ingress-nginx ingress-nginx 42 2022-09-07 05:53:12.599949272 +0000 UTC deployed ingress-nginx-4.2.4 1.3.1
- Output of helm -n <ingresscontrollernamespace> get values <helmreleasename>:
controller:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app.kubernetes.io/name
              operator: In
              values:
              - ingress-nginx
            - key: app.kubernetes.io/instance
              operator: In
              values:
              - ingress-nginx
            - key: app.kubernetes.io/component
              operator: In
              values:
              - controller
          topologyKey: kubernetes.io/hostname
        weight: 100
  autoscaling:
    behavior:
      scaleDown:
        policies:
        - periodSeconds: 180
          type: Pods
          value: 1
        stabilizationWindowSeconds: 300
      scaleUp:
        policies:
        - periodSeconds: 60
          type: Pods
          value: 2
        stabilizationWindowSeconds: 300
    enabled: true
    maxReplicas: 15
    minReplicas: 1
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 70
  config:
    enable-modsecurity: "true"
    enable-owasp-modsecurity-crs: "true"
    ssl-redirect: "true"
  extraArgs:
    default-ssl-certificate: main-namespace/app-tls # redacted real values, but this does exist
  metrics:
    enabled: true
    prometheusRule:
      enabled: true
      rules:
      - alert: NGINXConfigFailed
        annotations:
          description: bad ingress config - nginx config test failed
          summary: uninstall the latest ingress changes to allow config reloads to resume
        expr: count(nginx_ingress_controller_config_last_reload_successful == 0) > 0
        for: 1s
        labels:
          severity: critical
      - alert: NGINXCertificateExpiry
        annotations:
          description: ssl certificate(s) will expire in less than a week
          summary: renew expiring certificates to avoid downtime
        expr: (avg(nginx_ingress_controller_ssl_expire_time_seconds) by (host) - time()) < 604800
        for: 1s
        labels:
          severity: critical
      - alert: NGINXTooMany500s
        annotations:
          description: Too many 5XXs
          summary: More than 5% of all requests returned 5XX, this requires your attention
        expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"5.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
        for: 1m
        labels:
          severity: warning
    service:
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
  resources:
    requests:
      cpu: 100m
      memory: 512Mi
@TomKeyte: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@TomKeyte something seems off up front. Your post shows controller version v1.3.0, but chart version 4.2.5 ships controller v1.3.1.
/remove-kind bug
Ah, it's our problem. I see you typed chart version 4.2.5, but the helm ls -A output you posted shows you are running chart v4.2.4.
Chart version 4.2.4 is buggy. Please upgrade to chart version 4.2.5 and update status. You need to run helm repo update on your helm client or whatever automation is installing the chart.
Apologies, the bug occurs on chart version 4.2.5. I downgraded to 4.2.4 to work around it (the issue disappeared after downgrading).
Odd. Need to check where it originates from.
Can you please reproduce this on minikube/kind? If the bug shows up on a minikube/kind cluster, then kindly write a step-by-step procedure that someone can copy/paste into their own minikube/kind cluster.
I will check if there is any PR directly related to defaultBackend after v1.3.0.
Could be related to https://github.com/kubernetes/ingress-nginx/pull/8825/
cc @harry1064 @rikatz , is it related to https://github.com/kubernetes/ingress-nginx/pull/8825 ?
@harry1064 gentle ping to ask if you have time to look at this
As a workaround for this, using the arg --default-backend-service still seems to work.
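A minimal sketch of wiring that workaround into the chart values, assuming a hypothetical fallback Service named error-svc in some-namespace:

```yaml
controller:
  extraArgs:
    # hypothetical namespace/name -- point this at your own fallback Service
    default-backend-service: some-namespace/error-svc
```

Note this sets a controller-wide fallback, so it applies to every unmatched request handled by the controller, not just one Ingress.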
If that helps, the defaultBackend on an Ingress with no rules works as expected; that is, all requests are forwarded to the error-svc service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-ingress
  namespace: some-namespace
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: error-svc
      port:
        name: svc-port
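For contrast, a sketch of the failing shape reported here: the same defaultBackend combined with rules on one Ingress (the host app.example.com and the app-svc Service are hypothetical names). On the affected chart version, requests matching no rule hit the controller's built-in 404 instead of error-svc:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-ingress
  namespace: some-namespace
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: error-svc  # expected fallback for unmatched requests
      port:
        name: svc-port
  rules:
  - host: app.example.com  # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-svc  # hypothetical app Service
            port:
              name: svc-port
```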
So... it has been nearly a year and the Ingress spec is still unsatisfied.
Also, this is a regression as it works in 4.2.3 (at least).
/triage accepted /priority important-backlog
@longwuyuan: The label(s) priority/important-backlog cannot be applied, because the repository doesn't have them.
In response to this:
/triage accepted /priority important-backlog
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/priority backlog
/kind bug