ingress-nginx
Catch-all location answers as another namespace/ingress_name
What happened:
Hello 👋

The catch-all location (`location ~* "^/"`), automatically added by the controller when we use the annotation `use-regex: "true"`, is associated with a backend that does not answer the request itself.

When a request arrives at the catch-all location, it should not be associated with a backend service that does not respond to it. This is problematic because the catch-all statistics are attributed to a backend that does not process these requests. If we make a request to an endpoint that doesn't exist (e.g. `/deadendpoint`), we get a 404, but that same 404 is counted against a backend that has nothing to do with it. We have not identified the criterion ingress-nginx uses to select that backend, and we have not found a way to define it ourselves.
location ~* "^/" {
    set $namespace "test-service";
    set $ingress_name "test-service-default";
    set $service_name "";
    set $service_port "";
    set $location_path "/";
    set $global_rate_limit_exceeding n;

    rewrite_by_lua_block {
        lua_ingress.rewrite({
            force_ssl_redirect = true,
            ssl_redirect = true,
            force_no_ssl_redirect = false,
            preserve_trailing_slash = false,
            use_port_in_redirects = false,
            global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
        })
        balancer.rewrite()
        plugins.run()
    }

    # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
    # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
    # other authentication method such as basic auth or external auth useless - all requests will be allowed.
    #access_by_lua_block {
    #}

    header_filter_by_lua_block {
        lua_ingress.header()
        plugins.run()
    }

    body_filter_by_lua_block {
        plugins.run()
    }

    log_by_lua_block {
        balancer.log()
        monitor.call()
        plugins.run()
    }

    port_in_redirect off;

    set $balancer_ewma_score -1;
    set $proxy_upstream_name "upstream-default-backend";
    set $proxy_host $proxy_upstream_name;
    set $pass_access_scheme $scheme;
    set $pass_server_port $server_port;
    set $best_http_host $http_host;
    set $pass_port $pass_server_port;
    set $proxy_alternative_upstream_name "";

    client_max_body_size 10M;

    proxy_set_header Host $best_http_host;

    # Pass the extracted client certificate to the backend

    # Allow websocket connections
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header X-Request-ID $req_id;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Host $best_http_host;
    proxy_set_header X-Forwarded-Port $pass_port;
    proxy_set_header X-Forwarded-Proto $pass_access_scheme;
    proxy_set_header X-Forwarded-Scheme $pass_access_scheme;
    proxy_set_header X-Scheme $pass_access_scheme;

    # Pass the original X-Forwarded-For
    proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

    # mitigate HTTPoxy Vulnerability
    # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
    proxy_set_header Proxy "";

    # Custom headers to proxied server
    proxy_connect_timeout 5s;
    proxy_send_timeout 20s;
    proxy_read_timeout 25s;

    proxy_buffering off;
    proxy_buffer_size 4k;
    proxy_buffers 4 4k;
    proxy_max_temp_file_size 1024m;
    proxy_request_buffering on;
    proxy_http_version 1.1;

    proxy_cookie_domain off;
    proxy_cookie_path off;

    # In case of errors try the next upstream server before returning an error
    proxy_next_upstream error timeout;
    proxy_next_upstream_timeout 0;
    proxy_next_upstream_tries 3;

    proxy_pass http://upstream_balancer;
    proxy_redirect off;
}
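To illustrate why every unmatched request lands in this block: nginx evaluates regex locations in the order they appear in the server block, and `^/` matches every URI, so anything that misses the more specific locations falls through to the catch-all. A minimal Go sketch of that in-order matching (a standalone illustration, not controller code):

```go
package main

import (
	"fmt"
	"regexp"
)

// matchLocation returns the first regex location whose pattern matches
// the URI, mimicking nginx's in-order evaluation of `location ~* ...`.
func matchLocation(uri string, locations []string) string {
	for _, loc := range locations {
		// (?i) approximates the case-insensitive `~*` modifier.
		if regexp.MustCompile("(?i)" + loc).MatchString(uri) {
			return loc
		}
	}
	return ""
}

func main() {
	// Order as generated in the server block: specific paths first,
	// then the controller-added catch-all.
	locations := []string{`^/foo`, `^/test1`, `^/test2`, `^/`}

	fmt.Println(matchLocation("/foo/x", locations))        // ^/foo
	fmt.Println(matchLocation("/deadendpoint", locations)) // ^/  (the catch-all)
}
```

Because the catch-all is a real location block, the 404 for `/deadendpoint` is logged with whatever `$namespace`/`$ingress_name` the controller baked into it.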
What you expected to happen:
When the `location ~* "^/"` is added, the values of `$namespace` and `$ingress_name` should be those of the default backend, or empty:

set $namespace "";
set $ingress_name "";
NGINX Ingress controller version (exec into the pod and run `nginx-ingress-controller --version`): v1.2.1
Kubernetes version (use `kubectl version`): v1.22.12
Environment:

- Cloud provider or hardware configuration: GCP
- How was the ingress-nginx-controller installed: Helm chart
  - ingress-nginx version: 1.2.1
  - chart version: 4.1.4
- values:
ingress-nginx:
  controller:
    allowSnippetAnnotations: false
    autoscaling:
      enabled: false
    config:
      client-body-buffer-size: 16k
      disable-ipv6-dns: "true"
      gzip-level: "9"
      gzip-types: application/atom+xml application/javascript application/x-javascript
        application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf
        application/x-web-app-manifest+json application/xhtml+xml application/xml
        font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component
        "text/plain; charset=utf-8" application/x-msgpack
      limit-conn-status-code: "429"
      limit-req-status-code: "429"
      proxy-body-size: 10M
      proxy-buffer-size: 4k
      proxy-connect-timeout: "5"
      proxy-read-timeout: "25"
      proxy-real-ip-cidr: ****
      proxy-send-timeout: "20"
      use-forwarded-headers: "true"
      use-geoip2: true
      use-gzip: "true"
    metrics:
      enabled: true
      serviceMonitor:
        additionalLabels:
          release: monitoring
        enabled: true
    podSecurityContext:
      seccompProfile:
        type: RuntimeDefault
    readinessProbe:
      httpGet:
        port: 80
    replicaCount: 2
    resources:
      limits:
        memory: 200Mi
      requests:
        cpu: 10m
        memory: 200Mi
    service:
      nodePorts:
        http: "32080"
      type: NodePort
    stats:
      enabled: true
  fullnameOverride: nginx-ingress
Current State of the controller (`kubectl describe ingressclasses`):

Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.2.1
              argocd.argoproj.io/instance=test-ingress-nginx
              helm.sh/chart=ingress-nginx-4.1.4
Annotations:  meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: ingress-nginx
Controller:   k8s.io/ingress-nginx
Events:       <none>
How to reproduce this issue:

Install a local cluster:
- Minikube: https://minikube.sigs.k8s.io/docs/start/
- Kind: https://kind.sigs.k8s.io/docs/user/quick-start/

Install the ingress controller:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/baremetal/deploy.yaml

Install an application that will act as the default backend (it is just an echo app):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/http-svc.yaml

Create multiple namespaces:

kubectl create namespace foo
kubectl create namespace test1
kubectl create namespace test2

Create the ingresses:
echo "
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo-bar
  namespace: foo
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx # omit this if you're on controller version below 1.0.0
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test1
  namespace: test1
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx # omit this if you're on controller version below 1.0.0
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /test1
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test2
  namespace: test2
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx # omit this if you're on controller version below 1.0.0
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /test2
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80
" | kubectl apply -f -
Make a request:

POD_NAME=$(kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o name)
kubectl exec -it -n ingress-nginx $POD_NAME -- curl -H 'Host: foo.bar' localhost/toto

Look at nginx.conf in the nginx-controller pod: in the foo.bar server block you should see a `location ~* "^/"` where the $namespace and $ingress_name values are foo, test1 or test2 instead of the default backend/empty.
Thanks 🙏
@Guidurand: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
/remove-kind bug /kind support
Hi @Guidurand,

To help identify the problem in the controller, please edit your post and explain the justification/reasoning/basis for this statement:

"The catch-all location, automatically added by the controller when we use annotation use-regex: true, is associated to a 'random' namespace/ingress_name instead of default-backend"

- Based on what basic information should the default-backend be set as the value there?
- How does the config qualify to be called random, when the value of the host field in the corresponding ingress is used as the servername in the server block, the namespace where the ingress is created is used for the value of $namespace, and the name field of the ingress is set as $ingress_name?

Your update could help others get more insight into the problem.
Hi @Guidurand Can you try with latest chart. There was a bug fix related to similar issue https://github.com/kubernetes/ingress-nginx/pull/8825
Thanks for the suggestion, I just tried and got the same result :-(
Hi @longwuyuan, thank you for taking the time to look at this 🙏 I've updated the issue with, I hope, a better description. About your questions:

- Based on what basic information should the default-backend be set as the value there?

In my mind, it should always be the backend that answers; in this case, that is the nginx-ingress-controller, or the default-backend when one is activated.

- How does the config qualify to be called random, when the value of the host field in the corresponding ingress is used as the servername in the server block, the namespace where the ingress is created is used for the value of $namespace, and the name field of the ingress is set as $ingress_name?

Indeed, I now better understand how the "backend" is chosen. In our case we have several services and several ingresses, so it is the first ingress created that inherits the statistics of the requests in error (/deadendpoint). Nevertheless, wouldn't it be more logical to attribute them to the controller or the default-backend rather than to the first ingress created, when that ingress does not process these requests?

Thanks
I am not sure what problem has to be solved yet.
I think you should read this https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#default-backend . Then you should configure a service as the default-backend as described in that annotation documentation. Then you should look at the nginx.conf.
I've disabled the catch-all (disable-catch-all: true) and added an ingress to do it right; now 404 errors are linked to nginx-ingress-default:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/default-backend: nginx-ingress-defaultbackend
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
  name: nginx-ingress-default
  namespace: ingress-nginx
spec:
  ingressClassName: nginx
  rules:
  - host: ingress.supertest.com
    http:
      paths:
      - backend:
          service:
            name: nginx-ingress-default
            port:
              name: http
        path: /
        pathType: ImplementationSpecific
I don't clearly understand why requests made to unknown endpoints should be associated with the oldest location created, when it will never answer them; anyway, maybe nobody is monitoring 4xx 🤷♂️
I don't understand.
Usually when such an issue is reported, the discussion becomes better after the following info is posted to the issue, from the namespace in question and the clusterwide objects in question:
- kubectl get all,ing -A -o wide
- kubectl describe svc
- kubectl describe ing
- kubectl describe ingress-nginx-controller-podname
- the complete curl command with -v and its response
- the complete logs of the ingress-nginx-controller pod
- in this case, the output of:
  kubectl -n ingress-nginx exec ingress-nginx-controller-podname -- cat /etc/nginx/nginx.conf

If I guess based on your last post, it will rest on assumptions, so I am trying to avoid that.
Also, it's clear that you are seeking support, and a GitHub issue is watched by very few people. You can get more people looking at your message if you discuss support issues on the Kubernetes Slack.
We're seeing the same thing in v1.4.0. We use regex paths in ingresses, and the controller adds this catch-all location at the end of every server. However, the location block copies configuration from another ingress for that same server, which results in incorrect behavior. We want nginx to issue 404s for requests unmatched by the paths defined in our ingresses, but this catch-all copies a config from another ingress that requires auth, so all requests that fall through to it return 401 instead. For example, I see the following location:

location ~* "^/" {
    set $namespace "local-devcenter-apigw";
    set $ingress_name "up-ingress";
    ...
    proxy_set_header Host "demo-api.local-devcenter-apigw.svc.cluster.local";
    ...
}

So it certainly seems to be taking values from another ingress (up-ingress in the local-devcenter-apigw namespace), even though that ingress (up-ingress) does not define a rule with the / prefix.
https://github.com/kubernetes/ingress-nginx/blob/69318355b167b2cd4eb6a9c90c1fc793200e982c/internal/ingress/controller/controller.go#L1243-L1261 is causing it, from what I can tell. When initializing the servers, the controller creates a root location and then copies an ingress's settings (the first ingress it encounters for that server) into that location. In our case, that includes annotations that require auth, so "unhandled" requests that should 404 actually 401.
I agree that locationApplyAnnotations() is probably the issue. For each host, the rootLocation inherits annotations from the oldest rule in ing.Spec.Rules for that host, even if the path is not rootLocation.
In other words, all the oldest annotations for a host get applied to the default path, even if they shouldn't apply to that path.
There are some annotations that apply to the entire server block, such as server-snippet, where that behavior makes sense, but others that only apply to the location block, such as configuration-snippet or auth, should not be added here unless the path matches.
This reproduces the behavior: https://github.com/kubernetes/ingress-nginx/issues/9485#issuecomment-1413481285
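The split proposed above could be sketched as follows. The classification map is an assumption for illustration; the controller has no such list today, and which annotations are truly server-scoped would need to be decided per annotation.

```go
package main

import "fmt"

// serverScoped lists annotation names treated as configuring the whole
// server block; everything else is treated as location-scoped.
// This classification is illustrative, not the controller's actual list.
var serverScoped = map[string]bool{
	"server-snippet": true,
}

// splitAnnotations separates annotations that are safe to copy into the
// auto-generated catch-all location (server scope) from those that must
// only apply when the path actually matches (location scope).
func splitAnnotations(annotations map[string]string) (server, location map[string]string) {
	server = map[string]string{}
	location = map[string]string{}
	for k, v := range annotations {
		if serverScoped[k] {
			server[k] = v
		} else {
			location[k] = v
		}
	}
	return server, location
}

func main() {
	ann := map[string]string{
		"server-snippet":        "gzip on;",
		"configuration-snippet": "return 302 /login;",
		"auth-url":              "https://auth.example.com",
	}
	srv, loc := splitAnnotations(ann)
	fmt.Println(len(srv), len(loc)) // 1 2
}
```

Under this split, only the server-scoped set would be applied to the generated catch-all, so an auth-url or configuration-snippet from an unrelated path could no longer leak into it.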
I think I'm running into this right now. I have 3 ingress resources, which define the following rules respectively:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-foo-api
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: foo.com
    http:
      paths:
      - path: /api(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: service-api
            port:
              number: 80

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-foo-web
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: foo.com
    http:
      paths:
      - path: /web(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: service-web
            port:
              number: 80

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-foo-web-root
  annotations:
    nginx.ingress.kubernetes.io/permanent-redirect: https://foobar.com/blablabla
spec:
  ingressClassName: nginx
  rules:
  - host: foo.com
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: http-svc
            port:
              number: 80
The last redirect isn't working; it returns the default nginx 404 page. When I look at nginx.conf I see something like this:

server {
    server_name foo.com;

    location ~* "^/web(/|$)(.*)" { ... }
    location ~* "^/api(/|$)(.*)" { ... }

    location ~* "^/" {
        set $ingress_name "ingress-foo-api"; # <- this is not supposed to exist?
        ...
    }

    location ~* "^/" {
        set $ingress_name "ingress-foo-web-root";
        ...
    }
}
The 3rd location is the redundant one, and from what I understand, it's the bug described in this issue. When I comment it out, the redirect from above (ingress-foo-web-root) works perfectly.
Is there any workaround for this? Even if it's not for the bug in general, I'd appreciate any advice that solves my case specifically.
After thinking about this some more, I think this actually shows another issue: the catch-all route is not placed in the correct position. In my case, the "redundant" catch-all route wouldn't be a problem if it were last, but it gets inserted between the specific routes and my custom-defined "catch-all" (which is actually an exact path, but that doesn't seem to matter much). I'll open a different issue for that if needed, but I'm leaving this here for anyone who stumbles upon a similar problem.
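The ordering problem described above can be sketched as a sort that keeps specific regex locations first and forces the auto-generated catch-all to the very end. This is illustrative only; the controller's real sorting lives in internal/ingress/controller and does not currently behave this way.

```go
package main

import (
	"fmt"
	"sort"
)

type Location struct {
	Path         string
	AutoCatchAll bool // true for the controller-generated "^/" block
}

// orderLocations sorts longer (more specific) paths first and always
// pushes the auto-generated catch-all behind user-defined locations,
// since nginx tries regex locations in the order they appear.
func orderLocations(locs []Location) []Location {
	sort.SliceStable(locs, func(i, j int) bool {
		if locs[i].AutoCatchAll != locs[j].AutoCatchAll {
			return !locs[i].AutoCatchAll // catch-all goes last
		}
		return len(locs[i].Path) > len(locs[j].Path)
	})
	return locs
}

func main() {
	locs := orderLocations([]Location{
		{Path: "^/web(/|$)(.*)"},
		{Path: "^/", AutoCatchAll: true},
		{Path: "^/"}, // user-defined root location
	})
	for _, l := range locs {
		fmt.Println(l.Path, l.AutoCatchAll)
	}
	// The user-defined "^/" now precedes the auto catch-all, so it
	// would win the in-order regex match.
}
```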
I think I bumped into this problem as well. I used nginx.ingress.kubernetes.io/configuration-snippet to inject a 302 response for certain locations, and then realized that the default suddenly also responded with 302 instead of 404 for all unmatched paths. Eventually I found the location / entry in the generated nginx configuration file, which led me to this issue.
The whole behavior seems to me as if there is an implicit configuration similar to the following snippet:

paths:
- path: /
  pathType: Prefix

I guessed this because an exact match would generate something like location = / instead.
As a workaround I tried to add another Ingress object which has this prefix match and a configuration snippet to return a 404 response:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      return 404;
  name: frontend-workaround
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: localhost
    http:
      paths:
      - backend:
          service:
            name: frontend
            port:
              name: http
        path: /
        pathType: Prefix

Posting this here since there has been a question about a workaround; maybe this can help some people.
We also bumped into the same issue. I was setting up external authentication and was confused about why it tried to authenticate seemingly arbitrary requests that don't match any ingresses. Then I found the location ~* "^/" route that is not defined in any ingress definition.
After cycling a bunch of old ingresses I'm struggling with this a lot. I think locationApplyAnnotations needs to be yanked there, or split up into location-local and server-global annotations. There's also a "TODO: Redirect and rewrite can affect the catch all behavior, skip for now" further up that also tries to work around this issue.
Edit: Had some incorrect information. This change likely introduced this issue in the first place: https://github.com/kubernetes/ingress-nginx/pull/3696
I think the current default backend location implementation might be broken. Applying the following configuration:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffer-size: "4k"
  name: prefix-foo
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          service:
            name: foo-backend-service
            port:
              number: 80
        path: /foo
        pathType: ImplementationSpecific
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
  name: prefix-and-default
spec:
  defaultBackend:
    service:
      name: default-backend-service
      port:
        number: 80
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          service:
            name: bar-backend-service
            port:
              number: 80
        path: /bar
        pathType: ImplementationSpecific
creates the following:

- /foo with foo-backend-service and proxy_buffer_size 4k
- /bar with bar-backend-service and proxy_buffer_size 8k
- / with no backend service and proxy_buffer_size 4k

Applying them in reverse order (or deleting prefix-foo and applying the above again) changes this to:

- / with default-backend-service and proxy_buffer_size 8k

My expectation would be that / for a host is configured by:

- the (oldest) ingress that contains a rule for /, or
- the (oldest) ingress that defines a defaultBackend for that host, or
- the controller default configuration, with no configuration from any ingress

Currently, it appears to be:

- the (oldest?) ingress that contains a rule for /, or
- the oldest ingress for that host

Which, IMHO, is a bug.
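The expected precedence above could be sketched like this (illustrative names and fields, not controller code):

```go
package main

import "fmt"

type Ingress struct {
	Name              string
	Created           int64 // creation timestamp; smaller = older
	HasRootRule       bool  // defines a rule for "/" on this host
	HasDefaultBackend bool  // defines spec.defaultBackend
}

// pickRootConfigSource implements the commenter's expected precedence:
// the oldest ingress with a "/" rule, else the oldest ingress with a
// defaultBackend, else nil (fall back to controller defaults).
func pickRootConfigSource(ings []Ingress) *Ingress {
	pick := func(pred func(Ingress) bool) *Ingress {
		var sel *Ingress
		for i := range ings {
			if pred(ings[i]) && (sel == nil || ings[i].Created < sel.Created) {
				sel = &ings[i]
			}
		}
		return sel
	}
	if src := pick(func(i Ingress) bool { return i.HasRootRule }); src != nil {
		return src
	}
	return pick(func(i Ingress) bool { return i.HasDefaultBackend })
}

func main() {
	// Mirrors the example above: neither ingress has a "/" rule, but
	// prefix-and-default declares a defaultBackend.
	ings := []Ingress{
		{Name: "prefix-foo", Created: 1},
		{Name: "prefix-and-default", Created: 2, HasDefaultBackend: true},
	}
	if src := pickRootConfigSource(ings); src != nil {
		fmt.Println(src.Name) // prefix-and-default
	} else {
		fmt.Println("controller defaults")
	}
}
```

Under this rule the apply order would stop mattering, since the selection no longer falls through to "oldest ingress for the host".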
> https://github.com/kubernetes/ingress-nginx/blob/69318355b167b2cd4eb6a9c90c1fc793200e982c/internal/ingress/controller/controller.go#L1243-L1261 is causing it, from what I can tell. When initializing the servers, the controller creates a root location and then copies an ingress's (the first ingress it encounters for that server) settings for that location. In our case, that includes annotations that require auth, and so "unhandled" requests that should 404 actually 401.
To add a little more info to this: the root Location object that a Server object is automatically initialized with can have its properties overwritten by defining a rule with the root path and a pathType of Prefix. You can see that happening here: https://github.com/kubernetes/ingress-nginx/blob/main/internal/ingress/controller/controller.go#L790 Also see johbo's comment above for an example rule: https://github.com/kubernetes/ingress-nginx/issues/9054#issuecomment-1701700534
Note that at that point in the code, server.Locations will contain the initial, problematic root Location object. Your additional rule will match that root Location object because the Path will match, the PathType will match, and the root Location object has IsDefBackend set to true. If your root rule is working correctly, you should see a log message like this (don't forget to use the --v=5 flag on your controller):
Replacing location "/" for server "myserver.com" with upstream "upstream-default-backend"
IMO, the core issue is that the ingress used to associate/configure the initial root Location object is selected based on creationTimestamp (which is the sort order of the list of ingresses). This means the ingress being used can change; for example, if the oldest ingress is deleted and then recreated, it will no longer be used to configure the initial root Location object. Instead of using the ingress with the oldest creationTimestamp, the ingress that is used should be the one that has a host rule defined with no path, like the example provided in #3696.
For the case where there is no such ingress, don't initialize the Server.Locations list with an initial Location object.
> IMO, the core issue is that the ingress being used to associate/configure the initial root Location object is selected based on creationTimestamp

No, the core issue is that the root location is initialized with configuration meant for anything other than the root location. Order does not matter. Only the system default configuration and root-location configuration have any business configuring the root location.
@longwuyuan you removed the bug label two years ago. This is clearly a bug and not a support request.