ingress-nginx
Allow custom-http-errors without proxy_intercept_errors
Use case
In our setup, we tried to replace the default NGINX error page with custom error pages. One requirement is that error pages from backends remain unchanged.
Example:
- If there is no matching ingress route (404), the custom error page is shown
- If a backend returns a 404 status code, the body/HTML of that response is shown
- If there is a misconfigured ingress route (service does not exist - 503), the custom error page is shown
- If a backend returns a 503 status code, the body/JSON of that response is sent
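For reference, the first half of this setup uses the documented `custom-http-errors` ConfigMap key (the ConfigMap name and namespace below are illustrative; they depend on how the controller was installed):

```yaml
# Controller ConfigMap enabling custom error pages for 404 and 503.
# A default backend serving the custom pages must also be configured
# (e.g. via the controller's --default-backend-service flag).
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # illustrative name/namespace
  namespace: ingress-nginx
data:
  custom-http-errors: "404,503"
```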
The problem
According to the docs at https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#custom-http-errors:

"Setting at least one code also enables proxy_intercept_errors which are required to process error_page."

This implicit setting of proxy_intercept_errors means that the response body is replaced with the custom error page whenever the status code is contained in custom-http-errors. I see no way to override this, as the location-snippet is included in https://github.com/kubernetes/ingress-nginx/blob/main/rootfs/etc/nginx/template/nginx.tmpl#L1335 before proxy_intercept_errors is set. Even if that weren't the case, an override should not rely on an implementation detail.
Not using custom-http-errors and setting the full configuration via a custom location-snippet is not really feasible either, as we would not only need to set the error_page directive, but also the required HTTP headers such as X-Code. These are set implicitly as well, as described in https://kubernetes.github.io/ingress-nginx/user-guide/custom-errors/.
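For context, the workaround being ruled out would look roughly like this (a hypothetical sketch; in practice the X-Code and related headers from the custom-errors guide would also have to be reproduced by hand, which is what makes it infeasible):

```yaml
# Hypothetical workaround sketch: configure error handling entirely in a
# snippet annotation instead of custom-http-errors. Incomplete by design:
# the headers the controller normally sets for the error backend
# (X-Code, etc.) are not replicated here.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_intercept_errors off;
      error_page 404 /custom_404.html;
```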
Suggestion
It would be great to extend the ConfigMap options so that one can override the proxy_intercept_errors setting.
I'd be open to contributing a merge request if that helps solve the issue :)
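A possible shape for such an option (the second key below is hypothetical and does not exist in the ConfigMap today):

```yaml
# Hypothetical ConfigMap extension: keep custom-http-errors for the
# error_page / X-Code wiring, but opt out of proxy_intercept_errors so
# backend error bodies pass through unchanged.
data:
  custom-http-errors: "404,503"
  proxy-intercept-errors: "false"   # proposed key, not an existing setting
```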
@JohannesLamberts: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I have added a new flag called set-proxy-intercept-errors, both as an ingress annotation and as a global configuration option.
Setting it to false stops proxy_intercept_errors from being set in the nginx config.
Due to the CLA system in place for contributions, it is very difficult for me to contribute this directly...
https://github.com/GerryWilko/ingress-nginx/tree/patch-1
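Based on the description above, usage from the linked branch would presumably look like this (the annotation name is assumed to follow the standard nginx.ingress.kubernetes.io prefix; this flag is not merged upstream):

```yaml
# Presumed usage of the set-proxy-intercept-errors flag from the linked
# branch (not merged upstream; the exact annotation name is an assumption).
metadata:
  annotations:
    nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
    nginx.ingress.kubernetes.io/set-proxy-intercept-errors: "false"
```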
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
/close
/reopen
@stromvirvel: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen