nginx-errors backend not returning custom json error responses
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version):
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v1.2.0
Build: a2514768cd282c41f39ab06bda17efefc4bd233a
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.10
-------------------------------------------------------------------------------
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:38:05Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.6-eks-14c7a48", GitCommit:"35f06c94ad99b78216a3d8e55e04734a85da3f7b", GitTreeState:"clean", BuildDate:"2022-04-01T03:18:05Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Cloud provider or hardware configuration: EKS 1.22
- OS (e.g. from /etc/os-release): Amazon Linux 2
- Kernel (e.g. uname -a): 5.4.188-104.359.amzn2.x86_64
- Install tools: EKS with Terraform and Helm
- Basic cluster related info: EKS v1.22
- How was the ingress-nginx-controller installed: Helm chart
- If helm was used then please show output of helm ls -A | grep -i ingress:
ingress-nginx  ingress-nginx  35  2022-05-20 12:37:41.04545 -0700 PDT  deployed  ingress-nginx-4.1.1  1.2.0
- If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>:
USER-SUPPLIED VALUES:
controller:
  config:
    use-forwarded-headers: true
  custom-http-errors: "404"
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx
    default: true
    enabled: true
    name: nginx
  service:
    enableHttps: false
    externalTrafficPolicy: Local
    type: NodePort
defaultBackend:
  enabled: true
  extraVolumeMounts:
  - mountPath: /www
    name: custom-error-pages
  extraVolumes:
  - configMap:
      items:
      - key: 404.html
        path: 404.html
      - key: 404.json
        path: 404.json
      name: custom-error-pages
    name: custom-error-pages
  image:
    image: ingress-nginx/nginx-errors
    registry: k8s.gcr.io
    tag: 0.48.1
  replicaCount: 1
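For reference, these values can be applied with a plain Helm upgrade against the release shown in the helm ls output above (the ingress-nginx repo alias and the values file name are assumptions):
❯ helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx -f values.yaml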
- Others: custom-error-pages configmap:
apiVersion: v1
data:
  404.html: <html><h1>My custom 404!</h1></html>
  404.json: '{"message":"page not found"}'
kind: ConfigMap
metadata:
  creationTimestamp: "2022-05-20T18:54:38Z"
  name: custom-error-pages
  namespace: ingress-nginx
  resourceVersion: "1060460"
  uid: c8d68044-3a38-4dcd-be5c-56b108f300cf
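This ConfigMap can be created from local files, as in the linked example (file names assumed to match the keys above):
❯ kubectl -n ingress-nginx create configmap custom-error-pages --from-file=404.html --from-file=404.json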
What happened:
After following the example in https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/custom-errors, I cannot get custom 404 errors when sending JSON requests.
For example, I get the custom 404.html rather than the expected 404.json from my ConfigMap:
❯ http https://foo.example.com/foo 'Accept:application/json'
HTTP/1.1 404 Not Found
Connection: keep-alive
Content-Length: 37
Content-Type: text/html
Date: Fri, 20 May 2022 19:52:56 GMT
<html><h1>My custom 404!</h1></html>
What you expected to happen:
❯ http https://foo.example.com/foo 'Accept:application/json'
HTTP/1.1 404 Not Found
Connection: keep-alive
Content-Length: 29
Content-Type: application/json
Date: Fri, 20 May 2022 19:52:56 GMT
{"message":"page not found"}
It seems the nginx-errors service isn't detecting the JSON request. I tried multiple clients (curl, Python requests, HTTPie, etc.), all with the same result when setting the Accept: application/json header.
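For reference, a curl equivalent of the request above, which returns the same HTML response:
❯ curl -i -H 'Accept: application/json' https://foo.example.com/foo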
How to reproduce it:
See above Helm values and ConfigMap.
Anything else we need to know:
I am going off of the documentation in https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/custom-errors
@evandam: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Update: it looks like this is actually a documentation issue; there's already an open PR (https://github.com/kubernetes/ingress-nginx/pull/8558) that solves it, so this one can probably be closed out once that PR is merged 👌
(although I am a bit confused how the 404.html page gets served even without custom-error-pages correctly set in the ConfigMap, while 404.json does not work 🤔)
Somewhat related: is it documented anywhere which image tags should be used for nginx-errors? By trial and error it seems to keep up with the ingress-nginx tags (e.g. 1.2.0), but it would be great to see this documented somewhere or have the docs updated.
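One way to enumerate the published tags, assuming skopeo is available:
❯ skopeo list-tags docker://k8s.gcr.io/ingress-nginx/nginx-errors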
/remove-kind bug
Does this happen in v1.2.x as well?
Yes, I can confirm the same behavior on 1.2.0.
404.html is used when there is no matching backend for a request and custom-error-pages is not set, regardless of the Accept header.
Adding the entry to the ConfigMap results in 404.json when the Accept header is set.
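For what it's worth, the controller forwards the client's Accept header to the default backend as X-Format (along with X-Code for the status code), and the custom-error-pages image uses X-Format to choose the file extension. A rough way to exercise the backend directly, assuming the chart's default backend service name (yours may differ):
❯ kubectl -n ingress-nginx port-forward svc/ingress-nginx-defaultbackend 8080:80
❯ curl -H 'X-Code: 404' -H 'X-Format: application/json' http://localhost:8080/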
I will check the default backend; it must be just a vanilla nginx image. You can check too (see the example below).
The first line of the doc you linked is, I think, referring to a custom default backend. Are you using a custom one?
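A quick way to check which image the default backend is actually running, assuming the deployment name the chart generates (yours may differ):
❯ kubectl -n ingress-nginx get deploy ingress-nginx-defaultbackend -o jsonpath='{.spec.template.spec.containers[0].image}'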
Yes, I believe the image is from https://github.com/kubernetes/ingress-nginx/tree/main/images/custom-error-pages as used in the example from https://github.com/kubernetes/ingress-nginx/blob/main/docs/examples/customization/custom-errors/custom-default-backend.helm.values.yaml
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.