ingress-nginx
Annotation whitelist-source-range not using client real IP
What happened:
I upgraded ingress-nginx through the Helm chart to the latest version. The old chart version was 4.6.2, now it is 4.10.0, so I'm running NGINX 1.25. After the upgrade, it seems that wherever the whitelist-source-range annotation was already present, the client IP received from requests is now one of the Load Balancer's addresses (AWS EKS with NLB) inside my AWS VPC (172.31.0.0/16). This prevents proper IP whitelisting on my internal platform. I also tried to revert the upgrade, but the problem persisted once the old version was deployed.
What you expected to happen: I would expect the IP received by ingress-nginx to be the real client IP for every service, not just a few, allowing me to correctly whitelist corporate addresses.
NGINX Ingress controller version:
NGINX Ingress controller Release: v1.10.0 Build: 71f78d49f0a496c31d4c19f095469f3f23900f8a Repository: https://github.com/kubernetes/ingress-nginx nginx version: nginx/1.25.3
Kubernetes version (use kubectl version):
Client Version: v1.25.9
Kustomize Version: v4.5.7
Server Version: v1.29.1-eks-b9c9ed7
Environment:
- Cloud provider or hardware configuration: AWS EKS
- OS (e.g. from /etc/os-release): Amazon Linux 2
- Kernel (e.g. uname -a): -
- Install tools: -
- Please mention how/where was the cluster created like kubeadm/kops/minikube/kind etc.: -
- Basic cluster related info: Nodes
ip-172-31-222-113.eu-west-1.compute.internal Ready <none> 21d v1.29.0-eks-5e0fdde 172.31.222.113 <none> Amazon Linux 2 5.10.210-201.852.amzn2.x86_64 containerd://1.7.11
ip-172-31-231-131.eu-west-1.compute.internal Ready <none> 4d3h v1.29.0-eks-5e0fdde 172.31.231.131 <none> Amazon Linux 2 5.10.210-201.852.amzn2.x86_64 containerd://1.7.11
ip-172-31-231-52.eu-west-1.compute.internal Ready <none> 32d v1.29.0-eks-5e0fdde 172.31.231.52 <none> Amazon Linux 2 5.10.210-201.852.amzn2.x86_64 containerd://1.7.11
ip-172-31-235-179.eu-west-1.compute.internal Ready <none> 4d4h v1.29.0-eks-5e0fdde 172.31.235.179 <none> Amazon Linux 2 5.10.210-201.852.amzn2.x86_64 containerd://1.7.11
ip-172-31-244-85.eu-west-1.compute.internal Ready <none> 32d v1.29.0-eks-5e0fdde 172.31.244.85 <none> Amazon Linux 2 5.10.210-201.852.amzn2.x86_64 containerd://1.7.11
ip-172-31-250-55.eu-west-1.compute.internal Ready <none> 4d4h v1.29.0-eks-5e0fdde 172.31.250.55 <none> Amazon Linux 2 5.10.210-201.852.amzn2.x86_64 containerd://1.7.11
- How was the ingress-nginx-controller installed: Helm Chart v4.10.0
USER-SUPPLIED VALUES:
commonLabels: {}
controller:
addHeaders: {}
admissionWebhooks:
annotations: {}
certManager:
admissionCert:
duration: ""
enabled: false
rootCert:
duration: ""
certificate: /usr/local/certificates/cert
createSecretJob:
name: create
resources: {}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 65532
seccompProfile:
type: RuntimeDefault
enabled: true
existingPsp: ""
extraEnvs: []
failurePolicy: Fail
key: /usr/local/certificates/key
labels: {}
name: admission
namespaceSelector: {}
objectSelector: {}
patch:
enabled: true
image:
digest: sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334
image: ingress-nginx/kube-webhook-certgen
pullPolicy: IfNotPresent
registry: registry.k8s.io
tag: v1.4.0
labels: {}
networkPolicy:
enabled: false
nodeSelector:
kubernetes.io/os: linux
podAnnotations: {}
priorityClassName: ""
securityContext: {}
tolerations: []
patchWebhookJob:
name: patch
resources: {}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 65532
seccompProfile:
type: RuntimeDefault
port: 8443
service:
annotations: {}
externalIPs: []
loadBalancerSourceRanges: []
servicePort: 443
type: ClusterIP
affinity: {}
allowSnippetAnnotations: true
annotations: {}
autoscaling:
annotations: {}
behavior: {}
enabled: false
maxReplicas: 11
minReplicas: 1
targetCPUUtilizationPercentage: 50
targetMemoryUtilizationPercentage: 50
autoscalingTemplate: []
config:
enable-real-ip: "true"
proxy-body-size: 0m
proxy-buffer-size: 256k
proxy-buffers-number: 4
proxy-real-ip-cidr: 0.0.0.0/0
use-forwarded-headers: "true"
use-proxy-protocol: "false"
configAnnotations: {}
configMapNamespace: ""
containerName: controller
containerPort:
http: 80
https: 443
nexus: 5000
containerSecurityContext: {}
customTemplate:
configMapKey: ""
configMapName: ""
dnsConfig: {}
dnsPolicy: ClusterFirst
electionID: ""
enableAnnotationValidations: false
enableMimalloc: true
enableTopologyAwareRouting: false
existingPsp: ""
extraArgs:
default-ssl-certificate: default/[REDACTED]
extraContainers: []
extraEnvs: []
extraInitContainers: []
extraModules: []
extraVolumeMounts: []
extraVolumes: []
healthCheckHost: ""
healthCheckPath: /healthz
hostAliases: []
hostNetwork: false
hostPort:
enabled: true
ports:
http: 80
https: 443
hostname: {}
image:
allowPrivilegeEscalation: false
chroot: false
digest: sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
digestChroot: sha256:7eb46ff733429e0e46892903c7394aff149ac6d284d92b3946f3baf7ff26a096
image: ingress-nginx/controller
pullPolicy: IfNotPresent
readOnlyRootFilesystem: false
registry: registry.k8s.io
runAsNonRoot: true
runAsUser: 101
seccompProfile:
type: RuntimeDefault
tag: v1.10.0
ingressClass: nginx
ingressClassByName: false
ingressClassResource:
controllerValue: k8s.io/ingress-nginx
default: true
enabled: true
name: nginx
parameters: {}
keda:
apiVersion: keda.sh/v1alpha1
behavior: {}
cooldownPeriod: 300
enabled: false
maxReplicas: 11
minReplicas: 1
pollingInterval: 30
restoreToOriginalReplicaCount: false
scaledObject:
annotations: {}
triggers: []
kind: DaemonSet
labels: {}
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
maxmindLicenseKey: ""
metrics:
enabled: true
port: 10254
portName: metrics
prometheusRule:
additionalLabels: {}
enabled: false
rules: []
service:
annotations: {}
externalIPs: []
labels: {}
loadBalancerSourceRanges: []
servicePort: 10254
type: ClusterIP
serviceMonitor:
additionalLabels: {}
annotations: {}
enabled: true
metricRelabelings: []
namespace: ""
namespaceSelector: {}
relabelings: []
scrapeInterval: 30s
targetLabels: []
minAvailable: 1
minReadySeconds: 0
name: controller
networkPolicy:
enabled: false
nodeSelector:
kubernetes.io/os: linux
opentelemetry:
containerSecurityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 65532
seccompProfile:
type: RuntimeDefault
enabled: false
image:
digest: sha256:13bee3f5223883d3ca62fee7309ad02d22ec00ff0d7033e3e9aca7a9f60fd472
distroless: true
image: ingress-nginx/opentelemetry
registry: registry.k8s.io
tag: v20230721-3e2062ee5
name: opentelemetry
resources: {}
podAnnotations: {}
podLabels: {}
podSecurityContext: {}
priorityClassName: ""
proxySetHeaders: {}
publishService:
enabled: true
pathOverride: ""
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
replicaCount: 1
reportNodeInternalIp: false
resources:
requests:
cpu: 100m
memory: 90Mi
scope:
enabled: false
namespace: ""
namespaceSelector: ""
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
service.beta.kubernetes.io/aws-load-balancer-type: nlb
appProtocol: true
clusterIP: ""
enableHttp: true
enableHttps: true
enabled: true
external:
enabled: true
externalIPs: []
externalTrafficPolicy: Local
internal:
annotations: {}
appProtocol: true
clusterIP: ""
enabled: false
externalIPs: []
externalTrafficPolicy: Local
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
loadBalancerClass: ""
loadBalancerIP: ""
loadBalancerSourceRanges: []
nodePorts:
http: ""
https: ""
tcp: {}
udp: {}
ports: {}
sessionAffinity: ""
targetPorts: {}
type: ""
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
labels: {}
loadBalancerClass: ""
loadBalancerIP: ""
loadBalancerSourceRanges: []
nodePorts:
http: ""
https: ""
tcp: {}
udp: {}
ports:
http: 80
https: 443
sessionAffinity: ""
targetPorts:
http: http
https: https
nexus: nexus
type: LoadBalancer
shareProcessNamespace: false
sysctls: {}
tcp:
annotations: {}
configMapNamespace: ""
terminationGracePeriodSeconds: 300
tolerations: []
topologySpreadConstraints: []
udp:
annotations: {}
configMapNamespace: ""
updateStrategy: {}
watchIngressWithoutClass: false
defaultBackend:
affinity: {}
autoscaling:
annotations: {}
enabled: false
maxReplicas: 2
minReplicas: 1
targetCPUUtilizationPercentage: 50
targetMemoryUtilizationPercentage: 50
containerSecurityContext: {}
enabled: false
existingPsp: ""
extraArgs: {}
extraConfigMaps: []
extraEnvs: []
extraVolumeMounts: []
extraVolumes: []
image:
allowPrivilegeEscalation: false
image: defaultbackend-amd64
pullPolicy: IfNotPresent
readOnlyRootFilesystem: true
registry: registry.k8s.io
runAsNonRoot: true
runAsUser: 65534
seccompProfile:
type: RuntimeDefault
tag: "1.5"
labels: {}
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
minAvailable: 1
minReadySeconds: 0
name: defaultbackend
networkPolicy:
enabled: false
nodeSelector:
kubernetes.io/os: linux
podAnnotations: {}
podLabels: {}
podSecurityContext: {}
port: 8080
priorityClassName: ""
readinessProbe:
failureThreshold: 6
initialDelaySeconds: 0
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
replicaCount: 1
resources: {}
service:
annotations: {}
externalIPs: []
loadBalancerSourceRanges: []
servicePort: 80
type: ClusterIP
serviceAccount:
automountServiceAccountToken: true
create: true
name: ""
tolerations: []
updateStrategy: {}
dhParam: ""
imagePullSecrets: []
namespaceOverride: ""
podSecurityPolicy:
enabled: false
portNamePrefix: ""
rbac:
create: true
scope: false
revisionHistoryLimit: 10
serviceAccount:
annotations: {}
automountServiceAccountToken: true
create: true
name: ""
tcp: {}
udp: {}
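For readability, here are the client-IP-related keys from controller.config above, isolated. The comments are my reading of what each key does and are an assumption, not a diagnosis of this report:
config:
  enable-real-ip: "true"         # enable realip handling on the controller
  use-forwarded-headers: "true"  # trust incoming X-Forwarded-For / X-Real-IP headers
  proxy-real-ip-cidr: 0.0.0.0/0  # peers trusted to set those headers (here: any address)
  use-proxy-protocol: "false"    # no PROXY protocol expected from the NLB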
- Current State of the controller:
kubectl describe ingressclasses
Name: nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.10.0
helm.sh/chart=ingress-nginx-4.10.0
Annotations: ingressclass.kubernetes.io/is-default-class: true
meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: nginx
Controller: k8s.io/ingress-nginx
Events: <none>
- kubectl -n <ingresscontrollernamespace> get all -A -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/ingress-nginx-controller-6pq2f 1/1 Running 0 80m 172.31.244.180 ip-172-31-244-85.eu-west-1.compute.internal <none> <none>
pod/ingress-nginx-controller-6vltb 1/1 Running 0 82m 172.31.232.201 ip-172-31-235-179.eu-west-1.compute.internal <none> <none>
pod/ingress-nginx-controller-7dmlq 1/1 Running 0 83m 172.31.215.199 ip-172-31-221-109.eu-west-1.compute.internal <none> <none>
pod/ingress-nginx-controller-cqbsq 1/1 Running 0 82m 172.31.230.251 ip-172-31-231-131.eu-west-1.compute.internal <none> <none>
pod/ingress-nginx-controller-gxpck 1/1 Running 0 82m 172.31.246.119 ip-172-31-250-55.eu-west-1.compute.internal <none> <none>
pod/ingress-nginx-controller-jvnnk 1/1 Running 0 81m 172.31.224.58 ip-172-31-231-52.eu-west-1.compute.internal <none> <none>
pod/ingress-nginx-controller-rq4zr 1/1 Running 0 81m 172.31.220.48 ip-172-31-222-113.eu-west-1.compute.internal <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ingress-nginx-controller LoadBalancer 10.100.137.168 ab72a06b6832d4cafabdce88e91a7c26-20b8b11700d071b8.elb.eu-west-1.amazonaws.com 80:31356/TCP,443:31159/TCP 35d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission ClusterIP 10.100.183.205 <none> 443/TCP 35d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-metrics ClusterIP 10.100.193.196 <none> 10254/TCP 83m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
daemonset.apps/ingress-nginx-controller 7 7 7 7 7 kubernetes.io/os=linux 21d controller registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
- kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
Name: ingress-nginx-controller-6pq2f
Namespace: nginx
Priority: 0
Service Account: ingress-nginx
Node: ip-172-31-244-85.eu-west-1.compute.internal/172.31.244.85
Start Time: Fri, 26 Apr 2024 15:08:20 +0200
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.10.0
controller-revision-hash=587dfc5474
helm.sh/chart=ingress-nginx-4.10.0
pod-template-generation=14
Annotations: kubectl.kubernetes.io/restartedAt: 2024-04-26T10:34:13Z
Status: Running
IP: 172.31.244.180
IPs:
IP: 172.31.244.180
Controlled By: DaemonSet/ingress-nginx-controller
Containers:
controller:
Container ID: containerd://007d37ce09fd6de88a13725bef7cf624d3a4d375f81a48164bb9ab53a4cf6c98
Image: registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
Image ID: registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
Ports: 80/TCP, 443/TCP, 5000/TCP, 10254/TCP, 8443/TCP
Host Ports: 80/TCP, 443/TCP, 5000/TCP, 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-nginx-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
--default-ssl-certificate=default/[REDACTED]
State: Running
Started: Fri, 26 Apr 2024 15:08:21 +0200
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-6pq2f (v1:metadata.name)
POD_NAMESPACE: nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sg6kx (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-sg6kx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events: <none>
- kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
Name: ingress-nginx-controller
Namespace: nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.10.0
helm.sh/chart=ingress-nginx-4.10.0
k8slens-edit-resource-version=v1
Annotations: meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: nginx
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 60
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: true
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: *
service.beta.kubernetes.io/aws-load-balancer-type: nlb
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.137.168
IPs: 10.100.137.168
LoadBalancer Ingress: ab72a[REDACTED]-[REDACTED].elb.eu-west-1.amazonaws.com
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31356/TCP
Endpoints: 172.31.215.199:80,172.31.220.48:80,172.31.224.58:80 + 4 more...
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31159/TCP
Endpoints: 172.31.215.199:443,172.31.220.48:443,172.31.224.58:443 + 4 more...
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 31263
Events: <none>
- Current state of ingress object, if applicable:
kubectl -n <appnamespace> describe ing <ingressname>
Name: dev-[APPNAME]-be
Labels: app=dev-[APPNAME]-be
k8slens-edit-resource-version=v1
Namespace: default
Address: ab72a[REDACTED].elb.eu-west-1.amazonaws.com
Ingress Class: <none>
Default backend: <default>
TLS:
dev-[APPNAME]-be-[REDACTED]-tech-cert-secret terminates dev.api.[APPNAME].[REDACTED].tech
Rules:
Host Path Backends
---- ---- --------
dev.api.[APPNAME].[REDACTED].tech
/ dev-[APPNAME]-be-service:http (172.31.230.116:8080)
Annotations: cert-manager.io/cluster-issuer: letsencrypt-clusterissuer
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: true
nginx.ingress.kubernetes.io/configuration-snippet:
if ($http_origin ~* "^https?:\/\/((?:localhost:3000)|(?:.*\.[REDACTED]\.tech))$") {
add_header "Access-Control-Allow-Origin" "*";
}
nginx.ingress.kubernetes.io/enable-cors: true
nginx.ingress.kubernetes.io/whitelist-source-range: [REDACTED]/32
Events: <none>
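For reference, a sketch of the relevant part of that Ingress expressed as a manifest, reconstructed from the describe output above; the apiVersion and pathType are assumptions, and the redacted placeholders are kept as-is:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-[APPNAME]-be
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/whitelist-source-range: "[REDACTED]/32"
spec:
  rules:
  - host: dev.api.[APPNAME].[REDACTED].tech
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dev-[APPNAME]-be-service
            port:
              name: http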
- If applicable, then, your complete and exact curl/grpcurl command (redacted if required) and the response to the curl/grpcurl command with the -v flag:
I'm sure my IP was in the whitelist at the moment I started the following request:
$ curl -v https://dev.api.[APPNAME].[REDACTED].tech/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying [REDACTED]:443...
* Connected to dev.api.[APPNAME].[REDACTED].tech ([REDACTED]) port 443 (#0)
* ALPN: offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [330 bytes data]
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [2605 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [52 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [52 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES256-GCM-SHA384
* ALPN: server accepted h2
* Server certificate:
* subject: CN=dev.api.[APPNAME].[REDACTED].tech
* start date: Apr 9 06:40:10 2024 GMT
* expire date: Jul 8 06:40:09 2024 GMT
* subjectAltName: host "dev.api.[APPNAME].[REDACTED].tech" matched cert's "dev.api.[APPNAME].[REDACTED].tech"
* issuer: C=US; O=Let's Encrypt; CN=R3
* SSL certificate verify ok.
* using HTTP/2
* h2 [:method: GET]
* h2 [:scheme: https]
* h2 [:authority: dev.api.[APPNAME].[REDACTED].tech]
* h2 [:path: /]
* h2 [user-agent: curl/8.1.2]
* h2 [accept: */*]
* Using Stream ID: 1 (easy handle 0x148809600)
> GET / HTTP/2
> Host: dev.api.[APPNAME].[REDACTED].tech
> User-Agent: curl/8.1.2
> Accept: */*
>
< HTTP/2 403
< date: Fri, 26 Apr 2024 14:53:16 GMT
< content-type: text/html
< content-length: 146
< strict-transport-security: max-age=31536000; includeSubDomains
< access-control-allow-origin: *
< access-control-allow-credentials: true
< access-control-allow-methods: GET, PUT, POST, DELETE, PATCH, OPTIONS
< access-control-allow-headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization
< access-control-max-age: 1728000
<
{ [146 bytes data]
100 146 100 146 0 0 664 0 --:--:-- --:--:-- --:--:-- 685
* Connection #0 to host dev.api.[APPNAME].[REDACTED].tech left intact
- Others:
I read similar issues where the solution was that externalTrafficPolicy should be Local, and I'm sure it is. To validate that something is not behaving correctly, I created a copy of one of the services I was having problems with and deployed it under a different subdomain (e.g. dev.company.com -> test.company.com); the IP received on the new domain is correctly the real client IP, so the whitelist works there. I compared both configurations in the generated nginx.conf and they are identical except for the name. Another event that may be relevant: half an hour after the upgrade there was DiskPressure on some nodes and many pods were evicted, including some ingress-nginx replicas. That was resolved by increasing the number of nodes.
How to reproduce this issue: I'm not able to describe how to reproduce it. I only performed the upgrade and dealt with the subsequent node problem; I didn't take any other action on the controller.
Anything else we need to know: I checked that the AWS Network Load Balancer Target Groups have "Preserve client IP addresses" set to On. If you need any specific information, please ask and I will provide it.
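For reference, a minimal way to double-check the two settings discussed above from the command line; the target group ARN below is a placeholder, not one from this cluster:
# Confirm the controller Service keeps the client source IP at the node
kubectl -n nginx get svc ingress-nginx-controller \
  -o jsonpath='{.spec.externalTrafficPolicy}{"\n"}'
# Confirm client IP preservation / proxy protocol v2 on the NLB target group
aws elbv2 describe-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/example/0123456789abcdef \
  --query 'Attributes[?Key==`preserve_client_ip.enabled` || Key==`proxy_protocol_v2.enabled`]'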
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
/remove-kind bug
Please enable proxy-protocol on the NLB as well as in the controller https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol
/kind support
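For illustration, a minimal Helm values sketch of that suggestion; note that the target-group-attributes annotation is specific to the AWS Load Balancer Controller, so whether it applies here depends on how the NLB is provisioned (an assumption, not something verified in this issue):
controller:
  config:
    use-proxy-protocol: "true"
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      # only honoured when the AWS Load Balancer Controller manages the Service
      service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: "proxy_protocol_v2.enabled=true,preserve_client_ip.enabled=true"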
Because of this bug report we decided to test this before upgrading. We are not experiencing this problem so this does indeed seem to be a problem related to your setup and not with the upgrade itself.
Hi, I tried to activate the proxy protocol, but I got errors. More in detail, I get logs of broken headers like this:
2024/05/02 09:32:24 [error] 445#445: *4633986 broken header: "84�x�^��۩" while reading PROXY protocol, client: 172.31.15.204, server: 0.0.0.0:443
I did the following operations (sketched as commands below):
- Activated proxy protocol v2 on the AWS NLB Target Groups for ports 443 and 80
- Changed the ingress-nginx-controller ConfigMap to set the use-proxy-protocol option to true
- Changed the ingress-nginx-controller Service by editing the value of the following annotation:
  service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true,proxy_protocol_v2.enabled=true
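Roughly what those three steps look like as commands; the target group ARN is a placeholder and the annotation is only acted on by the AWS Load Balancer Controller:
# 1. Enable proxy protocol v2 on the NLB target group (repeat per port/target group)
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/example-443/0123456789abcdef \
  --attributes Key=proxy_protocol_v2.enabled,Value=true
# 2. Tell the controller to expect the PROXY protocol header
kubectl -n nginx patch configmap ingress-nginx-controller \
  --type merge -p '{"data":{"use-proxy-protocol":"true"}}'
# 3. Annotate the controller Service
kubectl -n nginx annotate svc ingress-nginx-controller --overwrite \
  service.beta.kubernetes.io/aws-load-balancer-target-group-attributes=preserve_client_ip.enabled=true,proxy_protocol_v2.enabled=true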
Anyway, during these days, while I was checking other similar issues, I changed the ingress-nginx-controller Service by adding more annotations. Here's the full list of annotations present on the ingress-nginx-controller Service.
annotations:
meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: nginx
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz
service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: '80'
service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
service.beta.kubernetes.io/aws-load-balancer-type: nlb
None of these annotations actually changed the real behaviour of the controller.
@longwuyuan The docs you linked talk about proxy protocol on the AWS ELB, which means the Classic Load Balancer and not the Network Load Balancer. On the Classic version the linked AWS docs only cover proxy protocol v1, while on the Network version only v2 is available. Moreover, it seems the broken header is affecting other people too, as in issue #9643 from last year. If you have any suggestion on how to resolve this, I'm available to test it.
@rouke-broersma Regarding the upgrade, I don't think my problem is strictly related to the ingress-nginx version. I upgraded the controller in another cluster and everything went fine.
There is one issue about the proxy-protocol-v2 and they had the same problem (which they solved AFAIK). Searching for the issue number now
Check if the info here helps in any way https://github.com/kubernetes/ingress-nginx/issues/10982
I am currently having this same issue but on Azure. Adding the nginx.ingress.kubernetes.io/whitelist-source-range: "74.234.138.x/32"
annotation basically makes the service inaccessible both internally (from pods that have access within the same namespace) and externally (from 74.234.138.x, over the internet). Removing the annotation restores access to the service. It's totally strange to me.
While I was checking on this:
Check if the info here helps in any way #10982
I made some modifications and redeployed ingress-nginx with externalTrafficPolicy: Cluster and proxy protocol enabled both on the controller and on the Load Balancer. I also changed the health check port as suggested in that issue. Anyway, that wasn't working.
Then I reverted the configuration to the previous one, which at least served non-whitelisted traffic. However, the situation got worse: the services where the IP was wrong were no longer serving traffic at all. Connections to these services were being closed, the Chrome browser showed the ERR_CONNECTION_CLOSED error, and there was no trace of those requests in the ingress-nginx logs.
Since this was causing real downtime on the systems, I opted to completely remove ingress-nginx, which led to the removal of the Network Load Balancer on AWS. After reinstalling ingress-nginx, the new Load Balancer was created and everything started working again, including the whitelist annotation.
Something I noticed was that the DNS records on Route53 were pointing at the NLB, but they were Alias records of the type used for Classic or Application Load Balancers. I corrected those records too; they may have been created by an old version of external-dns with an old ingress-nginx. Anyway, I have no proof that this affected the traffic (which worked until the first update, as mentioned above).
I suspect that something was wrong with that particular NLB instance. Still, if the same problem is happening on Azure, could the cause be some internal (mis)configuration of ingress-nginx, or something between ingress-nginx and the Load Balancer?
This is stale, but we won't close it automatically, just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out on #ingress-nginx-dev on Kubernetes Slack.
Having the same problems (Azure).
It works as expected with the old nginx setup and doesn't work with new versions.
@bodyabos please post the kubectl describe output of all the related K8S resources, like:
- The controller pod
- The controller service
- The controller ConfigMap
- The application pod
- The application service
- The ingress
besides the logs, the curl command as executed, and any other related information.
Getting the real client IP requires proxy-protocol enabled on the controller and on the Azure LB, in addition to another Azure annotation that pertains to the real client IP.
An upgrade may or may not have retained the required config.
What I have: nginx behind a proxy (CF, Fastly - it doesn't matter). What I want to achieve: I want my apps to work with real client IPs, and to block requests from non-proxy IPs on the nginx side.
The command I used to deploy the new nginx:
helm upgrade nginx-ingress ingress-nginx/ingress-nginx `
--install `
--namespace ingress-basic `
--create-namespace `
--set controller.service.externalTrafficPolicy=Local `
--set controller.replicaCount=1 `
--set controller.config.upstream-keepalive-timeout=10 `
--set controller.config.use-forwarded-headers=true `
--set controller.config.use-gzip=true `
--set controller.config.keep-alive=10 `
--set controller.config.keep-alive-requests=1000 `
--set controller.enableSnippetAnnotations=true `
--set controller.allowSnippetAnnotations=true
On the old deployment with similar settings my whitelist-source-range works as expected - it blocks requests from non-proxy IPs.
On the new one I would have to whitelist the clients' IPs instead.
That is all the info I can provide at the moment, but I think it should be enough.
Without knowing whether the IP address in the first column of the controller log messages is the real client IP of the client outside the cluster, and without checking whether that IP address is within the range set in the whitelist annotation, it's hard to comment on the cause of your problem.
Also, setting externalTrafficPolicy to Local changes the default load balancing behaviour. Hence the suggested config is to use proxy-protocol.
Since you think the info you provided is enough, please wait for comments from experts who can analyze the info you have provided so far.
Without knowing if the ipaddress in the first column of the controller log messages
In both setups IP addresses in the first column are real clients' IPs.
is the real-client-ip address of the client outside the cluster
yes
without checking if that ipaddress is within the range of the value of the whitelist annotation
no, it is not in the range. that is the problem.
My setup: Client 1.1.1.1 -> Proxy 2.2.2.2 -> Nginx 3.3.3.3
My current setup involves a client (1.1.1.1) accessing a proxy (2.2.2.2), which in turn communicates with Nginx (3.3.3.3). I've restricted access to the proxy so only the client IP (1.1.1.1) is allowed. On the Nginx side, I aim to whitelist only the proxy's IP (2.2.2.2).
In the updated Nginx version, requests from the client appear with their IP address (1.1.1.1), not the proxy's (2.2.2.2). This results in Nginx rejecting the request with a 403 error since it's not in the whitelist. The previous Nginx version correctly identified the proxy's IP, ensuring proper access without additional whitelisting requirements.
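As a sketch of that intent without proxy protocol (an assumption about the setup, not a verified fix): keep the forwarded-header handling off so the whitelist is evaluated against the actual TCP peer (the proxy), and let the application read the real client IP from whatever vendor-specific client-IP header the CDN injects, since the controller passes unmanaged request headers through to the upstream.
# Controller ConfigMap (or controller.config in the Helm values)
data:
  use-forwarded-headers: "false"
  enable-real-ip: "false"
# Ingress annotation: allow only the proxy's egress range at the nginx layer
# (2.2.2.2/32 is the placeholder from the example above)
metadata:
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "2.2.2.2/32"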
Hence the suggested config is to use proxy-protocol.
use-proxy-protocol: "true"
gives me:
2024/07/04 06:07:08 [error] 101#101: *102505 broken header: "��=ؕM����~+A-��fA�����2E����6b�0�,�(�$��" while reading PROXY protocol, client: 167.82.17.53, server: 0.0.0.0:443
2024/07/04 06:07:08 [error] 101#101: *102512 broken header: "��*��Yw�ѻI�3�*��t 6�|�">��$�*�7b�0�,�(�$��" while reading PROXY protocol, client: 167.82.17.62, server: 0.0.0.0:443
client: 167.82.17.62
- it is the proxy address in that case
Sounds like you actually want to achieve the opposite of what this issue is about. You should disable any client real ip and proxy protocol config if you want to allow list the proxy ip instead of the client ip.
I want my app behind Nginx to know only the real client IPs and Nginx to know about the proxy IPs.
So enable proxy protocol on nginx and on your proxy. Your proxy must support proxy protocol for what you want.
@bodyabos can you please confirm something very very particularly specific.
From your example, where you mention your "proxy with ipaddress 2.2.2.2", is that so called "proxy" the same as this LB in the original issue description:
service/ingress-nginx-controller LoadBalancer 10.100.137.168 ab72a06b6832d4cafabdce88e91a7c26-20b8b11700d071b8.elb.eu-west-1.amazonaws.com 80:31356/TCP,443:31159/TCP 35d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
or is that not this AWS LoadBalancer?
No, by proxy I mean a third-party CDN.
Second question is about nginx.
When you use the words "Nginx 3.3.3.3", does this refer to the ingress-nginx controller or does this NOT refer to the ingress-nginx controller?
The ingress-nginx controller with a public IP (Azure LB) - 3.3.3.3.
If you want the CDN IP address as the SRC IP address, then talk to the provider of the CDN. This project does not test with CDNs.
If you want the real client IP address as the SRC IP address, then enable proxy-protocol in the ConfigMap of the ingress-nginx controller and also enable proxy-protocol in the configuration of the AWS LoadBalancer.
Since the behavior changed after an upgrade, there may be a change in the features or a change in the CDN. What the ingress-nginx controller is doing is expected behavior, as per your configuration.
I understand you need help configuring your networking to make the CDN IP address the SRC IP address. But that task is not done by the code of the ingress-nginx controller, so I am not sure what you are expecting from the project. You may get better help if you discuss this in the Kubernetes Slack; there are more experts and users of similar networking there, and very few people comment on GitHub issues.
@Kavuti If you want the real client IP address as the SRC IP address, then enable proxy-protocol in the ConfigMap of the ingress-nginx controller and also enable proxy-protocol in the configuration of the AWS LoadBalancer.
There is not much else within the code of the ingress-nginx controller related to getting the real client IP address.
There are many other posts related to the whitelisting topic, but neither the details of the original issue description nor the details of the messages from other posts point to a problem in the ingress-nginx controller code. So open issues like this are not tracking any real problem; they are more of a support-related discussion. Hence I would like to close this issue if no problem can be shown here that can be reproduced using a kind cluster or a minikube cluster.
If you discover a problem that can be reproduced on a kind cluster or a minikube cluster, please post the step-by-step procedure for that. Then reopen this issue.
/close
@longwuyuan: Closing this issue.