aws-load-balancer-controller
AWS NLB - Proxy Protocol v2 NOT enabled on existing NLB created by ingress-nginx
Describe the bug
I have an existing NLB that was created by the ingress-nginx controller. I now have a requirement to enable Proxy Protocol v2 so that backends receive client source IPs. I added the annotation to the ingress-nginx configuration and can see it on the Service, but the NLB itself is unchanged. The aws-load-balancer-controller is running in the cluster. My annotations are listed below.
annotations:
  service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "ssl"
  service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
  service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<Redacted>"
  service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
  nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
  nginx.ingress.kubernetes.io/proxy-send-timeout: "2000"
  domainName: "public.kube.develop.vortexa.com"
  service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: "preserve_client_ip.enabled=true"
  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
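As an aside, this list mixes three kinds of configuration: the service.beta.kubernetes.io/* keys are Service annotations, the nginx.ingress.kubernetes.io/* keys are Ingress annotations, and domainName looks like a Helm chart value. For the proxy protocol question, only the Service-level annotations matter. A minimal sketch of that part, assuming a typical ingress-nginx install (name, namespace, and selector are illustrative):

```yaml
# Sketch (illustrative names): the Service-level annotations belong on the
# ingress-nginx controller's LoadBalancer Service, not on an Ingress resource.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # typical Helm chart name; adjust to your install
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "ssl"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # illustrative selector
  ports:
    - name: https
      port: 443
      targetPort: https
```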
Steps to reproduce
Create an NLB with the ingress-nginx controller, then try to enable Proxy Protocol v2 on it with the annotation above.
Expected outcome
Proxy Protocol v2 is enabled on the existing NLB.
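One caveat the expected outcome glosses over: if Proxy Protocol v2 does get enabled on the NLB, ingress-nginx must also be configured to parse the proxy protocol header, or every connection will fail with broken-header errors. A minimal sketch of that setting, assuming the standard ingress-nginx Helm chart ConfigMap:

```yaml
# Sketch, assuming the standard ingress-nginx controller ConfigMap.
# use-proxy-protocol tells nginx to expect a PPv2 header on incoming
# connections before TLS/HTTP parsing begins.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # adjust to your install
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
```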
Environment
- AWS Load Balancer controller version: 2.7.1
- Kubernetes version: 1.25
- Using EKS (yes/no), if so version? EKS 1.25
/kind bug
I just hit the same issue yesterday (proxy protocol not getting enabled, EKS 1.28). I haven't found a solution yet.
However! I think this is the wrong project to report this bug against.
For one, I don't even have the AWS LB Controller installed on my clusters.
You, OTOH, have it installed, but I don't think you're actually making use of it. The annotation service.beta.kubernetes.io/aws-load-balancer-type: "nlb" means your LB is created by a different controller that's built into Kubernetes itself (https://github.com/kubernetes/cloud-provider-aws). It would be great if one of this project's maintainers could confirm.
It's a bit of a rabbit hole, really, but this article is a good place to start: https://baptistout.net/posts/two-kubernetes-controllers-for-managing-aws-nlb/
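If that analysis is right, one possible path (a sketch, not verified against this cluster) is to hand the Service to the AWS Load Balancer Controller explicitly, which can set Proxy Protocol v2 as a target group attribute. Note the controller's docs say the type annotation is only read at Service creation, so this likely means recreating the Service rather than editing it in place:

```yaml
# Sketch, assuming the AWS Load Balancer Controller (not the in-tree
# cloud provider) should own this NLB. Names are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # illustrative
  namespace: ingress-nginx         # illustrative
  annotations:
    # "external" hands ownership to the AWS Load Balancer Controller;
    # this annotation is only honored when the Service is created.
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    # proxy_protocol_v2.enabled is a documented NLB target group attribute.
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: "proxy_protocol_v2.enabled=true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # illustrative
  ports:
    - name: https
      port: 443
      targetPort: https
```

Bear in mind that recreating the Service provisions a brand-new NLB with a new DNS name, so any records pointing at the old LB hostname would need updating.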
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.