aws-load-balancer-controller
Allow user to set AvailabilityZone to "all", "null", or "autoDetect" when creating a TargetGroupBinding
Is your feature request related to a problem?
We have several business groups running in different VPCs, and we are now trying to migrate them to a single EKS cluster.
Because our traffic path is very complex, we want to migrate the microservices into EKS while reusing the current ELBs and target groups.
So we need to bind Pods to target groups in multiple VPCs other than the EKS cluster's native VPC.
As we understand it, to bind Pods to a target group outside the EKS native VPC, vpcId must be set to the ID of that VPC.
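For context, a rough sketch of such a binding (the Service name, ARN, and VPC ID are placeholders, and the vpcID field name is as in recent controller versions):

```yaml
# Sketch of a TargetGroupBinding that registers Pod IPs into a target group
# owned by a different VPC than the EKS cluster (all values are placeholders).
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: legacy-tg-binding
  namespace: my-namespace
spec:
  serviceRef:
    name: my-service         # Service whose endpoints should be registered
    port: 80
  targetType: ip              # register Pod IPs directly
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/legacy-tg/0123456789abcdef
  vpcID: vpc-0abc1234def567890   # VPC that owns the target group, not the cluster's native VPC
```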
THE PROBLEMS ARE:
- If vpcId is associated with another VPC, we can no longer bind Pods to target groups in the EKS cluster's native VPC, because the AvailabilityZone argument is set to "all" by the LBC while the Pod IPs are within the native VPC's CIDR.
- vpcId can only be associated with ONE VPC, but we have many VPCs that hold target groups.
Describe the solution you'd like
So, when vpcId is set to the EKS cluster's native VPC ID, or simply omitted, we still need a way to bind Pods to our old target groups. I think we need a feature that lets us set the value of AvailabilityZone ourselves in the TargetGroupBinding manifest, something like the sketch below.
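To be clear, the availabilityZone field below does not exist in the current CRD; this is only a sketch of the kind of knob we are asking for:

```yaml
# Hypothetical: a user-settable availabilityZone on the binding (not part of
# the current TargetGroupBinding spec), so the controller does not hard-code "all".
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: legacy-tg-binding
spec:
  serviceRef:
    name: my-service
    port: 80
  targetType: ip
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/legacy-tg/0123456789abcdef
  vpcID: vpc-0abc1234def567890
  availabilityZone: all       # hypothetical field: "all", "null", or "autoDetect"
```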
Describe alternatives you've considered
Alternatively, we could run multiple LBC instances, as the NGINX ingress controller does with ingress classes: deploy each LBC instance with a different vpcId value and pick the appropriate one in the TargetGroupBinding manifest on demand. A rough sketch of that idea follows.
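Something like the following per-instance Helm values is what we have in mind (value names assumed from the eks/aws-load-balancer-controller chart; as the reply below notes, running several controller instances in one cluster is not supported today):

```yaml
# Hypothetical values.yaml for a second controller release pinned to another VPC
# (value names assumed from the eks/aws-load-balancer-controller Helm chart).
clusterName: my-cluster
vpcId: vpc-0cccc3333dddd4444   # the extra VPC this instance should manage
ingressClass: alb-vpc-b        # separate class so the instances do not claim each other's resources
```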
@wuyudian1, this is not something we can support at the moment. It requires at least the following features:
- Controller support for multiple VPCs
- Possibly running multiple controller instances in the same cluster
We will keep it on the roadmap.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.