aws-load-balancer-controller

Allow user to set AvailabilityZone "all" or "null" or "autoDetect" when creating TargetGroupBinding

wuyudian1 opened this issue 3 years ago

Is your feature request related to a problem? We have several business groups running in different VPCs, and we are now trying to migrate them to a single EKS cluster. Because our traffic path is very complex, we want to move the micro-services into EKS while reusing the existing ELBs and target groups. This means we need to bind pods to target groups in multiple VPCs other than the EKS cluster's native VPC. As we understand it, binding pods to a target group outside the EKS native VPC requires setting vpcId to the ID of that VPC. The problems are as follows (a minimal TargetGroupBinding is sketched after this list for reference):

  1. If vpcId is set to another VPC, we can no longer bind pods to target groups in the EKS cluster's native VPC, because the LBC sets the AvailabilityZone argument to "all", while the pod IPs fall within the native VPC's CIDR.
  2. vpcId can only reference ONE VPC, but we have many VPCs holding target groups.
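
For reference, a minimal TargetGroupBinding of the kind we are using looks roughly like the sketch below (the names and the target group ARN are placeholders); the controller registers the Service's pod IPs into the referenced target group, which is where the AvailabilityZone behaviour above comes into play:

```yaml
# Minimal sketch of a TargetGroupBinding; names and ARN are placeholders.
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-service-tgb
  namespace: my-namespace
spec:
  serviceRef:
    name: my-service        # existing Service whose pod IPs should be registered
    port: 80
  targetType: ip
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/0123456789abcdef
```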

Describe the solution you'd like If we set vpcId to the EKS cluster's native VPC ID, or simply omit it, then in order to bind pods to our old target groups we need a new feature: the ability to set the value of AvailabilityZone ourselves in the TargetGroupBinding manifest.
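
To illustrate the request only (the availabilityZone field below is hypothetical and does not exist in the TargetGroupBinding CRD), the manifest could allow something like:

```yaml
# Hypothetical sketch: availabilityZone is NOT a real TargetGroupBinding field;
# it only illustrates the override being requested in this issue.
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: legacy-tgb
spec:
  serviceRef:
    name: my-service
    port: 80
  targetType: ip
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/legacy-tg/abcdef0123456789
  availabilityZone: all     # or "null" / "autoDetect", per the request above
```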

Describe alternatives you've considered Alternatively, we could create multiple LBC instances, much like nginx ingress classes do: deploy each LBC instance with a different vpcId value and select the appropriate one from the TargetGroupBinding manifest on demand.
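
As a rough sketch of that alternative (assuming the Helm chart's clusterName and vpcId values; a per-TargetGroupBinding controller selector does not exist today), two controller releases could each be pinned to a different VPC:

```yaml
# Sketch of Helm values for two separate controller releases, each pinned
# to a different VPC. Assumes the chart exposes clusterName and vpcId values.

# values-vpc-a.yaml
clusterName: my-eks-cluster
vpcId: vpc-0aaaaaaaaaaaaaaaa

# values-vpc-b.yaml
clusterName: my-eks-cluster
vpcId: vpc-0bbbbbbbbbbbbbbbb
```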

wuyudian1 · Jun 15 '22

@wuyudian1, this is not something we can support at the moment. It requires at least the following features:

  • Support in the controller for multiple VPCs
  • Possibly the ability to run multiple controller instances in the same cluster

We will keep it on the roadmap.

kishorj · Aug 11 '22

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Nov 09 '22

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Dec 09 '22

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Jan 08 '23

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Jan 08 '23