ALBs failing to get created with cross-account VPC
Describe the bug
I have an EKS cluster running in the private subnets of VPC-EKS. The public subnet is shared from VPC-Public (using TGW). The subnets are tagged per AWS requirements. I want to create a private and a public ALB: the private ALB in VPC-EKS, and the public ALB in the public subnet coming from VPC-Public. I tried creating the public ALB first, but the controller was not able to locate the public subnet (even though the tag was in place), so I had to hard-code the VPC ID of VPC-Public in Terraform:
```hcl
enable_aws_load_balancer_controller = true
set_values = [
  {
    name  = "vpcId"
    value = module.vpc.vpc_id
  },
]
```
With that, it was able to create the public ALB after I also changed the pod IP range to 100.64.0.0/10 (the VPC-EKS secondary range).
Now, when I try to create the private ALB, I have to hard-code the subnet IDs and the security group of VPC-EKS in the Ingress to force it to be created in VPC-EKS.
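For reference, the internal Ingress looks roughly like this. This is a sketch assuming the game-2048 sample app from the error log below; the subnet and security-group IDs are placeholders, not the actual resources:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-2048-internal
  namespace: game-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
    # Hard-coded to force the ALB into VPC-EKS (placeholder IDs)
    alb.ingress.kubernetes.io/subnets: subnet-0aaaaaaaaaaaaaaaa, subnet-0bbbbbbbbbbbbbbbb
    alb.ingress.kubernetes.io/security-groups: sg-0cccccccccccccccc
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-2048
                port:
                  number: 80
```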
But it errors with:

```json
{"level":"error","ts":"2023-10-11T16:22:15Z","msg":"Reconciler error","controller":"targetGroupBinding","controllerGroup":"elbv2.k8s.aws","controllerKind":"TargetGroupBinding","TargetGroupBinding":{"name":"k8s-game2048-service2-87c6054a28","namespace":"game-2048"},"namespace":"game-2048","name":"k8s-game2048-service2-e7c6054a28","reconcileID":"xxxxxx.....","error":"InvalidGroup.NotFound: You have specified two resources that belong to different networks.\n\tstatus code: 400, request id: xxxxx..."}
```
I think the controller is confused: the private ALB is in VPC-EKS, but the target group is being created in VPC-Public, so it is not able to bind them. I did not find any annotation to force the target group into VPC-EKS.
Steps to reproduce
- Create 2 VPCs: VPC-Public and VPC-EKS
- Deploy an EKS cluster in a private subnet of VPC-EKS
- Add a secondary CIDR for VPC-EKS from 100.64.0.0/10, so pods can pick IP addresses using the CNI plugin
- Deploy the ALB controller and deploy Ingresses for external and internal ALB creation
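For the last step, a minimal sketch of the annotation that distinguishes the two Ingresses (everything else can be identical):

```yaml
# internet-facing ALB: should be placed in the shared public subnets of VPC-Public
alb.ingress.kubernetes.io/scheme: internet-facing
---
# internal ALB: should be placed in the private subnets of VPC-EKS
alb.ingress.kubernetes.io/scheme: internal
```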
Expected outcome: two ALBs created, one private and one public
Environment
- AWS Load Balancer controller version: 2.2.6/latest
- Kubernetes version : AWS EKS 1.28
- Using EKS (yes/no), if so version? Yes, 1.28
Additional Context:
/assign @shraddhabang
@oliviassss: GitHub didn't allow me to assign the following users: shraddhabang.
Note that only kubernetes-sigs members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".