aws-load-balancer-controller
Externally Managed Load Balancer does not work
Describe the bug I am trying to configure ALBC as described here: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/use_cases/self_managed_lb/ and here: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.8/guide/targetgroupbinding/spec/, but for some reason it is not working. ALBC associates my service with the Load Balancer, but I don't know why it is deleting the Load Balancer Security Group and creating another Target Group.
Steps to reproduce Configure the TargetGroupBinding with the config below:
```yaml
apiVersion: v1
items:
- apiVersion: elbv2.k8s.aws/v1beta1
  kind: TargetGroupBinding
  metadata:
    creationTimestamp: "2024-10-01T18:28:53Z"
    finalizers:
    - elbv2.k8s.aws/resources
    generation: 1
    labels:
      service.k8s.aws/stack-name: myservicename
      service.k8s.aws/stack-namespace: mynamespace
    name: my-tgb
    namespace: mynamespace
    resourceVersion: "1671495"
    uid: 7333d9e0-8d21-4d2e-be0b-0a448376363a
  spec:
    ipAddressType: ipv4
    networking:
      ingress:
      - from:
        - securityGroup:
            groupID: sg-0a62c68d80b461ed6
        ports:
        - port: 31660
          protocol: TCP
    serviceRef:
      name: myservicename
      port: 443
    targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:xxxxxx:targetgroup/my-tg/f0da90b41db9ae07
    targetType: instance
    vpcID: vpc-xxxxxxxxxxxx
  status:
    observedGeneration: 1
kind: List
metadata:
  resourceVersion: ""
```
Create a Service with this configuration below:
```yaml
service.beta.kubernetes.io/aws-load-balancer-type: external
```
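For context, a minimal sketch of what such a Service might look like — the exact manifest was not posted, so the names, the selector, and the `targetPort` below are assumptions taken from the TargetGroupBinding above:

```sh
# Hypothetical reconstruction of the reproduction step; the exact manifest
# was not posted. Names and ports are taken from the TargetGroupBinding above.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: myservicename
  namespace: mynamespace
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
spec:
  type: LoadBalancer
  selector:
    app: myapp        # hypothetical selector
  ports:
  - port: 443
    targetPort: 443   # assumed; not stated in the report
    nodePort: 31660   # matches the port in the TargetGroupBinding above
    protocol: TCP
EOF
```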
Tag the Network Load Balancer with these tags:

```
elbv2.k8s.aws/cluster = mycluster
service.k8s.aws/resource = LoadBalancer
service.k8s.aws/stack = mynamespace/myservicename
```
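These tags can be applied from the CLI along the lines of the sketch below; the load balancer ARN is a placeholder:

```sh
# Sketch: apply the controller-ownership tags to an existing NLB.
# The ARN is a placeholder; substitute your own load balancer's ARN.
aws elbv2 add-tags \
  --resource-arns arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/my-nlb/0123456789abcdef \
  --tags Key=elbv2.k8s.aws/cluster,Value=mycluster \
         Key=service.k8s.aws/resource,Value=LoadBalancer \
         Key=service.k8s.aws/stack,Value=mynamespace/myservicename
```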
Expected outcome Connect my service to the Load Balancer without deleting the LB Security Group or creating another Target Group.
Environment
- AWS Load Balancer controller version: v2.8.1
- Kubernetes version: v1.30.2
- Using EKS (yes/no), if so version?: yes, 1.28
Additional Context:
/kind bug
I got this error below in the ALBC log.
{"level":"info","ts":"2024-10-01T23:48:16Z","msg":"registered targets","arn":"arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxx:targetgroup/xxxxxxxx-tg/9ead0864d9521e69"} {"level":"error","ts":"2024-10-01T23:48:16Z","msg":"Reconciler error","controller":"service","namespace":"mynamespace","name":"myservicename","reconcileID":"80faa0ff-6d51-4155-8827-ce664684bb4b","error":"unexpected securityGroup with no resourceID: sg-0a62c68d80b461ed6"}
I don't know if I need to add a tag to my security group resource to help the reconcile process identify the SG.
Any clue?
Also, I got this message when describing the Service:
```
Warning  FailedDeployModel  37s (x15 over 2m4s)  service  Failed deploy model due to unexpected securityGroup with no resourceID: sg-0a62c68d80b461ed6
```
Hello. It seems you are mixing two solutions. From the document you posted: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/use_cases/self_managed_lb. You are responsible for creating and managing the NLB. From the posted yaml:
```yaml
finalizers:
- elbv2.k8s.aws/resources
generation: 1
labels:
  service.k8s.aws/stack-name: myservicename
  service.k8s.aws/stack-namespace: mynamespace
name: my-tgb
```
It looks like the LBC is managing this target group binding. Please re-follow this guide https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/use_cases/self_managed_lb/ and create the NLB via the console or CLI. You can then manually create the needed target group binding.
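A manually created TargetGroupBinding carries none of that controller-generated metadata (no `service.k8s.aws/*` labels or finalizers). As a minimal sketch, reusing the names and IDs already posted in this issue:

```sh
# Sketch of a manually created TargetGroupBinding. The ARN and security
# group ID are the ones from this issue; adjust to your environment.
kubectl apply -f - <<'EOF'
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-tgb
  namespace: mynamespace
spec:
  serviceRef:
    name: myservicename
    port: 443
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:xxxxxx:targetgroup/my-tg/f0da90b41db9ae07
  targetType: instance
  networking:
    ingress:
    - from:
      - securityGroup:
          groupID: sg-0a62c68d80b461ed6
      ports:
      - port: 31660
        protocol: TCP
EOF
```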
/kind question
Hi @zac-nixon, thanks for replying.
I re-configured the whole solution and it is still not working. I'm using a Helm chart instead of a raw Kubernetes manifest; that's the only difference between my config and the tutorial. Is there any annotation I have to add to the Service, like `service.beta.kubernetes.io/aws-load-balancer-type: external`? Or do I just create the Service without any annotation and the ALBC will connect my Load Balancer to the Service?
/kind bug
I think the confusion is that

```yaml
service.beta.kubernetes.io/aws-load-balancer-type: external
```

doesn't refer to an externally managed load balancer. It is referring to the IP address type (external vs internal). https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/#lb-type
The expectation for an externally managed load balancer is that the operator creates the load balancer, listeners, and target group using the console or CLI. Then you can attach that created target group to the cluster using a TargetGroupBinding; see the sketch below.
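A rough sketch of that flow, where every name, subnet, and VPC ID is a placeholder:

```sh
# Sketch: provision the NLB, target group, and listener yourself, then hand
# only the target group to the cluster via a TargetGroupBinding.
# All names/IDs below are placeholders.
LB_ARN=$(aws elbv2 create-load-balancer \
  --name my-nlb --type network --scheme internet-facing \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)

TG_ARN=$(aws elbv2 create-target-group \
  --name my-tg --protocol TCP --port 31660 \
  --vpc-id vpc-cccc3333 --target-type instance \
  --query 'TargetGroups[0].TargetGroupArn' --output text)

aws elbv2 create-listener \
  --load-balancer-arn "$LB_ARN" --protocol TCP --port 443 \
  --default-actions "Type=forward,TargetGroupArn=$TG_ARN"
```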
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.