aws-load-balancer-controller
Ability to get EC2 instance ID differently
Is your feature request related to a problem? This is best described in https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/3485 (which I have already closed), but in summary: the EC2 instances hosting the Kubernetes cluster might not have Node.Spec.ProviderID populated with the EC2 instance ID that the AWS LB Controller expects. As a result, the LB Controller fails to populate the Target Group.
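For reference, when the AWS cloud provider populates this field it normally has the form aws:///<availability-zone>/<instance-id>, which is where the controller picks up the instance ID. A quick way to check it on an affected cluster (assuming kubectl access) is:

```sh
# Show each node's providerID; an empty second column is the situation described here.
kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDERID:.spec.providerID
```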
Describe the solution you'd like
If the instance ID is not present in Node.Spec.ProviderID, it may be possible to determine it in a number of other ways. One way would be the following (a code sketch of the same idea follows the pseudocode):
for each node:
    get the internal IP address of the node
    query all EC2 instances with some tag
    for each instance:
        if the instance internal IP matches the internal IP of the node:
            add the instance ID to the target group
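A minimal sketch of that matching logic, assuming the AWS SDK for Go v2 and client-go types are available; the tag key and value are illustrative placeholders rather than existing controller flags, and DescribeInstances pagination is omitted for brevity:

```go
// Package instanceresolver is a sketch only: it resolves EC2 instance IDs by
// matching each node's InternalIP against the private IPs of tagged instances.
package instanceresolver

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	ec2types "github.com/aws/aws-sdk-go-v2/service/ec2/types"
	corev1 "k8s.io/api/core/v1"
)

// resolveInstanceIDs returns a map of node name -> EC2 instance ID for nodes
// whose InternalIP matches the private IP of an instance carrying the tag.
func resolveInstanceIDs(ctx context.Context, ec2Client *ec2.Client, nodes []corev1.Node, tagKey, tagValue string) (map[string]string, error) {
	out, err := ec2Client.DescribeInstances(ctx, &ec2.DescribeInstancesInput{
		Filters: []ec2types.Filter{
			{Name: aws.String("tag:" + tagKey), Values: []string{tagValue}},
		},
	})
	if err != nil {
		return nil, err
	}

	// Index instances by primary private IP for a cheap lookup per node.
	instanceByIP := map[string]string{}
	for _, reservation := range out.Reservations {
		for _, instance := range reservation.Instances {
			if instance.PrivateIpAddress != nil && instance.InstanceId != nil {
				instanceByIP[*instance.PrivateIpAddress] = *instance.InstanceId
			}
		}
	}

	// Match each node's InternalIP against the indexed instances.
	ids := map[string]string{}
	for _, node := range nodes {
		for _, addr := range node.Status.Addresses {
			if addr.Type == corev1.NodeInternalIP {
				if id, ok := instanceByIP[addr.Address]; ok {
					ids[node.Name] = id
				}
			}
		}
	}
	return ids, nil
}
```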
Another way would be to use a defined node label, e.g.:
service.beta.kubernetes.io/aws-load-balancer-internal-ip: n.n.n.n
Then perform the same logic described above except select the EC2 instance with an internal IP address matching the value of the node label.
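If this label-based variant were adopted, something outside the controller (the provisioner, or an operator) would need to set the label on each node, for example (node name and IP are placeholders):

```sh
# Hypothetical: record the node's internal IP in the label proposed above.
kubectl label node my-node service.beta.kubernetes.io/aws-load-balancer-internal-ip=10.0.12.34
```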
Describe alternatives you've considered
I could add a step to our RKE2 provisioner that patches Node.Spec.ProviderID with the EC2 instance ID.
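For context, that provisioner step would amount to something like the following, assuming the usual AWS providerID convention aws:///<availability-zone>/<instance-id>; the node name, availability zone and instance ID are placeholders, and providerID can only be set while it is still empty:

```sh
# Hypothetical patch; the Kubernetes API rejects changes to providerID once it has been set.
kubectl patch node my-node -p '{"spec":{"providerID":"aws:///us-east-1a/i-0123456789abcdef0"}}'
```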
Summary
I would be happy to submit a PR, but before beginning work on this I would like to reach alignment on the exact approach. At this time I think the following might be simplest and clearest for the person configuring the controller:
- Require the EC2 instances to be tagged with a specific, hard-coded tag key. This enables the controller to filter the instances efficiently, e.g. --filters "Name=tag:aws-load-balancer-cluster-name,Values=my-cluster"
- Annotate the Service to match. E.g.:
apiVersion: v1
kind: Service
metadata:
  name: foo
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-cluster-name: my-cluster
spec:
  ...
- When the controller's reconcile event fires in response to a Service create/update, it gets the internal IP address from each Node
- For each Node internal IP address, find the EC2 instance whose private IP address matches it, to obtain the instance ID
- Use those instance IDs to populate the Target Group attached to the NLB Listener
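Tying those steps together, a hedged continuation of the earlier sketch (same hypothetical package, plus the standard fmt import): it reads the proposed annotation off the Service and feeds it to resolveInstanceIDs as the value of the hard-coded tag key.

```go
// clusterNameAnnotation is the annotation key proposed above; it is not an existing
// controller annotation.
const clusterNameAnnotation = "service.beta.kubernetes.io/aws-load-balancer-cluster-name"

// instanceIDsForService resolves the instance IDs to register for a Service by using
// the Service annotation value as the value of the hard-coded EC2 tag key.
func instanceIDsForService(ctx context.Context, ec2Client *ec2.Client, svc *corev1.Service, nodes []corev1.Node) (map[string]string, error) {
	clusterName, ok := svc.Annotations[clusterNameAnnotation]
	if !ok {
		return nil, fmt.Errorf("service %s/%s is missing annotation %s", svc.Namespace, svc.Name, clusterNameAnnotation)
	}
	// "aws-load-balancer-cluster-name" is the hard-coded tag key proposed above.
	return resolveInstanceIDs(ctx, ec2Client, nodes, "aws-load-balancer-cluster-name", clusterName)
}
```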
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.