aws-load-balancer-controller
Auto-discovery of VPC
Is your feature request related to a problem? When deploying a cluster with LBC through IaC, it is not easy to inject the VPC ID into the LBC Deployment resource. That is, however, necessary when LBC runs with IRSA and the instance metadata service is unavailable.
Describe the solution you'd like It would be nice if the VPC could be discovered through tags, the same way subnets are.
/kind feature
We have been thinking about new ways to discover the VPC and clusterName. One possible way is to use the labels on the nodes where the controller runs.
There are no labels for this information today, but I could definitely help out with adding at least a VPC label through the Cloud Controller Manager.
cluster name is less of a problem since we do know that up front.
We could possibly get the region from the zone label, and then find the vpcID from the instanceID of the node (spec.providerID). Hopefully we can get the clusterName from the tags on the instance as well.
On the other hand, using tags on the VPC is also a good idea.
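For illustration, here is a minimal sketch of what tag-based VPC discovery could look like, analogous to subnet discovery. This is not the controller's actual code; it uses aws-sdk-go-v2 and assumes the VPC carries a kubernetes.io/cluster/&lt;clusterName&gt; tag, which some installers apply but which is not guaranteed everywhere.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	"github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

// discoverVPCByTag looks up the single VPC tagged for the given cluster.
// The tag key used here is an assumption, not an established controller convention.
func discoverVPCByTag(ctx context.Context, client *ec2.Client, clusterName string) (string, error) {
	out, err := client.DescribeVpcs(ctx, &ec2.DescribeVpcsInput{
		Filters: []types.Filter{
			{
				Name:   aws.String("tag-key"),
				Values: []string{"kubernetes.io/cluster/" + clusterName},
			},
		},
	})
	if err != nil {
		return "", err
	}
	if len(out.Vpcs) != 1 {
		return "", fmt.Errorf("expected exactly one tagged VPC, found %d", len(out.Vpcs))
	}
	return aws.ToString(out.Vpcs[0].VpcId), nil
}

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	vpcID, err := discoverVPCByTag(ctx, ec2.NewFromConfig(cfg), "my-cluster")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("discovered VPC:", vpcID)
}
```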
A typical node looks like this:
labels:
kubernetes.io/arch: amd64
kubernetes.io/hostname: i-05d8cbb21ed7e0eab
kubernetes.io/os: linux
kubernetes.io/role: node
node-role.kubernetes.io/node: ""
node.kubernetes.io/instance-type: t3.large
topology.ebs.csi.aws.com/zone: eu-central-1a
topology.kubernetes.io/region: eu-central-1
topology.kubernetes.io/zone: eu-central-1a
And the provider ID looks like this: aws:///eu-central-1a/i-05d8cbb21ed7e0eab
So the region can be read directly from the region label, but there is nothing that contains the VPC ID. You can do a DescribeInstances call using the providerID (or the hostname), or CCM can add a label for the VPC ID.
I don't think it is a bad idea to have CCM add the VPC ID though. It seems useful.
There is no way to get the cluster name from the node object, but that could also be added perhaps by CCM.
There are some ways for LBC to get the data off of the node object, and some off of DescribeInstances/DescribeVpcs. But I'd avoid a solution that requires LBC to have access both to the node object and to AWS.
@olemarkus, we could get the Node resource, extract the providerID, and invoke the EC2 DescribeInstances API to get the VPC ID. The controller already has permissions to access Node resources from k8s and IAM permissions for the DescribeInstances call, so it would not depend on the CCM adding the appropriate label.
That sounds good to me.
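For reference, a minimal sketch of the approach agreed on above: read spec.providerID from a Node, extract the instance ID, and call DescribeInstances to learn the VPC ID. It uses aws-sdk-go-v2 and the aws:///&lt;zone&gt;/&lt;instance-id&gt; providerID format shown earlier; the function names are illustrative, not the controller's implementation, and the region would in practice come from the topology.kubernetes.io/region label mentioned above.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

// instanceIDFromProviderID parses IDs like "aws:///eu-central-1a/i-05d8cbb21ed7e0eab".
func instanceIDFromProviderID(providerID string) (string, error) {
	trimmed := strings.Trim(strings.TrimPrefix(providerID, "aws://"), "/")
	parts := strings.Split(trimmed, "/")
	id := parts[len(parts)-1]
	if !strings.HasPrefix(id, "i-") {
		return "", fmt.Errorf("unexpected providerID format: %q", providerID)
	}
	return id, nil
}

// vpcIDForNode resolves the VPC of the EC2 instance backing a node.
func vpcIDForNode(ctx context.Context, client *ec2.Client, providerID string) (string, error) {
	instanceID, err := instanceIDFromProviderID(providerID)
	if err != nil {
		return "", err
	}
	out, err := client.DescribeInstances(ctx, &ec2.DescribeInstancesInput{
		InstanceIds: []string{instanceID},
	})
	if err != nil {
		return "", err
	}
	if len(out.Reservations) == 0 || len(out.Reservations[0].Instances) == 0 {
		return "", fmt.Errorf("instance %s not found", instanceID)
	}
	return aws.ToString(out.Reservations[0].Instances[0].VpcId), nil
}

func main() {
	ctx := context.Background()
	// The region could be taken from the node's topology.kubernetes.io/region label;
	// here the default SDK configuration is used.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	vpcID, err := vpcIDForNode(ctx, ec2.NewFromConfig(cfg), "aws:///eu-central-1a/i-05d8cbb21ed7e0eab")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("VPC:", vpcID)
}
```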
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@M00nF1sh @kishorj any update on this?
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.