
Auto-discovery of VPC

olemarkus opened this issue 3 years ago • 10 comments

Is your feature request related to a problem? When deploying a cluster with LBC through IaC, it is not that easy to inject the VPC ID into the LBC Deployment resource. That is, however, necessary when LBC runs with IRSA and instance metadata is unavailable.

Describe the solution you'd like It would be nice if the VPC could be discovered through tags, the same way subnets are.

olemarkus avatar Dec 25 '21 18:12 olemarkus
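
For illustration only: a minimal sketch of what tag-based VPC discovery could look like, assuming aws-sdk-go-v2 and a kubernetes.io/cluster/<cluster-name> tag on the VPC analogous to the one used for subnet discovery. The helper name and the owned/shared tag values are assumptions, not the controller's actual implementation.

  // Hypothetical tag-based VPC discovery, mirroring the subnet tag convention.
  package main

  import (
      "context"
      "fmt"

      "github.com/aws/aws-sdk-go-v2/aws"
      "github.com/aws/aws-sdk-go-v2/config"
      "github.com/aws/aws-sdk-go-v2/service/ec2"
      "github.com/aws/aws-sdk-go-v2/service/ec2/types"
  )

  // discoverVPCByClusterTag (hypothetical) returns the ID of the single VPC
  // carrying the kubernetes.io/cluster/<clusterName> tag.
  func discoverVPCByClusterTag(ctx context.Context, client *ec2.Client, clusterName string) (string, error) {
      out, err := client.DescribeVpcs(ctx, &ec2.DescribeVpcsInput{
          Filters: []types.Filter{{
              Name:   aws.String("tag:kubernetes.io/cluster/" + clusterName),
              Values: []string{"owned", "shared"},
          }},
      })
      if err != nil {
          return "", err
      }
      if len(out.Vpcs) != 1 {
          return "", fmt.Errorf("expected exactly one tagged VPC, found %d", len(out.Vpcs))
      }
      return aws.ToString(out.Vpcs[0].VpcId), nil
  }

  func main() {
      ctx := context.Background()
      cfg, err := config.LoadDefaultConfig(ctx)
      if err != nil {
          panic(err)
      }
      vpcID, err := discoverVPCByClusterTag(ctx, ec2.NewFromConfig(cfg), "my-cluster")
      if err != nil {
          panic(err)
      }
      fmt.Println("discovered VPC:", vpcID)
  }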

/kind feature

We have been thinking about new ways to discover the VPC and clusterName. One possible way is to use the labels on the nodes where the controller runs.

M00nF1sh avatar Dec 28 '21 23:12 M00nF1sh

There are no labels for this information today, but I could definitely help out by adding at least a VPC label through the Cloud Controller Manager.

The cluster name is less of a problem since we do know that up front.

olemarkus avatar Dec 29 '21 04:12 olemarkus

We could possibly get the region from the zone label, and then find the vpcID from the node's instanceID (spec.providerID). Hopefully we can get the clusterName from the tags on the instance as well.

On the other hand, using tags on the VPC is also a good idea.

M00nF1sh avatar Dec 31 '21 04:12 M00nF1sh

A typical node looks like this:

  labels:
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: i-05d8cbb21ed7e0eab
    kubernetes.io/os: linux
    kubernetes.io/role: node
    node-role.kubernetes.io/node: ""
    node.kubernetes.io/instance-type: t3.large
    topology.ebs.csi.aws.com/zone: eu-central-1a
    topology.kubernetes.io/region: eu-central-1
    topology.kubernetes.io/zone: eu-central-1a

And the provider ID looks like this: aws:///eu-central-1a/i-05d8cbb21ed7e0eab

So the region can be read directly from the region label, but there is nothing that contains the VPC ID. You can do a DescribeInstances call using the instance ID from the providerID (or the hostname label), or CCM can add a label for the VPC ID.

I don't think it is a bad idea to have CCM add the VPC ID though. It seems useful.

There is no way to get the cluster name from the node object, but that could perhaps also be added by CCM.

There are some ways for LBC to get the data off of the node object, and some off of DescribeInstances/DescribeVpcs. But I'd avoid a solution that requires LBC to have access both to the node object and to AWS.

olemarkus avatar Dec 31 '21 05:12 olemarkus
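
As an aside, a small sketch of the parsing described above, assuming the aws:///<zone>/<instance-id> providerID format and the standard topology.kubernetes.io/region label shown in the node example; the helper names are made up for illustration.

  // Sketch: extract the instance ID from a providerID such as
  // "aws:///eu-central-1a/i-05d8cbb21ed7e0eab" and the region from the
  // well-known topology label on the Node.
  package main

  import (
      "fmt"
      "strings"

      corev1 "k8s.io/api/core/v1"
  )

  // instanceIDFromProviderID returns the last path segment of the providerID.
  func instanceIDFromProviderID(providerID string) (string, error) {
      parts := strings.Split(strings.TrimPrefix(providerID, "aws://"), "/")
      id := parts[len(parts)-1]
      if !strings.HasPrefix(id, "i-") {
          return "", fmt.Errorf("unexpected providerID format: %q", providerID)
      }
      return id, nil
  }

  // regionFromNode reads the standard topology.kubernetes.io/region label.
  func regionFromNode(node *corev1.Node) (string, bool) {
      region, ok := node.Labels["topology.kubernetes.io/region"]
      return region, ok
  }

  func main() {
      node := &corev1.Node{}
      node.Labels = map[string]string{"topology.kubernetes.io/region": "eu-central-1"}
      node.Spec.ProviderID = "aws:///eu-central-1a/i-05d8cbb21ed7e0eab"

      id, _ := instanceIDFromProviderID(node.Spec.ProviderID)
      region, _ := regionFromNode(node)
      fmt.Println("instance:", id, "region:", region)
  }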

@olemarkus, we could get the Node resource, extract the providerID, and invoke the EC2 DescribeInstances API to get the VPC ID. The controller already has permission to access Node resources from k8s and IAM permissions for the DescribeInstances call, so this would not depend on the CCM putting the appropriate label on the node.

kishorj avatar Feb 16 '22 23:02 kishorj
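
A minimal sketch of the flow kishorj describes, again assuming aws-sdk-go-v2; the function name and error handling are illustrative, not the controller's actual code.

  // Sketch: resolve the VPC ID for an instance via the EC2 DescribeInstances API.
  package main

  import (
      "context"
      "fmt"

      "github.com/aws/aws-sdk-go-v2/aws"
      "github.com/aws/aws-sdk-go-v2/config"
      "github.com/aws/aws-sdk-go-v2/service/ec2"
  )

  func vpcIDFromInstance(ctx context.Context, client *ec2.Client, instanceID string) (string, error) {
      out, err := client.DescribeInstances(ctx, &ec2.DescribeInstancesInput{
          InstanceIds: []string{instanceID},
      })
      if err != nil {
          return "", err
      }
      if len(out.Reservations) == 0 || len(out.Reservations[0].Instances) == 0 {
          return "", fmt.Errorf("instance %s not found", instanceID)
      }
      return aws.ToString(out.Reservations[0].Instances[0].VpcId), nil
  }

  func main() {
      ctx := context.Background()
      cfg, err := config.LoadDefaultConfig(ctx)
      if err != nil {
          panic(err)
      }
      // Instance ID as extracted from the node's spec.providerID.
      vpcID, err := vpcIDFromInstance(ctx, ec2.NewFromConfig(cfg), "i-05d8cbb21ed7e0eab")
      if err != nil {
          panic(err)
      }
      fmt.Println("VPC:", vpcID)
  }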

That sounds good to me.

olemarkus avatar Feb 17 '22 18:02 olemarkus

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 18 '22 18:05 k8s-triage-robot

/remove-lifecycle stale

BryanStenson-okta avatar May 18 '22 19:05 BryanStenson-okta

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 16 '22 20:08 k8s-triage-robot

@M00nF1sh @kishorj any update on this?

/remove-lifecycle stale

olemarkus avatar Aug 17 '22 05:08 olemarkus

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Dec 02 '22 17:12 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jan 01 '23 18:01 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Jan 31 '23 18:01 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jan 31 '23 18:01 k8s-ci-robot