aws-load-balancer-controller
Usage of --aws-vpc-tags with --aws-vpc-tag-key is unclear
This was introduced by @jeswinkoshyninan in https://github.com/kubernetes-sigs/aws-load-balancer-controller/pull/3656
It's not clear to me why --aws-vpc-tags allows passing a map with an arbitrary number of items if we only ever use a single key-value pair (the one whose key is specified in --aws-vpc-tag-key) as the tag filter for finding a VPC.
https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/8a0147826cc45be2f824dcdbf630ff95ff85408f/pkg/aws/cloud.go#L141-L143
https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/8a0147826cc45be2f824dcdbf630ff95ff85408f/pkg/aws/cloud.go#L184-L192
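For illustration, the behaviour described above amounts to roughly the following (a paraphrased sketch assuming aws-sdk-go-v2 types, not the actual contents of cloud.go; the package and function names are illustrative):

```go
package vpcresolver

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	ec2types "github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

// vpcFilterFromTagKey mirrors the current behaviour as described: only the
// single map entry selected by --aws-vpc-tag-key becomes a DescribeVpcs
// filter; any other entries in the --aws-vpc-tags map are ignored.
func vpcFilterFromTagKey(tags map[string]string, tagKey string) []ec2types.Filter {
	return []ec2types.Filter{{
		Name:   aws.String("tag:" + tagKey),
		Values: []string{tags[tagKey]},
	}}
}
```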
Shouldn't it rather just convert the map from --aws-vpc-tags into a list of tag filters, one for each item? This would remove the need for --aws-vpc-tag-key altogether.
Ref: https://github.com/kubernetes-sigs/aws-load-balancer-controller/pull/3656#discussion_r1799215219
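For what it's worth, the suggested change might look roughly like this (a sketch assuming aws-sdk-go-v2; function names are illustrative, not the controller's actual code):

```go
package vpcresolver

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	ec2types "github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

// tagFilters turns every entry of --aws-vpc-tags into its own tag filter,
// so a VPC matches only if it carries all of the given tags. This removes
// the need for a separate --aws-vpc-tag-key flag.
func tagFilters(tags map[string]string) []ec2types.Filter {
	filters := make([]ec2types.Filter, 0, len(tags))
	for key, value := range tags {
		filters = append(filters, ec2types.Filter{
			Name:   aws.String("tag:" + key),
			Values: []string{value},
		})
	}
	return filters
}

// vpcIDFromTags resolves the VPC ID by tags alone and errors unless the
// tags match exactly one VPC.
func vpcIDFromTags(ctx context.Context, client *ec2.Client, tags map[string]string) (string, error) {
	out, err := client.DescribeVpcs(ctx, &ec2.DescribeVpcsInput{Filters: tagFilters(tags)})
	if err != nil {
		return "", err
	}
	if len(out.Vpcs) != 1 {
		return "", fmt.Errorf("expected exactly one VPC matching the given tags, got %d", len(out.Vpcs))
	}
	return aws.ToString(out.Vpcs[0].VpcId), nil
}
```

With this, --aws-vpc-tags=VPCName=myvpc,EnvName=myenv would simply require the VPC to carry both tags; no single key needs to be singled out.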
@jeswinkoshyninan Maybe you can help shed more light here?
@pinkavaj okay sure
@alfredkrohmer sorry for the late reply, I was away on vacation. Makes sense, I will create a PR to update this behaviour based on your suggestion. But just to give some context: this feature doesn't change the existing behaviour of fetching the VPC ID from the metadata service. It was added as an optional way to fetch the VPC ID explicitly from a given AWS tag whose value is the name of the VPC; by default the tag key is 'Name' itself, which is defined here.
So in the default case you don't need to pass --aws-vpc-tags or --aws-vpc-tag-key at all. But users should have a way to override the default tag key with a custom one, and in that case they can use --aws-vpc-tag-key. In most cases this is not required. Hope this helps?
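For illustration, the default/override interplay described here could be expressed as follows (a hypothetical sketch; the constant and function names are not from the controller):

```go
package vpcresolver

// defaultVPCTagKey is the tag key used when --aws-vpc-tag-key is not set,
// per the behaviour described above.
const defaultVPCTagKey = "Name"

// resolveTagKey returns the user-supplied --aws-vpc-tag-key if given,
// falling back to the default "Name" key otherwise.
func resolveTagKey(flagValue string) string {
	if flagValue != "" {
		return flagValue
	}
	return defaultVPCTagKey
}
```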
> But just to give some context: this feature doesn't change the existing behaviour of fetching the VPC ID from the metadata service.
That's clear.
> But users should have a way to override the default tag key with a custom one, and in that case they can use --aws-vpc-tag-key.
This still doesn't explain why you need two parameters for this. It looks like --aws-vpc-tag-key is entirely redundant. You could just pass --aws-vpc-tags=<tag key>=<tag value> (even with multiple entries) and pass this map forward as filters to the DescribeVpcs API call.
@alfredkrohmer I went into it again and wanted to confirm a few more things. If --aws-vpc-tags has a single entry, it is straightforward to use its key as the tag key to identify the VPC. But in the case of multiple tag entries, we need an identifier to know which tag key holds the VPC name. Assuming the tag key is always "Name" is not something we can expect. Thoughts?
For example, with --aws-vpc-tags=VPCName=myvpc,EnvName=myenv, which one can be used as the identifier? --aws-vpc-tag-key is basically there to overcome this problem by letting the user define it.
What do you need a name/identifier for? I thought the feature is used to find the VPC ID, which you need for API calls.
Please refer to the code: here we are trying to fetch the VPC ID from the given tags. More details about the problem statement are in the PR description. This is useful for cases where access to the AWS metadata service is blocked and the VPC ID is not known at deploy time; options to infer the VPC ID were limited before.
I have refactored the feature so that it no longer uses the --aws-vpc-tag-key flag; please review and merge the changes.
This should be fixed by now. @alfredkrohmer, can you confirm?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.