cloud-provider-azure
IP address allocation from service tags
What would you like to be added:
When creating a new service, all of its IP addresses must be allocated from a designated service tag.
[Moved from https://github.com/Azure/aks-engine/issues/4392 because AKS Engine is deprecated and this is a cloud-provider-level task.]
Why is this needed:
We need to be able to tell customers that traffic to our service and/or traffic from our service will always come from a service tag (set of IP addresses) such that they can set up network rules to limit or allow traffic to our services. We'd like to constrain requests for IP addresses from within a resource group (or subscription, if needed) to only come from that resource (or subscription's) IP address pool/service tag.
Requires API from network side.
See https://github.com/Azure/aks-engine/issues/4392#issuecomment-829781415:
Per my understanding, service tags are used for NSGs rather than for allocating IPs from a range. We need an API from the network side to allow such allocations.
The challenge is that you have to pre-allocate IPs and get them assigned to a service tag. But then something needs to manage which of those IPs are in use for your service. (And you need to manage this on a per-region basis, as the IPs are assigned regionally, for obvious reasons.)
Anyway, the fact that this is currently left to each customer to manage manually is annoying. With Kubernetes, you can deploy a new service into a cluster and the cloud controller will allocate a new public IP address for you, or you can provide the service with an IP address "manually" (managed outside of Kubernetes deployments), as in the sketch below.
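For concreteness, a minimal sketch of that "manual" path, assuming a static public IP created ahead of time and tracked outside Kubernetes. The service name, resource group, and IP value below are placeholders; the resource-group annotation is an existing cloud-provider-azure annotation, and loadBalancerIP is the long-standing (now deprecated) field for pinning an IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                     # placeholder name
  annotations:
    # Existing cloud-provider-azure annotation: the resource group that
    # holds the pre-created static public IP (value is a placeholder).
    service.beta.kubernetes.io/azure-load-balancer-resource-group: my-ip-rg
spec:
  type: LoadBalancer
  loadBalancerIP: 20.0.0.4         # an IP you allocated and track yourself
  ports:
    - port: 80
      targetPort: 8080
```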
This makes the whole process of deploying a service very particular about how you allocate IP addresses from the set you have assigned to the service tag (and about how you track when an IP is no longer in use, so that another service in this cluster or another can take it).
If there were an easy plugin we could write to handle the "allocate" and "release" of IP addresses, it could hold the logic for managing our IP address pool from the service tag; a hypothetical sketch of such a plugin follows below. Even better would be a feature built generically enough that others could use it, so that every team doesn't have to reinvent the wheel. It could even be a core feature of Azure, rather than just a Kubernetes cloud-controller feature. (Managing IP addresses from service tags is a real pain.)
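To make the idea concrete, here is a hypothetical sketch, in Go (the cloud provider's language), of the shape such an allocate/release extension point could take. None of these names exist in cloud-provider-azure today; they only illustrate the proposal:

```go
// Package ipam sketches a hypothetical "allocate"/"release" plugin for
// service-tag IP pools. Nothing here is an existing cloud-provider-azure API.
package ipam

import "context"

// ServiceTagIPAllocator manages a pre-allocated, per-region pool of public
// IP addresses that have already been registered under a service tag.
type ServiceTagIPAllocator interface {
	// Allocate picks an unused IP from the region's pool and marks it as
	// owned by the given service, so no other service in any cluster can
	// claim it.
	Allocate(ctx context.Context, region, serviceKey string) (ip string, err error)

	// Release returns the IP to the pool once the owning service is deleted.
	Release(ctx context.Context, region, ip string) error
}
```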
@feiskyer Out of curiosity: is this a scenario that can be addressed with a variant of `service.beta.kubernetes.io/azure-pip-ip-tags`?
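For reference, a rough sketch of how that annotation is used today (the service name is a placeholder; `RoutingPreference=Internet` is the commonly documented tag pair). It applies Azure IpTags, as comma-separated key=value pairs, to the public IP the cloud controller provisions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                     # placeholder name
  annotations:
    # Existing annotation: comma-separated tagType=value pairs applied as
    # Azure IpTags to the public IP the cloud controller creates.
    service.beta.kubernetes.io/azure-pip-ip-tags: "RoutingPreference=Internet"
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443
```

That tags whatever IP gets created; on its own it does not draw the IP from a pre-registered service-tag pool, which is the gap this issue describes.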
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle stale
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.