
(feature) Shared IPAM IPRanges for both vips and vm ips

Open · lknite opened this issue 1 year ago

Describe the solution you'd like

When managing a workload cluster via cluster-api, automate the assignment of both the VIP and the VM IPs.

On a namespace level:

  • one or more InClusterIPPool resources
  • label applied to InClusterIPPool resources: 'cluster-api-vips: true'
  • label applied to InClusterIPPool resources: 'cluster-api-vm-ips: true'
  • both labels may be applied to an InClusterIPPool resource

On a global level:

  • one or more GlobalInClusterIPPool resources
  • label applied to GlobalInClusterIPPool resources: 'cluster-api-vips: true'
  • label applied to GlobalInClusterIPPool resources: 'cluster-api-vm-ips: true'
  • both labels may be applied to a GlobalInClusterIPPool resource
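
As a rough sketch of what the proposed labels could look like on the pool resources (the InClusterIPPool and GlobalInClusterIPPool kinds come from the in-cluster IPAM provider; the names, address ranges, and other field values below are purely illustrative, not part of the proposal):

```yaml
# Namespaced pool, usable for both VIPs and VM IPs (illustrative values)
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: cluster-pool            # hypothetical name
  namespace: my-clusters        # namespace the workload cluster is deployed into
  labels:
    cluster-api-vips: "true"
    cluster-api-vm-ips: "true"
spec:
  addresses:
    - 10.0.0.10-10.0.0.50       # illustrative range
  prefix: 24
  gateway: 10.0.0.1
---
# Global pool reserved for VIPs only
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: global-vip-pool         # hypothetical name
  labels:
    cluster-api-vips: "true"
spec:
  addresses:
    - 10.0.1.10-10.0.1.20
  prefix: 24
  gateway: 10.0.1.1
```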

Workflow

  • If namespaced InClusterIPPool resources are available in the namespace where the cluster is deployed, use those to allocate the VIP and VM IPs, based on the labels.
  • If namespaced InClusterIPPool resources are available in that namespace but are exhausted, fail with an out-of-vips, out-of-vm-ips, or out-of-vips-and-vm-ips event (a sketch of such an event follows this list).
  • If no namespaced InClusterIPPool resources exist, use GlobalInClusterIPPool resources if available.
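
Purely as an illustration of the exhaustion case, an Event carrying one of the proposed reason strings might look roughly like this (only the reason strings come from the proposal; the object the event is attached to, its name, and its message are assumptions):

```yaml
apiVersion: v1
kind: Event
metadata:
  name: my-cluster.out-of-vips        # hypothetical name
  namespace: my-clusters              # namespace of the workload cluster
type: Warning
reason: out-of-vips                   # or out-of-vm-ips / out-of-vips-and-vm-ips
message: "no free addresses left in InClusterIPPool(s) labeled cluster-api-vips"
involvedObject:                       # assumed to be the Cluster being reconciled
  apiVersion: cluster.x-k8s.io/v1beta1
  kind: Cluster
  name: my-cluster
  namespace: my-clusters
```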

Additional labels

  • Because more than one infrastructure provider may be in use, and it may be desirable for all providers to use the same InClusterIPPools and/or GlobalInClusterIPPools, this could be enabled by applying the label 'cluster-api-allow-all-providers: true'.
  • To fall back to GlobalInClusterIPPool resources when the namespaced InClusterIPPool resources are exhausted, add the label 'cluster-api-allow-overflow-vips: true' and/or 'cluster-api-allow-overflow-vm-ips: true' (see the sketch below).
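
A sketch of these additional labels (the proposal does not pin down exactly which resource carries the overflow labels; this sketch assumes they go on the namespaced pool, and all names and ranges are illustrative):

```yaml
# Namespaced pool allowed to overflow into global pools once exhausted
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: cluster-pool
  namespace: my-clusters
  labels:
    cluster-api-vips: "true"
    cluster-api-vm-ips: "true"
    cluster-api-allow-overflow-vips: "true"
    cluster-api-allow-overflow-vm-ips: "true"
spec:
  addresses:
    - 10.0.0.10-10.0.0.50
  prefix: 24
  gateway: 10.0.0.1
---
# Global pool opted in for use by all providers, not just CAPV
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: shared-global-pool
  labels:
    cluster-api-vips: "true"
    cluster-api-vm-ips: "true"
    cluster-api-allow-all-providers: "true"
spec:
  addresses:
    - 10.0.2.10-10.0.2.200
  prefix: 24
  gateway: 10.0.2.1
```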

Anything else you would like to add:

  • Allocating a VIP should be automated in the same way that allocating a VM IP is automated.
  • This would allow for a VIP_IP_RANGES variable in ~/.cluster-api/clusterctl.conf in addition to NODE_IP_RANGES (see the sketch after this list).
  • Because labels are being used, this new feature would not break any existing implementations.
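
And a sketch of the clusterctl.conf idea (VIP_IP_RANGES is the proposed new variable; NODE_IP_RANGES is the existing variable referenced above; the value format shown here is purely illustrative and would follow whatever format NODE_IP_RANGES already uses):

```yaml
# ~/.cluster-api/clusterctl.conf (illustrative values)
NODE_IP_RANGES: "10.0.0.10-10.0.0.50"   # existing node/VM IP ranges variable
VIP_IP_RANGES: "10.0.1.10-10.0.1.20"    # proposed: ranges to draw control-plane VIPs from
```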

lknite · Oct 22 '24

Note: here is one example implementation, though it uses AVI/AKO/NSX instead of the IPAM provider (I have never tried it myself):

https://github.com/vmware-tanzu/load-balancer-operator-for-kubernetes/

I think this kind of problem qualifies to be a project independent of CAPV. Coupling with the IPAM provider may be a good thing because it would already use standardised resources; however, I think a lot of use cases may need to work with proprietary IPAM systems (e.g. Infoblox), and integrating those directly may be more feasible.

chrischdi · Oct 22 '24

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Jan 20 '25

/remove-lifecycle stale

lknite · Jan 20 '25

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Apr 20 '25

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · May 20 '25

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Jun 24 '25

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot · Jun 24 '25