
Failure Domain network configuration should support similar fields of VMTemplate network config

rikatz opened this issue 2 years ago · 6 comments

/kind feature

Describe the solution you'd like

Per the API, there is a set of configurations that can be applied to a network device, not only the port group name.

When working with different failure domains, there may be different IP pools (e.g., in AWS you create subnetworks per VPC region).

Other configurations, like the DNS server, may differ as well.
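For reference, the kind of per-device settings this issue asks to mirror per failure domain are the ones available today on a machine template. A rough sketch (values are made up; field names follow the NetworkDeviceSpec at the time of writing and may differ across API versions):

```yaml
# Illustrative only -- values are invented for the example.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
  name: acme-workers
spec:
  template:
    spec:
      network:
        devices:
          - networkName: VM Network       # port group name
            dhcp4: false
            ipAddrs:
              - 10.10.1.21/24
            gateway4: 10.10.1.1
            nameservers:                  # per-device DNS servers
              - 10.10.1.53
            searchDomains:
              - acme.example
```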

The original failure domain design proposal also suggested that network placement be added to the DeploymentZone (although today it lives on the failure domain, which is fine).

In fact, this API was implemented but never used; there is no reference to it.

This feature request is to move forward with the implementation of per-failure-domain network configuration.

Use cases

Use case 1 - Different nameservers per Zone

Acme Ltd runs its Kubernetes clusters on a vSphere environment. This vSphere instance manages a set of clusters within the same datacenter. For Acme Ltd, the concept of a cluster is an isolated room in the same datacenter, containing its own network resources: different network switches, different DNS servers.

This way, Acme Ltd needs to be able to define different network configurations (DNS server, routes, even NTP servers) per cluster/deployment zone.
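Under the requested feature, use case 1 could be expressed roughly as below. This is a hypothetical sketch: the `networkConfigurations` block and its fields are assumptions modeled on the existing per-device spec, not a merged API.

```yaml
# Hypothetical illustration of per-zone DNS settings on a failure domain.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereFailureDomain
metadata:
  name: room-1
spec:
  region:
    name: dc-1
    type: Datacenter
    tagCategory: k8s-region
  zone:
    name: room-1
    type: ComputeCluster
    tagCategory: k8s-zone
  topology:
    datacenter: dc-1
    computeCluster: cluster-room-1
    networks:
      - VM Network Room 1
    # Assumed field: richer per-device settings instead of bare port group names
    networkConfigurations:
      - networkName: VM Network Room 1
        nameservers:
          - 10.10.1.53          # room 1's own DNS server
        searchDomains:
          - room1.acme.example
```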

Use case 2 - Different network requirements per zone

Acme Ltd is migrating its networking model. They understand that running Kubernetes control plane nodes with dynamic IPv4 addressing via DHCP may be risky for their operations, so they decided to establish IP pools per zone/cluster. As an example, nodes in zone 1 should receive IPs from 10.10.1.0/24, while nodes in zone 2 should receive IPs from 10.10.2.0/24. This migration needs to be nondisruptive: zone 1 may already have an IP pool configured while zone 2 is still operating on DHCP, and at the proper moment a new zone definition will be created with the right IP pool and the nodes in that zone migrated to it.

This way, Acme Ltd needs to be able to define different network models (DHCP vs. IPAM/IP pool) per zone definition within the same datacenter.
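Use case 2 could then mix addressing models across zones. Again a hypothetical sketch: `networkConfigurations` is an assumed field, and `addressesFromPools` follows the shape used by the existing per-device spec with the CAPI IPAM contract.

```yaml
# Hypothetical illustration -- zone 1 on a static pool, zone 2 still on DHCP.
# Fragment of each zone's failure domain topology.
networkConfigurations:
  - networkName: VM Network Zone 1
    dhcp4: false
    addressesFromPools:
      - apiGroup: ipam.cluster.x-k8s.io
        kind: InClusterIPPool
        name: zone-1-pool       # pool serving 10.10.1.0/24
---
networkConfigurations:
  - networkName: VM Network Zone 2
    dhcp4: true                 # unchanged until the migration moment
```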

rikatz — Jun 27 '23 19:06

[image] Credits to @chrischdi for the picture.

rikatz — Jul 05 '23 14:07

A public document to discuss the API evolution is available at https://docs.google.com/document/d/1Mm8CUVP5ydjP1Uc247uoPHHRatqsBXU3C_jYHQEdVWg/edit?usp=sharing

rikatz — Jul 11 '23 17:07

@lubronzhan, can you take a look at the doc above? ^

randomvariable — Aug 03 '23 17:08

/assign @rikatz given open doc & PR

sbueringer — Aug 18 '23 12:08

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot — Jan 26 '24 16:01

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot — Feb 25 '24 17:02

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot — Mar 26 '24 18:03

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot — Mar 26 '24 18:03