Failure Domain network configuration should support fields similar to the VMTemplate network config
/kind feature
Describe the solution you'd like
Per the API, there is a set of configurations that can be applied to a network device, not only the portgroup name.
When working with different failure domains, there may be different IP pools (e.g., in AWS you create subnetworks per VPC region).
Other configurations, such as DNS servers, may differ as well.
The original failure domain design proposal also suggested adding the network placement to the DeploymentZone (although today it lives on the failure domain, which is fine).
In fact, this API was implemented but never used; there is no reference to it.
This feature request is to move forward with the implementation of per-failure-domain network configuration.
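To make the request concrete, here is a rough sketch of what a per-failure-domain network device could look like if it mirrored a subset of the fields already accepted on the VMTemplate network devices. This is an illustration only; the type and field names below are assumptions for discussion, not the current cluster-api-provider-vsphere API.

```go
// Hypothetical sketch only: type and field names are assumptions for
// discussion, not the released cluster-api-provider-vsphere API.
package v1beta1

import (
	corev1 "k8s.io/api/core/v1"
)

// FailureDomainNetworkDevice mirrors a subset of the network device fields
// already accepted on the VSphereMachineTemplate, scoped to one failure
// domain / deployment zone.
type FailureDomainNetworkDevice struct {
	// NetworkName is the vSphere portgroup the device attaches to.
	NetworkName string `json:"networkName"`

	// DHCP4 enables dynamic IPv4 addressing; a zone can leave this off and
	// rely on an IP pool instead.
	DHCP4 bool `json:"dhcp4,omitempty"`

	// Nameservers lets each zone point at its own DNS servers (use case 1).
	Nameservers []string `json:"nameservers,omitempty"`

	// AddressesFromPools references zone-local IPAM pools (use case 2).
	AddressesFromPools []corev1.TypedLocalObjectReference `json:"addressesFromPools,omitempty"`
}
```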
Use cases
Use case 1 - Different nameservers per zone
Acme Ltd runs its Kubernetes clusters on a vSphere environment that manages a set of clusters within the same datacenter. For Acme Ltd, a cluster is an isolated room in the same datacenter containing its own network resources: different network switches, different DNS servers.
Because of this, Acme Ltd needs to be able to define different network configurations (DNS servers, routes, even NTP servers) per cluster/deployment zone.
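Continuing the hypothetical FailureDomainNetworkDevice sketch above (names and addresses are still assumptions), use case 1 would look roughly like two zone definitions that differ only in their nameservers:

```go
// Hypothetical values for use case 1, reusing the sketched
// FailureDomainNetworkDevice type from the comment above.
package v1beta1

var (
	// Zone 1: its own room, its own DNS servers.
	zone1Device = FailureDomainNetworkDevice{
		NetworkName: "zone1-portgroup",
		DHCP4:       true,
		Nameservers: []string{"10.10.1.53", "10.10.1.54"},
	}

	// Zone 2: isolated from zone 1, resolving through different servers.
	zone2Device = FailureDomainNetworkDevice{
		NetworkName: "zone2-portgroup",
		DHCP4:       true,
		Nameservers: []string{"10.10.2.53", "10.10.2.54"},
	}
)
```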
Use case 2 - Different network requirements per zone
Acme Ltd is migrating its networking model. They consider running Kubernetes control plane nodes with dynamic IPv4 addressing via DHCP too risky for their operations, so they decided to establish IP pools per zone/cluster. For example, nodes in zone 1 should receive IPs from 10.10.1.0/24, while nodes in zone 2 should receive IPs from 10.10.2.0/24. This migration needs to be nondisruptive: zone 1 may already have an IP pool configured while zone 2 is still operating on DHCP, and at the proper moment a new zone definition will be created with the right IP pool and the nodes in that zone migrated to it.
Because of this, Acme Ltd needs to be able to define different network models (DHCP vs. IPAM/IPPool) per zone definition within the same datacenter.
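Again using the hypothetical type above, use case 2 would roughly be zone 1 consuming a zone-local IP pool while zone 2 keeps DHCP until its new zone definition is ready. The pool reference shown here is illustrative of a CAPI IPAM provider pool; the exact kind/group and names are assumptions.

```go
// Hypothetical values for use case 2, reusing the sketched
// FailureDomainNetworkDevice type: zone 1 draws addresses from a per-zone
// pool (10.10.1.0/24), zone 2 stays on DHCP during the migration.
package v1beta1

import (
	corev1 "k8s.io/api/core/v1"
)

func strPtr(s string) *string { return &s }

var (
	// Zone 1: DHCP off, addresses come from a zone-local IPAM pool.
	// The pool kind/group below are illustrative.
	zone1Migrated = FailureDomainNetworkDevice{
		NetworkName: "zone1-portgroup",
		DHCP4:       false,
		AddressesFromPools: []corev1.TypedLocalObjectReference{{
			APIGroup: strPtr("ipam.cluster.x-k8s.io"),
			Kind:     "InClusterIPPool",
			Name:     "zone1-pool-10-10-1-0-24",
		}},
	}

	// Zone 2: unchanged for now; it keeps DHCP until its new zone
	// definition (with its own pool) is created.
	zone2Legacy = FailureDomainNetworkDevice{
		NetworkName: "zone2-portgroup",
		DHCP4:       true,
	}
)
```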
Credits to @chrischdi for the picture.
A public document to discuss the API evolution is available at https://docs.google.com/document/d/1Mm8CUVP5ydjP1Uc247uoPHHRatqsBXU3C_jYHQEdVWg/edit?usp=sharing
@lubronzhan can you take a look at the doc above ^
/assign @rikatz given open doc & PR
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.