
CAPV ControlPlane Failure Domain documentation

Open · dimatha opened this issue 3 years ago • 5 comments

/kind feature

Describe the solution you'd like: As a cluster admin, I'd like to distribute Kubernetes control plane nodes across different hosts within the same vSphere cluster.

Referring to this proposal: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/master/docs/proposal/20201103-failure-domain.md#for-user-story-1, I'm struggling to understand how to configure the VSphereFailureDomain CRD to achieve this goal.

Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]

Environment:

  • Cluster-api-provider-vsphere version: v0.8.1
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

dimatha avatar Nov 18 '21 11:11 dimatha

OK, I had to spend some time to understand this, and here is my experience.

From the proposal:

Region -> ComputeCluster, Zone -> HostGroup
# no tag required
# host groups need to be pre-configured
# CAPV needs permissions to create VM groups and affinity rules
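
For reference, here is roughly what that mapping looks like as a VSphereFailureDomain manifest. This is only a sketch: names such as k8s-region, k8s-zone, dc-1, cluster-1, zone-a-hosts and zone-a-vms are placeholders for your own vCenter objects, and the apiVersion may differ depending on your CAPV release.

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereFailureDomain
metadata:
  name: zone-a
spec:
  region:
    name: cluster-1            # vCenter tag attached to the compute cluster
    type: ComputeCluster
    tagCategory: k8s-region    # tag category used for regions
  zone:
    name: zone-a               # vCenter tag for the zone
    type: HostGroup
    tagCategory: k8s-zone      # tag category used for zones
  topology:
    datacenter: dc-1
    computeCluster: cluster-1
    hosts:
      hostGroupName: zone-a-hosts   # pre-created host group
      vmGroupName: zone-a-vms       # pre-created VM group (with a dummy VM)
    datastore: datastore-1
    networks:
      - vm-network-1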

From my experience:

  • Create vCenter tags and tag categories
  • Create host groups
  • Create VM groups with a dummy VM, since a VM group can't be empty
  • Create VM-to-host rules
  • Create the CAPV resources: VSphereDeploymentZone => VSphereFailureDomain (see the example manifests above and below)
  • Make sure the VSphereDeploymentZone is ready
  • There is no need to reference the VSphereDeploymentZone from the control plane
  • The assigned zone should be visible in the Machine's spec.failureDomain
  • All ready VSphereDeploymentZones should be visible in the Cluster's status.failureDomains
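
To make the zone selectable for placement, each VSphereFailureDomain needs a matching VSphereDeploymentZone. Again just a sketch: vcenter.example.com, rp-zone-a and k8s-vms are placeholders for your own environment.

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereDeploymentZone
metadata:
  name: zone-a
spec:
  server: vcenter.example.com    # should match the vCenter server used by the VSphereCluster
  failureDomain: zone-a          # name of the VSphereFailureDomain above
  controlPlane: true             # allow control plane machines in this zone
  placementConstraint:
    resourcePool: rp-zone-a
    folder: k8s-vms

Once these reconcile, kubectl get vspheredeploymentzones should show them as ready, each control plane Machine should get its zone in spec.failureDomain, and the Cluster should list all ready zones under status.failureDomains.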

dimatha avatar Nov 19 '21 13:11 dimatha

/assign @srm09

gab-satchi avatar Dec 09 '21 18:12 gab-satchi

/kind documentation
/remove-kind feature

srm09 avatar Jan 31 '22 00:01 srm09

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 01 '22 00:05 k8s-triage-robot

/remove-lifecycle stale
/lifecycle frozen

srm09 avatar May 10 '22 02:05 srm09