cluster-api-provider-openstack
[Feature] Support for anti-affinity/affinity rules for the created machines
/kind feature
OpenStack supports defining anti-affinity/affinity rules for VMs. This feature would let the user specify affinity/anti-affinity grouping for the VMs created by CAPO.
Use case: the user creates 3 Machine objects and wants all 3 VMs to run on different hosts to improve resiliency against host failures. This can easily be realized by creating an anti-affinity rule for the 3 VMs.
just for the record, it's possible to kinda do this by using `serverGroupID` -- however, it would be nice to make server groups a natively managed feature of CAPO (see the sketch below).
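For reference, a minimal sketch of that workaround, assuming the server group is created out of band (for example with `openstack server group create`) and its ID is then passed to the machine template via `serverGroupID`. The API version and the surrounding field values are illustrative, not taken from this issue:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6  # adjust to the CAPO API version in use
kind: OpenStackMachineTemplate
metadata:
  name: workers
spec:
  template:
    spec:
      flavor: m1.medium       # illustrative values
      image: ubuntu-2204
      # ID of a pre-created Nova server group, e.g. from:
      #   openstack server group create --policy soft-anti-affinity k8s-workers
      serverGroupID: "11111111-2222-3333-4444-555555555555"
```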
Some progress on this was made in https://github.com/kubernetes-sigs/cluster-api-provider-openstack/issues/118 but this was never finished.
> just for the record, it's possible to kinda do this by using `serverGroupID` -- however, it would be nice to make server groups a natively managed feature by CAPO.
maybe first allow setting this, and then make CAPO support it natively (e.g. create the server group on the fly)?
I think we don't lose anything by creating server groups with soft anti-affinity by default, while allowing the user to change the affinity policy if needed.
I think there might be multiple scenarios where server groups would make sense. I agree that soft anti-affinity would be a good default.
Generally, I assume one group for the control plane and one for all workers would be sufficient for most clusters; however, I could see a few scenarios where a cluster could use a separate server group per machine deployment.
I assume adding a `serverGroupName` field would be a good idea: if the group exists, just use it; otherwise create a new group with that name. Somehow recording whether the group was created by CAPO would also be nice, so we can clean up all resources created by CAPO when removing a cluster (see the sketch below).
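As a purely hypothetical sketch of that `serverGroupName` idea (this field does not exist in CAPO today; its name, placement, and semantics here just illustrate the proposal above):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6  # illustrative only
kind: OpenStackMachineTemplate
metadata:
  name: control-plane
spec:
  template:
    spec:
      flavor: m1.large        # illustrative values
      image: ubuntu-2204
      # Hypothetical field: CAPO would reuse an existing server group with this name,
      # or create one (e.g. with a soft-anti-affinity policy), record that it is
      # CAPO-managed, and delete it again when the cluster is removed.
      serverGroupName: my-cluster-control-plane
```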
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I would very much like to see this in CAPO.
Currently we are looking into the CAPI Runtime SDK to automate some of the manual steps we have when creating a cluster. Most of them we can automate through it, but `ServerGroupID` is a bit weird: we can use lifecycle hooks to create the server groups, but we don't know the ID beforehand. Mutating webhooks have to be very fast, so using them to create the server groups is not a good option.
I would like something like:
- Be able to define `ServerGroupName`; CAPO does not create it, but I can use webhooks to create the groups without the above problems. This would be much easier to implement for CAPO, but would only be useful for people who use the Runtime SDK.
- Have something like `ServerGroup` where you can define `policy` and `rules`, and CAPO creates the server group. This would be much easier for users, as CAPO would also take care of creating the server groups etc. But I think it will be tricky to get right for cases where a user wants to use the same server group across the control plane and node groups, or even across different clusters in the same OpenStack project.
- Have a separate controller and CRD `OpenStackServerGroup` and reference it from `OpenStackMachine`. I think this would be very nice for users, as CAPO would create the server groups, and I think it would remove the problems above. But it is much more work. It is also aligned with the idea of adding more controllers, as mentioned in #1286. A hypothetical sketch of this option follows below.
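Purely as an illustration of that third option, a hypothetical `OpenStackServerGroup` CRD plus a reference to it from the machine template; none of these kinds or fields exist in CAPO today, and the group/version and field names are made up for this sketch:

```yaml
# Hypothetical resource reconciled by a dedicated controller (not part of CAPO today).
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: OpenStackServerGroup
metadata:
  name: worker-anti-affinity
spec:
  # Nova server group policy: affinity, anti-affinity, soft-affinity or soft-anti-affinity
  policy: soft-anti-affinity
status:
  # Filled in by the hypothetical controller once the Nova server group exists.
  serverGroupID: "11111111-2222-3333-4444-555555555555"
---
# Hypothetical reference from the machine template to the managed server group.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
  name: workers
spec:
  template:
    spec:
      flavor: m1.medium
      image: ubuntu-2204
      serverGroupRef:
        name: worker-anti-affinity
```

Making the server group its own object would give it an independent lifecycle, which is what would allow sharing one group across the control plane and several machine deployments.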
Is it the same feature as requested in https://github.com/kubernetes-sigs/cluster-api-provider-openstack/issues/1256? If yes, I'll close this one to avoid duplication.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.