cluster-api-provider-openstack
[Feature Request] Allow provisioning one Load Balancer per Availability Zone for multi-AZ deployments
/kind feature
Describe the solution you'd like
Today, although an OpenStack cluster supports a multi-availability-zone deployment, the APIServer Load Balancer is pinned to a single AZ. If that AZ goes down, the cluster still fails even though the compute is spread across AZs, which defeats the purpose of having multiple availability zones.
Octavia does not support multi-availability-zone Load Balancers, so I instead propose allowing users to create one Load Balancer per AZ. In addition, the Load Balancers should have a setting that registers Control Plane Machines either only with the Load Balancer in their own AZ (keeping traffic AZ-local) or with all the Load Balancers (so the Load Balancer in AZ1 could forward traffic to Control Plane Machines in AZ2 or AZ3).
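To make the proposal concrete, here is a minimal Go sketch of what the spec could look like. All type and field names below are hypothetical illustrations of the idea, not the actual CAPO API:

```go
// Hypothetical sketch of the proposed multi-AZ APIServer Load Balancer spec.
// None of these names are from the real CAPO types; they only illustrate the
// shape of the feature request.
package main

import "fmt"

// RegistrationScope controls which Load Balancers a Control Plane Machine is
// registered with (illustrative only).
type RegistrationScope string

const (
	// RegisterLocalAZ registers each machine only with the LB in its own AZ.
	RegisterLocalAZ RegistrationScope = "LocalAZ"
	// RegisterAllAZs registers each machine with every LB.
	RegisterAllAZs RegistrationScope = "AllAZs"
)

// APIServerLoadBalancerSpec sketches the proposed fields: one Load Balancer
// is created per availability zone, and Subnets must align positionally with
// AvailabilityZones.
type APIServerLoadBalancerSpec struct {
	AvailabilityZones []string
	Subnets           []string
	Registration      RegistrationScope
}

func main() {
	spec := APIServerLoadBalancerSpec{
		AvailabilityZones: []string{"az1", "az2", "az3"},
		Subnets:           []string{"subnet-1", "subnet-2", "subnet-3"},
		Registration:      RegisterLocalAZ,
	}
	// One Load Balancer would be provisioned per listed AZ.
	fmt.Println(len(spec.AvailabilityZones), spec.Registration)
}
```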
In the case where a single AZ is given, the logic does not change from today. Where multiple AZs are given, we assume that the order in which the AZs are listed matches the order in which the Subnets are listed in the APIServerLoadBalancer, and we create a one-to-one mapping of Load Balancer to Subnet.
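The positional-matching assumption above could be sketched like this (function and variable names are illustrative, not the actual CAPO implementation):

```go
// Hypothetical sketch of pairing AZs with subnets by list order, as described
// in the proposal. Not the actual CAPO code.
package main

import "fmt"

// mapAZsToSubnets pairs each availability zone with the subnet at the same
// index. Because the mapping relies purely on list order, mismatched lengths
// are rejected rather than guessed at.
func mapAZsToSubnets(azs, subnets []string) (map[string]string, error) {
	if len(azs) != len(subnets) {
		return nil, fmt.Errorf("got %d availability zones but %d subnets; the lists must align", len(azs), len(subnets))
	}
	m := make(map[string]string, len(azs))
	for i, az := range azs {
		m[az] = subnets[i]
	}
	return m, nil
}

func main() {
	m, err := mapAZsToSubnets(
		[]string{"az1", "az2"},
		[]string{"subnet-a", "subnet-b"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(m["az1"], m["az2"])
}
```

With a single AZ the loop runs once and the behavior collapses to today's single-LB case.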
Anything else you would like to add:
I have a draft version of this working and am happy to contribute it and get feedback.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Proposal in https://github.com/kubernetes-sigs/cluster-api-provider-openstack/pull/2660
/lifecycle stale