
[Feature Request] Allow provisioning one Load Balancer per Availability Zone for multi-AZ deployments

Open · sebltm opened this issue

/kind feature

Describe the solution you'd like

Today, although an OpenStack cluster supports a multi-availability-zone deployment, the API server load balancer is pinned to a single AZ. If that AZ goes down, the API server becomes unreachable even though the compute is spread across AZs, which defeats the purpose of using multiple availability zones.

Octavia does not support multi-availability-zone load balancers, so I instead propose allowing users to create one load balancer per AZ. In addition, the load balancers should have a setting that registers the control plane machines either only with the load balancer in their own AZ (keeping traffic AZ-local) or with all of the load balancers (so that the load balancer in AZ1 can also forward traffic to control plane machines in AZ2 or AZ3). A sketch of what such API fields might look like is below.
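To make the proposal concrete, here is a minimal sketch of the API surface. All of the type and field names below (`LoadBalancerMemberScope`, `AvailabilityZones`, `MemberScope`) are hypothetical and do not exist in the current OpenStackCluster API:

```go
// Hypothetical sketch only: these types and fields are not part of the
// current cluster-api-provider-openstack API.
package v1beta1

// LoadBalancerMemberScope selects which load balancers a control plane
// machine is registered with (hypothetical type).
type LoadBalancerMemberScope string

const (
	// MemberScopeLocalAZ registers a machine only with the load balancer
	// in its own availability zone, keeping traffic AZ-local.
	MemberScopeLocalAZ LoadBalancerMemberScope = "LocalAZ"

	// MemberScopeAllAZs registers a machine with every load balancer, so
	// any load balancer can reach control plane machines in any AZ.
	MemberScopeAllAZs LoadBalancerMemberScope = "AllAZs"
)

// APIServerLoadBalancer sketches the proposed additions; existing fields
// are omitted.
type APIServerLoadBalancer struct {
	// AvailabilityZones lists the AZs to create one load balancer in.
	// Order is significant: the i-th AZ is paired with the i-th subnet.
	AvailabilityZones []string `json:"availabilityZones,omitempty"`

	// MemberScope controls how control plane machines are registered.
	MemberScope LoadBalancerMemberScope `json:"memberScope,omitempty"`
}
```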

When a single AZ is given, the logic is unchanged from today. When multiple AZs are given, we assume that the order in which the AZs are listed matches the order in which the subnets are listed in the APIServerLoadBalancer, and we build the load-balancer-to-subnet mapping from that positional pairing, as in the sketch below.
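A minimal sketch of that positional pairing, assuming both lists are plain string slices (the names here are illustrative, not CAPO code):

```go
package main

import "fmt"

// mapLBsToSubnets pairs the i-th availability zone with the i-th subnet,
// yielding one load balancer placement per AZ. It rejects mismatched
// list lengths rather than guessing a mapping.
func mapLBsToSubnets(azs, subnetIDs []string) (map[string]string, error) {
	if len(azs) != len(subnetIDs) {
		return nil, fmt.Errorf("expected one subnet per AZ, got %d AZs and %d subnets",
			len(azs), len(subnetIDs))
	}
	lbSubnet := make(map[string]string, len(azs))
	for i, az := range azs {
		lbSubnet[az] = subnetIDs[i] // one load balancer per AZ, in its own subnet
	}
	return lbSubnet, nil
}

func main() {
	m, err := mapLBsToSubnets(
		[]string{"az1", "az2", "az3"},
		[]string{"subnet-a", "subnet-b", "subnet-c"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(m) // map[az1:subnet-a az2:subnet-b az3:subnet-c]
}
```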

Anything else you would like to add: I have a working draft of this; happy to contribute it and get feedback.

sebltm · Mar 18 '25

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Jun 16 '25

/remove-lifecycle stale

sebltm · Jun 16 '25

Proposal in https://github.com/kubernetes-sigs/cluster-api-provider-openstack/pull/2660

sebltm · Aug 18 '25

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Nov 16 '25