cluster-api-provider-openstack
API Load Balancing with Keepalived/HAProxy
/kind feature
Describe the solution you'd like
Support the deployment of OpenShift clusters which use keepalived and HAProxy.
By default, OpenShift clusters expect there to be an internal API port on the cluster network which is not attached to any server. This port has a floating IP on the external network.
All control plane machines have a port on the cluster network with an additional allowed address pair, so that they can receive traffic addressed to the internal API port's IP. This allows a keepalived process running on each control plane machine to 'take' the internal API IP as necessary.
We need to be able to specify a CAPO load balancer configuration which supports this.
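For illustration, the setup described above can be sketched at the Neutron level with openstacksdk. This is not a proposed CAPO API; the cloud name, network/subnet/port IDs and the VIP address below are placeholder assumptions.

```python
# Illustration only: the Neutron-level objects described above, created with
# openstacksdk. All names, IDs and addresses below are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")  # cloud entry from clouds.yaml (assumed)

CLUSTER_NET_ID = "<cluster-network-uuid>"
CLUSTER_SUBNET_ID = "<cluster-subnet-uuid>"
EXTERNAL_NET_ID = "<external-network-uuid>"
API_VIP = "10.0.0.5"  # internal API address claimed by keepalived

# 1. Internal API port on the cluster network, never attached to any server.
api_port = conn.network.create_port(
    network_id=CLUSTER_NET_ID,
    name="cluster-internal-api",
    fixed_ips=[{"subnet_id": CLUSTER_SUBNET_ID, "ip_address": API_VIP}],
)

# 2. Floating IP on the external network, associated with that port.
conn.network.create_ip(
    floating_network_id=EXTERNAL_NET_ID,
    port_id=api_port.id,
)

# 3. Each control plane machine's port gets an allowed address pair for the
#    VIP, so keepalived on any control plane node can take over the address.
control_plane_port_ids = ["<control-plane-port-uuid>"]  # one per machine
for port_id in control_plane_port_ids:
    conn.network.update_port(
        port_id,
        allowed_address_pairs=[{"ip_address": API_VIP}],
    )
```

In CAPO terms, supporting this would roughly mean creating the first two objects and keeping the allowed address pairs on the control plane ports in sync as machines are added or removed.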
This looks very interesting! Since we previously discussed this kind of API exposure solution in another issue, I just wanted to mention that we have the exact same use case in the sylva project, where we use metallb or kube-vip to expose the cluster IP.
To follow up on your comment in this thread: it is also possible to expose the cluster IP with metallb. The only issue is that it is not easy to install with the kubeadm control plane provider (kubeadm tries to reach the API endpoint, but metallb needs the API to be available before it can be set up, so there is a chicken-and-egg problem). It works perfectly with kube-vip, which is designed to run as a static pod. On the other hand, metallb also works perfectly with the rke2 control plane provider, which can deploy charts from static manifests.
For now we have been adding the allowed address pair to the control-plane machine templates ourselves, but it would be great if these external load-balancing options were officially supported by this provider. One point to mention is that it is also necessary to ensure that the virtual IP is not allocated by Neutron to another port. For that purpose we currently create a port in Neutron to reserve this address (it remains down and unbound) using a sidecar operator; it would be great if the CAPO controller could take care of that point too.
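To make the reservation point concrete, here is a minimal sketch of what such a reservation looks like with openstacksdk, roughly what a sidecar operator would do. This is not current CAPO behaviour; the cloud name, IDs and VIP address are placeholder assumptions.

```python
# Illustration only: reserving the virtual IP in Neutron so IPAM cannot hand
# it out to another port. The port is created admin-down and is never attached
# to a server, so it stays unbound. IDs and addresses are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

CLUSTER_NET_ID = "<cluster-network-uuid>"
CLUSTER_SUBNET_ID = "<cluster-subnet-uuid>"
VIP = "10.0.0.5"

reservation = conn.network.create_port(
    network_id=CLUSTER_NET_ID,
    name="cluster-vip-reservation",
    fixed_ips=[{"subnet_id": CLUSTER_SUBNET_ID, "ip_address": VIP}],
    is_admin_state_up=False,  # keep the reservation port down
)
print(f"Reserved {VIP} on port {reservation.id}")
```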
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.