Load Balancer Provider
/kind feature
One of the long-standing issues in CAPV is the lack of a default/standard load balancer for vSphere environments. Many options exist (VMC ELB, F5, NSX, IPVS, client-side), but none of them applies to all environments.
We, therefore, need a mechanism to support arbitrary load balancer implementations.
The first question that needs to be answered is where this belongs: as a high-level provider similar to the bootstrap provider, or as an implementation detail in CAPV?
Note: This refers to the load balancer for control plane endpoints, not to Services of type LoadBalancer.
https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/pull/491
/assign @timothysc @akutz @yastij
Potential implications for: https://github.com/kubernetes-sigs/cluster-api/issues/1197
@moshloop this aligns with my original ideas around how we should rework the "Cluster Infrastructure" when we first started discussing v1alpha2 changes. This extends beyond just vSphere and also affects bare metal and even OpenStack environments.
My thought was that we should split the existing monolithic cluster infrastructure into 3 separate components:
- Load Balancer provider
- Network provider
- Firewall provider
What we are lacking is a CAEP/design doc around how we could implement this on top of v1alpha2.
/kind design
If there is enough interest, this functionality should be spec'd in a CAEP.
I will try to create a CAEP for this. /assign
@detiber - If there's enough traction on this then I'm happy to help. One question that comes to mind: if we want cluster infrastructure "composability", we'd need some new APIs to support it. Would that be for v1alpha3?
@yastij it would definitely be post v1alpha2, whether it would be included in v1alpha3 would need to still be determined based on community planning.
cc @akutz
After reading through the spec several times, I don't see a reason to keep this open in the main CAPI repo, as a POC can be done independently. Once the POC is complete, and if folks think it is generally useful, we can revisit.
/reopen Reopening to continue the discussion. Discussed in the 10/23 meeting.
@ncdc: Reopened this issue.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/lifecycle frozen
I think that this is similar to what the new ingress API is addressing with the Gateway API, but I may be wrong :sweat_smile:
https://kubernetes-sigs.github.io/service-apis/
Maybe kinda sorta? The Gateway API has a GatewayClass (similar in idea to StorageClass, one per infra/gateway provider), and then there are individual Gateways/Routes/Listeners that (if I read it correctly) are pretty generic.
What is proposed here is a generic MachineLoadBalancer type that knows how to select Cluster API Machines, with a pointer (ObjectReference) to a specific implementation (HAProxy, AWS ELB, etc). I'm not sure the two things (Gateway, Machine LB) intersect much other than they both broadly are about load balancing.
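The MachineLoadBalancer shape described above can be sketched in Go. This is a hypothetical sketch based only on the description in this thread (a Machine selector plus an ObjectReference to an implementation); `LabelSelector` and `ObjectReference` are minimal local stand-ins for the metav1/corev1 types so the example compiles on its own:

```go
package main

import "fmt"

// LabelSelector is a minimal local stand-in for metav1.LabelSelector.
type LabelSelector struct {
	MatchLabels map[string]string
}

// ObjectReference is a minimal local stand-in for corev1.ObjectReference.
type ObjectReference struct {
	APIVersion string
	Kind       string
	Name       string
}

// MachineLoadBalancerSpec is a hypothetical sketch of the generic type
// discussed above: it selects Cluster API Machines and delegates the
// actual load balancing to a provider-specific implementation.
type MachineLoadBalancerSpec struct {
	// Selector picks the Machines (e.g. control plane nodes) to back the LB.
	Selector LabelSelector
	// ImplementationRef points at the concrete backend
	// (HAProxy, AWS ELB, etc.).
	ImplementationRef ObjectReference
}

func main() {
	mlb := MachineLoadBalancerSpec{
		Selector: LabelSelector{MatchLabels: map[string]string{
			"cluster.x-k8s.io/control-plane": "",
		}},
		ImplementationRef: ObjectReference{
			APIVersion: "infrastructure.cluster.x-k8s.io/v1alpha3",
			Kind:       "HAProxyLoadBalancer",
			Name:       "my-cluster-cp",
		},
	}
	fmt.Println(mlb.ImplementationRef.Kind)
}
```

The indirection through `ImplementationRef` is what keeps the core type generic: the same MachineLoadBalancer object works whether the backing implementation is HAProxy, an AWS ELB, or anything else that honors the contract.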
@moshloop Now that we have experimental packages support in Cluster API, we can probably include these types/controllers in this repository.
/milestone v0.4.0
I'm interested in contributing towards this for https://github.com/kubernetes-sigs/cluster-api/milestone/16, should we work on scheduling a kick off meeting for others who may be interested in working on this as well?
/assign
@detiber - Awesome! Can you setup a kickoff?
Count me in!
Created a doodle with some times for next week: https://doodle.com/poll/mct22fq43wga8qzp
Based on responses and trying to accommodate the most folks, I've chosen Monday 10/26 at 9:30 ET. I will drop a link to the zoom in the #cluster-api channel on Kubernetes Slack prior to the meeting. If anyone would like a calendar invite, please reach out to me on Slack (jdetiber) and I'll be more than happy to forward the invite your way.
Specifically the Service API v2 is not intended to provide lifecycle management of infrastructure resources for providing a load balancer platform's control plane. In other words:
- The Machine Load Balancer proposal could be thought of as a mechanism for deploying the infrastructure required for a load balancer platform's control plane
- The Service API v2 (`Gateway` and `GatewayClass`) is for leveraging the control plane to configure/deploy data path components such as virtual services
As I recall, the MLB does also provide coverage for some of the data path configuration, but I'd recommend scaling that back and leveraging the Service APIs for that aspect. Thinking of it like Cluster API, the:
- Service APIs are the equivalent of Core CAPI, and...
- the MLB becomes the equivalent of CAPI's infrastructure providers
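To make the proposed division of labor concrete: on the infrastructure side, an MLB controller would mainly resolve which Machines back the load balancer, leaving data-path configuration (routes, listeners) to something like the Service APIs. A minimal sketch of that selection step, with all type and function names being illustrative stand-ins rather than real Cluster API code:

```go
package main

import "fmt"

// Machine is a minimal stand-in for a Cluster API Machine: a name,
// a label set, and an address to use as a load balancer backend.
type Machine struct {
	Name    string
	Labels  map[string]string
	Address string
}

// selectBackends resolves a label selector to the set of backend
// addresses, the control-plane-infrastructure half of the split
// described above. Data-path configuration is out of scope here.
func selectBackends(machines []Machine, matchLabels map[string]string) []string {
	var backends []string
	for _, m := range machines {
		matches := true
		for k, v := range matchLabels {
			if m.Labels[k] != v {
				matches = false
				break
			}
		}
		if matches {
			backends = append(backends, m.Address)
		}
	}
	return backends
}

func main() {
	machines := []Machine{
		{Name: "cp-0", Labels: map[string]string{"role": "control-plane"}, Address: "10.0.0.10"},
		{Name: "worker-0", Labels: map[string]string{"role": "worker"}, Address: "10.0.0.20"},
	}
	// Only the control plane machine is selected as an LB backend.
	fmt.Println(selectBackends(machines, map[string]string{"role": "control-plane"}))
}
```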
Anyway, that's my two cents. :) I think it would be very interesting to get folks like @robscott and @jpeach involved with MLB. I believe there's an opportunity to evolve Service APIs much like Cluster API, and provide a mechanism for the infrastructure LCM as well as the control plane and data path components.
Thanks for looping me in @akutz! MLB looks like a very interesting proposal. I'd love to set aside some time to see if we can try to make these APIs work well together. You're completely right that the Service APIs we've been working on have not been designed with a LB control plane in mind, but there's likely some significant overlap in the kinds of config that would be needed.
Cool! Although I'm no longer involved with it, I created a very early implementation of MLB in CAPV. I imagine @moshloop or @yastij could show it to you. Or I can too if you want to ping me.
One thing @yastij and I had discussed quite a bit was something tantamount to an IPAM Provider framework as well, and I think that would couple nicely with the aforementioned concepts.
Working doc for the proposal is here: https://docs.google.com/document/d/1wJrtd3hgVrUnZsdHDXQLXmZE3cbXVB5KChqmNusBGpE/edit
We are planning on having another meeting to sync on the proposal next Monday (November 2nd) at 9:30 US Eastern time.
I do think it would definitely be interesting to see how we could leverage the Service API v2 work and I know that @moshloop has some thoughts around breaking apart the existing Machine Load Balancer (as implemented in the vsphere provider) into 2 separate components that he plans on presenting in more detail next week. It might be interesting to see how much that aligns/conflicts with the Service API v2 thoughts above.
With regard to the idea of an IPAM provider: while I agree it makes sense to investigate further, I would like to avoid tackling it as part of the Load Balancer Provider proposal and keep it as a separate concern for now. There is already going to be quite a bit of complexity, nuance, and potential for disagreement even with things scoped to the Load Balancer Provider, and I'd like to avoid adding contentious topics that invite distraction and tangential bikeshedding.
Hi folks, is the proposal ready to be reviewed by a larger group? Waiting for your input before engaging with the document.