cloud-provider-kind
Support both `Proxy` and `VIP` mode load balancing
A big reason for having cloud-provider-kind is to be able to test the kube-proxy end of load balancing, but there is more code that needs to be tested in the `ipMode: VIP` case than in the `ipMode: Proxy` case that cpkind currently uses. So we should support VIP-mode load balancing as well.
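For reference, the distinction is surfaced through the `.status.loadBalancer.ingress[].ipMode` field added in Kubernetes 1.29 (see the blog post linked below). A minimal Go sketch, not cloud-provider-kind's actual code, of how a provider reports one mode or the other (`buildStatus` is a hypothetical helper name):

```go
// Minimal sketch, not cloud-provider-kind's actual implementation:
// a provider reports the mode via the ipMode field on the Service's
// load balancer status (k8s.io/api/core/v1, added in Kubernetes 1.29).
package lbstatus

import v1 "k8s.io/api/core/v1"

// buildStatus is a hypothetical helper a provider might call once the
// load balancer IP is known (e.g. from EnsureLoadBalancer).
func buildStatus(ip string, useVIP bool) *v1.LoadBalancerStatus {
	// Proxy: in-cluster traffic to the LB IP goes out through the load
	// balancer, so kube-proxy's LB-IP handling is largely bypassed.
	mode := v1.LoadBalancerIPModeProxy
	if useVIP {
		// VIP: kube-proxy short-circuits the LB IP itself, which is the
		// code path this issue wants to be able to exercise.
		mode = v1.LoadBalancerIPModeVIP
	}
	return &v1.LoadBalancerStatus{
		Ingress: []v1.LoadBalancerIngress{{
			IP:     ip,
			IPMode: &mode,
		}},
	}
}
```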
(Presumably we'd use an annotation to select which type we wanted? Not sure how this would work in the e2e suite exactly... probably at first we'd have to have `[Feature:CloudProviderKind]` or something.)
It sounds like the tests should be keying on `[Feature: IPMode VIP]` (which wouldn't be kind-specific?) OR the tests are generic to both and we should just run them twice, once with `cloud-provider-kind --ipmode=vip` and once with `cloud-provider-kind --ipmode=proxy`?
(TIL https://kubernetes.io/blog/2023/12/18/kubernetes-1-29-feature-loadbalancer-ip-mode-alpha/)
yeah, tests should try to reflect functionality and features, not implementations
> It sounds like the tests should be keying on `[Feature: IPMode VIP]`
https://github.com/kubernetes/enhancements/pull/4632 talks about trying to figure out how to make LB e2e testing be provider-agnostic. I wanted to avoid having per-subfeature [Feature]s because we'd end up needing a separate subfeature for every single LB test basically :slightly_frowning_face:. (Proposed plan in the KEP is to have the e2e tests retroactively detect whether the LB supported the feature, and skip themselves if not.)
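For illustration, here's a rough sketch of that "detect whether the LB supports it, and skip if not" approach, assuming a ginkgo-style e2e test; `skipUnlessIPMode` is a hypothetical helper, not an existing framework function:

```go
// Hypothetical e2e helper sketching the "detect support, skip if absent"
// idea from the KEP discussion; not an existing framework function.
package e2esketch

import (
	"fmt"

	"github.com/onsi/ginkgo/v2"
	v1 "k8s.io/api/core/v1"
)

// skipUnlessIPMode skips the current spec unless the provisioned load
// balancer reports the requested ipMode in the Service status.
func skipUnlessIPMode(svc *v1.Service, want v1.LoadBalancerIPMode) {
	for _, ing := range svc.Status.LoadBalancer.Ingress {
		if ing.IPMode != nil && *ing.IPMode == want {
			return // the LB supports the mode under test
		}
	}
	ginkgo.Skip(fmt.Sprintf("load balancer does not report ipMode=%s", want))
}
```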
> we should just run them twice, once with `cloud-provider-kind --ipmode=vip` and once with `cloud-provider-kind --ipmode=proxy`?
Yeah, that's probably the right approach.
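If it does end up as a command-line switch, the wiring could look roughly like the sketch below; note that `--ipmode` is only a flag name proposed in this thread, not something cloud-provider-kind necessarily exposes today:

```go
// Sketch of the proposed --ipmode switch; this flag is only suggested in
// this thread and is not necessarily an existing cloud-provider-kind option.
package main

import (
	"flag"
	"log"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Default to "proxy", matching the mode cloud-provider-kind uses today.
	ipMode := flag.String("ipmode", "proxy", `ipMode to report on LoadBalancer Services ("vip" or "proxy")`)
	flag.Parse()

	var mode v1.LoadBalancerIPMode
	switch *ipMode {
	case "vip":
		mode = v1.LoadBalancerIPModeVIP
	case "proxy":
		mode = v1.LoadBalancerIPModeProxy
	default:
		log.Fatalf("unknown --ipmode %q, expected vip or proxy", *ipMode)
	}
	log.Printf("load balancer controller will report ipMode=%s", mode)
	// The rest of the controller would thread `mode` through to the
	// LoadBalancerStatus written back to each Service, as in the earlier sketch.
}
```

The e2e job would then simply run twice, once per mode.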
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/lifecycle frozen
I don't think we want to stop tracking this, but it may be a bit before it's resolved.