LoadBalancer controller: nodes listing with externalTrafficPolicy == "local"
We use externalTrafficPolicy "local" for our service and want to set up our load balancer with members taken from the service endpoints, but we get all cluster nodes as the argument to EnsureLoadBalancer.
We are able to perform the health check using the port from the service's healthCheckNodePort, but keeping all cluster nodes as NLB members looks like huge overhead.
Proposed solution: in case externalTrafficPolicy == "local", pass only the nodes that have service endpoints as the argument to EnsureLoadBalancer, and call EnsureLoadBalancer (trigger an event) on each service endpoints update (pod scaling, migration, etc.).
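For context, the hook in question is EnsureLoadBalancer(ctx, clusterName, service, nodes) from the cloudprovider.LoadBalancer interface. A minimal sketch of the proposed filtering, assuming endpoints are matched to nodes via the NodeName field on the Service's Endpoints addresses (the helper name is made up for illustration, not existing code):

```go
// Sketch only: reduce the node list handed to EnsureLoadBalancer to the nodes
// that actually host a ready endpoint of the Service. The helper name and the
// NodeName-based matching are illustrative assumptions.
package lbnodes

import (
	v1 "k8s.io/api/core/v1"
)

// filterNodesWithLocalEndpoints returns only the nodes whose name appears as
// NodeName on a ready address in the Service's Endpoints object. The result
// would then be passed as the nodes argument of EnsureLoadBalancer.
func filterNodesWithLocalEndpoints(nodes []*v1.Node, eps *v1.Endpoints) []*v1.Node {
	withEndpoint := make(map[string]bool)
	for _, subset := range eps.Subsets {
		for _, addr := range subset.Addresses { // ready addresses only
			if addr.NodeName != nil {
				withEndpoint[*addr.NodeName] = true
			}
		}
	}
	filtered := make([]*v1.Node, 0, len(nodes))
	for _, node := range nodes {
		if withEndpoint[node.Name] {
			filtered = append(filtered, node)
		}
	}
	return filtered
}
```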
We are able to perform the health check using the port from the service's healthCheckNodePort, but keeping all cluster nodes as NLB members looks like huge overhead.
This is expected behavior -- as you mentioned, using the healthCheckNodePort ensures that only nodes with the endpoint are used by the LB.
In case externalTrafficPolicy == "local", pass only the nodes that have service endpoints as the argument to EnsureLoadBalancer, and call EnsureLoadBalancer (trigger an event) on each service endpoints update (pod scaling, migration, etc.).
My gut feeling is that this would be too costly. Services that have a lot of endpoints would churn a lot during a rolling update of the pods. Trying to add/remove backend nodes for an LB would make a lot of calls to the cloud provider and it's also possible that the cloud provider is not always able to add/remove backends as quickly as kube-proxy would. But we can make fairly safe assumptions that health check failures from LBs are responded to quickly as they are designed for this type of failure.
This is expected behavior -- as you mentioned, using the healthCheckNodePort ensures that only nodes with the endpoint are used by the LB.
Yes, but the load balancer will be in an unhealthy state, because a node with no localEndpoints on it returns a 50x error code.
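For anyone checking this behavior, a quick probe of the healthCheckNodePort from a client makes the difference visible. This is a rough sketch that assumes kube-proxy answers the check over plain HTTP on /healthz; the node IP and port below are placeholders:

```go
// Rough sketch: probe kube-proxy's service health check on one node.
// Assumes plain HTTP on the /healthz path of healthCheckNodePort; the
// node IP and port values are placeholders.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	nodeIP := "10.0.0.12"        // placeholder node address
	healthCheckNodePort := 30987 // placeholder spec.healthCheckNodePort

	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:%d/healthz", nodeIP, healthCheckNodePort))
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()

	// A node running a local endpoint answers 200; a node without one answers
	// with a 5xx status, which is what the LB reports as "unhealthy".
	fmt.Println("status:", resp.StatusCode)
}
```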
My gut feeling is that this would be too costly. Services that have a lot of endpoints would churn a lot during a rolling update of the pods. Trying to add/remove backend nodes for an LB would make a lot of calls to the cloud provider and it's also possible that the cloud provider is not always able to add/remove backends as quickly as kube-proxy would. But we can make fairly safe assumptions that health check failures from LBs are responded to quickly as they are designed for this type of failure.
Thank you, I agree with it.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
My gut feeling is that this would be too costly. Services that have a lot of endpoints would churn a lot during a rolling update of the pods. Trying to add/remove backend nodes for an LB would make a lot of calls to the cloud provider and it's also possible that the cloud provider is not always able to add/remove backends as quickly as kube-proxy would. But we can make fairly safe assumptions that health check failures from LBs are responded to quickly as they are designed for this type of failure.
I think watching endpoints is necessary.
If the cluster has thousands of nodes, every LoadBalancer service will add thousands of listeners, one per node, but only a few of them actually serve traffic (the service may be backed by only one or two pods). And in some other cases, such as a routed layer-3 network, the LB can target the pod IPs directly.
I agree that watching every endpoint is too costly. In our case, we add a label to the LB service's Endpoints and only watch the labeled Endpoints for changes to their subsets.
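A rough sketch of that label-filtered approach with client-go, assuming a made-up label key (example.com/lb-tracked) and an in-cluster controller; the actual label and resync settings would differ:

```go
// Sketch only: watch just the Endpoints objects carrying a chosen label
// instead of every Endpoints object in the cluster. The label key/value and
// the resync period are assumptions for illustration.
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Only Endpoints carrying the tracking label are listed and watched.
	selector := labels.Set{"example.com/lb-tracked": "true"}.String() // assumed label
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 30*time.Second,
		informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
			opts.LabelSelector = selector
		}),
	)

	informer := factory.Core().V1().Endpoints().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			eps := newObj.(*v1.Endpoints)
			// Here a controller would re-sync the LB members for the matching
			// Service (for example by re-triggering EnsureLoadBalancer).
			fmt.Printf("endpoints %s/%s changed\n", eps.Namespace, eps.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	<-stop
}
```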