
LoadBalancer controller: nodes listing with externalTrafficPolicy == "local"

Open CharlieR-o-o-t opened this issue 3 years ago • 3 comments

We use externalTrafficPolicy "local" for our service and want to set up our load balancer with members taken from the service endpoints, but all cluster nodes are passed as the argument to EnsureLoadBalancer.

We are able to perform a health check using the service's healthCheckNodePort, but keeping all cluster nodes as NLB members looks like a huge overhead.

Proposed solution: when externalTrafficPolicy == "local", pass only the nodes that host service endpoints as the argument to EnsureLoadBalancer, and call EnsureLoadBalancer (trigger an event) on each service endpoints update (pod scaling, migration, etc.).
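
For reference, a minimal sketch of what the proposal would mean against the current cloud-provider LoadBalancer interface. The EnsureLoadBalancer signature quoted in the comment is the real one from k8s.io/cloud-provider; filterNodesWithLocalEndpoints is a hypothetical helper (not part of the interface) showing how the node list could be narrowed to nodes that host a ready endpoint:

```go
package cloudprovider

import (
	v1 "k8s.io/api/core/v1"
	discoveryv1 "k8s.io/api/discovery/v1"
)

// EnsureLoadBalancer today receives the full list of candidate cluster nodes:
//
//	EnsureLoadBalancer(ctx context.Context, clusterName string,
//	    service *v1.Service, nodes []*v1.Node) (*v1.LoadBalancerStatus, error)
//
// The proposal is, for externalTrafficPolicy "local", to pre-filter that node
// list down to nodes that actually host a ready endpoint of the service.
// filterNodesWithLocalEndpoints is a hypothetical helper illustrating the
// filtering step; it is not part of the cloud-provider interface.
func filterNodesWithLocalEndpoints(nodes []*v1.Node, slices []*discoveryv1.EndpointSlice) []*v1.Node {
	// Collect the names of nodes that host at least one ready endpoint.
	nodesWithEndpoints := map[string]bool{}
	for _, s := range slices {
		for _, ep := range s.Endpoints {
			ready := ep.Conditions.Ready == nil || *ep.Conditions.Ready
			if ep.NodeName != nil && ready {
				nodesWithEndpoints[*ep.NodeName] = true
			}
		}
	}
	// Keep only those nodes from the original candidate list.
	filtered := make([]*v1.Node, 0, len(nodes))
	for _, n := range nodes {
		if nodesWithEndpoints[n.Name] {
			filtered = append(filtered, n)
		}
	}
	return filtered
}
```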

CharlieR-o-o-t avatar Apr 20 '22 15:04 CharlieR-o-o-t

We are able to perform a health check using the service's healthCheckNodePort, but keeping all cluster nodes as NLB members looks like a huge overhead.

This is expected behavior -- as you mentioned, the healthCheckNodePort ensures that only nodes with a local endpoint are actually used by the LB.

When externalTrafficPolicy == "local", pass only the nodes that host service endpoints as the argument to EnsureLoadBalancer, and call EnsureLoadBalancer (trigger an event) on each service endpoints update (pod scaling, migration, etc.).

My gut feeling is that this would be too costly. Services with many endpoints would churn a lot during a rolling update of their pods. Adding and removing backend nodes for an LB would require many calls to the cloud provider, and the cloud provider may not be able to add or remove backends as quickly as kube-proxy reacts. We can, however, fairly safely assume that LBs respond quickly to health-check failures, since they are designed for exactly this type of failure.

andrewsykim avatar Apr 20 '22 17:04 andrewsykim

This is expected behavior -- as you mentioned, the healthCheckNodePort ensures that only nodes with a local endpoint are actually used by the LB.

Yes, but the load balancer will show those members as unhealthy, because a node with no 'localEndpoints' on it returns a 50x error code from the health check.
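
To illustrate: the LB's health check against the healthCheckNodePort is essentially an HTTP probe per node, and kube-proxy answers it with a non-2xx status on nodes that have no local endpoint for the service, so those members stay in the pool but are marked unhealthy. A minimal sketch of such a probe (the root path and the timeout are illustrative assumptions, not what any particular cloud provider configures):

```go
package healthprobe

import (
	"fmt"
	"net/http"
	"time"
)

// probeHealthCheckNodePort sketches what an LB health check against the
// service's healthCheckNodePort amounts to: kube-proxy serves the check on
// that node port and reports success only on nodes that currently have a
// ready local endpoint for the service. Nodes without local endpoints are
// therefore marked unhealthy by the LB rather than removed from its member
// list. The probe path and 2-second timeout here are illustrative assumptions.
func probeHealthCheckNodePort(nodeIP string, healthCheckNodePort int32) (healthy bool, err error) {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:%d/", nodeIP, healthCheckNodePort))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	// 200 means this node has local endpoints; a 50x means it does not.
	return resp.StatusCode == http.StatusOK, nil
}
```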

My gut feeling is that this would be too costly. Services with many endpoints would churn a lot during a rolling update of their pods. Adding and removing backend nodes for an LB would require many calls to the cloud provider, and the cloud provider may not be able to add or remove backends as quickly as kube-proxy reacts. We can, however, fairly safely assume that LBs respond quickly to health-check failures, since they are designed for exactly this type of failure.

Thank you, I agree.

CharlieR-o-o-t avatar Apr 21 '22 09:04 CharlieR-o-o-t

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 20 '22 10:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 19 '22 11:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Sep 18 '22 12:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 18 '22 12:09 k8s-ci-robot

We are able to perform a health check using the service's healthCheckNodePort, but keeping all cluster nodes as NLB members looks like a huge overhead.

This is expected behavior -- as you mentioned, the healthCheckNodePort ensures that only nodes with a local endpoint are actually used by the LB.

When externalTrafficPolicy == "local", pass only the nodes that host service endpoints as the argument to EnsureLoadBalancer, and call EnsureLoadBalancer (trigger an event) on each service endpoints update (pod scaling, migration, etc.).

My gut feeling is that this would be too costly. Services with many endpoints would churn a lot during a rolling update of their pods. Adding and removing backend nodes for an LB would require many calls to the cloud provider, and the cloud provider may not be able to add or remove backends as quickly as kube-proxy reacts. We can, however, fairly safely assume that LBs respond quickly to health-check failures, since they are designed for exactly this type of failure.

I think watching endpoints is necessary. If the cluster has thousands of nodes, every LoadBalancer Service causes the LB to register thousands of backends, one per node, while only a few of them actually serve traffic (the service may be backed by only one or two pods). And in some other cases, such as a layer-3 network, the LB can route directly to pod IPs. I agree that watching every endpoint would be too costly, though. In our case, we add a label to the LB service's Endpoints and only watch for changes to the labeled subset.
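
A minimal sketch of that labeled-subset watch using a client-go shared informer; the label key lb.example.com/watched and the 30-second resync period are illustrative assumptions, not the actual labels used:

```go
package lbwatch

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchLabeledEndpoints wires up an Endpoints informer that only lists and
// watches objects carrying the given label, so a controller can resync its
// LB members on pod scaling or migration without watching every Endpoints
// object in the cluster. The label selector and resync period are
// illustrative assumptions.
func watchLabeledEndpoints(client kubernetes.Interface, stopCh <-chan struct{}, onChange func(key string)) {
	selector := labels.Set{"lb.example.com/watched": "true"}.String()
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 30*time.Second,
		informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
			// Restrict the list/watch to the labeled subset of Endpoints.
			opts.LabelSelector = selector
		}),
	)
	informer := factory.Core().V1().Endpoints().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
				onChange(key)
			}
		},
		UpdateFunc: func(_, newObj interface{}) {
			if key, err := cache.MetaNamespaceKeyFunc(newObj); err == nil {
				onChange(key)
			}
		},
		DeleteFunc: func(obj interface{}) {
			if key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj); err == nil {
				onChange(key)
			}
		},
	})
	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
}
```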

AllenXu93 avatar Apr 07 '24 02:04 AllenXu93