
TCP for Loadbalancer Monitor causes API server downtime during upgrades

Open MPV opened this issue 2 years ago • 4 comments

/kind bug

What steps did you take and what happened:

  1. Upgrade k8s version in a CAPO target cluster.
  2. When an old control plane node leaves, its API server is only removed from the load balancer pool well after it has stopped:
    • The TCP health monitor starts failing at t=0s, but the member is only removed at t=90s.
    • ...because of how the current implementation configures the monitor (see the sketch below this list):

      After 90 (=3*30) seconds of downtime, API server pool members will be marked as down.

    • ...as per: https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/821a1a2ef25ac615db5fb26379eb0c4b947ad284/pkg/cloud/services/loadbalancer/loadbalancer.go#L361-L369
  3. During that window, API server clients get intermittent failures, as round-robin load balancing still routes some requests to the API server node that is no longer there.
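
For reference, the monitor in the linked code is created roughly like this (a simplified sketch using Gophercloud's Octavia bindings; the name and timeout values here are illustrative, the authoritative values are in the linked loadbalancer.go):

```go
// Simplified sketch of the TCP monitor referenced above. Delay=30 and
// MaxRetries=3 match the 3*30 = 90 s figure; name and Timeout are illustrative.
package loadbalancer

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/monitors"
)

func createTCPMonitor(lbClient *gophercloud.ServiceClient, poolID, name string) (*monitors.Monitor, error) {
	return monitors.Create(lbClient, monitors.CreateOpts{
		Name:       name,
		PoolID:     poolID,
		Type:       "TCP",
		Delay:      30, // seconds between probes
		Timeout:    5,  // per-probe timeout (illustrative)
		MaxRetries: 3,  // consecutive failures before the member is marked DOWN => ~90 s
	}).Extract()
}
```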

What did you expect to happen:

  1. ...
  2. No downtime/errors on the API server, and thus none of the sub-issues seen above.

Anything else you would like to add:

Could we start supporting either/both of the following? (A sketch of what an HTTPS-based check might look like follows this list.)

  • https://github.com/kubernetes-sigs/cluster-api-provider-openstack/issues/1221
  • https://github.com/kubernetes-sigs/cluster-api-provider-openstack/issues/1748
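
For illustration only, an HTTP(S)-style check of the kind those issues ask for might look roughly like this. The field values, the /readyz path, and the assumption that /readyz is reachable anonymously (true with default RBAC via system:public-info-viewer, but configurable) are ours, not CAPO's:

```go
// Sketch only: an Octavia HTTPS monitor probing the apiserver's /readyz
// endpoint, roughly what #1221/#1748 would enable. All values are illustrative.
package loadbalancer

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/monitors"
)

func createHTTPSMonitor(lbClient *gophercloud.ServiceClient, poolID string) (*monitors.Monitor, error) {
	return monitors.Create(lbClient, monitors.CreateOpts{
		PoolID:        poolID,
		Type:          "HTTPS",
		HTTPMethod:    "GET",
		URLPath:       "/readyz", // assumes anonymous access to /readyz is allowed
		ExpectedCodes: "200",
		Delay:         5, // shorter interval than the current TCP monitor
		Timeout:       5,
		MaxRetries:    3,
	}).Extract()
}
```

Besides reacting faster, such a check would also catch an API server that still accepts TCP connections but is not ready, which a pure TCP check cannot.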

Environment:

  • Cluster API Provider OpenStack version (Or git rev-parse HEAD if manually built): v0.7.0
  • Cluster-API version: v1.3.3
  • OpenStack version: ...
  • Minikube/KIND version: N/A
  • Kubernetes version (use kubectl version): 1.25 -> 1.26
  • OS (e.g. from /etc/os-release): N/A

...but we have manually changed our LB health monitors to include the changes from #1360 (to alleviate issues seen in #1221 and #1375).

MPV avatar Nov 15 '23 12:11 MPV

An example of a (semi-CAPO-related) symptom we've seen caused by this:

During this time:

  1. Nodes intermittently can't reach the API server and mark themselves as not ready.
  2. Pods on those nodes then get evicted.
  3. Workloads start shifting around in the cluster, potentially triggering the cluster-autoscaler, which has a hard time sizing the cluster correctly because the number of ready nodes keeps changing with the intermittent (OK/error) responses from the API servers.

MPV avatar Nov 15 '23 12:11 MPV

@MPV is there any conceivable way we can do this while keeping TCP load balancer monitors?

mnaser avatar Nov 15 '23 14:11 mnaser
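
One option that would keep TCP monitors is making the probe interval and retry count configurable and lowering them, which shrinks the detection window rather than eliminating it. A sketch, not CAPO code (values illustrative):

```go
// A sketch, not CAPO code: keep the TCP monitor but shrink the detection
// window by tuning its parameters. Values are illustrative.
package loadbalancer

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/monitors"
)

// tightenTCPMonitor lowers the probe interval and retry count so a stopped
// member is marked DOWN after roughly 3*5 = 15 s instead of 90 s.
func tightenTCPMonitor(lbClient *gophercloud.ServiceClient, monitorID string) error {
	_, err := monitors.Update(lbClient, monitorID, monitors.UpdateOpts{
		Delay:      5,
		Timeout:    3,
		MaxRetries: 3,
	}).Extract()
	return err
}
```

This only narrows the window (and makes the check more sensitive to transient blips); avoiding the downtime entirely would still require members to be removed or drained before the old node stops.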

Another use case to consider: @seanschneeweiss is using AdditionalPorts on the API loadbalancer to expose SSH on the control plane nodes. If we switch to something other than TCP checks, we need to consider that not all ports may be serving HTTPS.

mdbooth avatar Nov 24 '23 20:11 mdbooth
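
If the monitor type were made configurable along those lines, it would likely have to be chosen per listener rather than per load balancer. A hypothetical sketch (monitorTypeForPort is not CAPO API):

```go
// Hypothetical sketch: per-listener monitor type selection. Only the API
// server port is assumed to speak HTTPS; AdditionalPorts (e.g. SSH) get a
// plain TCP connect check.
package loadbalancer

func monitorTypeForPort(port, apiServerPort int) string {
	if port == apiServerPort {
		// The kube-apiserver serves HTTPS, so an HTTP(S)-style /readyz
		// check (as sketched earlier in this issue) could apply here.
		return "HTTPS"
	}
	// Other exposed ports (e.g. SSH via AdditionalPorts) may only be
	// checkable with a TCP connect.
	return "TCP"
}
```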

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 22 '24 21:02 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Mar 23 '24 21:03 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Apr 22 '24 22:04 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Apr 22 '24 22:04 k8s-ci-robot