
Improve liveness and readiness probes accuracy

Open astefanutti opened this issue 1 year ago • 7 comments

What would you like to be added:

The liveness and readiness probes should succeed only when the operator is fully operational. That is, they should report a healthy status only once all the components, i.e. webhooks, extension API servers, and controllers, have started and are serving requests.

Why is this needed:

At the moment, the liveness and readiness probes report a healthy status unconditionally, as soon as the controller-runtime manager has started, even though the operator and its components (webhooks, extension API servers, controllers) may not be operational yet.

That can happen, for example, during the period it takes for the certificates generated by cert-controller to be propagated into the secret volume mount.

In such cases, having the probes report an accurate status can help recover from these issues, and lets other components correctly wait for the operator to become ready.
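For illustration, here is a minimal sketch of how the probes could be gated with controller-runtime. The `certsReady` channel and the `probes` package are hypothetical stand-ins for the signal that cert-controller has populated the certificates, not Kueue's actual wiring:

```go
package probes

import (
	"errors"
	"net/http"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
)

// Setup registers the health (liveness) and readiness checks on the
// controller-runtime manager.
func Setup(mgr ctrl.Manager, certsReady <-chan struct{}) error {
	// Liveness: a plain ping only confirms the probe endpoint itself is up.
	if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
		return err
	}
	// Readiness: report ready only once the certificates are in place and
	// the webhook server is actually accepting requests.
	return mgr.AddReadyzCheck("readyz", func(req *http.Request) error {
		select {
		case <-certsReady:
			return mgr.GetWebhookServer().StartedChecker()(req)
		default:
			return errors.New("certificates are not ready yet")
		}
	})
}
```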

Completion requirements:

This enhancement requires the following artifacts:

  • [ ] Design doc
  • [ ] API change
  • [ ] Docs update

The artifacts should be linked in subsequent comments.

astefanutti avatar Jan 05 '24 10:01 astefanutti

I believe this issue is already addressed with https://github.com/kubernetes-sigs/kueue/pull/1676 (and cherry-picked to 0.5.x). @astefanutti PTAL; if so, then we can close.

mimowo avatar Feb 07 '24 14:02 mimowo

@mimowo thanks for the update. I'd say #1676 fixes most of this issue.

There are a couple of things in the scope of this issue that we may still want to address, such as:

  • Including the status of the visibility API server in the readiness check
  • Improving the accuracy / usefulness of the liveness check

So I think we can close this one and create some finer-grained issues for the two leftovers above. WDYT?

astefanutti avatar Feb 08 '24 09:02 astefanutti

  • Including the status of the visibility API server in the readiness check

Seems like a valid improvement indeed, otherwise we may be getting errors from the server.

  • Improving the accuracy / usefulness of the liveness check

Any concrete example, other than the visibility API server, where we could improve accuracy?

So I think we can close this one and create some finer-grained issues for the two leftovers above. WDYT?

I prefer more fine-grained issues, so that we can prioritize / close them independently.

mimowo avatar Feb 08 '24 09:02 mimowo

  • Including the status of the visibility API server in the readiness check

Seems like a valid improvement indeed, otherwise we may be getting errors from the server.

Agreed.
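For concreteness, a hypothetical sketch of such a check, gating readiness on the Available condition of the visibility APIService. The function name, the APIService name parameter, and the client wiring are illustrative assumptions, not Kueue's actual code:

```go
package probes

import (
	"fmt"
	"net/http"

	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// visibilityAPICheck returns a healthz.Checker-compatible function that
// fails until the named APIService reports the Available condition.
// Assumes the apiregistration/v1 types are registered in the client's scheme.
func visibilityAPICheck(c client.Client, name string) func(*http.Request) error {
	return func(req *http.Request) error {
		apiService := &apiregv1.APIService{}
		if err := c.Get(req.Context(), client.ObjectKey{Name: name}, apiService); err != nil {
			return err
		}
		for _, cond := range apiService.Status.Conditions {
			if cond.Type == apiregv1.Available && cond.Status == apiregv1.ConditionTrue {
				return nil
			}
		}
		return fmt.Errorf("APIService %s is not available yet", name)
	}
}
```

Such a checker could then be registered with mgr.AddReadyzCheck alongside the existing webhook check.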

  • Improving the accuracy / usefulness of the liveness check

Any concrete example, other than the visibility API server, where we could improve accuracy?

I wonder if probing for the webhooks in the liveness probe, as is now done in the readiness probe with #1676, would be useful to mitigate situations where cert-controller fails to generate or inject the certificates at startup, and possibly during cert rotation. Do you think that would be an improvement over the current liveness probe implementation?
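Something along these lines, for example (a sketch against the controller-runtime API, not a tested change):

```go
// Also register the webhook server's started checker under the health
// (liveness) endpoint, so that a webhook server that never comes up,
// e.g. because the certificates were not generated or injected, eventually
// fails liveness and gets the pod restarted.
if err := mgr.AddHealthzCheck("webhook", mgr.GetWebhookServer().StartedChecker()); err != nil {
	return err
}
```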

So I think we can close this one and create some finer-grained issues for the two leftovers above. WDYT?

I prefer more fine-grained issues, so that we can prioritize / close them independently.

Sounds good, I'll create them and close this one.

astefanutti avatar Feb 08 '24 17:02 astefanutti

I wonder if probing for the webhooks in the liveness probe

Oh, interesting. I assumed the liveness probe = the readiness probe, but indeed the liveness probe = the health probe in the controller manager: https://github.com/kubernetes-sigs/controller-runtime/blob/7032a3cc91d2afc4c2d54e4a4891cf75da9f75f5/pkg/manager/internal.go#L281-L285.

And currently, the health probe in Kueue is just a ping. So yes, I think having the liveness probe check the webhook server is better, so that we can see if the server is down. cc @trasc @alculquicondor
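For reference, healthz.Ping in controller-runtime is essentially a checker that never fails (paraphrased from the healthz package):

```go
// Ping returns nil automatically when checked, so a liveness probe backed
// by it succeeds regardless of the actual state of the process.
var Ping Checker = func(_ *http.Request) error { return nil }
```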

mimowo avatar Feb 08 '24 17:02 mimowo

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 08 '24 18:05 k8s-triage-robot

/remove-lifecycle stale

tenzen-y avatar May 09 '24 07:05 tenzen-y

/assign

mbobrovskyi avatar Jul 08 '24 08:07 mbobrovskyi

@astefanutti since this question was posted, we have added the readiness probe to Kueue in https://github.com/kubernetes-sigs/kueue/pull/1676 (and later improved it in follow-ups).

Do we still have some known gaps to cover in the readiness probe, or scenarios that we want to cover in the liveness probe?

If not, I would suggest parking (closing) this issue until we have such scenarios.

mimowo avatar Jul 18 '24 10:07 mimowo

@mimowo apologies for the late reply.

I think what's been done to improve readiness mostly addresses this issue, and I agree with your suggestion to close it and create finer-grained ones if needed.

astefanutti avatar Jul 29 '24 10:07 astefanutti

/close

astefanutti avatar Jul 29 '24 10:07 astefanutti

@astefanutti: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Jul 29 '24 10:07 k8s-ci-robot