
Show output of Healthcheck

Open fab1an opened this issue 3 years ago • 11 comments

What would you like to be added?

Provide an easy way to view the detailed HTTP response of an httpGet liveness/readiness check of a pod when the check fails with an error such as 500.

To avoid cluttering the UI, this could be shown only for crashlooping containers.

Why is this needed?

If you use a web framework with a health endpoint, such as Spring Boot or Dropwizard, that endpoint provides detailed information about what makes the healthcheck fail.

In case of an error, the Kubernetes dashboard only shows a response of 500. To get the details you have to:

  • Have logging configured correctly, which might not always be the case
  • Disable the liveness probe so that the container can start, then call the healthcheck endpoint yourself; if the service is not yet set up correctly, you also have to exec into the container to do so.

fab1an avatar Jun 22 '22 05:06 fab1an

@floreks Is there any realistic chance this will ever be implemented? I think it would be a great addition to the dashboard and would help immensely when logging etc. is not yet set up.

fab1an avatar Jul 15 '22 05:07 fab1an

@fab1an not sure if this is what you mean, but I have added a super simple /health endpoint that returns a {running: bool} object. It only checks whether the Dashboard can access the K8S API server, as this is our only requirement.

Ref: #7301

floreks avatar Jul 23 '22 16:07 floreks

Hi, no. I mean seeing the actual healthcheck output of a deployed deployment: if a pod gets restarted because its healthcheck returns 500, you can see the payload of that 500 error.


fab1an avatar Jul 23 '22 17:07 fab1an

/reopen

floreks avatar Jul 23 '22 21:07 floreks

@floreks: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jul 23 '22 21:07 k8s-ci-robot

@fab1an if this information is available in the K8S API, then it should be easy to add. Otherwise I don't think we can add such a feature, since the dashboard is fully stateless and we rely only on the K8S API. Does kubectl provide the information you need?

floreks avatar Jul 24 '22 12:07 floreks

@kunal-kushwaha when https://github.com/kubernetes/kubernetes/issues/111386 is ready, it will be possible to show the output in the dashboard!

fab1an avatar Aug 25 '22 05:08 fab1an

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Nov 23 '22 05:11 k8s-triage-robot

/remove-lifecycle stale

fab1an avatar Nov 23 '22 05:11 fab1an

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 21 '23 06:02 k8s-triage-robot

/lifecycle frozen

floreks avatar Feb 21 '23 08:02 floreks