
k8s-dashboard exec intermittently gives a black screen & sometimes fails with error - Sending Error: Error: http status 404

Open shnigam2 opened this issue 1 year ago • 8 comments

What happened?

When trying to exec into a pod, the terminal intermittently shows a black screen and sometimes fails with the error - Sending Error: Error: http status 404.

Error in the dashboard pod logs: 2024/03/26 09:31:23 handleTerminalSession: can't Recv: sockjs: session not in open state

What did you expect to happen?

The expectation is that exec works every time and that a pod session can be obtained on every attempt.

How can we reproduce it (as minimally and precisely as possible)?

EKS versions tried, both showing the same behaviour: 1.26.7 & 1.28.5. Dashboard version: 6.0.8, chart: kubernetes-dashboard, repoURL: https://kubernetes.github.io/dashboard/. The issue is observed when running 2 replicas of the dashboard pod; with a single replica it does not occur. A case has also been opened with OpenUnison: https://github.com/OpenUnison/openunison-k8s/issues/105
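For reference, a sketch of the setup that reproduces the behaviour; the replicaCount key is an assumption about the 6.x chart values and the repo alias is illustrative, so verify both against the chart you deploy.

```sh
# Sketch only: deploy dashboard chart 6.0.8 with two replicas (assumed values key).
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
  --namespace kubernetes-dashboard --create-namespace \
  --version 6.0.8 \
  --set replicaCount=2
# With --set replicaCount=1 the exec terminal behaves reliably (per this report).
```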

Anything else we need to know?

Detailed info about OpenUnison and the ingress setup is provided in https://github.com/OpenUnison/openunison-k8s/issues/105

What browsers are you seeing the problem on?

Chrome, Safari

Kubernetes Dashboard version

6.0.8

Kubernetes version

1.26.7 & 1.28.5

Dev environment

NA

shnigam2 avatar Mar 26 '24 10:03 shnigam2

@floreks Could you please advise on this? It seems the issue started being reported in https://github.com/kubernetes/dashboard/issues/8771

shnigam2 avatar Mar 26 '24 10:03 shnigam2

Hi @floreks, can I offer any specific details/logs/TCP traces for your analysis?

dagobertdocker avatar Apr 03 '24 09:04 dagobertdocker

On our side, it's quite simple: the API exposes the WebSocket endpoint and the frontend connects to it. If you are using additional proxies, you need to make sure they can reliably forward such connections and keep them alive. I can imagine that if the connection is dropped and/or retried, the next exec can be forwarded to a different API pod than the previous one, which could result in an error. On our side, the only thing I can think of that we could improve is adding a few retry attempts on the frontend so that you don't have to reload the page. That is only an improvement, though; everything else has to be done on your side as part of your proxy configuration.

floreks avatar Apr 03 '24 10:04 floreks
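As an illustration of the proxy-side configuration described in the comment above, here is a hedged sketch for ingress-nginx. The annotations are standard ingress-nginx options, but the host, service name, port, and timeout values are assumptions that depend on your release; other proxies (including OpenUnison's reverse proxy) need their own equivalents.

```yaml
# Sketch only: keep WebSocket exec sessions alive and pin a browser session to
# one dashboard backend pod via a cookie (ingress-nginx annotations).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    # The dashboard serves HTTPS on its pod port in the 6.x chart (assumption: adjust if you terminate TLS differently).
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # Don't let long-lived exec/WebSocket connections be cut after the default 60s.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    # Cookie-based affinity so reconnects land on the same backend pod.
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "dashboard-affinity"
spec:
  ingressClassName: nginx
  rules:
    - host: dashboard.example.com              # assumption: your dashboard hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard     # assumption: depends on your release name
                port:
                  number: 443
```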

Observation: when I scale the api deployment to a single replica (the default used to be 3, which has since been changed in the helm chart as of 7.1.3), it works consistently.

dagobertdocker avatar Apr 04 '24 08:04 dagobertdocker
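For anyone who wants the interim workaround above in concrete form, a sketch follows; the deployment name, namespace, and the 7.x chart value key are assumptions based on a default release and should be checked against your installation.

```sh
# Workaround sketch: drop to a single replica so every exec reconnect hits the same pod.
kubectl -n kubernetes-dashboard scale deployment kubernetes-dashboard-api --replicas=1

# Or pin it through the chart so upgrades don't scale it back up (assumed 7.x value key):
# helm upgrade kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
#   --namespace kubernetes-dashboard --reuse-values --set api.scaling.replicas=1
```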

That's expected. If the connection is dropped at any point by, for example, your proxy, the client will always try to reconnect to the same pod, which keeps the session open internally for some time. With more than 1 replica there is no guarantee that the request will be forwarded to the same pod on reconnect; it would always have to create a new connection.

floreks avatar Apr 04 '24 09:04 floreks
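One more knob that can help reconnects land on the same pod while keeping multiple replicas is ClientIP affinity on the Service in front of the dashboard. A sketch follows, with the caveat (an assumption about your environment) that it only helps if the proxy in front preserves distinct source IPs rather than SNATing everything to one address; the names, labels, and ports below are also assumptions.

```yaml
# Sketch only: ClientIP session affinity so repeated connections from the same
# source keep hitting the same backend pod.
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard                        # assumption: depends on your release name
  namespace: kubernetes-dashboard
spec:
  selector:
    app.kubernetes.io/name: kubernetes-dashboard    # assumption: check your chart's labels
  ports:
    - port: 443
      targetPort: 8443                              # assumption: dashboard HTTPS container port
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600                          # keep affinity at least as long as a shell session
```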

Bump

CAR6807 avatar Jun 06 '24 14:06 CAR6807

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 04 '24 14:09 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Oct 04 '24 15:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Nov 03 '24 16:11 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Nov 03 '24 16:11 k8s-ci-robot