Hanging / slow when describing pod

Describe the bug
I have tried several versions of k9s (v0.25.18 through the latest v0.26.3), and when I describe a pod it hangs for about 20 seconds. Nothing shows up in the logs when I run k9s with k9s -l debug
during the hang. Viewing logs in k9s is fine, and a describe with kubectl is fast.
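A minimal sketch, not from the original report, of how the debug log can be captured while reproducing the hang; the log path below is a placeholder, since k9s writes to a file whose location k9s info reports rather than to the console:
```sh
# k9s writes debug output to a log file, not the console; `k9s info` prints its location.
k9s info
# In a second terminal, tail the reported Logs file while reproducing the hang:
tail -f /path/to/k9s.log   # placeholder -- use the Logs path shown by `k9s info`
# Then start k9s with debug logging and describe a pod:
k9s -l debug
```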
To Reproduce
Steps to reproduce the behavior:
- Log in to the cluster and export the KUBECONFIG env var for the desired context
- Run k9s
- Press d on a pod
- Wait... (see the sketch below)
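A minimal shell sketch of the steps above; the kubeconfig path is a placeholder, and the last steps happen inside the k9s UI:
```sh
# Sketch of the reproduction steps; the kubeconfig path is a placeholder.
export KUBECONFIG=$HOME/.kube/my-cluster.yaml   # point at the affected cluster/context
k9s -l debug                                    # start k9s with debug logging
# Inside the k9s UI:
#   - type :pods and press Enter to open the pod view
#   - move the cursor to a pod and press d to describe it
#   - the view hangs for ~20 seconds before the describe output renders
```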
Expected behavior
Should be fast, similar to kubectl.
Versions (please complete the following information):
- OS: Ubuntu 22.04
- K9s: v0.26.3 - v0.25.18
- K8s: v1.23.5
Additional context
Happy to provide any other information, but I figured I'd start with the debug logs:
CustomView watching /home/rob/.config/k9s/views.yml
Custom view load failed /home/rob/.config/k9s/views.yml "open /home/rob/.config/k9s/views.yml: no such file or directory"
CustomView watcher failed "no such file or directory"
TABLE-UPDATER canceled -- "v1/nodes"
TABLE-UPDATER canceled -- "v1/pods"
^^^ This is when I describe the pod and no error shows up
Can also confirm on:
Version: v0.26.3
Commit: 0893f13b3ca6b563dd0c38fdebaefdb8be594825
Date: 2022-08-04T05:18:24Z
k8s clientVersion/serverVersion:
buildDate: "2022-07-13T20:21:05Z"
compiler: gc
gitCommit: c1de2d70269039fe55efb98e737d9a29f9155246
gitTreeState: clean
gitVersion: v1.23.9+rke2r1
goVersion: go1.17.11b7
major: "1"
minor: "23"
platform: linux/amd64
Describing any object freezes k9s for several seconds, probably depending on the amount of text that needs to be displayed: from ~10 seconds for a simple object to 30-40 seconds for a pod in my case. Reproduced on a fresh k8s cluster install as well.
I'm also seeing the same issue with K9s v0.26.3 and k8s v1.23.10, running on macOS Monterey 12.5.1, M1 Pro.
Perhaps it is related to https://github.com/derailed/k9s/issues/1715... It would be great if someone with access to cluster metrics could check this case. Thanks!
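Not something checked in this thread, but if the metrics angle from that issue is the suspicion, timing the metrics API directly would show whether it is the slow part (requires metrics-server on the cluster):
```sh
# Check whether the metrics API itself responds slowly; requires metrics-server.
time kubectl top nodes
time kubectl top pods --all-namespaces
```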
@robcxyz Hum... Can't seem to repro this. Wondering if it relates to connectivity or volume. Does this happen on every pod on the same cluster? Also, is there a log for viewing YAML on the same slow-describing pods?
How about on other clusters?
Will need more info here so we can diagnose.
Thank you all for piping in!
@derailed - Thank you for following up.
"Does this happen on every pods on the same cluster?"
- Sort of. Some pods are extremely laggy (up to ~40 seconds) and some are basically instantaneous.
"Also is there a log for viewing yaml on the same slow describing pods?"
- Generally the only log I am seeing with logLevel trace is: DBG TABLE-UPDATER canceled -- "v1/pods"
though sometimes I'll get this output:
7:36AM TRC [CAN] v1/pods([list watch]) &SelfSubjectAccessReview{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] [{k9s Update authorization.k8s.io/v1 2023-01-29 07:36:49 +0530 IST FieldsV1 {"f:spec":{"f:resourceAttributes":{".":{},"f:resource":{},"f:verb":{},"f:version":{}}}} }]},Spec:SelfSubjectAccessReviewSpec{ResourceAttributes:&ResourceAttributes{Namespace:,Verb:list,Group:,Version:v1,Resource:pods,Subresource:,Name:,},NonResourceAttributes:nil,},Status:SubjectAccessReviewStatus{Allowed:true,Reason:,EvaluationError:,Denied:false,},} <<<nil>>>
But I don't think that log is related.
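For what it's worth, that trace line is a SelfSubjectAccessReview checking list access on pods, i.e. the same RBAC check that can be run manually with kubectl auth can-i; it only confirms the permission check succeeds and says nothing about where the time goes:
```sh
# Roughly equivalent RBAC check to the SelfSubjectAccessReview in the trace line above;
# the namespace is a placeholder.
kubectl auth can-i list pods
kubectl auth can-i list pods -n <namespace>
```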
"How about on other clusters?"
- This is happening on Vultr clusters as well as on local minikube / k3s. Vultr seems a bit worse, but I have a lot more running there. On local clusters some pods are instantaneous, but I'll also describe the same pod multiple times and it will sometimes be instantaneous and lag very badly other times.
Let me know if there is anything else you need and thank you for all your work.
@robcxyz Thank you for the details!! So from what I've gathered, for a given pod, describe can be either laggy or fast. I assume this is true on either a local or remote cluster. Is this correct? Do you experience any describe lag when using kubectl directly on the same pod? Also, it might be good if you could email me a kubectl describe for pods that are consistently lagging. My plan is to add some more debugging so we can potentially see if we are hung on a lock and whether something is going south on either the API call or rendering. Thank you!
@derailed - Sorry for the lagging reply.
I assume this is true on either a local or remote cluster. Is this correct?
- Correct. This is the same on local minikube / k3s and remote cluster.
Do you experience any describe lag when using kubeclt directly on the same pod?
- No I don't. There is no lag using kubectl.
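One way to make the asymmetry concrete is to time the same describe outside k9s; pod and namespace names are placeholders:
```sh
# Time the same describe that lags inside k9s; pod/namespace are placeholders.
time kubectl describe pod <pod-name> -n <namespace>
# This typically returns in well under a second, while pressing d on the same pod
# in k9s hangs for tens of seconds.
```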
Also it might be good if you could email me a kubectl describe for pods that are consistently lagging.
- Happy to email you anything you need but in general, this is happening on basically every pod intermittently. Just let me know though and I can send you whatever.
Perhaps if you add some extra logs, I can update to the latest version and send you the relevant log.
Thanks again for your help.
@robcxyz Thank you! Could you please update to the latest rev, v0.29.1? I made some perf improvements around describe. Please reopen and we can dig further if the problem still exists. Thank you!
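For anyone retesting, a quick way to confirm the running build after upgrading (how you upgrade depends on the original install method; releases are published at https://github.com/derailed/k9s/releases):
```sh
# Confirm the installed build before retesting the describe lag.
k9s version
# Should report Version v0.29.1 or newer.
```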