SegFault when refreshing Azure token
![](https://raw.githubusercontent.com/derailed/k9s/master/assets/k9s_err.png)
**Describe the bug**
Since roughly last week, k9s crashes on startup. From the stack trace it looks like it fails to persist the token that was acquired from Azure.
**To Reproduce**
Steps to reproduce the behavior:
1. have any AKS Kubernetes cluster in your kube config
2. start k9s
3. k9s will crash immediately and show the following stack trace: https://gist.github.com/superkartoffel/528d18e0c2182d2fa2b17464f1f442af
The interesting call is probably here:

```
k8s.io/client-go/plugin/pkg/client/auth/azure.(*azureTokenSource).storeTokenInCfg(0xc000069c00, 0xc0004e8420)
    k8s.io/[email protected]/plugin/pkg/client/auth/azure/azure.go:353 +0x3ae fp=0xc0005b8930 sp=0xc0005b88c0 pc=0x563ec188eece
```
**Expected behavior**
k9s should not crash.
**Versions:**
- OS: Manjaro Linux
- K9s: 0.25.18, installed from the official repo (community/k9s 0.25.18-2)
- K8s: 1.22.6
**Additional context**
I have multiple AKS clusters in my kube config. All of them run the same Kubernetes version, 1.22.6.
Note: a workaround is to delete the config lock file after the crash and use kubectl to refresh the token. For example, run

```
kubectl get pods
```

once, and k9s will work fine until the token expires.
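The workaround can be sketched roughly like this. The lock file location is an assumption on my part (client-go appears to create it next to the kubeconfig as `<config>.lock`), so adjust the path to your setup:

```shell
# Rough sketch of the workaround; the lock file path is an assumption
# (client-go seems to create it next to the kubeconfig as "<config>.lock").
KUBECONFIG_FILE="${KUBECONFIG:-$HOME/.kube/config}"
LOCK_FILE="${KUBECONFIG_FILE}.lock"

# Remove the stale lock left behind by the crash, if present.
rm -f "$LOCK_FILE"

# Then refresh the token once with kubectl before starting k9s:
#   kubectl get pods && k9s
```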
I have the same issue but with Google Kubernetes Engine. https://gist.github.com/adopauco/dcf8e5c0c97dc3ffbe3bc73465c702ba
- OS: Arch Linux
- K9s: 0.25.18 (community/k9s 0.25.18-2)
- K8s: 1.21.11
Using Azure kubelogin seems to fix this issue on AKS, and it is probably the only supported option going forward. Maybe there was already a breaking change somewhere that broke the legacy way to authenticate: https://github.com/Azure/kubelogin
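For anyone who wants to try it, a rough sketch of the migration is below. The resource group and cluster names are placeholders, and you should check the kubelogin README for the exact flags your version supports:

```shell
# Rough sketch of switching an AKS kubeconfig over to kubelogin
# (myResourceGroup/myAKSCluster are placeholders).
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Rewrite the kubeconfig to use the kubelogin exec plugin,
# authenticating with an Azure CLI token:
kubelogin convert-kubeconfig -l azurecli
```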
I am using Azure CLI Token Login.
@adopauco It seems like there was a similar change for GCP authentication. Maybe this article helps: https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
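Going by that article, the GKE-side migration would look roughly like this. The cluster name and zone are placeholders, and the env var is only needed on client versions that still default to the legacy provider:

```shell
# Sketch of moving GKE auth to the exec plugin, per the linked article
# (CLUSTER_NAME and the zone are placeholders).
gcloud components install gke-gcloud-auth-plugin

# Only needed for kubectl/client-go versions that still default
# to the legacy in-tree auth provider:
export USE_GKE_GCLOUD_AUTH_PLUGIN=True

# Regenerate the kubeconfig entry so it uses the plugin:
gcloud container clusters get-credentials CLUSTER_NAME --zone us-central1-a
```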
I'm facing this bug too with the Google OIDC auth provider.
- OS: Arch Linux
- K9s: 0.25.18
- K8s: 1.20.15

However, everything works as expected with the same Kubernetes cluster & auth method on macOS with K9s 0.25.18.
Currently, I'm using this workaround on my Arch Linux setup too:
> Note: a workaround is to delete the config lock file after the crash and use kubectl to refresh the token. For example just call `kubectl get pods` once and k9s will work fine until the token expires.
Update: I no longer experience this issue with the new version v0.25.21 (66c22f1), tested with the Arch Linux package k9s-git.