K8s manifest inconsistent between describe and edit
Describe the bug When using k9s version v0.32.5 to describe a Kubernetes custom resource (Kind: Prometheus, in my case), I get different manifest content than when editing the same resource.
E.g. on describing the resource, I get
...
Remote Write:
  Basic Auth:
    Password:
      Key:  password
      Name: some-password
    Username:
      Key:  username
      Name: some-username
  URL: https://some.url
  Write Relabel Configs:
    Action: drop
    Regex:  some_latency_seconds_.*
    Source Labels:
      __name__
  Basic Auth:
    Password:
      Key:  password
      Name: some-other-password
    Username:
      Key:  username
      Name: some-other-username
  URL: https://some.other.url
  Write Relabel Configs:
    Action: keep
    Regex:  availability.*
    Source Labels:
      __name__
    Action: labelkeep
    Regex:  feature
Replicas: 1
...
and on editing the resource, I get
...
remoteWrite:
- basicAuth:
    password:
      key: password
      name: some-password
    username:
      key: username
      name: some-username
  url: https://some.url
  writeRelabelConfigs:
  - action: drop
    regex: some_latency_seconds_.*
    sourceLabels:
    - __name__
replicas: 1
...
To Reproduce Unfortunately, I was unable to produce a minimal reproducible example, so I don't really expect anyone to find the root cause right away, but maybe someone can point me in a direction as to what I should look for. A colleague of mine, using k9s version v0.32.4, reported that it works for him with no inconsistencies.
Expected behavior I expect the manifest to be the same when describing and editing the k8s resource.
- OS: Ubuntu 22.04.4 LTS
- K9s: v0.32.5
- K8s: 1.29.6
Additional context I already checked whether the editor was caching anything, but it's not the editor; I tried both nano and vi. I also restarted the PC. Is there something like a cache in k9s that I could clear? I don't want to dump my whole config.
Thanks in advance
I found the issue: I had multiple kubeconfigs defined and used to just switch contexts in k9s. If clusters and/or users (I'm not sure yet whether both are the problem or just one of them) in different kubeconfigs share the same name, it leads to inconsistencies, which probably should not be the case. Renaming the clusters and users to unique names across the different kubeconfigs fixed the problem for me.
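For illustration, here is a minimal sketch of the kind of collision I mean; the file paths, context names, and server URLs below are made up:

# ~/.kube/config-team-a (hypothetical file)
apiVersion: v1
kind: Config
clusters:
- name: prod                    # same cluster name as in config-team-b
  cluster:
    server: https://api.team-a.example.com
users:
- name: admin                   # same user name as in config-team-b
  user: {}
contexts:
- name: team-a
  context:
    cluster: prod
    user: admin
current-context: team-a

# ~/.kube/config-team-b (hypothetical file)
apiVersion: v1
kind: Config
clusters:
- name: prod                    # collides with the cluster above
  cluster:
    server: https://api.team-b.example.com
users:
- name: admin                   # collides with the user above
  user: {}
contexts:
- name: team-b
  context:
    cluster: prod
    user: admin
current-context: team-b

Each file is valid on its own; the cluster and user names are only unique within a file, not across files, and that was enough to trigger the mismatch for me.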
@bravenut so the issue is fixed for you? Can this issue be closed then?
The issue is fixed for me, yes. But if other people have different kubeconfigs whose clusters share the same name, they will run into the same problem. Maybe there's a workaround/easy fix for that, e.g. giving clusters, contexts, and users a unique internal identifier per kubeconfig?
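For reference, this is a sketch of the renamed layout that worked for me, using the same made-up files as above:

# ~/.kube/config-team-a (hypothetical file), after renaming
apiVersion: v1
kind: Config
clusters:
- name: team-a-prod             # unique across all kubeconfigs
  cluster:
    server: https://api.team-a.example.com
users:
- name: team-a-admin            # unique across all kubeconfigs
  user: {}
contexts:
- name: team-a
  context:
    cluster: team-a-prod        # references updated to the new names
    user: team-a-admin
current-context: team-a

The second file gets the analogous team-b-* names. With the names unique everywhere, describe and edit show the same manifest again.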
This issue is stale because it has been open for 30 days with no activity.