Kubernetes context UI: Add logos based on cluster name / URLs
Is your enhancement related to a problem? Please describe
Reference: https://github.com/containers/podman-desktop/pull/4560
When implementing the Kubernetes context, we show logos in the UI based on the cluster.
The only way to determine which cluster it is, is to check the name as well as the URL.
For example, sandbox is detected via the URL containing openshift.rhcloud.com/, while kind would be detected by the cluster name (typically kind-default).
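A minimal sketch of that heuristic, assuming a hypothetical helper (the function name and logo file names are illustrative, not from the PR):

```typescript
// Hypothetical sketch only: pick a logo by inspecting the cluster
// name and server URL. The two patterns come from the examples above;
// everything else is illustrative.
interface KubeContextInfo {
  name: string;      // context/cluster name, e.g. 'kind-default'
  serverUrl: string; // cluster endpoint from the kubeconfig
}

function pickLogo(ctx: KubeContextInfo): string {
  if (ctx.serverUrl.includes('openshift.rhcloud.com/')) {
    return 'sandbox.png'; // Developer Sandbox
  }
  if (ctx.name.startsWith('kind-')) {
    return 'kind.png';
  }
  // fallback, as in PR #4560
  return 'kubernetes.png';
}
```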
Describe the solution you'd like
@benoitf Any recommendations on implementation?
I was honestly going to just store logos as base64, but maybe it should live in a separate extension / be called via the extension / return a logo based upon an input?
Right now the implementation in https://github.com/containers/podman-desktop/pull/4560 simply uses the Kubernetes logo.
Describe alternatives you've considered
No response
Additional context
No response
For me there are two cases here: contexts that are represented as Kubernetes connections within Podman Desktop (e.g. I just created a Kind cluster in Settings > Resources, and now there is an entry in kube config), and contexts where we could match a provider but not to a specific/known connection.
I would highly prefer we focus the first case on figuring out the 'connection association', and then the icon or anything else comes from the connection - because then we know whether we can link to Resources, add start/stop actions, 'cleanly' delete, etc. I'm hoping the Kubernetes connection can give enough info to make that association. For the second case, I would keep it simple for now - e.g. maybe the provider can give a regex and we just use the logo of the extension?
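For illustration only, that second-case idea could look something like this - a provider-registered regex matched against the context name; none of these interfaces exist in Podman Desktop today:

```typescript
// Hypothetical sketch: each provider extension registers a regex for
// context names it recognizes; unmatched contexts keep the generic logo.
interface ProviderContextMatcher {
  providerId: string; // id of the extension that registered the matcher
  pattern: RegExp;    // matched against the kube context name
}

const matchers: ProviderContextMatcher[] = [
  { providerId: 'kind', pattern: /^kind-/ },
  { providerId: 'minikube', pattern: /^minikube/ },
];

function matchProvider(contextName: string): string | undefined {
  return matchers.find(m => m.pattern.test(contextName))?.providerId;
}

// e.g. matchProvider('kind-default') === 'kind'
```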
The context names are vague at best, I think you need more information from the provider in a side channel.
https://github.com/containers/podman-desktop/issues/3385#issuecomment-1666438775
CURRENT   NAME                          CLUSTER             AUTHINFO            NAMESPACE
*         default                       default             default
          kind-kind-cluster             kind-kind-cluster   kind-kind-cluster
          kubernetes-admin@kubernetes   kubernetes          kubernetes-admin
          minikube                      minikube            minikube            default
It is mostly an issue with the default cluster names provided by the upstream k3s and kubeadm installers...
The endpoint does not help much either, since it is tunneled over a random port (or always localhost:6443)
EDIT: The actual PR shows a kubernetes logo for the connections that it doesn't know about.
That would work just fine: change the logo for the "known" clusters and leave it as-is for the others.
> For me there are two cases here:
I should have mentioned: to me the next few features planned for the view take priority over this; we don't need to know the connection association until after we implement delete (because yes, I have accidentally deleted a context for a connection I was still using 😉). Improving situations like that matters more; having the correct logo is really just a nice side-effect.
Having correct logos for other entries would be nice too, but a very low/future priority.
Thanks for the insight. For now I will use the generic Kubernetes logo for the icon until we find a solution to associate name/URL information with the respective provider.
I do agree with @afbjorklund that it will be difficult to figure it out due to the limited information that kube context provides.
When we implement #4562 we are doing a query to the Kubernetes cluster anyways for node, pod, deployment information.
If we are querying information from the node through the API (the equivalent of kubectl get nodes), we can safely determine what cluster is being used from the metadata or containers.
For example, for k3s: kubectl get nodes -o=jsonpath='{.items[*].metadata.labels.k3s\.io/hostname}'
Kind does the same with the hostname label (kubernetes.io/hostname: kind-cluster-control-plane), as does k0s.
Same goes for OpenShift, SUSE, Rancher, etc.
Using the metadata from kubectl get nodes will help determine the logo to use.
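A hedged sketch of that approach with the @kubernetes/client-node library; only the k3s.io/hostname and kubernetes.io/hostname checks come from this thread, the rest (including the fallbacks) is assumption:

```typescript
import * as k8s from '@kubernetes/client-node';

// Sketch: infer the cluster flavor from node metadata, as suggested above.
async function detectClusterFlavor(): Promise<string> {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault();
  const core = kc.makeApiClient(k8s.CoreV1Api);

  // client-node < 1.0 wraps the list in `res.body`; 1.x returns it directly
  const res: any = await core.listNode();
  const nodes: k8s.V1Node[] = (res.body ?? res).items;

  for (const node of nodes) {
    const labels = node.metadata?.labels ?? {};
    const hostname = labels['kubernetes.io/hostname'] ?? '';
    if ('k3s.io/hostname' in labels) return 'k3s';
    if (hostname.endsWith('-control-plane')) return 'kind'; // e.g. kind-cluster-control-plane
    if (hostname.startsWith('lima-')) return 'lima';        // lima-based VMs
  }
  return 'kubernetes'; // unknown: keep the generic logo
}
```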
Doing some kind of "average" on the node names to get a cluster name sounds like even more magic...
But sure, for instance the lima hostnames will all start with lima- so that will tell you the VM provider.
Another approach could be to push more cluster metadata upstream, but that can take "a while".
Not even sure which SIG owns the format of the kubeconfig, but it seems the YAML could fit some more in?
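For what it's worth, the kubeconfig format already allows per-context extensions entries (minikube records provider info there, if I recall correctly), so richer metadata could look something like this - the field names below are illustrative, not a defined schema:

```yaml
# Illustrative kubeconfig fragment: per-context extension metadata.
contexts:
- name: minikube
  context:
    cluster: minikube
    user: minikube
    namespace: default
    extensions:
    - name: context_info
      extension:
        provider: minikube.sigs.k8s.io
        version: v1.31.0
```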
This issue has been automatically marked as stale because it has not had activity in the last 6 months. It will be closed in 30 days if no further activity occurs. Please feel free to leave a comment if you believe the issue is still relevant. Thank you for your contributions!
Keep open.