Kubernetes context UI: Add logos based on cluster name / URLs

Open cdrage opened this issue 2 years ago • 8 comments

Is your enhancement related to a problem? Please describe

Reference: https://github.com/containers/podman-desktop/pull/4560

When implementing the Kubernetes context, in the UI we show logos based upon the cluster, like so:

(screenshot: cluster logos shown in the Kubernetes context UI)

The only way to determine which cluster a context belongs to is to check the name as well as the URL.

For example, Sandbox is detected via the URL containing openshift.rhcloud.com/, while kind is detected by the cluster name (typically kind-default).
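
A minimal sketch of that heuristic (hypothetical helper; the rules and logo paths are illustrative, not the PR's actual code):

```typescript
// Hypothetical heuristic (illustrative only): map a kube context's cluster
// name and server URL to a logo. Rules and paths are examples, not PR code.
const KUBERNETES_LOGO = 'icons/kubernetes.png'; // generic fallback
const SANDBOX_LOGO = 'icons/sandbox.png';
const KIND_LOGO = 'icons/kind.png';

interface LogoRule {
  pattern: RegExp; // tested against the cluster name and the server URL
  logo: string;
}

const rules: LogoRule[] = [
  { pattern: /openshift\.rhcloud\.com/, logo: SANDBOX_LOGO },
  { pattern: /^kind-/, logo: KIND_LOGO },
];

function guessLogo(clusterName: string, serverUrl: string): string {
  const match = rules.find(
    (r) => r.pattern.test(clusterName) || r.pattern.test(serverUrl),
  );
  return match?.logo ?? KUBERNETES_LOGO;
}

// e.g. guessLogo('kind-default', 'https://127.0.0.1:6443') -> KIND_LOGO
```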

Describe the solution you'd like

@benoitf Any recommendations on implementation?

I was honestly going to just store the logos as base64, but maybe this should live in a separate extension / be called via the extension / return a logo based upon an input?

Right now the implementation in https://github.com/containers/podman-desktop/pull/4560 simply uses the Kubernetes logo.

Describe alternatives you've considered

No response

Additional context

No response

cdrage avatar Oct 30 '23 23:10 cdrage

For me there are two cases here: contexts that are represented as Kubernetes connections within Podman Desktop (e.g. I just created a Kind cluster in Settings > Resources, and now there is an entry in the kube config), and contexts where we could match a provider but not a specific/known connection.

For the first case, I would much prefer we focus on figuring out the 'connection association', and then the icon (or anything else) comes from the connection - because then we know whether we can link to Resources, add start/stop actions, 'cleanly' delete, etc. I'm hoping the Kubernetes connection can give enough info to make that association. For the second case, I would keep it simple for now - e.g. maybe the provider can supply a regex and we just use the logo of the extension?
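
Concretely, such a provider-supplied matcher could look roughly like this (a hypothetical sketch; none of these APIs or names exist in the Podman Desktop extension API today):

```typescript
// Hypothetical extension API: a provider registers a matcher plus its own
// logo; unmatched contexts keep the generic Kubernetes logo.
interface KubernetesContextMatcher {
  providerId: string; // e.g. 'kind'
  pattern: RegExp;    // matched against context/cluster name or server URL
  logo: string;       // logo shipped by the extension
}

const matchers: KubernetesContextMatcher[] = [];

function registerContextMatcher(matcher: KubernetesContextMatcher): void {
  matchers.push(matcher);
}

function logoForContext(name: string, serverUrl: string): string | undefined {
  return matchers.find(
    (m) => m.pattern.test(name) || m.pattern.test(serverUrl),
  )?.logo;
}

// Example registration the kind extension might perform:
registerContextMatcher({ providerId: 'kind', pattern: /^kind-/, logo: 'icons/kind.png' });
```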

deboer-tim avatar Oct 31 '23 04:10 deboer-tim

The context names are vague at best; I think you need more information from the provider via a side channel.

https://github.com/containers/podman-desktop/issues/3385#issuecomment-1666438775

CURRENT   NAME                          CLUSTER             AUTHINFO            NAMESPACE
*         default                       default             default             
          kind-kind-cluster             kind-kind-cluster   kind-kind-cluster   
          kubernetes-admin@kubernetes   kubernetes          kubernetes-admin    
          minikube                      minikube            minikube            default

It is mostly an issue with the default cluster names provided by the upstream k3s and kubeadm installers...

The endpoint does not help much either, since it is tunneled over a random port (or is always localhost:6443).


EDIT: The actual PR shows a Kubernetes logo for the connections that it doesn't know about.

That would work just fine: change the logo for the "known" clusters and leave it as-is for the others.

afbjorklund avatar Oct 31 '23 06:10 afbjorklund

For me there are two cases here:

I should have mentioned: to me the next few features planned for the view take priority over this; we don't need to know the connection association until after we do delete (because yes, I have accidentally deleted a context for a connection I was still using 😉). It's situations like that we can improve that matter more; having the correct logo is really just a nice side-effect.

Having correct logos for other entries would be nice too, but a very low/future priority.

deboer-tim avatar Oct 31 '23 11:10 deboer-tim

Thanks for the insight. For now I will use the generic Kubernetes logo for the icon until we find a solution to associate name/URL information with the respective provider.

I do agree with @afbjorklund that it will be difficult to figure out due to the limited information that the kube context provides.

When we implement #4562 we will already be querying the Kubernetes cluster for node, pod, and deployment information.

If we are querying node information through the API (the equivalent of kubectl get nodes), we can safely determine what cluster is being used from the metadata or containers.

For example, for k3s: kubectl get nodes -o=jsonpath='{.items[*].metadata.labels.k3s\.io/hostname}'

Kind does the same with the hostname label (kubernetes.io/hostname: kind-cluster-control-plane), as does k0s.

Same goes for OpenShift, SUSE, Rancher, etc.

Using the metadata from kubectl get nodes will help determine the logo to use.
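
A rough sketch of that node-label detection using the @kubernetes/client-node library (the label-to-distribution mapping is illustrative and varies by distribution and version):

```typescript
import * as k8s from '@kubernetes/client-node';

// Illustrative label checks; exact labels vary by distribution and version.
function detectDistribution(labels: Record<string, string>): string {
  if ('k3s.io/hostname' in labels) return 'k3s';
  const hostname = labels['kubernetes.io/hostname'] ?? '';
  if (hostname.endsWith('-control-plane')) return 'kind'; // e.g. kind-cluster-control-plane
  if (hostname.startsWith('lima-')) return 'lima';
  return 'kubernetes'; // unknown: keep the generic logo
}

async function detectClusterDistribution(contextName: string): Promise<string> {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault();
  kc.setCurrentContext(contextName);
  const core = kc.makeApiClient(k8s.CoreV1Api);
  // Equivalent of `kubectl get nodes`; note that newer client versions
  // return the node list directly rather than wrapping it in { body }.
  const nodes = (await core.listNode()).body.items;
  for (const node of nodes) {
    const dist = detectDistribution(node.metadata?.labels ?? {});
    if (dist !== 'kubernetes') return dist;
  }
  return 'kubernetes';
}
```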

cdrage avatar Oct 31 '23 13:10 cdrage

Doing some kind of "average" on the node names to get a cluster name sounds like even more magic...

But sure, for instance the Lima hostnames will all start with lima-, so that will tell you the VM provider.

afbjorklund avatar Oct 31 '23 14:10 afbjorklund

Another approach could be to drive more cluster metadata upstream, but that can take "a while".

Not even sure which SIG owns the format of the kubeconfig, but it seems the YAML could fit some more in?
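
For reference, the kubeconfig format already allows arbitrary per-cluster and per-context extensions, and minikube uses them today to record its provider, roughly like this (field values illustrative):

```yaml
# Example: minikube records its provider in a kubeconfig context extension.
contexts:
- context:
    cluster: minikube
    user: minikube
    namespace: default
    extensions:
    - name: context_info
      extension:
        provider: minikube.sigs.k8s.io
        version: v1.32.0
  name: minikube
```

If other installers adopted a convention like this, the logo could be picked from the kubeconfig alone, without any name/URL guessing.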

afbjorklund avatar Oct 31 '23 14:10 afbjorklund

This issue has been automatically marked as stale because it has not had activity in the last 6 months. It will be closed in 30 days if no further activity occurs. Please feel free to leave a comment if you believe the issue is still relevant. Thank you for your contributions!

github-actions[bot] avatar May 01 '24 00:05 github-actions[bot]

Keep open.

deboer-tim avatar May 01 '24 11:05 deboer-tim