Bug: Kubernetes contexts not being detected on SSH connection
It seems xpipe detects the Kubernetes contexts configured on my local machine, where xpipe is running, but over an SSH connection to a VM that has a variety of contexts configured, it doesn't detect any of them.
How does xpipe determine which Kubernetes contexts are available for a connection? The user home directory on the VM in question has a .kube subdirectory that contains a standard config file with a few different contexts, as well as several separate yaml files with other contexts configured.
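For reference, kubectl itself only picks up ~/.kube/config by default; the separate yaml files only contribute contexts if they are listed in the KUBECONFIG variable. A quick way to check (the extra file names below are made up):

```sh
# By default kubectl only reads ~/.kube/config; the separate yaml files
# in ~/.kube contribute contexts only if listed in KUBECONFIG.
kubectl config get-contexts

# Merge additional files (file names here are just examples):
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/cluster-a.yaml:$HOME/.kube/cluster-b.yaml"
kubectl config get-contexts
```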
It only checks whether kubectl is available in the PATH and loads the locally configured config.
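A rough shell equivalent of that check, just as a sketch rather than the actual implementation:

```sh
# Rough equivalent of the detection: kubectl has to resolve on the
# PATH, and the contexts come from whatever config kubectl itself loads.
if command -v kubectl >/dev/null 2>&1; then
    kubectl config get-contexts -o name
fi
```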
If this is a remote cluster, connecting to it from your local kubectl client by adding its context to your local config should also work.
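For example, something along these lines, assuming the remote config is in the default location and the cluster addresses in it are reachable from your local machine (my-vm and the file name are placeholders):

```sh
# Copy the remote config and merge it with the local one:
scp my-vm:.kube/config ~/.kube/vm-config
KUBECONFIG="$HOME/.kube/config:$HOME/.kube/vm-config" kubectl config get-contexts
```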
kubectl is on my PATH in that SSH session, and running kubectl config current-context right after connecting shows that a context is set. However, when I have xpipe search for available connections on that host, it still doesn't show any Kubernetes clusters.
Okay, I looked into the code: it runs kubectl --help and checks whether the exit code is 0. It also checks that kubectl is not located in a directory under /mnt/, to handle some WSL cases.
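So the same two checks, run by hand on the VM, would look roughly like:

```sh
# Check 1: kubectl --help must exit with code 0.
kubectl --help >/dev/null 2>&1
echo "exit code: $?"    # should print 0

# Check 2: kubectl must not live under /mnt/ (WSL interop case).
case "$(command -v kubectl)" in
    /mnt/*) echo "kubectl is a Windows binary mounted into WSL" ;;
    *)      echo "kubectl resolves to: $(command -v kubectl)" ;;
esac
```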
This is on an Ubuntu VM, and kubectl is installed with Homebrew, so it is located at /home/linuxbrew/.linuxbrew/bin/kubectl.
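One thing that may be worth ruling out, just a guess: Homebrew on Linux typically adds that directory to the PATH via a shellenv line in an interactive shell profile, so a non-interactive SSH session might not see it:

```sh
# Check whether a non-interactive SSH session sees kubectl at all
# (my-vm is a placeholder for the host):
ssh my-vm 'command -v kubectl || echo "kubectl not on PATH"; echo "$PATH"'
```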
And kubectl --help runs successfully with exit code 0?
You can also go to Settings -> Troubleshoot -> Launch in debug mode to see debug output of what is run when searching for available connections and why Kubernetes doesn't show up.
Are there any updates on this? I wasn't able to reproduce it so far.
I reworked some parts of that detection, so ideally this should work now.