Feature request: change kubectl binary version per cluster
Hello, I have a few k8s clusters: old clusters on version 1.13.12 and new ones on 1.16.13. When I run `kubectl describe ingress` with kubectl 1.16 against a 1.13 cluster, I get this error:
Error from server (NotFound): the server could not find the requested resource
So I have to use an older kubectl version, for example 1.14. I would like kubie to set my kubectl binary per cluster; kubie could read the kubectl binary path from the cluster config file.
I have the same problem with helm: on old clusters I use Helm 2, and on new ones Helm 3.
My proposal is to add an `aliases` section to the configuration:
```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    certificate-authority-data:
    server:
    aliases:
      kubectl: "~/bin/kubectl-1.14"
      helm: "~/bin/helm2.14.1"
  name:
contexts:
- context:
    cluster:
    user:
    namespace:
  name:
current-context:
users:
- name:
  user:
    client-certificate-data:
    client-key-data:
```
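To illustrate the intent, here is a minimal sketch of how a wrapper could resolve the right binary per cluster. Everything here is hypothetical: kubie has no `aliases` key today, and the cluster names and paths are made up for the example.

```shell
#!/bin/sh
# Hypothetical sketch: pick the kubectl binary based on the active cluster.
# A real implementation would read the `aliases` map from the kubeconfig
# instead of hard-coding it here.
resolve_kubectl() {
  cluster="$1"
  case "$cluster" in
    old-cluster-1.13) echo "$HOME/bin/kubectl-1.14" ;;  # alias from the config
    *)                echo "kubectl" ;;                  # default binary on PATH
  esac
}

# Example invocation: old clusters get the pinned binary, others the default.
resolve_kubectl old-cluster-1.13
resolve_kubectl new-cluster-1.16
```

The same lookup would apply to the `helm` alias, selecting Helm 2 or Helm 3 per cluster.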
That's a pretty cool idea. I haven't experienced the issue you have with kubectl, but I could definitely use the helm selector.
An alternative approach would be that of `kbenv use auto` (https://github.com/little-angry-clouds/kubernetes-binaries-managers/tree/master/cmd/kbenv#use-version), which automatically detects the version of k8s running in the cluster and downloads an appropriate kubectl binary to use.
That functionality could be added to kubie to avoid depending on kbenv. Or potentially not: kbenv installs a binary, `kubectl-wrapper`. If you add `kubectl` to a directory in `$PATH` as a link that points to `kubectl-wrapper`, then each invocation of kubectl will transparently use kbenv's functionality, including the automatic version downloading. So technically, I think, no tool changes are needed to either kubie or kbenv to use this, just a change to the user's environment to wire them together.
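The wiring described above could look like this. This is a sketch under assumptions: it assumes kbenv's `kubectl-wrapper` is installed at `~/bin/kubectl-wrapper` and that `~/bin` can be placed ahead of any other `kubectl` on `$PATH`.

```shell
#!/bin/sh
# Hypothetical wiring: make plain `kubectl` resolve to kbenv's wrapper.
mkdir -p "$HOME/bin"

# Create (or refresh) the symlink; the target path is an assumption.
ln -sf "$HOME/bin/kubectl-wrapper" "$HOME/bin/kubectl"

# Ensure ~/bin wins over other kubectl locations (persist this in ~/.bashrc).
export PATH="$HOME/bin:$PATH"
```

After this, every `kubectl` invocation goes through the wrapper, so `kbenv use auto` version selection applies without any change to kubie.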