kubectx
[Idea] make it possible to use different contexts/namespaces in multiple shells
What do people following kubectx think of this idea:

We can detect whether the `kubectx` command is executed in interactive mode or eval'ed in a shell (like `eval $(kubectx ...)`), piped elsewhere, or redirected to a file, by checking whether /dev/stdout is a TTY.

If kubectx is evaluated in a shell, we can print an alias that aliases `kubectl` to `kubectl --context=NAME`. So when that shell is exited, the default context would go back to what was set before. This would let users work on multiple clusters simultaneously in different terminal tabs. It would look like this:

eval $(kubectx NAME)

and it would print something like `alias kubectl='kubectl --context=NAME'`, which is then evaluated in the current shell, creating the alias. Similarly, `eval $(kubens ...)` would evaluate an `alias kubectl='kubectl --namespace=NAME'` (these two wouldn't work together, but I might have a solution for that too: a basic bash function that picks up these names from env vars).
Just an idea. The good old way of using kubectx/kubens would still be around and would continue to work the same way.
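The "basic bash function that picks up these names from env" idea above could look roughly like this. This is only a sketch; the `KUBECTX_CONTEXT` and `KUBECTX_NAMESPACE` variable names are made up for illustration and are not part of kubectx:

```shell
# Hypothetical sketch: a kubectl wrapper that picks up per-shell env vars,
# so context and namespace overrides can coexist (unlike two competing aliases).
# KUBECTX_CONTEXT / KUBECTX_NAMESPACE are illustrative names.
kubectl() {
  local extra=()
  [[ -n "${KUBECTX_CONTEXT:-}" ]]   && extra+=(--context="$KUBECTX_CONTEXT")
  [[ -n "${KUBECTX_NAMESPACE:-}" ]] && extra+=(--namespace="$KUBECTX_NAMESPACE")
  command kubectl "${extra[@]}" "$@"
}
```

Because it is a function rather than an alias, it also applies inside sourced scripts without `shopt -s expand_aliases`.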
This is a feature I would find really useful!
What if, instead of (or in addition to) using `eval $(kubectx ...)`, kubectx had a `-a` parameter to *activate* a context without changing the default context? Pretty much the same as you describe, but instead of having to eval, kubectx would simply set a new alias for kubectl. To me, having to use `eval` makes it less obvious and more error-prone than typing `kubectx -a my_gke_cluster` in my shell and getting on with my work. `kubectx -a` could also support completion.
In reality, this has to work as either

eval $(kubectx -a NAME)

or

source <(kubectx -a NAME)

so that we can inject a function into bash, like:

kubectl() {
  if [[ -n "$KUBE_CONTEXT" ]]; then
    command kubectl --context="$KUBE_CONTEXT" "$@"
  else
    command kubectl "$@"
  fi
}

This would give you the illusion of the active context changing per shell.
There has been discussion of kubectl natively supporting $KUBE_CONTEXT and $KUBE_NAMESPACE, but those proposals did not take off. I think as kubectx/kubens we can do better here and support these env vars.

The main challenge is that either we ask people to source a script for kubectx/kubens during installation, or we need them to use the eval/source syntax above.
Couple of notes, because I like this idea, and I'd really prefer a `kubectx` that I can use from multiple shells at the same time (I'm using my own version that does something like that):

- The problem with just aliasing the `kubectl` command is that any non-interactive scripts that call out to kubectl don't follow the context switch.
  - This varies from annoying (e.g. kube-ps1 displays the wrong context) to dangerous (you verify some state interactively with `kubectl describe blah`, but then you run some helper script where someone on your team forgot to `shopt -s expand_aliases`, and it runs on the wrong cluster).
- But there's a better variant of this idea, I think: use the existing `$KUBECONFIG` env var, in combination with the "produce a single-context config" trick introduced in 1.11 (`kubectl config view --minify --flatten --context=$KUBE_CONTEXT > $KUBE_TMP_CONFIG_FILE`), to generate configs on the fly.
  - Then you just `export KUBECONFIG=$KUBE_TMP_CONFIG_FILE` from your shell function.
  - That way, anything that supports `KUBECONFIG` (basically everything, including helm, kube-ps1, etc.) follows the context switch, even non-interactively.
  - You can generate the temporary config files in some `mktemp -d` dir, and basically cache them there against the context/namespace name.
  - You squirrel the previous value of `KUBECONFIG` away in a different env var, so that you can support setting it back to the default later.
- You can detect whether the script is being sourced pretty reliably with things like `[[ "${BASH_SOURCE[0]}" == "${0}" ]]` (bash) or `[[ "$ZSH_EVAL_CONTEXT" == 'toplevel' ]]` (zsh), and either print instructions for sourcing the script correctly or just fall back to the old kubectx behavior.
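The steps in the list above can be sketched as one sourced shell function. All names here (`kubectx_local`, `KUBECTX_LOCAL_CACHE`, `KUBECTX_LOCAL_PREV`) are made up for illustration; this is not kubectx code:

```shell
# Sketch of the per-shell KUBECONFIG approach described above. Must be a
# sourced function so the exports affect the current shell.
kubectx_local() {
  local ctx="$1"
  # Cache generated configs against the context name, per the note above.
  local cachedir="${KUBECTX_LOCAL_CACHE:-$(mktemp -d)}"
  local cfg="$cachedir/$ctx.yaml"
  if [[ ! -f "$cfg" ]]; then
    # The 1.11+ trick: emit a single, self-contained config for one context.
    kubectl config view --minify --flatten --context="$ctx" > "$cfg" || return 1
  fi
  # Squirrel the previous value away so it can be restored later.
  export KUBECTX_LOCAL_PREV="${KUBECONFIG:-}"
  export KUBECONFIG="$cfg"
}
```

Anything that honors `KUBECONFIG` (helm, kube-ps1, non-interactive scripts) then follows the switch in this shell only.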
It took me a while to unpack your idea @dominics. Thanks a lot, I mostly agree with the KUBECONFIG route.

> use the existing `$KUBECONFIG` env var
I agree that we can use $KUBECONFIG for per-shell settings. ✅
> You can generate the temporary config files in some `mktemp -d` dir
If we can, it would be great to avoid creating a new file. Something like:

foo="$(kubectl config view --minify [...])"
KUBECONFIG=<(echo "$foo")

This doesn't work, because the process-substitution fd is readable only once. But I bet we can find a trick that avoids writing to the filesystem.
> You can detect whether the script is being sourced pretty reliably with things like `[[ "${BASH_SOURCE[0]}" == "${0}" ]]` (bash) or `[[ "$ZSH_EVAL_CONTEXT" == 'toplevel' ]]` (zsh)
I don't think we should complicate it like that.
That said, I'm actually more interested in the user experience about how env var gets exported.
- I think users should NOT have to source a script, and we should NOT change the installation instructions for this. So I'm -1 on the following:
  - `source /path/to/kubectx-helper.sh`
  - `source <(kubectx print-shell-func)`
- I think we should start a new shell for each kubectx/kubens command.
  - It's a pain to exit each nested shell after running the cmd a few times, though ❌
  - We can easily start the shell like `env KUBECONFIG=... bash`
What do you think?
In my opinion, starting a new shell is very hacky. As a user, I wouldn't expect a command to start a new shell, so I would be very surprised to discover this behavior.

There's nothing wrong with using the `source` command; what's your concern about it? If you're thinking it's a long command to type, it's always possible to wrap it in an alias 😉

I'd avoid using `<()` because it won't allow a fuzzy filter to work, but `mktemp` sounds like a perfect approach; it exists for a good reason, no need to be afraid of writing tiny scripts to disk 🙂
In other words, this doesn't look too bad to me:

$ kubectx ctx1                          # set context globally
$ . $(kubectx --local ctx2)             # set context in this terminal
$ . $(kubectx --local $(kubectx | fzy)) # choose a context with the fuzzy filter and set it in this terminal

Where `kubectx --local` is implemented like this:

context="$1"
file="$(mktemp -t "kubectx.XXXXXX")"
echo 'export KUBECONFIG=...' > "$file"
echo "$file"
If I could have `kubectx` be limited to the current shell/session, that would solve much of my problem. I administer multiple clusters daily, and not being able to configure different shells/sessions for different clusters means a lot of switching for me.
Really looking forward to this feature too!
Just a word of warning regarding this feature: I have written a custom script which uses the `$KUBECONFIG` environment variable by first writing whatever context you want as `current-context` to a file `/tmp/kubecontext/${context}.yaml`. For example, `/tmp/kubecontext/prod.yaml` might contain `current-context: prod` (and that line only).

Then just `export KUBECONFIG=/tmp/kubecontext/prod.yaml:$HOME/.kube/config`. As per the docs, the first file to set a particular value or map key wins, so whatever context was set in `/tmp/kubecontext/prod.yaml` should be used.
This works flawlessly most of the time, but sometimes, after running some `kubectl` commands, the `current-context` in `$HOME/.kube/config` gets overwritten by what was set in `/tmp/kubecontext/${context}.yaml`.

This is happening on kubectl 1.16.1 (client version). I've yet to find the cause of this bug; it might also be a bug in how my shell renders PS1. Found this project while searching for solutions. :)
@Eeemil Are you trying to use Kubernetes without your custom script in some cases? If not, why does it matter that the `current-context` in the main `.kube/config` file gets modified? It should always be overridden by `$KUBECONFIG`, right?

Also, would you mind sharing your script? It sounds pretty simple, but it would be nice not to have to write it myself.
Yes, I'm trying to use Kubernetes without my custom script in some cases. The problem is that when I run my script, `set-cluster -l prod`, I expect to be logged in to prod only in that specific terminal. Still, after running a few `kubectl` commands, I may sometimes be logged in to prod in all my terminals. (Which is a pretty scary surprise.)

Here is my script, with some names redacted. My script also affects GKE, btw, so some of it might be superfluous:
#!/usr/bin/env bash
# Set gcloud/kubernetes cluster
# Usage:
# Either:
# eval $(./tools/set-cluster.sh [-l|-g local or global] prod|stage)
# (eval needed to export environment variables)
# or
# Just steal the functions to your rc-file. Fish is incompatible with almost
# everything though ¯\_(ツ)_/¯
# or
# Add the following FISH COMPATIBLE function to your rc
# function set-cluster
# eval (PATH-TO-THIS-SCRIPT.sh -FISH $argv)
# end
# For zsh auto completion
_set-cluster() {
#compdef set-cluster
_arguments "1: :(-l -g stage prod)"
_arguments "2: :(stage prod)"
}
# Switch between k8s&gcloud clusters
# Usage: set-cluster [-POSIX (default)|-FISH] [-l|g local/global] prod|stage
set-cluster () {
local SYNTAX="POSIX"
if [ "$1" = "-POSIX" ]; then
SYNTAX="POSIX";
shift
elif [ "$1" = "-FISH" ]; then
SYNTAX="FISH";
shift
fi
if [ "$1" = "-l" ]; then
local SET_GLOBAL="false"
shift
elif [ "$1" = "-g" ]; then
local SET_GLOBAL="true"
shift
else
local SET_GLOBAL="true"
fi
local G_CLUSTER=$1
local PROD_CLUSTER="prod-cluster-name-in-kubeconfig"
local STAGE_CLUSTER="stage-cluster-name-in-kubeconfig"
_apply-cluster () {
# $1: google project
# $2: GKE cluster
# $3: Kubecontext name
# $4: Should set global? (true|false)
# Unset environment variables. If they are not unset, running `set
# cluster stage` after `set cluster -l prod` will keep you logged in to
# prod due to environment variables overriding global settings)
if [ "$SYNTAX" = "POSIX" ]; then
echo "unset CLOUDSDK_CORE_PROJECT;"
echo "unset CLOUDSDK_CONTAINER_CLUSTER;"
echo "unset KUBECONFIG;"
elif [ "$SYNTAX" = "FISH" ]; then
echo "set -e CLOUDSDK_CORE_PROJECT;"
echo "set -e CLOUDSDK_CONTAINER_CLUSTER;"
echo "set -e KUBECONFIG;"
fi
unset CLOUDSDK_CORE_PROJECT
unset CLOUDSDK_CONTAINER_CLUSTER
unset KUBECONFIG
local G_PROJECT=$1
local G_CLUSTER=$2
local K_CONTEXT=$3
local SET_GLOBAL=$4
if [ "$SET_GLOBAL" = true ]; then
gcloud config set project "$G_PROJECT"
gcloud config set container/cluster "$G_CLUSTER"
>&2 kubectl config use-context "$K_CONTEXT"
>&2 echo "Globally switched to cluster ${G_CLUSTER}/${K_CONTEXT}"
else
# https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
export CLOUDSDK_CORE_PROJECT=$G_PROJECT
export CLOUDSDK_CONTAINER_CLUSTER=$G_CLUSTER
mkdir -p "/tmp/kubecontext"
local currentcontext="/tmp/kubecontext/${K_CONTEXT}.yaml"
echo "current-context: $K_CONTEXT" > "${currentcontext}"
export KUBECONFIG="$currentcontext:$HOME/.kube/config"
if [ "$SYNTAX" = "POSIX" ]; then
echo "export KUBECONFIG=$currentcontext:$HOME/.kube/config;"
echo "export CLOUDSDK_CORE_PROJECT=$G_PROJECT;"
echo "export CLOUDSDK_CONTAINER_CLUSTER=$G_CLUSTER;"
elif [ "$SYNTAX" = "FISH" ]; then
echo "set -gx KUBECONFIG $currentcontext:$HOME/.kube/config;"
echo "set -gx CLOUDSDK_CORE_PROJECT $G_PROJECT;"
echo "set -gx CLOUDSDK_CONTAINER_CLUSTER $G_CLUSTER;"
fi
>&2 echo "Locally switched to cluster ${G_CLUSTER}/${K_CONTEXT}"
fi
}
case "${G_CLUSTER}" in
# Prod aliases
"prod" | "production" | "other" | "names" | "$PROD_CLUSTER" )
local K_CONTEXT=$(kubectl config view -o jsonpath="{.contexts[?(@.context.cluster=='$PROD_CLUSTER')].name}")
_apply-cluster "google-project-name" "google-cluster-name" "${K_CONTEXT}" "$SET_GLOBAL"
;;
# Stage aliases
"stage" | "staging" | "other" | "aliases" | "$STAGE_CLUSTER" )
local K_CONTEXT=$(kubectl config view -o jsonpath="{.contexts[?(@.context.cluster=='$STAGE_CLUSTER')].name}")
_apply-cluster "google-stage-project-name" "google-stage-cluster-name" "${K_CONTEXT}" "$SET_GLOBAL"
;;
*)
echo "Unknown cluster ${G_CLUSTER}"
;;
esac
}
shell="$(ps -p $$ | awk '$1 != "PID" {print $(NF)}')"
# Auto completion (zsh only)
[ "$shell" = "zsh" ] && compdef _set-cluster set-cluster
if [ $# -gt 0 ]; then
# If sourced without arguments, just load the functions
set-cluster "$@"
fi
@Eeemil Thank you very much!
My version is at https://github.com/vital-software/kc if anyone wants to take a look. Not sure about fish compatibility, but it works with bash/zsh including completion. It has some iterm2, kube-ps1, and aws-vault integration, but those are optional.
Using zsh/oh-my-zsh, I have a file at `$ZSH_CUSTOM/kubectx.zsh` (auto-sourced as part of oh-my-zsh, I think) with:

https://gist.github.com/stuart-warren/e5d22f51ee0affd17cbd459ddf2f67c9

which allows me to use different contexts/namespaces in different tmux splits.
@stuart-warren Tried your script. Got an error for the part `kubectl --kubeconfig $KUBECTXTTYCONFIG config current-context`. It says: `error: current-context is not set`.

@stuart-warren Oh, I just realized that even without running the `ct` function, the regular `kubectx` command would just work. This is so awesome, thanks! The snippet I put in my `.zshrc`:
# kubeconfig per session
file="$(mktemp -t "kubectx.XXXXXX")"
export KUBECONFIG="${file}:${KUBECONFIG}"
cat <<EOF >"${file}"
apiVersion: v1
kind: Config
current-context: ""
EOF
I wrote a tool, which plays well with kubectx, to accomplish the suggested functionality. It does something similar to the scripts previously suggested here, but is written in Go. Maybe some of you will find it useful: https://github.com/jlesquembre/kubeprompt
I also came here to request a similar feature. I've kinda gotten used to how `aws-vault` does it: to (temporarily) use an AWS profile for a command, you run `aws-vault exec $PROFILE --` followed by whatever command.

So, FWIW, I'm throwing my hat in for a suggestion of the form:

kubectx exec $CONTEXT -- helm ls
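An exec-style subcommand like this is straightforward to sketch as a standalone wrapper. The `kubectx_exec` name is hypothetical; kubectx does not ship this:

```shell
# Hypothetical exec-style wrapper: run a single command against a context
# without touching the global current-context.
kubectx_exec() {
  local ctx="$1"; shift
  [[ "${1:-}" == "--" ]] && shift
  local cfg rc
  cfg="$(mktemp -t kubectx-exec.XXXXXX)"
  # Materialize an isolated single-context config (kubectl >= 1.11).
  if ! kubectl config view --minify --flatten --context="$ctx" > "$cfg"; then
    rm -f "$cfg"
    return 1
  fi
  # Run the command with the isolated config, then clean up.
  KUBECONFIG="$cfg" "$@"
  rc=$?
  rm -f "$cfg"
  return $rc
}
```

Invoked as `kubectx_exec prod -- helm ls`, matching the aws-vault shape above; since the command only sees a copy, nothing it does can leak into other terminals.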
I had the same issue with managing multiple clusters and solved it with direnv.

There are two directories named `A` and `B`, one for each cluster. In each directory I created an `.envrc` file that sets the `KUBECONFIG` variable. The cluster configurations are in the `~/.kube/config-A` and `~/.kube/config-B` files.

When I `cd A`, my `kubectl` commands are targeted at cluster A. When I leave that directory, the Kubernetes context is unset and I don't have to worry about damaging some random cluster.

Third-party tools like `k9s` also work fine.

Hope this solution helps, even if it doesn't include `kubectx` :)
Eventually, I created kubech, which sets the context/namespace per shell/terminal.

The nice thing about `kubech` is that it requires zero extra config: it simply works with any cluster in `~/.kube/config` with no change to the kubeconfig file, and, most importantly, it can still be used along with kubectx/kubens :-)
Has this been added to `kubectx`? Is there any plan to add/release it? I think #219 addressed this, but I'm not sure I've understood that PR correctly.
Another tool in this vein is `jx shell`: docs; key bit of the source
https://github.com/sbstp/kubie
Drafted a quick tool in the subshell style (save as kubectl-shell
in $PATH
):
#!/bin/bash
if [[ $# = 0 ]]
then
kubectl config get-contexts
exit
elif [[ $# != 1 ]]
then
echo 'Usage: kubectl shell [<context-name>]'
exit 1
fi
ctx=$1
# Use mktemp instead of predictable /tmp/...-$RANDOM names (created 0600),
# and register cleanup before the files hold anything sensitive.
kc=$(mktemp -t kubeconfig.XXXXXX)
rc=$(mktemp -t bashrc.XXXXXX)
trap 'rm -f "$kc" "$rc"' EXIT
cat >"$rc" <<EOF
[ -f /etc/bash.bashrc ] && . /etc/bash.bashrc
[ -f ~/.bashrc ] && . ~/.bashrc
export KUBECONFIG=$kc
export PS1='$ctx\$ '
EOF
kubectl config view --flatten --merge --output json | jq --arg ctx "$ctx" '(.contexts[] | select(.name == $ctx)) as $c | {apiVersion, kind, preferences, "current-context": $ctx, contexts: [$c], clusters: [.clusters[] | select(.name == $c.context.cluster)], users: [.users[] | select(.name == $c.context.user)]}' > "$kc"
bash --rcfile "$rc"
Combining a few of the suggestions above into a solution that works with a folder of kubeconfig files and adds a different context per shell. Just add your kubeconfigs under `CUSTOM_KUBE_CONTEXTS`.
# Set the default kube context if present
DEFAULT_KUBE_CONTEXTS="$HOME/.kube/config"
if test -f "${DEFAULT_KUBE_CONTEXTS}"
then
export KUBECONFIG="$DEFAULT_KUBE_CONTEXTS"
fi
# Additional contexts to be added in a folder
CUSTOM_KUBE_CONTEXTS="/path/to/kubeconfigs"
mkdir -p "${CUSTOM_KUBE_CONTEXTS}"
OIFS="$IFS"
IFS=$'\n'
for contextFile in `find "${CUSTOM_KUBE_CONTEXTS}" -type f -name "*.yaml"`
do
chmod 600 "$contextFile"
export KUBECONFIG="$contextFile:$KUBECONFIG"
done
IFS="$OIFS"
# Needed in order for kubectx to work independently in each terminal session
export KUBECTXTTYCONFIG="${HOME}/.kube/tty/$(basename $(tty) 2>/dev/null || echo 'notty')"
mkdir -p "$(dirname $KUBECTXTTYCONFIG)"
export KUBECONFIG="${KUBECTXTTYCONFIG}:${KUBECONFIG}"
cat <<EOF >${KUBECTXTTYCONFIG}
apiVersion: v1
kind: Config
current-context: ""
EOF
# Required to speed up namespace setup. Lookups are not required
kns() {
if [ -z "$1" ]; then
kubens
echo
else
kubectl config set-context --current --namespace "$1"
fi
}
It's not just a convenience thing, it's a safety thing.
If I have one tab open with my ctx set to my development cluster, then when I open a bright red tab for production to check something out real quick, I'm gonna set my context to prod obviously. When I'm done, I close the prod tab and 10 minutes later need to delete some pods in my development context, whoops. My shell session in the first tab was changed to the production ctx. I've just bangarang'd prod.
Then I dust off my resume, because 💩
@joelmellon you should take a look at https://github.com/vmware-archive/ktx

It is discontinued, but I use it every day; it helps a lot when using more than one shell.
I use a relatively straightforward shell alias to achieve this. I published it as a gist here: https://gist.github.com/bitti/183771a7308b030d933dbe4ea9c5cc9f. It currently doesn't support switching namespaces because I don't need that, but this could easily be added.
While researching this I also found https://github.com/aabouzaid/kubech, which is pretty similar. And then there is https://github.com/sbstp/kubie, which does this and much more. It seems to be a "kitchen sink", though, so maybe overkill if you just need this use case. It also works a little differently, since it spawns a new shell with the new environment instead of changing the environment of the current shell.
@bitti The reason for that is that you can't really change the env vars of the current shell from a program executed by that shell. You have to spawn a new shell as a subprocess to do that (or run `source <(program)` in your shell, which I assume is a lot less desirable). Correct me if I'm wrong.
I've been thinking of adding a minimal implementation that launches a sub-shell with a temporary/isolated kubeconfig file derived from the specified context (e.g. `kubectx -s CTX_NAME`). That kubeconfig file would go away at the end of the session, which introduces somewhat unexpected behavior: people might think they can still edit their kubeconfig file and have it work in the current window, or have their current-namespace preference saved, etc. It might be worth pursuing as long as it's intuitive and minimal (i.e. not a kitchen sink 😉).
> @bitti The reason for that is you can't really change env vars of the current shell from a program executed from that shell. You have to spawn a new shell as a subprocess to do that (or run `source <(program)` on your shell, which I assume is a lot less desirable). Correct me if I'm wrong.
Yes, you can either source a script (which is not that bad with an alias, see my gist) or use a shell function. I don't like the subshell solution because initializing a new shell implies reading `~/.bashrc` etc., and therefore changes the state of the current session. `kubie` is pursuing this solution, though. Just after I posted my comment above, I also found https://github.com/danielfoehrKn/kubeswitch, which pursues the bash-function approach and seems to be the most mature of all.

Sadly, what both `kubie` and `kubeswitch` get wrong is that they fail to make the modified config for the current session read-only, which means tools that modify the config will silently update the local config instead of the global one. My gist takes care to make it read-only, so at least such attempts will fail and remind me that I need to switch back to the global context first. I also don't like the complexity of merging configs when `kubectl` already supports merging multiple configs via `:` in `KUBECONFIG`. So I think I'll keep using my gist until I find a need for the advanced features `kubeswitch` provides.
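The read-only safeguard described above boils down to a `chmod` on the per-session copy. A minimal sketch, with illustrative file names (the `kubectl` call is optional and only populates the copy when kubectl is available):

```shell
# Sketch: make the per-session kubeconfig read-only so tools that try to
# persist changes (e.g. current-context updates) fail loudly instead of
# silently rewriting the wrong file.
session_cfg="$(mktemp -t kubectx-session.XXXXXX)"
# Populate it from the real config if kubectl is available (illustrative).
kubectl config view --minify --flatten > "$session_cfg" 2>/dev/null || true
chmod 400 "$session_cfg"
export KUBECONFIG="$session_cfg"
```

Any later write attempt against this config then surfaces as a permission error, a reminder to switch back to the global context before making persistent changes.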