vscode-dev-containers
Copying .kube/config may not be the $KUBECONFIG configured context
- VSCode Version: 1.68.1
- Local OS Version: macOS 12.3.1
- Local chip architecture: arm64
- Reproduces in: Remote - Containers
- Name of Dev Container Definition with Issue: kubernetes-helm, kubernetes-helm-minikube
This applies to the "Kubernetes from the host" scenario, where the host kubeconfig is synced into the container.
Steps to Reproduce:
- Install docker, docker-compose, helm and kubectl in the devcontainer.
- Configure devcontainer with copy-kube-config.sh
- Bind mount the Docker socket and .kube into the devcontainer:

```jsonc
"mounts": [
  "source=/var/run/docker.sock,target=/var/run/docker-host.sock,type=bind",
  "source=${env:HOME}${env:USERPROFILE}/.kube,target=/usr/local/share/kube-localhost,type=bind"
],
"remoteEnv": {
  "SYNC_LOCALHOST_KUBECONFIG": "true"
},
```
- On the host:

```bash
export KUBECONFIG="${KUBECONFIG}:$DO_KUBECONFIG:${HOME}/.kube/config"
```

DO_KUBECONFIG is the path to a DigitalOcean Kubernetes cluster config that provides the context "do-nyc3-k8s"; ${HOME}/.kube/config is the config for the local Rancher Desktop Kubernetes, providing the context "rancher-desktop". With both on $KUBECONFIG, both configs are accessible on the host by running:

```bash
kubectl config get-contexts
```
- Set the current context to the remote "do-nyc3-k8s" cluster.
- Verify that the do-nyc3-k8s context is current using get-contexts on the host (see the host-side sketch after this list).
- Rebuild the container using Rancher Desktop (which provides Docker and the local rancher-desktop Kubernetes).
- Run kubectl config get-contexts in the container. Only the "rancher-desktop" context is available.
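For reference, a minimal host-side sketch of the steps above (the context names and the DO_KUBECONFIG path are the ones from this report; adjust for your environment):

```bash
# Merge the DigitalOcean config and the default Rancher Desktop config via $KUBECONFIG.
export KUBECONFIG="${KUBECONFIG}:$DO_KUBECONFIG:${HOME}/.kube/config"

kubectl config get-contexts              # both do-nyc3-k8s and rancher-desktop are listed
kubectl config use-context do-nyc3-k8s   # select the remote cluster
kubectl config current-context           # verify before rebuilding the dev container
```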
The approach for obtaining the kubeconfig doesn't seem correct. During development it is common to have multiple clusters configured. According to https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/#supporting-multiple-clusters-users-and-authentication-mechanisms, multiple clusters should be configured via $KUBECONFIG, and kubectl merges the listed files to make all of them available. As a result, literally copying .kube/config may not sync the host's effective $KUBECONFIG into the container, since that single file contains only the default cluster config.
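To see the mismatch on the host (a quick sketch; the context names are the ones from this report):

```bash
# kubectl merges every file listed on $KUBECONFIG, so both contexts are visible ...
kubectl config get-contexts                                     # do-nyc3-k8s and rancher-desktop

# ... but the default file alone only knows about the local cluster,
# and that single file is what gets copied into the container.
kubectl --kubeconfig "$HOME/.kube/config" config get-contexts   # rancher-desktop only
```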
I have worked around the issue in the dev container by placing the additional configurations in a config-files folder inside ~/.kube. That way they are copied into the container without any modification to the script.
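The host-side layout this relies on looks roughly like the following (the extra file name is hypothetical):

```bash
# Everything lives under ~/.kube, so the existing bind mount and copy step
# bring the extra files into the container unchanged.
ls -R "$HOME/.kube"
# config                          <- default config (rancher-desktop context)
# config-files/do-nyc3-k8s.yaml   <- additional cluster config (hypothetical file name)
```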
That said, I have also modified the copy-kube-config.sh script to put the default kubeconfig on $KUBECONFIG, create ~/.kube/config-files if it does not exist, and iterate over the additional config files, appending each to $KUBECONFIG so that kubectl config get-contexts returns the merged configuration. This lets the developer choose a context from all of the available contexts inside the dev container. I can send the patch.
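A rough sketch of that change, assuming the ~/.kube/config-files layout described above (the actual patch may differ):

```bash
# Inside the container, after the default config has been copied to ~/.kube/config:
# start from the default file, then append every extra file found under
# ~/.kube/config-files so kubectl presents a merged set of contexts.
mkdir -p "$HOME/.kube/config-files"

KUBECONFIG="$HOME/.kube/config"
for extra in "$HOME"/.kube/config-files/*; do
  [ -f "$extra" ] && KUBECONFIG="$KUBECONFIG:$extra"
done
export KUBECONFIG

kubectl config get-contexts   # now lists contexts from every appended file
```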
Thanks for opening @scriptjs, and glad to hear you found a solution!
Please feel free to open a PR and share the PR # here.