gke-gcloud-auth-plugin failed with exit code 1
Describe the bug
After upgrading macOS 12.6 -> 13.0.1, I experience the following error when attempting to connect to any GKE cluster:
F1113 21:56:57.109933 19794 cred.go:123] print credential failed with error: Failed to retrieve access token:: failure while executing gcloud, with args [config config-helper --format=json]: exit status 127
E1113 21:56:57.110496 19793 proxy_server.go:147] Error while proxying request: getting credentials: exec: executable gke-gcloud-auth-plugin failed with exit code 1
getting credentials: exec: executable gke-gcloud-auth-plugin failed with exit code 1
I am able to connect using kubectl from the command line.
To Reproduce
Steps to reproduce the behavior:
- Install the gcloud SDK
- Install the auth plugin: gcloud components install gke-gcloud-auth-plugin (see the command sketch after this list)
- Create kubeconfig file that connects to GKE
- Launch Lens with this kubeconfig
- Attempt to load cluster and observe error
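For reference, a minimal command sketch of those steps (cluster name and zone are placeholders):
# install the auth plugin as a gcloud component
gcloud components install gke-gcloud-auth-plugin
# write a kubeconfig entry for the GKE cluster
gcloud container clusters get-credentials {{your cluster}} --zone={{your zone}}
# confirm the plugin binary runs outside of Lens
gke-gcloud-auth-plugin --version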
Expected behavior
I should be able to connect to the GKE cluster without an auth error.
Environment (please complete the following information):
- Lens Version: 2022.11.101953-latest (Extension API: 6.1.19, Electron: 19.1.5, Chrome: 102.0.5005.167, Node: 16.14.2)
- OS: macOS Ventura 13.0.1
- Arch: Apple M1 arm64
- Installation method: DMG
Kubeconfig:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <omitted>
    server: <omitted>
  name: prod
contexts:
- context:
    cluster: prod
    user: prod
  name: prod
current-context: prod
kind: Config
preferences: {}
users:
- name: prod
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args: null
      command: gke-gcloud-auth-plugin
      env: null
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      interactiveMode: IfAvailable
      provideClusterInfo: true
It appears running `kubectl get pods` refreshes the auth token for 60 minutes and fixes the issue. However, I have to run this often.
Does this happen immediately or does it start failing after 60 minutes?
Hey everyone, I am facing the same issue. In my case, Lens IDE isn't able to use gke-gcloud-auth-plugin at all. Every time I try connecting to my GKE cluster using Lens, I get the same error as the one mentioned in the original post. Does anyone have any idea how to resolve this problem?
Where is gke-gcloud-auth-plugin installed on your machine? Do you ever get a notification about shell environment sync failing?
I stopped encountering this issue sometime in late Nov or Dec. Perhaps an OS or Lens or AV update resolved the issue.
Hello, I had the same error, but after losing some time I fixed it. Here's what I did:
- Add this to your `.zshrc` or `.bashrc`: `export USE_GKE_GCLOUD_AUTH_PLUGIN=False`
- Run in your shell: `gcloud components update`
- Also run in your shell: `gcloud container clusters get-credentials {{your cluster}} --zone={{your zone}}`
Now your `.kube/config` will look something like this:
users:
- name: {{your cluster}}
  user:
    auth-provider:
      config:
        access-token: {{access-token}}
        cmd-args: config config-helper --format=json
        cmd-path: /home/{user}/google-cloud-sdk/bin/gcloud
        expiry: {{expiry time}}
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
And then you can try connecting again using Lens.
Facing the same issue. Following @naanadr's suggestion to change USE_GKE_GCLOUD_AUTH_PLUGIN to false (`export USE_GKE_GCLOUD_AUTH_PLUGIN=False`) seems to fix the issue in Lens, but unfortunately it breaks kubectl, so we end up with Lens working but not kubectl.
`k get pods`
error: The gcp auth plugin has been removed.
Please use the "gke-gcloud-auth-plugin" kubectl/client-go credential plugin instead.
See https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details
@naanadr This solution worked for me perfectly. Both Lens and kubectl work fine in my case.
This worked for me as well. All of kubectl, k9s and Lens work after this. Thank you!
I solved this by changing the shell used by Lens to my main shell (fish). (The option is set in Preferences -> Terminal.)
After Kubernetes 1.26, the solution with `export USE_GKE_GCLOUD_AUTH_PLUGIN=False` doesn't work anymore. Lens works, but kubectl gives this error:
error: The gcp auth plugin has been removed.
Please use the "gke-gcloud-auth-plugin" kubectl/client-go credential plugin instead.
See https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details
I worked around it by creating two contexts and two users. It should work until the token expires.
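A rough sketch of what that two-user workaround could look like, assuming a cluster named prod as in the original post (the user/context names and the gcloud path are placeholders): one user keeps the legacy gcp auth-provider that Lens can use, the other keeps the exec plugin for kubectl.
contexts:
- context:
    cluster: prod
    user: prod-lens
  name: prod-lens
- context:
    cluster: prod
    user: prod-kubectl
  name: prod-kubectl
users:
# legacy auth-provider entry; the access token is cached in the kubeconfig until it expires
- name: prod-lens
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /path/to/google-cloud-sdk/bin/gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
# exec plugin entry, used from the command line
- name: prod-kubectl
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      interactiveMode: IfAvailable
      provideClusterInfo: true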
@antonr-p2p I've had the same issue. You only need to add the full path to gke-gcloud-auth-plugin in the `command:` field of your `.kube/config`, and set `export USE_GKE_GCLOUD_AUTH_PLUGIN=True`. Btw, I'm on macOS.
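For example, the users entry would change roughly like this (the path below is only a placeholder; running `which gke-gcloud-auth-plugin` in a shell where the plugin works will show the real location):
users:
- name: prod
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      # absolute path instead of the bare plugin name, so Lens does not depend on the shell's PATH
      command: /path/to/google-cloud-sdk/bin/gke-gcloud-auth-plugin
      interactiveMode: IfAvailable
      provideClusterInfo: true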
Looks like this is the fix. I think `gcloud components install gke-gcloud-auth-plugin` installs the plugin in the path where the initial tar files were extracted, and that path only gets added to my .zshrc. It seems Lens doesn't pick that up, hence the command was unavailable to Lens.
I tried both of the fixes above (the full-path change and the USE_GKE_GCLOUD_AUTH_PLUGIN workaround), but neither worked. EKS clusters work fine.
@mtahaahmed But this issue doesn't have anything to do with EKS, only GKE.
I was able to fix this using the commands below:
cd ~/.config/gcloud/virtenv
bin/pip3 uninstall cffi -y
bin/pip3 install --no-binary :all: cffi --ignore-installed
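If it helps, after running those commands you can sanity-check both halves of the auth chain, for example:
# the gcloud call that was failing with exit status 127 in the original error
gcloud config config-helper --format=json
# verify the plugin binary itself runs
gke-gcloud-auth-plugin --version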
This solution worked for me perfectly!
This worked like a charm for kubectl, thank you! ~~However, Lens is still broken. Any idea what's up? I tried all the other solutions in here but can't get Lens to work.~~ Changing the command in the kube config as mentioned above worked for fixing the Lens issue.
I have no idea how I got into this state, but this also worked for me to get kubectl working again. Thank you!