Can't connect to DO cluster: http proxy error
Describe the bug
Hello everyone, I cannot connect to my DigitalOcean cluster. After selecting my kubeconfig and trying to connect to the cluster, I get the error message "http: proxy error: getting credentials: exec: exit status 1".
To Reproduce
- Start Lens
- Click on cluster icon to try to connect
Expected behavior
As explained in various tutorials, no further configuration is needed, so it should just work.
Environment (please complete the following information):
- Lens Version: 3.5.3
- OS: Ubuntu 20.04
- Installation method: snap
Logs:
error: Failed to connect to cluster do-fra1-k8s-dev: {"name":"StatusCodeError","statusCode":502,"message":"502 - \"getting credentials: exec: exit status 1\"","error":"getting credentials: exec: exit status 1","options":{"json":true,"timeout":10000,"headers":{"host":"3cd200fd-7272-4999-8507-34815abcce79.localhost:38215"},"uri":"http://127.0.0.1:38215/api-kube/version","simple":true,"resolveWithFullResponse":false,"transform2xxOnly":false},"response":{"statusCode":502,"body":"getting credentials: exec: exit status 1","headers":{"content-type":"text/plain","date":"Mon, 17 Aug 2020 09:43:18 GMT","connection":"close","transfer-encoding":"chunked"},"request":{"uri":{"protocol":"http:","slashes":true,"auth":null,"host":"127.0.0.1:38215","port":"38215","hostname":"127.0.0.1","hash":null,"search":null,"query":null,"pathname":"/api-kube/version","path":"/api-kube/version","href":"http://127.0.0.1:38215/api-kube/version"},"method":"GET","headers":{"host":"[hidden].localhost:38215","accept":"application/json"}}}}
Kubeconfig:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [hidden]
    server: https://[hidden].k8s.ondigitalocean.com
  name: do-fra1-k8s-dev
contexts:
- context:
    cluster: do-fra1-k8s-dev
    user: do-fra1-k8s-dev-admin
  name: do-fra1-k8s-dev
current-context: do-fra1-k8s-dev
kind: Config
preferences: {}
users:
- name: do-fra1-k8s-dev-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - kubernetes
      - cluster
      - kubeconfig
      - exec-credential
      - --version=v1beta1
      - --context=default
      - [hidden]
      command: doctl
      env: null
Thanks in advance for your help!
You can try to use the full path of doctl in your kubeconfig.
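For example, in the users section of the kubeconfig above, that would mean changing the command field to an absolute path (assuming the snap binary lives at /snap/bin/doctl; check with which doctl):

exec:
  apiVersion: client.authentication.k8s.io/v1beta1
  command: /snap/bin/doctl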
Using /snap/bin/doctl instead of just doctl is giving me the same error.
@jakolehm do you know if there are still some issues related to snap here?
I also tried running the command doctl kubernetes cluster kubeconfig exec-credential --version=v1beta1 --context=default ***-***-*** to see the output:
{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1beta1","spec":{},"status":{"expirationTimestamp":"2020-08-31T14:06:33Z","token":"****************"}}
Looks like there isn't any problem with this.
@nevalla no known issues. @Ridzu95 does it work if you start Lens from a terminal where kubectl works?
@jakolehm The logs I mentioned in my first message are exactly what I get as output when starting Lens from a terminal. So no, it doesn't work, unfortunately.
Got this reproduced, and I think I found the reason too. When opening Lens, snap sets the XDG_CONFIG_HOME env variable to point to Lens's snap sandbox:

declare -x XDG_CONFIG_HOME="/home/parallels/snap/kontena-lens/110/.config"

and doctl uses that env var to determine its default config file:

-c, --config string Specify a custom config file (default "/home/parallels/snap/kontena-lens/110/.config/doctl/config.yaml")

And the issue is that there is no access token in that config file.
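You can confirm this from Lens's built-in terminal (a quick check, assuming doctl is on the PATH there):

$ echo $XDG_CONFIG_HOME
$ doctl account get

If doctl account get fails there but works in a regular shell, the sandboxed config file is the culprit.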
I think the workaround is to add --config=/path/to/doctl/config.yaml to the kubeconfig file and re-add the cluster to Lens. Alternatively, you can run doctl auth init in the Lens terminal if you have some working cluster available.
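In the kubeconfig above, that would mean one extra entry in the exec args, something like this (the config path is illustrative; adjust it to your own user):

args:
- kubernetes
- cluster
- kubeconfig
- exec-credential
- --version=v1beta1
- --context=default
- --config=/home/parallels/.config/doctl/config.yaml
- [hidden]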
@nevalla could we override the XDG_CONFIG_HOME env for kubectl when we detect a snap environment?
Maybe. I think ~/.config would be the correct path.
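Another per-cluster option (untested sketch): the exec block in a kubeconfig accepts an env list that is passed through to the credential plugin, so XDG_CONFIG_HOME could in principle be pinned there instead:

exec:
  apiVersion: client.authentication.k8s.io/v1beta1
  command: doctl
  env:
  - name: XDG_CONFIG_HOME
    value: /home/parallels/.config

(kubeconfig values are not expanded, so the path has to be literal; /home/parallels is just the example user from above.)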
Update: I did not manage to make it work by adding a --config argument to my kubeconfig, but I can connect to my cluster by creating the doctl folder in Lens's snap sandbox and a symbolic link:

$ mkdir /home/$USER/snap/kontena-lens/110/.config/doctl
$ ln -s /home/$USER/.config/doctl/config.yaml /home/$USER/snap/kontena-lens/110/.config/doctl/config.yaml

This solution is not ideal, but it works fine, thanks to @nevalla's pointers.
Right... I might have created that dir too during debugging.
@Ridzu95 that solution was a real lifesaver for me today, thank you.
This solution works for me.
@nevalla is it possible to fix this XDG_CONFIG_HOME problem in a future release?
Just to add on, I was having similar issues, as well as http: proxy error: proxyconnect tcp: dial tcp [::1]:8001: connect: connection refused, and I solved it by opening Lens with sudo:

sudo DEBUG=true /Applications/Lens.app/Contents/MacOS/Lens

Turns out Lens, started by just clicking the Mac icon without sudo, didn't have the permissions to start the auth proxy it needs! Works perfectly now that I open it with sudo from the terminal.
@antsankov Got the same problem :) Any chance someone would do a PR? :)
Yes, my problem is on AWS. I guess it is because aws-cli is needed to authenticate to the k8s cluster. I wonder if there is a more elegant solution to this problem?
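If the cause is the same snap sandboxing, a parallel (untested) workaround might be to link the real AWS config directory into the sandbox, since the AWS CLI resolves credentials under $HOME and snap redirects that as well:

$ ln -s /home/$USER/.aws /home/$USER/snap/kontena-lens/current/.aws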
Having issues with my team accessing the cluster connect feature in Lens. I am guessing this has to do with the kubeconfig file. I am currently using minikube on an M1, and I created a team space and allowed access to specific users. They can see the team space but cannot access the cluster.
I think what happens (still doing some research) is that when you add a cluster, it pulls from your kubeconfig, but when you create your space you need a new config that is then shared throughout the space. When my team tries to access the clusters I shared, they get an SSL error, which comes directly from the config.
When you go to your space, I think it pulls your config, so what probably needs to happen is that you pull a new config in the space and promote it. Then the users you invite to the space use the same config, and the proxy error is resolved.
Please advise.
Hi, any updates?
Did you find a solution for AWS? I'm getting the same. kubectl is able to connect to the server, while configuring Lens throws the error "Unable to locate credentials. You can configure credentials by running "aws configure"."
Same issue here. kubectl is OK, but with Lens I get the same error message:
Just a note on the previous replies, as version 110 is quite old now. If you use "current" instead of "110", it will match whatever version is installed:

$ doctl auth init
$ doctl account get
$ mkdir -p /home/$USER/snap/kontena-lens/current/.config/doctl
$ ln -s /home/$USER/.config/doctl/config.yaml /home/$USER/snap/kontena-lens/current/.config/doctl/config.yaml