rancher-desktop
Kubernetes was unable to start: TypeError [ERR_INVALID_PROTOCOL]: Protocol "http:" not supported. Expected "https:"
When installing Rancher Desktop behind a proxy, the following error occurs when k3s starts.
Kubernetes was unable to start: TypeError [ERR_INVALID_PROTOCOL]: Protocol "http:" not supported. Expected "https:"
I am behind a proxy and have the `HTTP_PROXY` and `HTTPS_PROXY` variables set. The proxy used by my company is available over "http" only, so both variables are set to something like "http://my-proxy-url". I also have values set in the `NO_PROXY` variable for some internal domains. Does there need to be an additional entry in `NO_PROXY` for Kubernetes?
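For reference, the setup described above looks something like this (all values are placeholders):

```shell
# The proxy is reachable over plain http only, so even HTTPS_PROXY
# carries an http:// URL
export HTTP_PROXY="http://my-proxy-url"
export HTTPS_PROXY="http://my-proxy-url"
# Internal domains that must bypass the proxy
export NO_PROXY="localhost,127.0.0.1,.corp.example.com"
```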
Screenshots
Setup (please complete the following information):
- OS: Windows 10 Enterprise 10.0.19042 Build 19042
- Rancher Desktop version 0.6.0-186-g0c32457
- Using nightly build due to this issue
- Kubernetes version: 1.21.5
Snippet from `background.log`:
Kubernetes was unable to start: TypeError [ERR_INVALID_PROTOCOL]: Protocol "http:" not supported. Expected "https:"
at new ClientRequest (_http_client.js:155:11)
at TunnelingAgent.request (http.js:50:10)
at TunnelingAgent.createSocket (C:\Users\myuser\AppData\Local\Programs\Rancher Desktop\resources\app.asar\node_modules\tunnel-agent\index.js:135:25)
at TunnelingAgent.createSecureSocket [as createSocket] (C:\Users\myuser\AppData\Local\Programs\Rancher Desktop\resources\app.asar\node_modules\tunnel-agent\index.js:200:41)
at TunnelingAgent.createConnection (C:\Users\myuser\AppData\Local\Programs\Rancher Desktop\resources\app.asar\node_modules\tunnel-agent\index.js:98:8)
at TunnelingAgent.addRequest (C:\Users\myuser\AppData\Local\Programs\Rancher Desktop\resources\app.asar\node_modules\tunnel-agent\index.js:92:8)
at new ClientRequest (_http_client.js:306:16)
at Object.request (https.js:313:10)
at Request.start (C:\Users\myuser\AppData\Local\Programs\Rancher Desktop\resources\app.asar\node_modules\request\request.js:751:32)
at Request.end (C:\Users\myuser\AppData\Local\Programs\Rancher Desktop\resources\app.asar\node_modules\request\request.js:1505:10) {
code: 'ERR_INVALID_PROTOCOL'
}
We have exactly the same behavior behind our corporate proxy. Our proxy also uses only http as the protocol. I tested it with Rancher Desktop version 0.6.0-225-gb4f88b4.
Just tried with version 0.7.0 and got the same error.
I'm getting the same issue too. Is there any workaround or solution to get this running?
I'm facing the same issue with version 0.7.1
Hey, I got the same issue. Solved it for me by looking at the `.kube/config` for the cluster IP address and then adding it to the `NO_PROXY` environment variable.
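A sketch of that workaround; the `sed` pattern assumes the default single-cluster kubeconfig layout, and the helper name is made up:

```shell
# Hypothetical helper: pull the host out of the kubeconfig "server:" line,
# e.g. "server: https://172.20.175.33:6443" -> "172.20.175.33"
cluster_ip_from_kubeconfig() {
    sed -n 's|.*server: https://\([0-9.]*\):.*|\1|p' "$1" | head -n1
}

# Then add it to NO_PROXY before starting Rancher Desktop, e.g.:
#   export NO_PROXY="${NO_PROXY:+${NO_PROXY},}$(cluster_ip_from_kubeconfig ~/.kube/config)"
```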
It was able to proceed and finish the installation process, but my `kubectl` command keeps failing with "binary not found", like this:
Error: unable to retrieve binary from "https://storage.googleapis.com/kubernetes-release/release/v1.23.1+k3s2/bin/windows/amd64/kubectl.exe" (404 Not Found)
I'm not sure if the proxy is patched to make the googleapis address return something, but if you try to access the resource URL directly, the binary does not appear to exist (the `+k3s2` suffix looks like the k3s version string, which would not exist in the upstream `kubernetes-release` bucket).
I'm running on 1.0.0-beta.1
I was connected to the company VPN through an http proxy and also had this issue. I had to completely turn off the VPN and the Windows proxy, and remove the PROXY environment variables, to successfully start Rancher Desktop. After the first successful run, when I connect to the VPN and restart Rancher Desktop, there is an issue connecting to the Kubernetes cluster: Error: connect ETIMEDOUT. While on the VPN connection, kubectl commands to the Rancher cluster are not working:
Unable to connect to the server: dial tcp RANCHER_IP:6443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
I recently installed Rancher Desktop v1.0.0 on a Windows 10 machine behind a proxy and got the same error. When I looked into the `~/.kube/config` file I found an IP address for the server, like `server: https://10.173.120.120:6443`. I manually updated it to `server: https://localhost:6443`. This fixed my error; I am able to run `kubectl` commands now. But the GUI still gives me the same error.
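The same edit can be scripted rather than done by hand; a minimal sketch, assuming the `server:` line looks exactly like the one above (alternatively, `kubectl config set-cluster rancher-desktop --server=https://localhost:6443` should achieve the same thing if the cluster entry has the default name):

```shell
# Rewrite the kubeconfig server line to localhost, keeping a .bak copy
point_kubeconfig_at_localhost() {
    sed -i.bak 's|server: https://[0-9.]*:6443|server: https://localhost:6443|' "$1"
}

# e.g.  point_kubeconfig_at_localhost ~/.kube/config
```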
I had a similar issue with k3d too, with Docker Desktop. There I used to pass `--api-port 127.0.0.1:6443` when spinning up a cluster.
Hi, the Kubernetes API only works over `https`, so it rejects the connection from an `http` proxy because there's no authentication. See https://github.com/kubernetes-client/javascript/issues/648 for a similar issue when communicating directly through the underlying `@kubernetes/client-node` library. That issue references the library's docs on configuring proxies at https://github.com/request/request/blob/master/README.md#proxies ; you might find something there that will help.
I don't understand why you're not also running an https proxy, but IMO finding a way around this sounds like an invitation to get a CVE, at best.
Note to further research a solution.
I also faced the same "ERR_INVALID_PROTOCOL" problem behind the Proxy and was able to solve it with the help of this issue. Thank you very much.
I have thought about it for a while, but there seems to be no good solution. If I rewrite the NO_PROXY environment variable automatically, I'll have a problem with unnecessary definitions later on...
How did you fix this issue? By adding the IP address to the NO_PROXY settings?
@ericpromislow If I understand this correctly, the issue is about the Electron app making kube API requests over the IP of the VM, right? (And not about communication internal to the VM.)
Wouldn't it make sense for the app to always add the VM IP to the `NO_PROXY` setting before making any calls? The IP address is obviously known at this point, so it should be easy, and there should be no need for the user to configure anything.
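Sketched in shell (the app itself would do this in JavaScript, and `append_no_proxy` is a hypothetical helper), the suggested behavior is roughly:

```shell
# Add an entry to NO_PROXY unless it is already present
append_no_proxy() {
    case ",${NO_PROXY}," in
        *",${1},"*) ;;                                  # already listed, do nothing
        *) NO_PROXY="${NO_PROXY:+${NO_PROXY},}${1}" ;;  # append with a comma if non-empty
    esac
    export NO_PROXY
}

# e.g.  append_no_proxy 192.168.143.4   # the VM IP, known at startup
```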
Same boat. I'm trying to use the WSLENV functionality to get environment data into WSL, which works, but that then throws off k8s/k3s. With Kubernetes disabled there are no issues, but start it and you get this error. I have tried excluding 172.16.0.0/12 via no_proxy, but I'm guessing that is not working.
I am having this issue too. Has this been fixed?
Nope. Hoping to get more folks to keep flagging area/proxy as a place to invest.
I'm working with a company that is also evaluating Rancher Desktop. For remote workers who use a VPN we have `HTTP_PROXY` environment variables set, mainly for the purpose of other tools (e.g. Cypress). When running Rancher Desktop on these machines we are seeing errors such as the one described here, as the proxy address we use is under the `http` scheme/protocol. We're looking for a reasonable workaround, or a fix to address this issue, in order to move forward with our evaluation.
We are using Windows 10 and my build is `10.0.19044 N/A Build 19044`.
Error from `background.log`:
2022-08-15T15:01:17.356Z: Kubernetes was unable to start: TypeError [ERR_INVALID_PROTOCOL]: Protocol "http:" not supported. Expected "https:"
at new NodeError (node:internal/errors:371:5)
at new ClientRequest (node:_http_client:158:11)
at TunnelingAgent.request (node:http:96:10)
at TunnelingAgent.createSocket (C:\Users\[USER]\AppData\Local\Programs\Rancher Desktop\resources\app.asar\node_modules\tunnel-agent\index.js:135:25)
at TunnelingAgent.createSecureSocket [as createSocket] (C:\Users\[USER]\AppData\Local\Programs\Rancher Desktop\resources\app.asar\node_modules\tunnel-agent\index.js:200:41)
at TunnelingAgent.createConnection (C:\Users\[USER]\AppData\Local\Programs\Rancher Desktop\resources\app.asar\node_modules\tunnel-agent\index.js:98:8)
at TunnelingAgent.addRequest (C:\Users\[USER]\AppData\Local\Programs\Rancher Desktop\resources\app.asar\node_modules\tunnel-agent\index.js:92:8)
at new ClientRequest (node:_http_client:305:16)
at Object.request (node:https:353:10)
at Request.start (C:\Users\[USER]\AppData\Local\Programs\Rancher Desktop\resources\app.asar\node_modules\request\request.js:751:32) {
code: 'ERR_INVALID_PROTOCOL'
}
Sometimes a blank error dialog shows up as well without anything showing inside. I'm guessing this is a separate display issue with the Desktop GUI.
The core of the problem is that the underlying process (the Electron app) is mismanaging the `HTTP(S)_PROXY` environment variables by making HTTP calls to the proxy where the upstream requires HTTPS, and vice versa. We will try to set up an environment to reproduce this. Once it is reproduced we will have a better idea of how to tackle it.
Below are the scenarios to consider when tackling this issue:
Electron App --http--> proxy --http--> upstream
Electron App --https--> proxy --https--> upstream
Electron App --https--> proxy --http--> upstream
Electron App --http--> proxy --https--> upstream
For me the resolution for this specific error has been using no_proxy in the distro context to include the elected cluster IP, but this has issues in WSL2 because of the dynamic subnet. I can't exclude by DNS unless intra-cluster traffic shifts to connections by name, and there isn't CIDR support in no_proxy. I tried a provisioning script, but the timing of the k3s start seemed to be an issue.
The interesting thing is I only see this error on WSL2, and not with the Lima VM on macOS when the proxy is injected into the Alpine environment. I see other errors on macOS where internal traffic seems to be sent to the proxy when it shouldn't be, but key traffic seems to stay in the VM.
Docker uses their own in-distro proxy to avoid having to manage all of this in the environment. They can inject a forward rule for egress to the next proxy in the chain for the host network, and everything else hairpins back into the distro network space. So they just push everything to the proxy and put the intelligence there.
We use HTTP for negotiating both HTTP and HTTPS external connections on our proxy as it's just a standard CONNECT.
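For illustration, the tunnel request a client sends to such a proxy is plain-text HTTP regardless of the scheme of the final URL; the hostname below is a placeholder:

```shell
# The CONNECT request that opens an HTTPS tunnel through an http-scheme proxy;
# everything after the proxy's "200 Connection established" reply is raw TLS
connect_request() {
    printf 'CONNECT %s:443 HTTP/1.1\r\nHost: %s:443\r\n\r\n' "$1" "$1"
}

# e.g. connect_request registry.example.com
# is what "curl -x http://proxy.corp.com:9443 https://..." sends first
```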
Is there any update on this issue? How do we make k3s/k8s work in Rancher Desktop behind an `http_proxy`?
Unfortunately not yet ☹️
I tracked this down to the /etc/environment file in the VM. If Rancher Desktop is started (for the first time, or after a reset) while the environment variables `http_proxy` (etc.) are set, they seem to get picked up and inserted into `/etc/environment`, and then everything gets them, even things that shouldn't:
#LIMA-START
HTTP_PROXY=http://proxy.corp.com:9443/
HTTPS_PROXY=http://proxy.corp.com:9443/
NO_PROXY=...
#LIMA-END
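A sketch of cleaning those lines back out (to be run against `/etc/environment` inside the VM, e.g. via `rdctl shell`; it uses the same `sed` expression as the provisioning script below):

```shell
# Delete any http_proxy/HTTP_PROXY-style lines from an environment file
strip_proxy_lines() {
    sed -i -E -e '/^[a-z]+_proxy=/d' -e '/^[A-Z]+_PROXY=/d' "$1"
}

# e.g. (inside the VM, as root):  strip_proxy_lines /etc/environment
```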
I avoided this by writing a `rancher` wrapper script for the team's developers to use, which configures a provisioning script before launch and also explicitly unsets those pesky variables from the environment. The provisioning is such that only /etc/init.d/docker gets the proxy env variables when needed. We don't want them getting into every daemon and environment, and particularly not into k3s.
Without pasting the whole script, the provisioning function is approximately:
function configureOverrides {
  local OVERRIDE_YAML="$1"
  local PROXY="$2"
  local NO_PROXY="$3"
  local PROXY_SH="/etc/profile.d/proxy.sh"
  local PRIVATE_PROXY_SH="/etc/profile.d/proxy.private"
  local PROXY_INFO="Added by myscript $(date)"
  local SOURCE_PROXY="source ${PRIVATE_PROXY_SH}"
  local INIT_DOCKER="/etc/init.d/docker"
  local INIT_CONTAINERD="/etc/init.d/containerd"
  local CONFIG_DIR=$(dirname "${OVERRIDE_YAML}")

  if [[ ! -d "${CONFIG_DIR}" ]] ; then
    # In case first-time launch has not been done
    mkdir -p "${CONFIG_DIR}"
  fi

  # Notes:
  # - /etc/profile.d/*.sh files are loaded for all login shells, but other file extensions are ignored
  # - /etc/init.d/docker explicitly loads /etc/profile.d/proxy.sh at startup
  # - We want the proxy set on dockerd only, not kubernetes etc.
  # - Beware: very specific escaping and quoting on the nested here-docs...
  cat <<EoM >"${OVERRIDE_YAML}"
provision:
- mode: system
  script: |
    #!/bin/sh
    # Delete proxy settings that may have leaked into /etc/environment (belt-and-braces)
    sed -i -E -e '/^[a-z]+_proxy=/d' -e '/^[A-Z]+_PROXY=/d' /etc/environment

    # Clear all HTTP_PROXY variables; set PRIVATE_PROXY as needed (this file is loaded by all shells)
    cat <<'EoF1' >${PROXY_SH}
    # ${PROXY_INFO}
    unset all_proxy http_proxy https_proxy no_proxy
    unset ALL_PROXY HTTP_PROXY HTTPS_PROXY NO_PROXY
    PRIVATE_PROXY='${PROXY}'
    PRIVATE_NO_PROXY='${NO_PROXY}'
    export PRIVATE_PROXY PRIVATE_NO_PROXY
    EoF1

    # Configure HTTP_PROXY based on PRIVATE_PROXY (this file will be loaded by dockerd only)
    # Lowercase-only needed; the uppercase variables get set from them automatically
    cat <<'EoF2' >${PRIVATE_PROXY_SH}
    # ${PROXY_INFO}
    if [ -n "\${PRIVATE_PROXY}" ] ; then
      http_proxy="\${PRIVATE_PROXY}"
      https_proxy="\${PRIVATE_PROXY}"
      no_proxy="\${PRIVATE_NO_PROXY}"
      export http_proxy https_proxy no_proxy
    fi
    EoF2

    # Load proxy.private into dockerd (and containerd)
    for INIFILE in ${INIT_DOCKER} ${INIT_CONTAINERD} ; do
      grep -qxF '${SOURCE_PROXY}' \${INIFILE} || echo -e '# ${PROXY_INFO}\n${SOURCE_PROXY}' >>\${INIFILE}
    done
EoM

  echo "Rancher proxy [${PROXY:-off}] configuration set [${OVERRIDE_YAML}]."
}
It's called more-or-less like this:
YAML="${HOME}/Library/Application Support/rancher-desktop/lima/_config/override.yaml"
PROXY="http://proxy.corp.com:9443"
NO_PROXY="localhost,.internal,...." # whatever
# when proxy needed, either:
configureOverrides "${YAML}" "${PROXY}" "${NO_PROXY}"
# or when proxy not needed:
configureOverrides "${YAML}" "" ""
# then start rancher-desktop
# Always clear proxy settings to prevent propagation into lima
unset all_proxy http_proxy https_proxy no_proxy
unset ALL_PROXY HTTP_PROXY HTTPS_PROXY NO_PROXY
rdctl start --path="${RANCHER_APP}" --container-engine=moby ...