cloudflared
kubectl tunnel not working
Hi,
I've been trying to set up an Argo Tunnel to expose my Kube API, but the SOCKS5 solution is not working out for me.
This is the command I run on my kube host (origin):
cloudflared tunnel --hostname k8s.my-domain.com --url tcp://127.0.0.1:6443 --socks5=true
It runs fine. This is what I run in the client:
cloudflared access tcp --hostname k8s.my-domain.com --url 127.0.0.1:1234
Then, when I try to run kubectl with the SOCKS5 proxy on the client, this is what I get in the origin logs:
2021-03-14T19:01:53Z ERR 127.0.0.1:6443 is not a http service
2021-03-14T19:01:53Z ERR CF-RAY: 62ffc0d37f87d45f-HAM Proxying to ingress 0 error: Not a http service
A curl/kubectl to 127.0.0.1:6443 from within the origin works perfectly fine.
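For reference, the kind of local check I mean is roughly this, run directly on the origin host (-k because k3s uses its own CA, and /version is served without auth on a default cluster):
curl -k https://127.0.0.1:6443/version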
I'm using k3s with kubectl v1.15.5.
I've set all possible log levels to debug but couldn't find any meaningful information.
Thanks for any help!
Hi @RTodorov, can you try setting a SOCKS5 environment variable to route the traffic to the cloudflared listener:
env HTTPS_PROXY=socks5://127.0.0.1:1234 kubectl get pods
If that works, you can save time with this alias going forward:
alias kubeone="env HTTPS_PROXY=socks5://127.0.0.1:1234 kubectl"
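With both cloudflared commands running, usage then looks like:
kubeone get pods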
If that doesn't work, or if you're already using that and I missed it in the description, could you share any additional configuration details about your client-side environment?
Hi @TownLake ,
Thanks for answering. I was indeed using the env var, and the request is actually reaching the origin server via Cloudflare; the problem seems to be between cloudflared and my origin server. I've also tried the native kubectl proxy-url parameter in the ~/.kube/config file, but the result is the same: the request reaches the origin and then errors out.
The client-side environment is nothing special. I'm using the kube config file from the server; I just changed the server URL to https://k8s.my-domain.com.
- cluster:
    certificate-authority-data: <my-cert-here>
    proxy-url: socks5://127.0.0.1:1234
    server: https://k8s.my-domain.com  # I've tried adding :6443 here, no luck
  name: k3s-local
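In case it helps to reproduce this without kubectl, an equivalent request through the access listener would be something like this (hostname is a placeholder; -k because the API server's cert won't match the public hostname):
curl --proxy socks5://127.0.0.1:1234 -k https://k8s.my-domain.com/version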
On the server side, I'm using k3s.
Looks like the error happens here (https://github.com/cloudflare/cloudflared/blob/39065377b5d0593bd85ed3ff0698aca34fd1eb72/origin/proxy.go#L119-L123), but I wasn't able to tell what rule.Service.(ingress.HTTPOriginProxy) does.
Hi @RTodorov, I recommend using a named tunnel and ingress rules. First, follow https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/create-tunnel to create a tunnel. Then, in your config file, add the following:
tunnel: <tunnel name or ID that you created in the first step>
credentials-file: <path to the secret that was generated when you created the tunnel>
ingress:
  - hostname: k8s.my-domain.com
    service: tcp://127.0.0.1:6443
    originRequest:
      proxyType: socks
  - service: http_status:404
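Once the config is in place, route DNS for the hostname and run the tunnel, roughly:
cloudflared tunnel route dns <tunnel name or ID> k8s.my-domain.com
cloudflared tunnel run <tunnel name or ID>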
You can read more about ingress rules and available settings in https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/configuration/ingress.
Hi @chungthuang ,
I've tried this as well, and the result is unfortunately the same. I spent my entire Sunday on this: read the ingress config params a hundred times, tried the legacy tunnel creation and the new method, and the result was always the same.
I wonder if this is something related to k3s, but I don't think so: if I expose the API via HTTP and authenticate using a Kubernetes service account, it works fine. I just really don't want to expose the API without the Argo tunnel.
Is there anything else I can do to debug this issue further? The debug logs don't seem to help much.
Thanks!
I'm sorry to hear that. Can you try replacing the scheme in tcp://127.0.0.1:6443 with http? That will tell cloudflared to connect to 127.0.0.1:6443 over HTTP instead of raw TCP.
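That is, the ingress rule would become something like:
- hostname: k8s.my-domain.com
  service: http://127.0.0.1:6443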
@chungthuang if I do that, then there's a problem with the TLS handshake:
error="remote error: tls: handshake failure"
From what I understood from the Cloudflare documentation, the whole point of using SOCKS5 is to avoid the TLS handshake issue.
This is the excerpt from the documentation:
The proxy allows your local kubectl tool to connect to cloudflared via a SOCKS5 proxy, which helps avoid issues with TLS handshakes to the cluster itself. In this model, TLS verification can still be exchanged with the kubectl API server without disabling or modifying that flow for end users.
I would try changing your ingress to service: https://127.0.0.1:6443. The TLS handshake will then be between cloudflared and your origin, not between your end user and your origin. If your origin is using a self-signed certificate, you can add the noTLSVerify option:
- hostname: k8s.my-domain.com
  service: https://127.0.0.1:6443
  originRequest:
    noTLSVerify: true
Another issue might be your kubectl version: proxy-url seems to only be available with kubectl 1.19+.
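You can confirm your client version with:
kubectl version --client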
Hi @chungthuang. I think this is something with the latest versions. I've downgraded to cloudflared 2021.2.5, and now the TCP proxy works, or at least I'm getting this in the logs:
{"level":"debug","time":"2021-03-22T22:20:29Z","message":"CF-RAY: 6342cebd6f944e7f-FRA Serving with ingress rule 0"}
{"level":"debug","time":"2021-03-22T22:20:29Z","message":"CF-RAY: 6342cebd6f944e7f-FRA Request content length 0"}
{"level":"debug","time":"2021-03-22T22:20:29Z","message":"CF-RAY: 6342cebd6f944e7f-FRA Status: 200 OK served by ingress 0"}
{"level":"debug","time":"2021-03-22T22:20:29Z","message":"CF-RAY: 6342cebd6f944e7f-FRA Response Headers map[Content-Type:[text/html; charset=utf-8] Date:[Mon, 22 Mar 2021 22:20:29 GMT]]"}
{"level":"debug","time":"2021-03-22T22:20:29Z","message":"CF-RAY: 6342cebd6f944e7f-FRA Response content length unknown"}
Unfortunately, on the client side, I still get this:
2021-03-22T22:20:29Z ERR failed to connect to origin error="websocket: bad handshake" originURL=https://k3s.my-domain.com
and from the kube client:
I0323 11:55:41.030312 77764 request.go:943] Got a Retry-After 1s response for attempt 1 to https://k3s.my-domain.com/api/v1/nodes
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
I've upgraded both my k3s cluster and kubectl to the latest (1.20.4), so it's not the proxy-url version issue.
At this stage, I don't know what the issue could be, since I'm seeing the 200 OK from the k3s API in the cloudflared logs.
What is the cloudflared version on the client side? Can you try logging at debug level? I would expect a 101 response in the tunnel log if the client request reached the tunnel, because it's establishing a websocket connection.
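For example, on the client (assuming the global --loglevel flag is honored in front of the access subcommand; the hostname is the one from your logs above):
cloudflared version
cloudflared --loglevel debug access tcp --hostname k3s.my-domain.com --url 127.0.0.1:1234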
I am also having the same issue as @RTodorov with the kubectl setup through cloudflared. Has any solution been forthcoming on this issue?
I kinda gave up... I wasted too much time on this, and the tool is clearly not working, at least not with k3s.
I am also having this issue, using AKS.
Fixed by upgrading to version 2021.12.1
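Depending on how cloudflared was installed, something like this may be enough:
cloudflared update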
@ajrpayne Hi, I am trying to tunnel Kube API server traffic through cloudflared. The cluster is AKS. The error we are getting is:
E0318 16:31:20.724694 5944 azure.go:154] Failed to acquire a token: unexpected error when refreshing token: refreshing token: adal: Failed to execute the refresh request. Error = 'Post "https://login.microsoftonline.com/e85feadf-11e7-47bb-a160-43b98dcc96f1/oauth2/token": read tcp 127.0.0.1:52195->127.0.0.1:1234: read: connection reset by peer'
It looks like it's not able to get a token once the proxy is in place. Have you encountered this?
I haven't had this issue. I do run az aks get-credentials before tunneling.
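i.e., with placeholder names:
az aks get-credentials --resource-group <my-resource-group> --name <my-aks-cluster>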
I'm hitting the same issue on EKS (v1.24) with cloudflared v2023.1.0 (on both server and client). Any update on this? noTlsVerify is set to true, so I'm not sure where the handshake is failing.