k0sctl behind http_proxy | worker hangs on "Running phase: Install workers"
The servers are behind a company proxy. I am trying to install in combination with HAProxy.
Error during install:
FATA apply failed - log file saved to /home/kafka/.cache/k0sctl/k0sctl.log: failed on 1 hosts:
- [ssh] 192.168.2.6:22: context deadline exceeded
k0sctl config
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: ddacluster
  user: admin
spec:
  hosts:
  - ssh:
      address: 192.168.2.3
      user: kafka
      port: 22
      keyPath: ~/.ssh/id_ed25519
    role: controller
    uploadBinary: true
    environment:
      HTTP_PROXY: proxy.test.nl:3128
      HTTPS_PROXY: proxy.test.nl:3128
      NO_PROXY: 127.0.0.1,192.168.2.0/24,10.244.0.0/16,10.96.0.0/12
      http_proxy: proxy.test.nl:3128
      https_proxy: proxy.test.nl:3128
      no_proxy: 127.0.0.1,192.168.2.0/24,10.244.0.0/16,10.96.0.0/12
  - ssh:
      address: 192.168.2.4
      user: kafka
      port: 22
      keyPath: ~/.ssh/id_ed25519
    role: controller
    uploadBinary: true
    environment:
      HTTP_PROXY: proxy.test.nl:3128
      HTTPS_PROXY: proxy.test.nl:3128
      NO_PROXY: 127.0.0.1,192.168.2.0/24,10.244.0.0/16,10.96.0.0/12
      http_proxy: proxy.test.nl:3128
      https_proxy: proxy.test.nl:3128
      no_proxy: 127.0.0.1,192.168.2.0/24,10.244.0.0/16,10.96.0.0/12
  - ssh:
      address: 192.168.2.5
      user: kafka
      port: 22
      keyPath: ~/.ssh/id_ed25519
    role: controller
    uploadBinary: true
    environment:
      HTTP_PROXY: proxy.test.nl:3128
      HTTPS_PROXY: proxy.test.nl:3128
      NO_PROXY: 127.0.0.1,192.168.2.0/24,10.244.0.0/16,10.96.0.0/12
      http_proxy: proxy.test.nl:3128
      https_proxy: proxy.test.nl:3128
      no_proxy: 127.0.0.1,192.168.2.0/24,10.244.0.0/16,10.96.0.0/12
  - ssh:
      address: 192.168.2.6
      user: kafka
      port: 22
      keyPath: ~/.ssh/id_ed25519
    role: worker
    uploadBinary: true
    environment:
      HTTP_PROXY: proxy.test.nl:3128
      HTTPS_PROXY: proxy.test.nl:3128
      NO_PROXY: 127.0.0.1,192.168.2.0/24,10.244.0.0/16,10.96.0.0/12
      http_proxy: proxy.test.nl:3128
      https_proxy: proxy.test.nl:3128
      no_proxy: 127.0.0.1,192.168.2.0/24,10.244.0.0/16,10.96.0.0/12
  k0s:
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: Cluster
      metadata:
        name: k0s
      spec:
        api:
          k0sApiPort: 9443
          port: 6443
          externalAddress: 192.168.2.2
          sans:
          - 192.168.2.2
        installConfig:
          users:
            etcdUser: etcd
            kineUser: kube-apiserver
            konnectivityUser: konnectivity-server
            kubeAPIserverUser: kube-apiserver
            kubeSchedulerUser: kube-scheduler
        konnectivity:
          adminPort: 8133
          agentPort: 8132
        network:
          kubeProxy:
            disabled: true
            mode: iptables
          kuberouter:
            autoMTU: true
            mtu: 0
            peerRouterASNs: ""
            peerRouterIPs: ""
          podCIDR: 10.244.0.0/16
          provider: custom
          serviceCIDR: 10.96.0.0/12
        podSecurityPolicy:
          defaultPolicy: 00-k0s-privileged
        storage:
          type: etcd
        telemetry:
          enabled: false
  options:
    wait:
      enabled: true
    drain:
      enabled: true
      gracePeriod: 2m0s
      timeout: 5m0s
      force: true
      ignoreDaemonSets: true
      deleteEmptyDirData: true
      podSelector: ""
      skipWaitForDeleteTimeout: 0s
    concurrency:
      limit: 30
      workerDisruptionPercent: 10
      uploads: 5
    evictTaint:
      enabled: false
      taint: k0sctl.k0sproject.io/evict=true
      effect: NoExecute
      controllerWorkers: false
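As a quick sanity check before digging into k0sctl itself, it can help to verify from the worker that both paths it needs are open: outbound HTTPS through the proxy, and the Kubernetes API through the load balancer. This is only a sketch using the addresses from the config above; it assumes the proxy accepts CONNECT for HTTPS and that unauthenticated access to /version is allowed (the Kubernetes default).

# On the worker (192.168.2.6): outbound HTTPS through the proxy
curl -x http://proxy.test.nl:3128 -sSI https://github.com | head -n 1

# API server through the load balancer (direct, so no_proxy must cover 192.168.2.2)
# -k skips certificate verification; --noproxy makes the bypass explicit
curl -k --noproxy '*' https://192.168.2.2:6443/version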
Error log
INFO ==> Running phase: Install controllers
INFO [ssh] 192.168.2.3:22: generate join token for [ssh] 192.168.2.4:22
INFO [ssh] 192.168.2.3:22: generate join token for [ssh] 192.168.2.5:22
INFO [ssh] 192.168.2.4:22: validating api connection to https://192.168.2.2:9443
INFO [ssh] 192.168.2.5:22: validating api connection to https://192.168.2.2:9443
INFO [ssh] 192.168.2.5:22: writing join token to /etc/k0s/k0stoken
INFO [ssh] 192.168.2.4:22: writing join token to /etc/k0s/k0stoken
INFO [ssh] 192.168.2.4:22: installing k0s controller
INFO [ssh] 192.168.2.5:22: installing k0s controller
INFO [ssh] 192.168.2.5:22: updating service environment
INFO [ssh] 192.168.2.4:22: updating service environment
INFO [ssh] 192.168.2.5:22: starting service
INFO [ssh] 192.168.2.5:22: waiting for the k0s service to start
INFO [ssh] 192.168.2.4:22: starting service
INFO [ssh] 192.168.2.4:22: waiting for the k0s service to start
INFO ==> Running phase: Install workers
INFO [ssh] 192.168.2.3:22: generating a join token for worker 1
INFO [ssh] 192.168.2.6:22: validating api connection to https://192.168.2.2:6443 using join token
INFO [ssh] 192.168.2.6:22: writing join token to /etc/k0s/k0stoken
INFO [ssh] 192.168.2.6:22: installing k0s worker
INFO [ssh] 192.168.2.6:22: updating service environment
INFO [ssh] 192.168.2.6:22: starting service
INFO [ssh] 192.168.2.6:22: waiting for node to become ready
INFO * Running clean-up for phase: Acquire exclusive host lock
INFO * Running clean-up for phase: Upload k0s binaries to hosts
INFO * Running clean-up for phase: Install k0s binaries on hosts
INFO * Running clean-up for phase: Initialize the k0s cluster
INFO [ssh] 192.168.2.3:22: cleaning up
WARN [ssh] 192.168.2.3:22: k0s reset failed
INFO * Running clean-up for phase: Install controllers
INFO * Running clean-up for phase: Install workers
INFO [ssh] 192.168.2.6:22: cleaning up
WARN [ssh] 192.168.2.6:22: k0s reset failed
INFO ==> Apply failed
FATA apply failed - log file saved to /home/kafka/.cache/k0sctl/k0sctl.log: failed on 1 hosts:
- [ssh] 192.168.2.6:22: context deadline exceeded
node worker01 is not ready
Does it work with your OpenSSH config? If yes, then you can instruct k0sctl to use OpenSSH instead of the built-in SSH client.
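For reference, a minimal sketch of what switching the worker host to the OpenSSH client could look like in the k0sctl config. The values are just the ones from the config above; verify the exact field names against the k0sctl host configuration reference for your version.

  - openSSH:
      address: 192.168.2.6
      user: kafka
      keyPath: ~/.ssh/id_ed25519
    role: worker
    uploadBinary: true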
No, that didn't work either, same issue. But it only happens on the worker nodes.
During install via k0sctl I see this:
INFO ==> Running phase: Install controllers
INFO [OpenSSH] [email protected]:22: generate join token for [OpenSSH] [email protected]:22
INFO [OpenSSH] [email protected]:22: generate join token for [OpenSSH] [email protected]:22
INFO [OpenSSH] [email protected]:22: validating api connection to https://192.168.106.98:9443
INFO [OpenSSH] [email protected]:22: validating api connection to https://192.168.106.98:9443
INFO [OpenSSH] [email protected]:22: writing join token to /etc/k0s/k0stoken
INFO [OpenSSH] [email protected]:22: writing join token to /etc/k0s/k0stoken
INFO [OpenSSH] [email protected]:22: installing k0s controller
INFO [OpenSSH] [email protected]:22: installing k0s controller
INFO [OpenSSH] [email protected]:22: starting service
INFO [OpenSSH] [email protected]:22: starting service
INFO [OpenSSH] [email protected]:22: waiting for the k0s service to start
INFO [OpenSSH] [email protected]:22: waiting for the k0s service to start
INFO ==> Running phase: Install workers
INFO [OpenSSH] [email protected]:22: generating a join token for worker 1
INFO [OpenSSH] [email protected]:22: validating api connection to https://192.168.106.98:6443 using join token
INFO [OpenSSH] [email protected]:22: writing join token to /etc/k0s/k0stoken
INFO [OpenSSH] [email protected]:22: installing k0s worker
INFO [OpenSSH] [email protected]:22: starting service
INFO [OpenSSH] [email protected]:22: waiting for node to become ready
INFO * Running clean-up for phase: Acquire exclusive host lock
INFO * Running clean-up for phase: Upload k0s binaries to hosts
INFO * Running clean-up for phase: Install k0s binaries on hosts
INFO * Running clean-up for phase: Initialize the k0s cluster
INFO [OpenSSH] [email protected]:22: cleaning up
WARN [OpenSSH] [email protected]:22: k0s reset failed
INFO * Running clean-up for phase: Install controllers
INFO * Running clean-up for phase: Install workers
WARN [OpenSSH] [email protected]:22: failed to invalidate worker join token: command failed: client exec: command failed: command wait: exit status 1
INFO [OpenSSH] [email protected]:22: cleaning up
INFO ==> Apply failed
FATA apply failed - log file saved to /home/kafka/.cache/k0sctl/k0sctl.log: failed on 1 hosts:
- [OpenSSH] [email protected]:22: context deadline exceeded
failed to get node status: command failed: client exec: command failed: command wait: exit status 1
And in the meantime I see the controllers bouncing from UP to DOWN on HAProxy:
Sep 28 10:03:28 haproxy01 haproxy[1087]: Server controllerJoinAPI_backend/k0s-controller2 is UP, reason: Layer6 check passed, check duration: 1524ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 28 10:03:28 haproxy01 haproxy[1087]: Server controllerJoinAPI_backend/k0s-controller2 is UP, reason: Layer6 check passed, check duration: 1524ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 28 10:03:34 haproxy01 haproxy[1087]: [WARNING] (1087) : Server kubeAPI_backend/k0s-controller1 is DOWN, reason: Layer6 timeout, check duration: 2001ms. 0 active and 0 backup servers left. 1 sessions active, 0 requeued, 0 remaining in queue.
Sep 28 10:03:34 haproxy01 haproxy[1087]: [ALERT] (1087) : backend 'kubeAPI_backend' has no server available!
Sep 28 10:03:34 haproxy01 haproxy[1087]: Server kubeAPI_backend/k0s-controller1 is DOWN, reason: Layer6 timeout, check duration: 2001ms. 0 active and 0 backup servers left. 1 sessions active, 0 requeued, 0 remaining in queue.
Sep 28 10:03:34 haproxy01 haproxy[1087]: Server kubeAPI_backend/k0s-controller1 is DOWN, reason: Layer6 timeout, check duration: 2001ms. 0 active and 0 backup servers left. 1 sessions active, 0 requeued, 0 remaining in queue.
Sep 28 10:03:34 haproxy01 haproxy[1087]: backend kubeAPI_backend has no server available!
Sep 28 10:03:34 haproxy01 haproxy[1087]: backend kubeAPI_backend has no server available!
Sep 28 10:03:39 haproxy01 haproxy[1087]: [WARNING] (1087) : Server kubeAPI_backend/k0s-controller1 is UP, reason: Layer6 check passed, check duration: 347ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 28 10:03:39 haproxy01 haproxy[1087]: Server kubeAPI_backend/k0s-controller1 is UP, reason: Layer6 check passed, check duration: 347ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 28 10:03:39 haproxy01 haproxy[1087]: Server kubeAPI_backend/k0s-controller1 is UP, reason: Layer6 check passed, check duration: 347ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 28 10:03:58 haproxy01 haproxy[1087]: [WARNING] (1087) : Server kubeAPI_backend/k0s-controller1 is DOWN, reason: Layer6 timeout, check duration: 2000ms. 0 active and 0 backup servers left. 1 sessions active, 0 requeued, 0 remaining in queue.
Sep 28 10:03:58 haproxy01 haproxy[1087]: [ALERT] (1087) : backend 'kubeAPI_backend' has no server available!
Sep 28 10:03:58 haproxy01 haproxy[1087]: Server kubeAPI_backend/k0s-controller1 is DOWN, reason: Layer6 timeout, check duration: 2000ms. 0 active and 0 backup servers left. 1 sessions active, 0 requeued, 0 remaining in queue.
Sep 28 10:03:58 haproxy01 haproxy[1087]: Server kubeAPI_backend/k0s-controller1 is DOWN, reason: Layer6 timeout, check duration: 2000ms. 0 active and 0 backup servers left. 1 sessions active, 0 requeued, 0 remaining in queue.
Sep 28 10:03:58 haproxy01 haproxy[1087]: backend kubeAPI_backend has no server available!
Sep 28 10:03:58 haproxy01 haproxy[1087]: backend kubeAPI_backend has no server available!
Sep 28 10:04:04 haproxy01 haproxy[1087]: [WARNING] (1087) : Server controllerJoinAPI_backend/k0s-controller2 is DOWN, reason: Layer6 timeout, check duration: 2001ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 28 10:04:04 haproxy01 haproxy[1087]: Server controllerJoinAPI_backend/k0s-controller2 is DOWN, reason: Layer6 timeout, check duration: 2001ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 28 10:04:04 haproxy01 haproxy[1087]: Server controllerJoinAPI_backend/k0s-controller2 is DOWN, reason: Layer6 timeout, check duration: 2001ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 28 10:04:15 haproxy01 haproxy[1087]: [WARNING] (1087) : Server controllerJoinAPI_backend/k0s-controller2 is UP, reason: Layer6 check passed, check duration: 825ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 28 10:04:15 haproxy01 haproxy[1087]: Server controllerJoinAPI_backend/k0s-controller2 is UP, reason: Layer6 check passed, check duration: 825ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 28 10:04:15 haproxy01 haproxy[1087]: Server controllerJoinAPI_backend/k0s-controller2 is UP, reason: Layer6 check passed, check duration: 825ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 28 10:04:21 haproxy01 haproxy[1087]: [WARNING] (1087) : Server kubeAPI_backend/k0s-controller1 is UP, reason: Layer6 check passed, check duration: 219ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 28 10:04:21 haproxy01 haproxy[1087]: Server kubeAPI_backend/k0s-controller1 is UP, reason: Layer6 check passed, check duration: 219ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 28 10:04:21 haproxy01 haproxy[1087]: Server kubeAPI_backend/k0s-controller1 is UP, reason: Layer6 check passed, check duration: 219ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 28 10:05:04 haproxy01 haproxy[1087]: [WARNING] (1087) : Server controllerJoinAPI_backend/k0s-controller2 is DOWN, reason: Layer6 timeout, check duration: 2001ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 28 10:05:04 haproxy01 haproxy[1087]: Server controllerJoinAPI_backend/k0s-controller2 is DOWN, reason: Layer6 timeout, check duration: 2001ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 28 10:05:04 haproxy01 haproxy[1087]: Server controllerJoinAPI_backend/k0s-controller2 is DOWN, reason: Layer6 timeout, check duration: 2001ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
This is the HAProxy config (see the note after it for watching backend state via the admin socket):
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
    log global
    mode tcp
    option tcplog
    # timeout connect 5s
    # timeout client 50s
    # timeout server 50s
    timeout connect 10s
    timeout client 300s
    timeout server 300s

frontend kubeAPI
    bind :6443
    mode tcp
    default_backend kubeAPI_backend

frontend konnectivity
    bind :8132
    mode tcp
    default_backend konnectivity_backend

frontend controllerJoinAPI
    bind :9443
    mode tcp
    default_backend controllerJoinAPI_backend

backend kubeAPI_backend
    mode tcp
    server k0s-controller1 192.168.106.99:6443 check check-ssl verify none
    server k0s-controller2 192.168.106.100:6443 check check-ssl verify none
    server k0s-controller3 192.168.106.101:6443 check check-ssl verify none

backend konnectivity_backend
    mode tcp
    server k0s-controller1 192.168.106.99:8132 check check-ssl verify none
    server k0s-controller2 192.168.106.100:8132 check check-ssl verify none
    server k0s-controller3 192.168.106.101:8132 check check-ssl verify none

backend controllerJoinAPI_backend
    mode tcp
    server k0s-controller1 192.168.106.99:9443 check check-ssl verify none
    server k0s-controller2 192.168.106.100:9443 check check-ssl verify none
    server k0s-controller3 192.168.106.101:9443 check check-ssl verify none
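Since the global section exposes the admin socket at /run/haproxy/admin.sock, one way to watch the backends flap in real time is the HAProxy runtime API (a sketch; run as root on haproxy01, requires socat, and CSV column numbers can differ between HAProxy versions):

# Dump the server state table every two seconds
watch -n 2 'echo "show servers state" | socat stdio /run/haproxy/admin.sock'

# Or just proxy name, server name and status from the CSV stats
echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d, -f1,2,18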
What am I missing?
Maybe run with debug enabled to see which commands fail?
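For example (a sketch; check k0sctl --help for the exact flags in your version):

k0sctl apply --config k0sctl.yaml --debug
# --trace additionally logs the individual commands k0sctl runs on the hosts
k0sctl apply --config k0sctl.yaml --trace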
A TLS handshake error on controller01 (192.168.106.99), coming from HAProxy?
Sep 29 07:37:49 controller01 sudo[840]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Sep 29 07:37:49 controller01 sudo[840]: pam_unix(sudo:session): session closed for user root
Sep 29 07:37:50 controller01 k0s[614]: time="2025-09-29 07:37:50" level=error msg="Failed to count controller lease holders" component=controllerlease error="Get \"https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases\": context deadline exceeded"
Sep 29 07:37:50 controller01 k0s[614]: time="2025-09-29 07:37:50" level=info msg="E0929 07:37:43.939247 664 status.go:71] \"Unhandled Error\" err=\"apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\\\"etcdserver: leader changed\\\"}: etcdserver: leader changed\" logger=\"UnhandledError\"" component=kube-apiserver stream=stderr
Sep 29 07:37:50 controller01 k0s[614]: time="2025-09-29 07:37:50" level=info msg="E0929 07:37:44.992223 664 wrap.go:53] \"Timeout or abort while handling\" logger=\"UnhandledError\" method=\"POST\" URI=\"/apis/apiextensions.k8s.io/v1/customresourcedefinitions\" auditID=\"15cbe0e5-2ed9-4ebd-a0ae-8d55fadd09d4\"" component=kube-apiserver stream=stderr
Sep 29 07:37:50 controller01 k0s[614]: time="2025-09-29 07:37:50" level=info msg="E0929 07:37:50.817890 664 finisher.go:175] \"Unhandled Error\" err=\"FinishRequest: post-timeout activity - time-elapsed: 4.430725419s, panicked: false, err: context deadline exceeded, panic-reason: <nil>\" logger=\"UnhandledError\"" component=kube-apiserver stream=stderr
Sep 29 07:37:50 controller01 k0s[614]: time="2025-09-29 07:37:50" level=info msg="E0929 07:37:42.028748 664 status.go:71] \"Unhandled Error\" err=\"apiserver received an error that is not an metav1.Status: &errors.errorString{s:\\\"http: Handler timeout\\\"}: http: Handler timeout\" logger=\"UnhandledError\"" component=kube-apiserver stream=stderr
Sep 29 07:37:52 controller01 k0s[614]: time="2025-09-29 07:37:52" level=error msg="failed to resync etcd members" component=EtcdMemberReconciler error="Get \"https://localhost:6443/apis/etcd.k0sproject.io/v1beta1/etcdmembers\": context deadline exceeded"
Sep 29 07:37:52 controller01 k0s[614]: time="2025-09-29 07:37:52" level=debug msg="watch triggered on controller02" component=EtcdMemberReconciler
Sep 29 07:37:52 controller01 k0s[614]: time="2025-09-29 07:37:52" level=debug msg="Not the leader, skipping" component=EtcdMemberReconciler
Sep 29 07:37:53 controller01 k0s[614]: time="2025-09-29 07:37:53" level=info msg="E0929 07:37:22.544993 664 watcher.go:342] watch chan error: etcdserver: no leader" component=kube-apiserver stream=stderr
Sep 29 07:37:55 controller01 k0s[614]: time="2025-09-29 07:37:55" level=info msg="2025/09/29 07:37:54 http: TLS handshake error from 192.168.106.98:56282: write tcp 192.168.106.99:9443->192.168.106.98:56282: write: connection reset by peer" component=k0s-control-api stream=stderr
Sep 29 07:37:55 controller01 k0s[614]: time="2025-09-29 07:37:55" level=debug msg="Probing components" component=prober
Sep 29 07:37:57 controller01 k0s[614]: E0929 07:37:57.312733 614 leaderelection.go:429] Failed to update lock optimistically: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k0s-endpoint-reconciler": http2: client connection lost, falling back to slow path
Sep 29 07:37:57 controller01 k0s[614]: E0929 07:37:57.384947 614 leaderelection.go:429] Failed to update lock optimistically: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k0s-ctrl-controller01": http2: client connection lost, falling back to slow path
Sep 29 07:37:57 controller01 k0s[614]: E0929 07:37:57.824267 614 leaderelection.go:429] Failed to update lock optimistically: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k0s-worker-config-1.33": http2: client connection lost, falling back to slow path
@lzwaan Port 3128: is this a Squid proxy?
k0sctl / k0s download from GitHub and Quay.io. Both are backed by Amazon S3. The original download link redirects to the CDN with an expiring key. In the access log you will find:
1759079252.607 0 192.168.125.21 TCP_MEM_HIT/302 2339 GET https://quay.io/v2/k0sproject/pause/blobs/sha256:14185ca2e7ecfcf605a06776a46796612062dcec4e22bd40cee652560d64f38f - HIER_NONE/- text/html
1759079252.620 6 192.168.125.21 TCP_CF_REFRESH_MODIFIED/403 694 GET https://cdn01.quay.io/quayio-production-s3/sha256/14/14185ca2e7ecfcf605a06776a46796612062dcec4e22bd40cee652560d64f38f?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250928%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250928T143636Z&X-Amz-Expires=600.....
1759084264.422 243 192.168.125.11 TCP_CF_REFRESH_MODIFIED/302 5087 GET https://github.com/k0sproject/k0s/releases/download/v1.33.4%2Bk0s.0/k0s-v1.33.4%2Bk0s.0-amd64 - HIER_DIRECT/140.82.121.4 text/html
1759084297.020 32359 192.168.125.13 TCP_REFRESH_UNMODIFIED/200 279174316 GET https://release-assets.githubusercontent.com/github-production-release-asset/271250363/e457a1f1-593b-4711-b593-8146c02ac328?sp=r&sv=2018-11-09&sr=b&spr=https&se=2025-09-28T19%3A21%3A05Z&rscd=attachment%3B+filename%3Dk0s-v1.33.4%2Bk0s.0-amd64&rsct=application%2Foctet-st....
Note the 403 status for quay.io. It is caused by an outdated CDN URL served from the cache.
Either bypass or disable caching for these hosts, or configure a Store ID mapping in Squid.
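A sketch of what a cache bypass could look like in squid.conf, using the hosts that appear in the access log above (the Store ID alternative needs a store_id helper program and is more involved):

# Don't cache responses for the expiring-URL download hosts
acl k0s_downloads dstdomain .quay.io github.com .githubusercontent.com
cache deny k0s_downloads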
I am using HAProxy with the config from the k0sctl page
@lzwaan I thought the proxy to the internet might cause problems with the downloads.
Please note that the IPs in k0sctl.log do not match the configuration you posted. The first controller there is 192.168.106.99, while in the config it is 192.168.2.3.
Maybe k0sctl picked up the wrong config file?
Oh sorry, no. I am using Incus to create multiple VMs, and the two setups were running on different hardware, so please ignore the different IP ranges.