[BUG] docker-compose - windows - Cannot connect to the Docker daemon at unix:///var/run/docker.sock
Describe the bug
Not sure whether this is a misconfiguration or a bug.

To reproduce
Steps to reproduce the behavior:
- Create a docker-compose.yaml:
```yaml
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  autok3s:
    image: cnrancher/autok3s:v0.6.0
    init: true
    ports:
      - 8080
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - C:\\Users\\doublefx\\.autok3s\\:/home/.autok3s/
    environment:
      - AUTOK3S_CONFIG=/home/.autok3s/
      - VIRTUAL_HOST=autok3s.vcap.me
```
- Run `docker-compose up -d`
- Create a k3d cluster
- See the error:

```
Cannot connect to the Docker daemon at unix:///var/run/docker.sock
```
OS: Windows 11 / Docker Desktop v4.13.1 (WSL2)
AutoK3s version: v0.6.0
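A quick way to see the mismatch behind this error: the compose file mounts the host socket at `/tmp/docker.sock`, while Docker clients look for `/var/run/docker.sock` by default. A minimal sketch, to be run inside the autok3s container (the paths checked are the two candidates from the compose file above):

```shell
# Check which of the two candidate socket paths actually exists.
# With the compose file above, only /tmp/docker.sock is mounted into
# the container, so the client's default path is absent.
for p in /var/run/docker.sock /tmp/docker.sock; do
  if [ -S "$p" ]; then
    echo "socket present: $p"
  else
    echo "no socket at: $p"
  fi
done
```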
@doublefx Thanks for your feedback!
Try changing the volume mount path to `/var/run/docker.sock:/var/run/docker.sock:ro`, like this:
```yaml
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  autok3s:
    image: cnrancher/autok3s:v0.6.0
    init: true
    ports:
      - 8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - C:\\Users\\doublefx\\.autok3s\\:/home/.autok3s/
    environment:
      - AUTOK3S_CONFIG=/home/.autok3s/
      - VIRTUAL_HOST=autok3s.vcap.me
```
I tried that, but it failed to start the nginx service. Weirdly, when using `/tmp/docker.sock`, connecting to the container `autok3s-autok3s-1` and running:

```shell
curl --unix-socket /tmp/docker.sock http://0.0.0.0/version
```
It works:
```json
{"Platform":{"Name":"Docker Desktop 4.13.1 (90346)"},"Components":[{"Name":"Engine","Version":"20.10.20","Details":{"ApiVersion":"1.41","Arch":"amd64","BuildTime":"2022-10-18T18:18:35.000000000+00:00","Experimental":"false","GitCommit":"03df974","GoVersion":"go1.18.7","KernelVersion":"5.15.68.1-microsoft-standard-WSL2","MinAPIVersion":"1.12","Os":"linux"}},{"Name":"containerd","Version":"1.6.8","Details":{"GitCommit":"9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6"}},{"Name":"runc","Version":"1.1.4","Details":{"GitCommit":"v1.1.4-0-g5fd4c4d"}},{"Name":"docker-init","Version":"0.19.0","Details":{"GitCommit":"de40ad0"}}],"Version":"20.10.20","ApiVersion":"1.41","MinAPIVersion":"1.12","GitCommit":"03df974","GoVersion":"go1.18.7","Os":"linux","Arch":"amd64","KernelVersion":"5.15.68.1-microsoft-standard-WSL2","BuildTime":"2022-10-18T18:18:35.000000000+00:00"}
```
So I'm puzzled as to why cluster creation cannot communicate with Docker via the socket.
This looks like a bug in the k3d provider. Only the default socket path is considered, and a custom `DOCKER_HOST` env variable cannot be configured.
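For context, a `DOCKER_HOST`-aware client resolves the daemon endpoint with a simple fallback: the environment variable wins when set, otherwise the hard-coded default socket path is used. A minimal sketch of that fallback (the provider in v0.6.0 skips the first step, which is the bug described above):

```shell
# DOCKER_HOST unset: the client falls back to the default socket path.
unset DOCKER_HOST
echo "${DOCKER_HOST:-unix:///var/run/docker.sock}"   # unix:///var/run/docker.sock

# DOCKER_HOST set: the override wins, e.g. the /tmp mount used above.
export DOCKER_HOST=unix:///tmp/docker.sock
echo "${DOCKER_HOST:-unix:///var/run/docker.sock}"   # unix:///tmp/docker.sock
```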
You can try the `autok3s -d serve` command to get it up and running.
If you mean directly from the CLI, see my other issue.
Try this docker-compose file (which runs only the autok3s container with a host port mapping), then access the UI at `http://127.0.0.1:<host mapping>`:
```yaml
services:
  autok3s:
    image: cnrancher/autok3s:v0.6.0
    init: true
    ports:
      - 8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - $HOME/.autok3s/:$HOME/.autok3s/
    environment:
      - AUTOK3S_CONFIG=$HOME/.autok3s/
```
Yes, got it working with:
```yaml
services:
  autok3s:
    image: cnrancher/autok3s:v0.6.0
    init: true
    ports:
      - 80:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - $HOME/.autok3s/:/home/.autok3s/
    environment:
      - AUTOK3S_CONFIG=/home/.autok3s/
```
Do you know any other way to get a virtual host on top?
All right, a bit better:
```yaml
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  autok3s:
    image: cnrancher/autok3s:v0.6.0
    init: true
    ports:
      - 8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - $HOME/.autok3s/:/home/.autok3s/
    environment:
      - AUTOK3S_CONFIG=/home/.autok3s/
      - VIRTUAL_HOST=autok3s.vcap.me
```
This allows connecting to `autok3s.vcap.me` and creating a cluster, but unfortunately the cluster does not appear in the UI. It is still created, though:
And I can connect to it from >_ Launch Kubectl.
I will add support for the `DOCKER_HOST` environment variable as soon as possible to cover this scenario.
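Once that lands, the override could presumably be wired up directly in compose. The snippet below is a hypothetical illustration only (the `DOCKER_HOST` value matches the `/tmp/docker.sock` mount discussed above and is not yet honored by v0.6.0):

```yaml
services:
  autok3s:
    image: cnrancher/autok3s:v0.6.0
    init: true
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    environment:
      # hypothetical until DOCKER_HOST support is released:
      # point the embedded docker client at the non-default mount path
      - DOCKER_HOST=unix:///tmp/docker.sock
```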
Yes please; without the cluster in the UI, I cannot perform any action on it (Delete, Clone, ...).
@Jason-ZW Actually, after a bit I refreshed and saw the cluster in the cluster list, so I:
- Deleted it
- Recreated the cluster
- Managed to Launch Kubectl from the UI
- Enabled explorer
- Connected to it.
Also, I wonder whether it has connectivity consequences, as none of the created pods succeeded; see `FailedCreatePodSandBox` in the above log. What do you think?
Can your environment access the Internet? I tried in my environment and the pod images can be pulled.
Unfortunately, I think 0.6.0 is a broken release; it is recommended to use 0.5.2 for now. We will release 0.6.1 ASAP.
I can manually curl `https://auth.docker.io/token?scope=repository%3Arancher%2Fpause%3Apull&service=registry.docker.io` from inside the cluster.
I will try your steps on Windows tomorrow, then give you feedback.
@doublefx Using Windows 11 22H2, I cannot reproduce your situation.
```yaml
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  autok3s:
    image: cnrancher/autok3s:v0.6.0
    init: true
    ports:
      - 8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - $HOME/.autok3s/:/home/.autok3s/
    environment:
      - AUTOK3S_CONFIG=/home/.autok3s/
      - VIRTUAL_HOST=autok3s.vcap.me
```

Hi @Jason-ZW
I would love to see the same on my side, but the server node does not seem to have any connectivity to the outside world:
```
E1115 18:10:12.787117 7 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"rancher/pause:3.1\": failed to pull image \"rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head \"https://registry-1.docker.io/v2/rancher/pause/manifests/3.1\": dial tcp: lookup registry-1.docker.io: Try again"
```
This could be a network issue in your environment.
I will try again today with the latest RC.