
Using weave with podman

[Open] deminngi opened this issue 5 years ago • 8 comments

Is there a chance to use the weave CNI plugin with podman?

I tried to start weaveexec with sudo podman --log-level debug run --rm --privileged --net host --pid host -v /:/host -e HOST_ROOT=/host -e DOCKERHUB_USER=weaveworks -e WEAVE_VERSION -e WEAVE_DEBUG -e WEAVE_DOCKER_ARGS -e WEAVE_PASSWORD -e WEAVE_PORT -e WEAVE_HTTP_ADDR -e WEAVE_STATUS_ADDR -e WEAVE_CONTAINER_NAME -e WEAVE_MTU -e WEAVE_NO_FASTDP -e WEAVE_NO_BRIDGED_FASTDP -e DOCKER_BRIDGE -e DOCKER_CLIENT_HOST= -e DOCKER_CLIENT_ARGS -e PROXY_HOST=127.0.0.1 -e COVERAGE -e CHECKPOINT_DISABLE -e AWSVPC weaveworks/weaveexec:2.5.2 --local launch

Got Failed to get netdev for "docker0" bridge: Link not found

Here is an excerpt of the debug log:

INFO[0000] Running conmon under slice machine.slice and unitName libpod-conmon-04a0eac0f55ed48d5e8cb5bc79f9852dce09a40101024ce5fa77478aa95774de.scope 
DEBU[0000] Received: 581                                
INFO[0000] Got Conmon PID as 553                        
DEBU[0000] Created container 04a0eac0f55ed48d5e8cb5bc79f9852dce09a40101024ce5fa77478aa95774de in OCI runtime 
DEBU[0000] Attaching to container 04a0eac0f55ed48d5e8cb5bc79f9852dce09a40101024ce5fa77478aa95774de 
DEBU[0000] connecting to socket /var/run/libpod/socket/04a0eac0f55ed48d5e8cb5bc79f9852dce09a40101024ce5fa77478aa95774de/attach 
DEBU[0000] Starting container 04a0eac0f55ed48d5e8cb5bc79f9852dce09a40101024ce5fa77478aa95774de with command [/home/weave/sigproxy /home/weave/weave --local launch] 
DEBU[0000] Started container 04a0eac0f55ed48d5e8cb5bc79f9852dce09a40101024ce5fa77478aa95774de 
DEBU[0000] Enabling signal proxying                     
Failed to get netdev for "docker0" bridge: Link not found
DEBU[0000] Cleaning up container 04a0eac0f55ed48d5e8cb5bc79f9852dce09a40101024ce5fa77478aa95774de 
DEBU[0000] Network is already cleaned up, skipping...   
DEBU[0000] unmounted container "04a0eac0f55ed48d5e8cb5bc79f9852dce09a40101024ce5fa77478aa95774de" 
DEBU[0000] Successfully cleaned up container 04a0eac0f55ed48d5e8cb5bc79f9852dce09a40101024ce5fa77478aa95774de 
DEBU[0000] Container 04a0eac0f55ed48d5e8cb5bc79f9852dce09a40101024ce5fa77478aa95774de storage is already unmounted, skipping... 
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] cached value indicated that overlay is supported 
DEBU[0000] cached value indicated that metacopy is not being used 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false

Here are my images:

$ sudo podman images
REPOSITORY                       TAG      IMAGE ID       CREATED        SIZE
docker.io/library/alpine         latest   e9a72a7c189c   3 weeks ago    5.61 MB
docker.io/weaveworks/weavedb     latest   6898eac75586   2 months ago   6.42 kB
docker.io/weaveworks/weaveexec   2.5.2    429ac05cb8ee   8 months ago   167 MB
docker.io/weaveworks/weave       2.5.2    d2dc0122f4e3   8 months ago   115 MB
k8s.gcr.io/pause  

Here are the defined rootful networks:

$ sudo podman network ls
NAME     VERSION   PLUGINS
weave    0.3.0     weave-net,portmap
podman   0.4.0     bridge,portmap,firewall

Here is the content of /etc/cni/net.d/10-weave.conflist:

$ cat 10-weave.conflist 
{
    "cniVersion": "0.3.0",
    "name": "weave",
    "plugins": [
        {
            "name": "weave",
            "type": "weave-net",
            "hairpinMode": true
        },
        {
            "type": "portmap",
            "capabilities": {"portMappings": true},
            "snat": true
        }
    ]
}
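
With this conflist in place, podman can attach containers to the network by its name field. A minimal smoke test might look like the following (hypothetical: it assumes the weave-net plugin binary sits on one of podman's cni_plugin_dir paths and that the weave router is already running):

```shell
# Hypothetical smoke test: attach a throwaway container to the "weave"
# CNI network and list its interfaces. Assumes the weave-net plugin
# binary is on the CNI plugin path and the weave router is running.
NET=weave
sudo podman run --rm --net "$NET" docker.io/library/alpine ip addr
```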

Here is the libpod config file:

NOTE: I replaced the container runtime runc with crun (!)

$ cat /home/podman/.config/containers/libpod.conf
volume_path = "/home/podman/.local/share/containers/storage/volumes"
image_default_transport = "docker://"
runtime = "crun"
runtime_supports_json = ["crun"]
runtime_supports_nocgroups = ["crun"]
conmon_path = ["/usr/libexec/podman/conmon", "/usr/local/libexec/podman/conmon", "/usr/local/lib/podman/conmon", "/usr/bin/conmon", "/usr/sbin/conmon", "/usr/local/bin/conmon", "/usr/local/sbin/conmon", "/run/current-system/sw/bin/conmon"]
conmon_env_vars = ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
cgroup_manager = "cgroupfs"
init_path = ""
static_dir = "/home/podman/.local/share/containers/storage/libpod"
tmp_dir = "/run/user/1000/libpod/tmp"
max_log_size = -1
no_pivot_root = false
cni_config_dir = "/etc/cni/net.d/"
cni_plugin_dir = ["/usr/libexec/cni", "/usr/lib/cni", "/usr/local/lib/cni", "/opt/cni/bin"]
infra_image = "k8s.gcr.io/pause:3.1"
infra_command = "/pause"
enable_port_reservation = true
label = true
network_cmd_path = ""
num_locks = 2048
lock_type = "shm"
events_logger = "journald"
events_logfile_path = ""
detach_keys = "ctrl-p,ctrl-q"
SDNotify = false

[runtimes]
  crun = ["/usr/bin/crun", "/usr/local/bin/crun"]

Should I define a custom bridge called docker0?

deminngi · Jan 17 '20

I'm having exactly the same issue with 2.6. @giminni, did you get anywhere with this?

primeroz · Feb 11 '20

Even creating a fake "docker0" bridge does not help.

It does look like the weave helper script is really geared only towards Docker (probably for the proxy feature).

 podman run --rm --privileged --net host --pid host -v /:/host -e HOST_ROOT=/host -e DOCKERHUB_USER=weaveworks -e WEAVE_VERSION -e WEAVE_DEBUG -e WEAVE_DOCKER_ARGS -e WEAVE_PASSWORD -e WEAVE_PORT -e WEAVE_HTTP_ADDR -e WEAVE_STATUS_ADDR -e WEAVE_CONTAINER_NAME -e WEAVE_MTU -e WEAVE_NO_FASTDP -e WEAVE_NO_BRIDGED_FASTDP  -e DOCKER_CLIENT_HOST= -e DOCKER_CLIENT_ARGS -e PROXY_HOST=127.0.0.1 -e COVERAGE -e CHECKPOINT_DISABLE -e AWSVPC -e WEAVE_DEBUG weaveworks/weaveexec:2.6.0 --local launch archpi  
cannot locate running docker daemon
Warning: unable to detect proxy TLS configuration. To enable TLS, launch the proxy with 'weave launch' and supply TLS options. To suppress this warning, supply the '--no-detect-tls' option.
unable to create container: Get http://unix.sock/v1.18/version: dial unix /var/run/docker.sock: connect: no such file or directory

Trying to start the weave router directly:

podman run --rm --privileged --net host --pid host -v /:/host -e HOST_ROOT=/host -e DOCKERHUB_USER=weaveworks -e WEAVE_VERSION -e WEAVE_DEBUG -e WEAVE_DOCKER_ARGS -e WEAVE_PASSWORD -e WEAVE_PORT -e WEAVE_HTTP_ADDR -e WEAVE_STATUS_ADDR -e WEAVE_CONTAINER_NAME -e WEAVE_MTU -e WEAVE_NO_FASTDP -e WEAVE_NO_BRIDGED_FASTDP  -e DOCKER_CLIENT_HOST= -e DOCKER_CLIENT_ARGS -e PROXY_HOST=127.0.0.1 -e COVERAGE -e CHECKPOINT_DISABLE -e AWSVPC -e WEAVE_DEBUG weaveworks/weave:2.6.0 --port 6783 --nickname archrock64 --host-root=/host --docker-bridge docker0 --weave-bridge weave --datapath datapath --ipalloc-range 10.32.0.0/12 --dns-listen-address 172.17.0.1:53 --http-addr 127.0.0.1:6784 --status-addr 127.0.0.1:6782 --resolv-conf /var/run/weave/etc/resolv.conf --proxy=false --plugin archpi
INFO: 2020/02/11 19:18:21.778456 Command line options: map[datapath:datapath dns-listen-address:172.17.0.1:53 docker-bridge:docker0 host-root:/host http-addr:127.0.0.1:6784 ipalloc-range:10.32.0.0/12 nickname:archrock64 plugin:true port:6783 proxy:false resolv-conf:/var/run/weave/etc/resolv.conf status-addr:127.0.0.1:6782 weave-bridge:weave]
INFO: 2020/02/11 19:18:21.778659 weave  2.6.0
INFO: 2020/02/11 19:18:22.457999 Bridge type is bridged_fastdp
INFO: 2020/02/11 19:18:22.458080 Communication between peers is unencrypted.
INFO: 2020/02/11 19:18:22.508815 Our name is 0e:cb:f2:fa:9f:47(archrock64)
INFO: 2020/02/11 19:18:22.509033 Launch detected - using supplied peer list: [archpi]
FATA: 2020/02/11 19:18:22.512414 Unable to start docker client: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory

primeroz · Feb 11 '20

Yes, the weave script assumes Docker. If you have the necessary tools installed, you can run weave --local.

Add --docker-api='' to stop it trying to talk to Docker. Take a look at how the Kubernetes Daemonset image runs - it has no contact with Docker.
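
Concretely, the advice above might reduce to a sketch like this (hypothetical: it assumes the weave script and the helper tools it shells out to, such as ethtool and conntrack, are available on the host, and it is untested with podman):

```shell
# Hypothetical sketch of the suggestion above: run the weave script in
# --local mode and pass an empty --docker-api so it never dials
# /var/run/docker.sock. Untested with podman.
WEAVE=./weave                     # path to the weave script on the host
sudo "$WEAVE" --local launch --docker-api=''
```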

Happy to review PRs to make it work better with Podman.

bboreham · Feb 12 '20

Thanks for the pointer, I will give it a shot and see if I can come up with changes to the script that will support podman.

@bboreham, in general, which do you prefer:

  • a flag to the script, like --podman
  • trial and error in the script: detect whether Docker is present, else fall back to podman
  • a third option I did not think of

primeroz · Feb 12 '20

I know almost nothing about Podman. I guess --podman would parallel --local. If it has an API like Docker then --podman-api would parallel --docker-api. Perhaps an environment variable. Probably not magic failover. Sadly the script has hundreds of odd corners - it's ok to get just a couple of main tasks working.

bboreham · Feb 12 '20

podman has a Docker-compatible API socket (since podman v4, I believe), which can be activated through systemd with systemctl start podman (which simply runs podman system service in the background for you).

The actual socket's location will vary depending on how you use podman (rootful or rootless). The following command (tested with podman v4.3.1) will give you its location: podman info -f json | jq -r .host.remoteSocket.path

Therefore, you should be able to go past the error above (Unable to start docker client: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory) by mounting podman's API socket at /var/run/docker.sock inside the container.

In your commands, you may also need to adapt --docker-bridge docker0 to match podman's default bridge network (podman).
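
Putting both suggestions together, a hypothetical adaptation of the earlier weaveexec command might look like this (untested; the socket path, the DOCKER_BRIDGE value, and the use of jq are all assumptions about your setup):

```shell
# Hypothetical: locate podman's Docker-compatible API socket and mount
# it where the weave script expects the Docker socket; point bridge
# detection at podman's default "podman" bridge instead of docker0.
# Needs `podman system service` (or the systemd socket) running.
SOCK=$(sudo podman info -f json | jq -r .host.remoteSocket.path)
sudo podman run --rm --privileged --net host --pid host \
    -v /:/host -e HOST_ROOT=/host \
    -v "$SOCK":/var/run/docker.sock \
    -e DOCKER_BRIDGE=podman \
    weaveworks/weaveexec:2.6.0 --local launch
```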

fpoirotte · May 23 '23

@primeroz did you get anywhere with supporting podman?

byrnedo · Nov 18 '23

May I check whether podman can be used with weavenet?

lchunleo · Jan 24 '24