find_veth_docker
Overlay Network - Docker Service veth is null
Hello,
I am trying to find an easy way to locate the veths of my Docker containers, and your tool seems to work perfectly for containers that use local networks. However, for containers that use overlay networks I get the following output:
Usage: grep [OPTION]... PATTERNS [FILE]... Try 'grep --help' for more information. Usage: grep [OPTION]... PATTERNS [FILE]... Try 'grep --help' for more information. null null my_docker_service.1.mguxaw0n8vckg907glzrpaeg4
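(For reference, GNU grep prints exactly that usage message when it is invoked with no pattern at all, which is what happens when an unquoted shell variable expands to nothing. A minimal sketch of that failure mode, with a hypothetical variable name:)

# If the pattern variable is empty (as it apparently is for overlay
# networks) and is passed to grep unquoted, grep receives zero arguments.
PATTERN=""                     # hypothetical; empty for overlay-attached containers
ip link show | grep $PATTERN   # expands to a bare `grep` -> prints the usage message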
hi, can you provide an example docker-compose.yaml file for me to reproduce your setting?
Thanks
This is an example docker-compose.yaml that I deployed to test it, and it didn't work:
version: '3.3'
services:
  zookeeper:
    hostname: zookeeper
    image: wurstmeister/zookeeper:latest
    restart: always
    networks:
      - storm_overlay_network
    deploy:
      placement:
        constraints:
          - node.role==manager
  nimbus:
    image: storm
    container_name: nimbus
    command: storm nimbus
    depends_on:
      - zookeeper
    restart: always
    ports:
      - 6627:6627
    networks:
      - storm_overlay_network
    deploy:
      placement:
        constraints:
          - node.role==manager
  supervisor:
    image: storm
    container_name: supervisor
    command: storm supervisor
    depends_on:
      - zookeeper
      - nimbus
    networks:
      - storm_overlay_network
    deploy:
      placement:
        constraints:
          - node.role==manager
networks:
  storm_overlay_network:
    external: true
Hi, sorry for the late response. I have used your docker-compose.yml, which first instructed me to create that external overlay network via sudo docker network create storm_overlay_network
After that your containers came up, and my script works completely fine.
./find_veth_docker.sh
Testing dependencies (jq)... [DONE]
VETH@HOST VETH_MAC CONTAINER_IP CONTAINER_MAC Bridge@HOST Bridge_IP Bridge_MAC CONTAINER
veth4c553f7 1a:8b:29:65:0e:41 172.19.0.4 02:42:ac:13:00:04 br-4724b6dd5d03 172.19.0.1/16 02:42:a7:08:2a:34 supervisor
veth28744f3 3a:53:cd:51:36:02 172.19.0.3 02:42:ac:13:00:03 br-4724b6dd5d03 172.19.0.1/16 02:42:a7:08:2a:34 nimbus
veth63c1eb7 ba:87:be:c5:53:f5 172.19.0.2 02:42:ac:13:00:02 br-4724b6dd5d03 172.19.0.1/16 02:42:a7:08:2a:34 tmp_zookeeper_1
Are you sure you are using the latest version? Or maybe your actual docker-compose.yml looks different? There are sometimes issues with - and _ in the names of containers or networks.
Hello again! Yeah, it worked for you because you created a local (bridge) network (sorry, my mistake for not mentioning it).
The problem appears when the services are created on an overlay network; you can try it with this command: docker network create -d overlay storm_overlay_network. Using an overlay network returns the output I mentioned in the issue.
Hi, I managed to reproduce your setup, although I had to enable docker swarm to do so.
Moreover, your network creation command was missing an important argument, --attachable, without which I could not bring up your compose file (though the supervisor container still exits).
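For anyone reproducing this, the rough sequence I used (a sketch assuming a single-node swarm):

# Overlay networks require swarm mode; --attachable lets standalone
# (docker-compose) containers join the overlay network.
docker swarm init
docker network create -d overlay --attachable storm_overlay_network
docker-compose up -d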
Anyway, I found the issue in my code: it was in how the interfaces and gateways used by your containers were handled. I added extra checks and error handling. Now, if you run the script without any specific argument, the veth information for your containers shows up as N/A, since their eth0 interfaces have no host-facing veth peers attached. For the same reason, they have no bridge-related information either:
Testing dependencies (jq)... [DONE]
VETH@HOST VETH_MAC CONTAINER_IP CONTAINER_MAC Bridge@HOST Bridge_IP Bridge_MAC CONTAINER
N/A N/A 10.0.2.47 02:42:0a:00:02:2f N/A N/A N/A nimbus
N/A N/A 10.0.2.6 02:42:0a:00:02:06 N/A N/A N/A tmp_zookeeper_1
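A manual way to cross-check this, sketched with standard tools (the container name comes from your compose file, and I am assuming the image ships cat):

# eth0 inside the container reports the ifindex of its veth peer via iflink.
PEER_IDX=$(docker exec nimbus cat /sys/class/net/eth0/iflink)
# Look the peer up among the host's interfaces; for an overlay network the
# peer sits in Docker's hidden overlay netns, so this prints nothing.
ip -o link | awk -F': ' -v idx="$PEER_IDX" '$1 == idx {print $2}'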
If I take a closer look at the individual commands the script runs, it turns out that your containers have an eth1 interface as well, so you can specify it when running the script, e.g., ./find_veth_docker.sh -i eth1. Then you see this:
Testing dependencies (jq)... [DONE]
VETH@HOST VETH_MAC CONTAINER_IP CONTAINER_MAC Bridge@HOST Bridge_IP Bridge_MAC CONTAINER
veth6d05e8d 8e:38:97:20:b1:68 10.0.2.52 02:42:0a:00:02:34 N/A N/A N/A nimbus
veth6bfdffa 16:82:6f:3c:53:53 10.0.2.6 02:42:0a:00:02:06 N/A N/A N/A tmp_zookeeper_1
Yet, there is still no bridge info due to the overlay network, which is the intended behavior.
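If you want to check which interfaces a container actually has before picking -i, something like this works (jq is already a dependency of the script):

# List the networks the container is attached to...
docker inspect nimbus | jq '.[0].NetworkSettings.Networks | keys'
# ...or, if the image ships iproute2, look at the interfaces directly:
docker exec nimbus ip -o addr show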
So, for now, these errors are handled. Please give it a try and get back to me.
Hello again! I gave it a try, and this is the result I am getting now:
VETH@HOST VETH_MAC CONTAINER_IP CONTAINER_MAC Bridge@HOST Bridge_IP Bridge_MAC CONTAINER
N/A N/A 192.168.16.3 02:42:c0:a8:10:03 br-7f193209189e 192.168.16.1/20 02:42:68:75:b6:ef nimbus
N/A N/A 192.168.16.2 02:42:c0:a8:10:02 br-7f193209189e 192.168.16.1/20 02:42:68:75:b6:ef find_veth_docker-master_zookeeper_1
N/A N/A N/A N/A N/A buildx_buildkit_mybuilder0
jq: error: sdk_default/0 is not defined at <top-level>, line 1:
.[].NetworkSettings.Networks.example-sdk_default.IPAddress
jq: 1 compile error
jq: error: sdk_default/0 is not defined at <top-level>, line 1:
.[].NetworkSettings.Networks.example-sdk_default.MacAddress
jq: 1 compile error
jq: error: sdk_default/0 is not defined at <top-level>, line 1:
.[].NetworkSettings.Networks.example-sdk_default.Gateway
jq: 1 compile error
N/A N/A N/A N/A N/A example-sdk_example_1
If you see N/A for veth, try using different interface identifier, e.g., eth1
Keep in mind that if I run 'docker ps', this is what I have:
e4008ad52931 storm "/docker-entrypoint.…" 7 minutes ago Up 28 seconds 0.0.0.0:6627->6627/tcp, :::6627->6627/tcp nimbus
96f76409c8c0 wurstmeister/zookeeper:latest "/bin/sh -c '/usr/sb…" 7 minutes ago Up 7 minutes 22/tcp, 2181/tcp, 2888/tcp, 3888/tcp find_veth_docker-master_zookeeper_1
3f6cb2cb2e23 moby/buildkit:buildx-stable-1 "buildkitd --allow-i…" 2 months ago Up 2 months buildx_buildkit_mybuilder0
8fda8b4339fa example "/tini -- jupyter no…" 3 months ago Up 2 months 0.0.0.0:8888->8888/tcp, :::8888->8888/tcp example-sdk_example_1
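(Note on the jq errors above: the network name example-sdk_default contains a dash, and the script apparently interpolates it unquoted into the jq filter, so jq parses it as a subtraction involving an undefined function sdk_default/0. Bracket-quoting the key avoids this, e.g.:)

# Quoting the key makes jq treat the dashed network name as a literal string.
docker inspect 8fda8b4339fa | jq '.[].NetworkSettings.Networks["example-sdk_default"].IPAddress'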