vic-product
Allow user to specify internal Docker networks
https://github.com/vmware/vic-product/issues/666
As a user, I want to specify the subnet that internal Docker networks use so that they do not interfere with existing IP addressing. This needs research into how to do this in Docker.
Acceptance criteria:
- [ ] Specify internal networks from OVA deploy wizard
- [ ] Specify internal networks from OVA init API
- [ ] Use the specified networks for the Docker containers we run
ref: https://docs.docker.com/engine/userguide/networking/default_network/custom-docker0/
This seems like a simple change to vic-product/installer/build/scripts/systemd/docker.service.
We need to use the ovfenv tool to read the parameter for a custom bridge network CIDR from the OVF input, then change the start line in the systemd unit to `ExecStart=/usr/bin/dockerd --storage-driver=overlay2 --bip=$CIDR`.
I'm not sure whether any of the other bridge network options are required.
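As a rough sketch of the idea above, the following builds that `ExecStart` line as a systemd drop-in from a CIDR value. The OVF property name (`network.bridge_cidr`) and the `ovfenv` flag syntax in the comment are hypothetical placeholders, not the real interface:

```shell
# Sketch only: build a systemd drop-in for docker.service from a bridge CIDR.
# The ovfenv invocation and property name below are hypothetical examples.
emit_docker_dropin() {
  # $1: bridge CIDR in address/prefix form, e.g. 192.168.100.1/24
  printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd --storage-driver=overlay2 --bip=%s\n' "$1"
}

# On the appliance this value would come from the OVF environment, e.g.:
#   BRIDGE_CIDR="$(ovfenv --key network.bridge_cidr)"   # exact flag syntax unverified
BRIDGE_CIDR="192.168.100.1/24"
DROPIN="$(emit_docker_dropin "$BRIDGE_CIDR")"
echo "$DROPIN"
```

The generated text would go in a drop-in under `/etc/systemd/system/docker.service.d/`, followed by `systemctl daemon-reload`, rather than replacing the shipped unit outright.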
@dbarkelew Following these instructions, it is possible to configure the docker0 bridge, but the other networks used by Harbor are not configurable because they are launched by Compose; see https://github.com/moby/moby/pull/29376. This is a limitation of Compose at this time.
```
root@localhost [ ~ ]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.160.223.253  0.0.0.0         UG    1024   0        0 eth0
10.160.192.0    0.0.0.0         255.255.224.0   U     0      0        0 eth0
10.160.223.253  0.0.0.0         255.255.255.255 UH    1024   0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-491b8d025dc6
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-9f4c3204f737
172.19.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-7b0e30461167
172.20.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-8d6947e556f6
172.21.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-b45f0f95629a
172.31.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
root@localhost [ ~ ]# docker network ls
NETWORK ID          NAME                   DRIVER              SCOPE
f3ec71871f35        bridge                 bridge              local
491b8d025dc6        harbor_harbor          bridge              local
8d6947e556f6        harbor_harbor-clair    bridge              local
b45f0f95629a        harbor_harbor-notary   bridge              local
9f4c3204f737        harbor_notary-mdb      bridge              local
7b0e30461167        harbor_notary-sig      bridge              local
61aa09a8b211        host                   host                local
7bde232e72e5        none                   null                local
root@localhost [ ~ ]# cat /etc/docker/daemon.json
{
  "bip": "172.31.0.1/16",
  "fixed-cidr": "172.31.0.0/16",
  "mtu": 1500,
  "dns": ["10.118.81.1"]
}
```
The workaround, if the 172.17.x.x to 172.21.x.x addresses are needed, would be to configure (or provide a script to configure) the Harbor Compose files. @reasonerjt
For the VIC appliance: would it make sense to shrink the default range as well while we are at it? I don't really see a need for a /16 subnet.
If we do provide a method to automatically configure this, that would be easy to do.
Adding to 1.3 and making high priority per @pdaigle.
A customer has used and confirmed the following as a workaround:
```
/usr/bin/printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd --bip=192.168.110.1/24\nRestart=on-failure\n[Install]\nWantedBy=multi-user.target' > /etc/systemd/system/docker.service
cd /etc/docker/harbor
/usr/local/bin/docker-compose -f /etc/vmware/harbor/docker-compose.yml \
    -f /etc/vmware/harbor/docker-compose.notary.yml \
    -f /etc/vmware/harbor/docker-compose.clair.yml down

#!/bin/bash
docker network create --subnet=192.168.9.0/24 harbor_harbor
docker network create --subnet=192.168.10.0/24 harbor_harbor-clair
docker network create --subnet=192.168.11.0/24 harbor_harbor-notary
docker network create --subnet=192.168.12.0/24 harbor_notary-mdb
docker network create --subnet=192.168.13.0/24 harbor_notary-sig

reboot
```
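The five hard-coded `docker network create` lines above can also be generated from a single base prefix. This is a dry-run sketch (the 192.168 base is only an example; pick a range that is free in your environment) that prints the commands rather than running them:

```shell
# Dry run: print the `docker network create` commands for Harbor's five
# networks, deriving one /24 per network from a base prefix. Pipe the
# output to `sh` to actually create the networks.
BASE_PREFIX="192.168"   # example value; choose a range unused in your environment
CMDS="$(
  octet=9
  for net in harbor_harbor harbor_harbor-clair harbor_harbor-notary \
             harbor_notary-mdb harbor_notary-sig; do
    echo "docker network create --subnet=${BASE_PREFIX}.${octet}.0/24 ${net}"
    octet=$((octet + 1))
  done
)"
echo "$CMDS"
```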
@pdaigle thanks! I replaced the network configuration on a running Harbor registry. In my case the registry already has 300 GB in use, so tearing down the apps would mean restoring backups or performing migration jobs.
```
/usr/local/bin/docker-compose -f /etc/vmware/harbor/docker-compose.yml \
    -f /etc/vmware/harbor/docker-compose.notary.yml \
    -f /etc/vmware/harbor/docker-compose.clair.yml stop
docker network rm harbor_harbor
docker network rm harbor_harbor-clair
docker network rm harbor_harbor-notary
docker network rm harbor_notary-mdb
docker network rm harbor_notary-sig
docker network create --subnet=192.168.9.0/24 harbor_harbor
docker network create --subnet=192.168.10.0/24 harbor_harbor-clair
docker network create --subnet=192.168.11.0/24 harbor_harbor-notary
docker network create --subnet=192.168.12.0/24 harbor_notary-mdb
docker network create --subnet=192.168.13.0/24 harbor_notary-sig
/usr/local/bin/docker-compose -f /etc/vmware/harbor/docker-compose.yml \
    -f /etc/vmware/harbor/docker-compose.notary.yml \
    -f /etc/vmware/harbor/docker-compose.clair.yml start
```
We've also hit this issue: we use the 172.* range of IPs for internal networking, and this conflicts with the Docker bridge networks. Starting up Harbor essentially cuts external access to the machine due to routing.
The docker network solution above works, but only until the Harbor service is restarted, as the networks are torn down by docker-compose. Changing the docker0 bridge IP also doesn't help.
We found the issue docker/compose#4336, which states that you can add manual routes and Docker will then ignore those IP ranges when creating bridge networks. This is still not ideal, as it needs to be done for each VIC appliance.
Ideally, the OVF deployment would offer options for which subnets to use and pre-fill the docker-compose scripts with those IP ranges, e.g. something like what is shown in docker/compose#4336.
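Based on that docker/compose#4336 discussion, the manual-routes workaround would look roughly like the following. This is a dry-run sketch: the gateway address is a placeholder taken from the earlier route table, and whether Docker's subnet auto-selection actually skips these ranges should be verified in your environment:

```shell
# Dry run: print `ip route add` commands for each conflicting range so that
# Docker's automatic subnet selection sees them as already in use.
# GATEWAY is a placeholder; substitute your real next hop before applying.
GATEWAY="10.160.223.253"
ROUTES="$(
  for subnet in 172.17.0.0/16 172.18.0.0/16 172.19.0.0/16 172.20.0.0/16 172.21.0.0/16; do
    echo "ip route add ${subnet} via ${GATEWAY}"
  done
)"
echo "$ROUTES"
```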
@drjaydenm Thank you for sharing your findings and research.
A little too late, but if you are interested, the workaround supported through VMware Support can be found here: https://kb.vmware.com/s/article/56445. Since upgrading the VIC appliance resets all configurations, it is a good one to bookmark until a VIC appliance bridge customization feature is added.
Hi Stuart, we have no plans to fix this in 1.5.4 or future releases, as a workaround is available.