docker-openvpn
Server container on multiple overlay networks and routing client packets
Dear Community and Owner,
This piece of image has been a great asset to our tooling in recent months. We have been using it at least on 5 deployments. Easy to configure, deploy and use. However in our latest deployment as an OVPN client we would like to:
- Access several Docker Overlay networks at once;
- Use Docker embedded DNS on each network to resolve containers by name.
Our current server container is deployed to a system network. The currently defined networks are:
NETWORK ID     NAME             DRIVER    SCOPE
aa6eb2a61e12   bridge           bridge    local
nzfc85vuzmn7   development      overlay   swarm
5c05f6707db2   docker_gwbridge  bridge    local
zhxqi5qtif9t   experiment       overlay   swarm
7e0400d4e510   host             host      local
ct1yvji5mikp   ingress          overlay   swarm
82724fdfb0ec   none             null      local
30o2rm16vt56   production       overlay   swarm
qz5r53r7eb7j   system           overlay   swarm
The client uses the same DNS server that Docker provides to the OVPN server on the system network, so every container on the system network is reachable by name from a connected client. Of course, for that to work, traffic for the system network's subnet is routed through the OVPN gateway on the client side, and there is also a DNS entry in the ovpn configuration:
route 172.100.0.0 255.255.0.0
route 172.101.0.0 255.255.0.0
route 172.102.0.0 255.255.0.0
route 172.200.0.0 255.255.0.0
dhcp-option DNS 192.168.255.1
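As a side note, the subnet behind each of those route lines can be read straight from Docker, which helps keep the client configuration in sync (standard `docker network inspect`; the network names are the ones from the listing above, and a running Docker daemon is assumed):

```shell
# Print each network's name and subnet(s) from its IPAM configuration.
for net in system development production; do
  docker network inspect "$net" \
    --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
done
```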
We would like to achieve the same name resolution and access from each client to containers on the other networks, for example the development or production network.
When the server container is attached to another network, the new network appears inside the container on an additional interface (eth2, eth3, ...):
default          172.19.0.1       0.0.0.0          UG   0 0 0   eth1
172.19.0.0       *                255.255.0.0      U    0 0 0   eth1
172.101.0.0      *                255.255.0.0      U    0 0 0   eth2   // development network
172.200.0.0      *                255.255.0.0      U    0 0 0   eth0   // system network
192.168.254.0    192.168.255.2    255.255.255.0    UG   0 0 0   tun0
192.168.255.0    192.168.255.2    255.255.255.0    UG   0 0 0   tun0
192.168.255.2    *                255.255.255.255  UH   0 0 0   tun0
However, in this setting, packets from a client cannot reach the development network. If the server container is restarted, packets cannot reach the system network, but can reach the development network.
Has anyone tried a setup like this? How could one refine the routes or interfaces in this containerized OVPN server to get correct routing to the different networks attached to the different interfaces?
Thanks for the help!
Cheers, Zoltán
I have the same setup and am trying to find a solution. One option is to enable NAT on the docker-openvpn container. This works but is not ideal. I think the host where Docker is running does not know how to route packets back to the OpenVPN IPs. If I set a static route ("ip route add VPNSUBNET via VPNServerIP") where VPNServerIP is the server's address on your development network, then it is possible to reach all hosts on the development network as a VPN client. If I use the VPNServerIP from the system network, then I can reach all system network hosts. I have not found a solution that reaches both networks, so imho this is a routing issue.
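For concreteness, the static-route workaround would look like this on the Docker host. The VPN subnet 192.168.255.0/24 is taken from this thread; the gateway addresses are hypothetical examples standing in for the OVPN container's IP on each network:

```shell
# Return route on the Docker host: send replies destined for VPN clients
# back through the OVPN container. Only one of these routes can be active
# for the same destination, which is the limitation described above.
ip route add 192.168.255.0/24 via 172.101.0.5   # OVPN server's development-net IP (example)
# ...or instead, via its system-network address:
# ip route add 192.168.255.0/24 via 172.200.0.5
```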
I have enabled NAT and it works fine so far for my use cases.
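For anyone wondering what "enable NAT" means concretely, a minimal sketch inside the docker-openvpn container could be the following (interface names are taken from the route table earlier in this thread; the VPN subnet is the thread's 192.168.255.0/24, and the container needs NET_ADMIN):

```shell
# Masquerade VPN-client traffic leaving on each overlay-facing interface so
# containers on those networks see the OVPN container's own address and can
# reply without any extra routes on the Docker host.
iptables -t nat -A POSTROUTING -s 192.168.255.0/24 -o eth0 -j MASQUERADE   # system network
iptables -t nat -A POSTROUTING -s 192.168.255.0/24 -o eth2 -j MASQUERADE   # development network
```

The trade-off, as noted below, is that the overlay containers then only ever see the OVPN container's IP in their logs.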
A guide would indeed be nice on how to fix the routing. (In reply to Erhan's comment of 2018-02-17 above.)
Yeah, that's fine if you use your network alone, but you won't know which VPN client connected to your various services. If you have logging enabled in any of your services, you will only see the VPN container's IP, since you are using NAT now.
I have a similar issue: I would like to enable routing for an extra network used to assign a static IP for a DNS server. As a workaround I followed the suggestion and provided a custom setupIptablesAndRouting definition in ovpn_env.sh, hardcoding MASQUERADE for the extra network. I think it could be enough, though, to accept OVPN_NATDEVICES as a list of interfaces and loop through them in the startup script.
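A minimal dry-run sketch of that idea, treating `OVPN_NATDEVICES` as a space-separated list. The list form is the proposed change, not current upstream behavior; remove the `echo` to actually install the rules:

```shell
# Proposed: loop over a list of NAT devices instead of a single one.
# Example value; in a real setup this would name the overlay-facing
# interfaces inside the container.
OVPN_NATDEVICES="eth0 eth2"

for dev in $OVPN_NATDEVICES; do
  # Dry run: print the iptables rule that would be installed per device.
  echo iptables -t nat -A POSTROUTING -s 192.168.255.0/24 -o "$dev" -j MASQUERADE
done
```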
Can you guys show how you configured NAT on the Docker containers? And did you configure DNS on the VPN clients to point to 192.168.255.1, so you could access a service by its name instead of its IP address?
Wow, after two years and still searching for a solution, I have found my same problem again :D I am trying it again here:
https://github.com/kylemanna/docker-openvpn/issues/622