wg-access-server
How to avoid wireguard docker NAT and masquerading issues
This is not really a feature request, but something I struggled with and I think other people may too. So I wanted to document the solution here and it can be closed right away.
By default, Docker places containers (including wg-access-server) in its own network, such as 172.28.0.0/16, and then uses NAT and masquerading to route the traffic into the internal network.
Let's say the wg-access-server container has the IP 172.28.0.8. All traffic coming from it to the host's network will appear to come from the docker0 IP 172.28.0.1.
For wg-access-server this means that all connections from VPN clients will also appear to come from 172.28.0.1 instead of from their VPN IPs (like 10.44.0.3).
You can test this by running `nc -kvl 5060` on one of the servers in the host network and then connecting to it from a VPN client using `telnet`.
The only way I could find to solve this issue was by running the container with `network_mode: host`.
I did this by adding the following to my docker-compose.yml:
```yaml
version: "3.7"
services:
  wg-access-server:
    [...]
    network_mode: "host"
```
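For reference, a fuller compose file might look like the sketch below. The image name comes from the project, but the capabilities, device, and volume entries are assumptions you should adapt to your own setup:

```yaml
version: "3.7"
services:
  wg-access-server:
    image: place1/wg-access-server
    container_name: wg-access-server
    network_mode: "host"            # so VPN client source IPs are preserved
    cap_add:
      - NET_ADMIN                   # needed to create the wireguard interface
    devices:
      - /dev/net/tun:/dev/net/tun
    volumes:
      - ./config.yaml:/config.yaml  # assumed config path; adjust as needed
    restart: unless-stopped
```

Note that with `network_mode: host` any `ports:` mappings are ignored; the container binds directly on the host.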
If you already have a wireguard interface on the host you also need to change the interface in the config.yaml:
```yaml
wireguard:
  # The network interface name for wireguard
  # Optional
  interfaceName: wg1
```
After making that change connections from VPN clients come from their IPs in the VPN network.
I am using traefik as my reverse proxy in front of docker services and this made it impossible to use the automatic discovery. So I added a file provider with the following config:
```yaml
# Traefik can't route to containers running in host network mode.
# So we use this config to define it.
http:
  routers:
    vpn:
      service: vpn_service
      rule: "Host(`vpn.example.com`)"
  services:
    vpn_service:
      loadBalancer:
        servers:
          - url: http://172.17.0.1:8000
```
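The snippet above is Traefik dynamic configuration; for Traefik to load it, the file provider also has to be enabled in the static configuration. A minimal sketch, where the file path is an assumption:

```yaml
# traefik.yml (static configuration)
providers:
  file:
    filename: /etc/traefik/dynamic/vpn.yml  # hypothetical path to the config above
    watch: true
```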
Another thing particular to my own setup is that I am running internal services on the same host. I wanted them to be accessible both from the internet and from the VPN. When accessed from the internet they should either request a login or show a "Connect to VPN to access" message, and when accessed from within the VPN they should be allowed without an additional login.
I was able to achieve this by adding the server's external IP to my wireguard AllowedIPs config, such as:
```ini
AllowedIPs = 10.44.0.0/16, 10.0.0.0/16, 35.233.XX.XX/32
```
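In context, the client's WireGuard config would look roughly like the sketch below; the keys, addresses, and endpoint are placeholders, and only the `AllowedIPs` line is the actual change:

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.44.0.3/32

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route the VPN network, the internal network, and the server's
# external IP through the tunnel:
AllowedIPs = 10.44.0.0/16, 10.0.0.0/16, 35.233.XX.XX/32
PersistentKeepalive = 25
```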
But since all connections to 35.233.XX.XX were now routed through the VPN server, the logs showed that traffic as also coming from the external IP 35.233.XX.XX (the VPN and the services run on the same server in my case).
I was able to solve this by redirecting requests to that IP to the IP of the traefik container:

```shell
sudo iptables -t nat -A PREROUTING -p tcp -d 35.233.XX.XX --dport 443 -i wg1 -j DNAT --to 172.28.0.6:443
```
Now when a VPN client tries to connect to service.example.com:443 their request is redirected from 35.233.XX.XX:443 to 172.28.0.6:443 and the logs show the connection coming from the user's VPN IP (like 10.44.0.3).
Hopefully this can save someone some time.
This is an interesting use-case. I'll have to give this another read tomorrow and perhaps ask some more questions.
A common goal of a VPN server is to hide the client's IP and it sounds like you're trying to do the opposite here because it's your own network and you want to use the client's source IP to authenticate with a service.
What you've done to get this working is really interesting and makes logical sense to me, especially the last iptables trick.
I'm not sure how I can really support this use-case natively in wg-access-server as it's pretty advanced.
I wonder if you could approach the problem a different way by adding the services (docker containers) as VPN clients so that other VPN clients communicate directly in the VPN VLAN. Perhaps this is a use-case that wg-access-server could make simpler somehow.
Yeah it depends on what your use-case is.
One primary use-case for VPNs is to browse the internet and hide your real IP, essentially running your own equivalent of a commercial service like ExpressVPN.
But another big use-case is to give you access to an internal network. In my case, we have servers in the cloud like AWS, Google Cloud and Azure and instead of exposing those services over the internet I only want to expose them to team members through a VPN. So in that case I don't want to hide the IP. I want to keep the accountability of who connected where in the logs.
Other people may want to connect their local network to the internet. For example, there are a lot of people who host plex and other media center software from home and want it to be accessible when they are away, but without exposing it publicly to the internet.
I think you may get a lot of users like me for your software. What attracted me to it is that you support OIDC login, which makes it very easy for me to onboard team members and let them manage all their devices.
And I don't think you really need to do anything to support this use-case, except perhaps document it somewhere.
As I said, I didn't write this as a feature request or real issue with your software. I just had a really hard time figuring out how to set this up and wanted to document it somewhere so that other people can do it quicker.
The only thing that may make sense is to implement something like WireGuard's native PostUp / PostDown feature to allow people to add their own iptables rules or other automations.
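For comparison, this is what those hooks look like in a plain wg-quick config today; the DNAT rule from above could be managed this way if wg-access-server grew a similar feature (a sketch of the idea, not an existing wg-access-server option):

```ini
[Interface]
Address = 10.44.0.1/16
PrivateKey = <server-private-key>
# wg-quick runs these when the interface comes up / goes down;
# %i expands to the interface name (e.g. wg1):
PostUp = iptables -t nat -A PREROUTING -p tcp -d 35.233.XX.XX --dport 443 -i %i -j DNAT --to 172.28.0.6:443
PostDown = iptables -t nat -D PREROUTING -p tcp -d 35.233.XX.XX --dport 443 -i %i -j DNAT --to 172.28.0.6:443
```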
@Place1 this use case is similar to mine, I would like to access my Kubernetes network (and services) using wireguard VPN.
Right now I am using OpenVPN as a helm/docker deployment running inside the cluster and exposing the VPN port to the internet so I can generate a key per user.
> this use case is similar to mine, I would like to access my Kubernetes network (and services) using wireguard VPN
@jalberto I've just tested this use-case and I can confirm it works with wg-access-server currently.
I've deployed wg-access-server in a k8s cluster with allowed ips 0.0.0.0/0 (the default) and I can curl ClusterIP services.
If you'd like DNS to work as well, you could configure wg-access-server with its upstream DNS set to your Kubernetes CoreDNS endpoint.
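In the wg-access-server config.yaml that would look something like the sketch below; the upstream address is the common kube-dns ClusterIP, which is an assumption — check `kubectl get svc -n kube-system kube-dns` for the value in your cluster:

```yaml
dns:
  enabled: true
  upstream:
    - "10.96.0.10"  # assumed CoreDNS ClusterIP; verify in your cluster
```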
Thanks for this comment. I was headed down this road, but I would have run into problems without your note about interfaceName, as my Docker host is also a WireGuard client node (although with network_mode: host I probably don't need it to be a client anymore).
I think this is a pretty common usage scenario. My Docker host is my VPS, and I want my WireGuard clients to be able to connect to it. I want to control access to my reverse proxy by VPN ID, but with WireGuard running in Docker, all the client connections were coming from Docker IP addresses. I treat my other Docker containers like hostile entities and normally don't let them connect to each other unless there's a compelling reason (e.g. drone-server needing to talk to drone-runner). Having to open my reverse proxy access to Docker's network wasn't ideal. This will fix that.
Thanks @infused-kim!