Help: Name resolution from Gluetun or stack sharing container to other containers on network does not work
TLDR: Unable to resolve containers on the same user-defined network using the built-in Docker DNS.
-
Is this urgent?
- [ ] Yes
- [x] No
-
What VPN service provider are you using?
- [x] PIA
-
What's the version of the program?
You are running on the bleeding edge of latest!
-
What are you using to run the container?
- [x] Docker Compose
-
Extra information
Logs:
Working example from container: alpine
$ docker exec -it alpine /bin/sh
/ # host jackett
jackett has address 172.18.0.2
/ # host gluetun
gluetun has address 172.18.0.5
/ #
Example from container: gluetun, where DNS fails
$ docker exec -it gluetun /bin/sh
/ # host sonarr
Host sonarr not found: 3(NXDOMAIN)
/ # host jackett
Host jackett not found: 3(NXDOMAIN)
/ # host google.com
google.com has address 172.217.14.238
Configuration file:
version: "3.7"
services:
  gluetun:
    image: qmcgaw/private-internet-access
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    networks:
      - frontend
    ports:
      - 8000:8000/tcp # Built-in HTTP control server
      - 8080:8080/tcp #Qbittorrent
    # command:
    volumes:
      - /configs/vpn:/gluetun
    environment:
      # More variables are available, see the readme table
      - VPNSP=private internet access
      # Timezone for accurate logs times
      - TZ=America/Los_Angeles
      # All VPN providers
      - USER=username
      # All VPN providers but Mullvad
      - PASSWORD=pwd
      # All VPN providers but Mullvad
      - REGION=CA Vancouver
      - PORT_FORWARDING=on
      - PORT_FORWARDING_STATUS_FILE="/gluetun/forwarded_port"
      - PIA_ENCRYPTION=normal
      - GID=1000
      - UID=1000
      - FIREWALL_OUTBOUND_SUBNETS=192.168.1.0/24
    restart: always
  qbittorrent:
    image: linuxserver/qbittorrent
    container_name: qbittorrent
    network_mode: "service:gluetun"
    volumes:
      - /configs/qbt:/config
      - /media:/media
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - UMASK_SET=000
    restart: unless-stopped
  jackett:
    image: linuxserver/jackett
    container_name: jackett
    networks:
      - frontend
    ports:
      - 9117:9117/tcp #Jackett
    volumes:
      - /configs/jackett:/config
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - UMASK_SET=000
    restart: unless-stopped
  alpine:
    image: alpine
    networks:
      - frontend
    container_name: alpine
    command: tail -f /dev/null
networks:
  frontend:
    name: custom_net
    ipam:
      config:
        - subnet: "172.18.0.0/16"
Host OS: Ubuntu 20.04 LTS
Hello, I am trying to set up my containers such that I can call them by name. My setup consists of Gluetun on the "frontend" network. Qbittorrent shares the network stack with Gluetun. Two additional containers exist, Jackett and alpine. As you can see from the logs, from the alpine (test) container I am able to resolve the names of the jackett and gluetun containers.
I am however unable to do this the other way around, i.e. resolve the names of jackett or alpine from the gluetun container. I am sure this has something to do with the DOT setup, but I have tried various things to no avail.
192.168.1.0/24 is my local LAN. I left it in there so that traffic can talk to local LAN services. Any assistance would be appreciated.
Hello there! I'll dig more/test it myself tomorrow, but does it work when DOT=off? And indeed, it's most likely due to the DNS over TLS interfering.
@qdm12 , yes, I have tried it with DOT=off and setting DNS_PLAINTEXT_ADDRESS=<local_lan_IP> to no avail. Thanks.
So I did some more digging. On the alpine container that is not sharing the stack with Gluetun, I checked its /etc/resolv.conf config. It points to docker's embedded dns server 127.0.0.11
$ docker exec -it alpine /bin/sh
/ # cat /etc/resolv.conf
search local
nameserver 127.0.0.11
options ndots:0
/ # host jackett
jackett has address 172.18.0.3
/ # exit
I then ran the same test on the container that shares the network stack with gluetun: querying Docker's embedded DNS server at 127.0.0.11 directly works, but the default resolution does not. So it seems the DNS server change is causing this change in behavior
$ docker exec -it alpine_vpn /bin/sh
/ # host jackett 127.0.0.11
Using domain server:
Name: 127.0.0.11
Address: 127.0.0.11#53
Aliases:
jackett has address 172.18.0.3
#searching using DOT
/ # host jackett
Host jackett not found: 3(NXDOMAIN)
I am not familiar enough with Go or the way the gluetun code works to help with code changes. Is there any config in unbound to send non-FQDN queries to the built-in DNS server and everything else to DOT?
Thanks.
Hello @networkprogrammer, sorry, I went short on time; anyway, thanks for digging.
The problem I see is that if you use the Docker network DNS resolver, it will be used for resolving everything instead of Unbound (i.e. nslookup google.com 127.0.0.11). Under the hood, the program still uses Unbound, so any Unbound configuration option you can find (from here) can be added. I quickly searched through them but I'm not sure there is a way to split DNS traffic, i.e. depending on the hostname being resolved.
Let me know if you find anything, I'll be happy to add it to the Go code so you can use it through an env variable.
Hi @qdm12 ,
So I feel like I am very close, but it seems like there are many moving parts. From this issue, it seems that we can use the dns option to have queries forwarded from the embedded DNS server to unbound.
services:
  gluetun:
    image: myvpn
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    networks:
      frontend:
        ipv4_address: 172.18.0.100
    dns: 172.18.0.100
    ports:
      - 8000:8000/tcp # Built-in HTTP control server
      - 8080:8080/tcp #Qbittorrent
      - 53:53/udp
      - 53:53/tcp
I also disabled systemd-resolved on Ubuntu. This is needed because port 53 is used by the systemd resolver; I cannot map it to gluetun without disabling the resolver.
So /etc/resolv.conf for gluetun would point to the embedded DNS server 127.0.0.11. Then, based on the dns option, it would query the embedded server first, and that server in turn would point back to unbound for internet queries. I validated that from gluetun I can query the embedded DNS server for service names and can hit unbound at 127.0.0.1 for internet names.
However, I am failing to get unbound to respond to queries from anything outside localhost.
14:32:28.483746 IP 172.18.0.1.58826 > 172.18.0.100.53: 12465+ A? sonarr.local. (30)
14:32:28.483775 IP 172.18.0.1.58826 > 172.18.0.100.53: 12465+ A? sonarr.local. (30)
14:32:28.483868 IP 172.18.0.100.53 > 172.18.0.1.58826: 12465 Refused- [0q] 0/0/0 (12)
14:32:28.483868 IP 172.18.0.100.53 > 172.18.0.1.58826: 12465 Refused- [0q] 0/0/0 (12)
With my limited Go knowledge, and without digging too deep into Gluetun, I keep failing when building the image manually. Unbound does not listen to outside queries by default; adding the access-control directive will allow it to respond to queries from outside.
Step 17/31 : RUN go test ./...
---> Running in 8dcc5058e905
? github.com/qdm12/gluetun [no test files]
? github.com/qdm12/gluetun/internal/alpine [no test files]
? github.com/qdm12/gluetun/internal/cli [no test files]
ok github.com/qdm12/gluetun/internal/constants 0.010s
--- FAIL: Test_generateUnboundConf (0.00s)
conf_test.go:93:
Error Trace: conf_test.go:93
Error: Not equal:
expected: "\nserver:\n cache-max-ttl: 9000\n cache-min-ttl: 3600\n do-ip4: yes\n do-ip6: yes\n harden-algo-downgrade: yes\n harden-below-nxdomain: yes\n harden-referral-path: yes\n hide-identity: yes\n hide-version: yes\n interface: 0.0.0.0\n key-cache-size: 16m\n key-cache-slabs: 4\n msg-cache-size: 4m\n msg-cache-slabs: 4\n num-threads: 1\n port: 53\n prefetch-key: yes\n prefetch: yes\n root-hints: \"/etc/unbound/root.hints\"\n rrset-cache-size: 4m\n rrset-cache-slabs: 4\n rrset-roundrobin: yes\n tls-cert-bundle: \"/etc/ssl/certs/ca-certificates.crt\"\n trust-anchor-file: \"/etc/unbound/root.key\"\n use-syslog: no\n username: \"nonrootuser\"\n val-log-level: 3\n verbosity: 2\n local-zone: \"b\" static\n local-zone: \"c\" static\n private-address: 9.9.9.9\n private-address: c\n private-address: d\nforward-zone:\n forward-no-cache: no\n forward-tls-upstream: yes\n name: \".\"\n forward-addr: 1.1.1.1@853#cloudflare-dns.com\n forward-addr: 1.0.0.1@853#cloudflare-dns.com\n forward-addr: 2606:4700:4700::1111@853#cloudflare-dns.com\n forward-addr: 2606:4700:4700::1001@853#cloudflare-dns.com\n forward-addr: 9.9.9.9@853#dns.quad9.net\n forward-addr: 149.112.112.112@853#dns.quad9.net\n forward-addr: 2620:fe::fe@853#dns.quad9.net\n forward-addr: 2620:fe::9@853#dns.quad9.net"
actual : "\nserver:\n access-control: 172.18.0.0/16\n cache-max-ttl: 9000\n cache-min-ttl: 3600\n do-ip4: yes\n do-ip6: yes\n harden-algo-downgrade: yes\n harden-below-nxdomain: yes\n harden-referral-path: yes\n hide-identity: yes\n hide-version: yes\n interface: 0.0.0.0\n key-cache-size: 16m\n key-cache-slabs: 4\n msg-cache-size: 4m\n msg-cache-slabs: 4\n num-threads: 1\n port: 53\n prefetch-key: yes\n prefetch: yes\n root-hints: \"/etc/unbound/root.hints\"\n rrset-cache-size: 4m\n rrset-cache-slabs: 4\n rrset-roundrobin: yes\n tls-cert-bundle: \"/etc/ssl/certs/ca-certificates.crt\"\n trust-anchor-file: \"/etc/unbound/root.key\"\n use-syslog: no\n username: \"nonrootuser\"\n val-log-level: 3\n verbosity: 2\n local-zone: \"b\" static\n local-zone: \"c\" static\n private-address: 9.9.9.9\n private-address: c\n private-address: d\nforward-zone:\n forward-no-cache: no\n forward-tls-upstream: yes\n name: \".\"\n forward-addr: 1.1.1.1@853#cloudflare-dns.com\n forward-addr: 1.0.0.1@853#cloudflare-dns.com\n forward-addr: 2606:4700:4700::1111@853#cloudflare-dns.com\n forward-addr: 2606:4700:4700::1001@853#cloudflare-dns.com\n forward-addr: 9.9.9.9@853#dns.quad9.net\n forward-addr: 149.112.112.112@853#dns.quad9.net\n forward-addr: 2620:fe::fe@853#dns.quad9.net\n forward-addr: 2620:fe::9@853#dns.quad9.net"
Diff:
--- Expected
+++ Actual
@@ -2,2 +2,3 @@
server:
+ access-control: 172.18.0.0/16
cache-max-ttl: 9000
Test: Test_generateUnboundConf
FAIL
FAIL github.com/qdm12/gluetun/internal/dns 0.008s
Any guidance on how to get past the test step in the image build? I also initially added multiple access-control directives, one for 127.0.0.1/8 and one for the LAN, but the test complained about duplicate keys.
Thanks again.
If you feel like fiddling a bit with Go and gluetun:
- See https://github.com/qdm12/gluetun/wiki/Developement-setup#using-vscode-and-docker so you can easily have everything setup and throw it away too
- Modify https://github.com/qdm12/gluetun/blob/master/internal/dns/conf_test.go#L46 to match the actual configuration you get from running the test (you can click on run test above the Go test function in VSCode).
I'm afk right now, but I'll add you as maintainer so you can easily make a branch/PR and I can help fixing it up.
I'm still trying to find a zero-config change solution though.
For now, the following should work, right?
- Specify the DNS with dns: 172.18.0.100 (it should work without having to publish port 53 and conflict with the host)
- Leave the /etc/resolv.conf of the container untouched so it relies on Docker to route the DNS queries back to Unbound
Although that requires adding a dns entry to your Docker configuration. I can always add an env variable to enable this different behavior, but it's not ideal.
Maybe an alternative would be to tell Unbound to use the Docker network DNS only for private addresses, but I'm not sure that's possible. If it is, the Go program could detect the original DNS address (before overriding it) and set it in the Unbound configuration. That may solve #188 as well. I'll dig into the Unbound configuration options.
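For illustration only, detecting the original Docker DNS address before /etc/resolv.conf gets overridden could be as simple as the following Go sketch (the file path and parsing here are assumptions for the sketch, not gluetun's actual code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// originalNameserver returns the first nameserver listed in /etc/resolv.conf,
// which inside a Docker user-defined network is normally 127.0.0.11.
func originalNameserver() (string, error) {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		return "", err
	}
	defer f.Close()
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) == 2 && fields[0] == "nameserver" {
			return fields[1], nil
		}
	}
	return "", fmt.Errorf("no nameserver entry found")
}

func main() {
	ns, err := originalNameserver()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("original nameserver:", ns)
}

The detected address could then be fed into whatever configuration option ends up handling private-name resolution.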
Hey @qdm12, I think I got it. I have very limited Git and Go knowledge, but setting up the Dev Container helped a lot. Here are the changes that got this working for me.
$ git diff
diff --git a/internal/dns/conf.go b/internal/dns/conf.go
index 2156bc8..8c730e0 100644
--- a/internal/dns/conf.go
+++ b/internal/dns/conf.go
@@ -63,10 +63,11 @@ func generateUnboundConf(ctx context.Context, settings settings.DNS,
"harden-below-nxdomain": "yes",
"harden-referral-path": "yes",
"harden-algo-downgrade": "yes",
+ "access-control": "172.18.0.0/16 allow",
// Network
"do-ip4": "yes",
"do-ip6": doIPv6,
- "interface": "127.0.0.1",
+ "interface": "0.0.0.0",
"port": "53",
// Other
"username": "\"nonrootuser\"",
diff --git a/internal/dns/conf_test.go b/internal/dns/conf_test.go
index a166300..db955fe 100644
--- a/internal/dns/conf_test.go
+++ b/internal/dns/conf_test.go
@@ -45,6 +45,7 @@ func Test_generateUnboundConf(t *testing.T) {
require.Len(t, warnings, 0)
expected := `
server:
+ access-control: 172.18.0.0/16 allow
cache-max-ttl: 9000
cache-min-ttl: 3600
do-ip4: yes
@@ -54,7 +55,7 @@ server:
harden-referral-path: yes
hide-identity: yes
hide-version: yes
- interface: 127.0.0.1
+ interface: 0.0.0.0
key-cache-size: 16m
key-cache-slabs: 4
msg-cache-size: 4m
:
Then I built my Docker image:
docker build -t myvpn .
After that I used the image in my docker-compose file, skipping all the irrelevant parts of the service definition.
version: "3.7"
services:
  gluetun:
    image: myvpn
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    networks:
      frontend:
        ipv4_address: 172.18.0.100
    dns: 172.18.0.100
    environment:
      - DNS_KEEP_NAMESERVER=on
So what that gives me is the ability to not only query the local services, but also use DOT.
This is what my /etc/resolv.conf now looks like.
/ # cat /etc/resolv.conf
search local
nameserver 127.0.0.11
options ndots:0
nameserver 1.1.1.1
nameserver 127.0.0.1
So the big thing here is to allow queries from the subnet tied to the main/default interface. In my case, I statically assigned the network in the conf.go file. Ideally it would be nice to do this dynamically, maybe get the IP/netmask from the Docker container at runtime and update unbound?
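If it helps, here is a minimal Go sketch of that runtime detection, purely illustrative (it assumes the container has a single non-loopback IPv4 interface and is not gluetun's actual code):

package main

import (
	"fmt"
	"net"
)

// dockerSubnet returns the container's first non-loopback IPv4 network,
// e.g. 172.18.0.0/16, suitable for an unbound access-control directive.
func dockerSubnet() (string, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "", err
	}
	for _, addr := range addrs {
		ipNet, ok := addr.(*net.IPNet)
		if !ok || ipNet.IP.To4() == nil || ipNet.IP.IsLoopback() {
			continue
		}
		network := &net.IPNet{IP: ipNet.IP.Mask(ipNet.Mask), Mask: ipNet.Mask}
		return network.String(), nil
	}
	return "", fmt.Errorf("no non-loopback IPv4 address found")
}

func main() {
	subnet, err := dockerSubnet()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("access-control: %s allow\n", subnet)
}

With the compose file above this would print access-control: 172.18.0.0/16 allow, which could be appended to the generated unbound configuration instead of hard-coding the subnet.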
I also noticed that depending on where I place the access-control directive, the build test fails.
Thanks, let me know if this is helpful in any way. I tried looking at the code itself, but it looked nothing like the Python that I am familiar with.
No problem, thanks a ton for stretching this out in all directions! I can definitely test it myself too, so it should be easy to integrate nicely. Allow me 1 to 2 days so I can get to it, I'm a bit over-busy currently unfortunately, but I can't wait to fix this up! Plus this should be how it behaves natively imo.
Thank you for looking into this.
I'm still testing things out, I would ideally like it to work without having to specify the DNS at the Docker configuration level.
Plus, since Unbound blocks e.g. malicious hostnames, I cannot just add the local DNS below Unbound, as this would resolve blocked hostnames.
Maybe I'm asking for too much 😅 I'll let you know what I find.
So the way I am thinking of solving this for myself is to just allow unbound to listen on the default interface and localhost. This is the key to get this working. Ideally this would be done programmatically during run-time.
The rest of the config is already provided by Gluetun's env variables or docker-compose directives.
networks:
  frontend:
    ipv4_address: 172.18.0.100
dns: 172.18.0.100
environment:
  - DNS_KEEP_NAMESERVER=on
For those who are OK with the way things are, nothing needs to change.
If users need local services + name resolution via unbound, set gluetun to have a static IP, and assign the dns directive manually with that same IP. So no code change in gluetun is needed for this part.
To add this as a feature, we can provide users with some sort of env variable based switch. This would be a code change.
That is all we need to get local and internet names resolve.
The additional feature enabled with this config would be that other containers and hosts on the network (not just the docker network) can now use gluetun as a DOT resolution host. So Gluetun also becomes a local DNS server that provides DOT over VPN :)
Let me know if this helps.
I have a (convoluted) solution in mind which relies 'less' on the OS:
- Detect the Docker DNS address at start, i.e. 127.0.0.11
- Run a DNS UDP proxy (coded from scratch in Go) listening on port 53 so that it can hook into the queries and (a rough sketch follows below):
  - resolve local hostnames (no dot .) using 127.0.0.11 (and also check the returned address is private)
  - otherwise proxy the query to unbound listening on port 1053 for example
I'm still playing around with /etc/resolv.conf and options, as well as searching through Unbound's configuration options, for now though. But otherwise the solution above solves the problems, and could be a first step towards moving away from Unbound (#137)
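To make the idea concrete, here is a rough, hedged sketch of such a split proxy using github.com/miekg/dns. It assumes the Docker DNS sits at 127.0.0.11:53 and unbound has been moved to 127.0.0.1:1053; the private-IP check on the answer is omitted for brevity, and none of this is gluetun's actual code:

package main

import (
	"log"
	"strings"

	"github.com/miekg/dns"
)

const (
	dockerDNS  = "127.0.0.11:53"  // Docker's embedded DNS server
	unboundDNS = "127.0.0.1:1053" // unbound, moved off port 53
)

func handle(w dns.ResponseWriter, r *dns.Msg) {
	upstream := unboundDNS
	if len(r.Question) == 1 {
		// A query like "jackett." has no dot besides the trailing root dot,
		// so treat it as a local container name and ask the Docker DNS.
		name := strings.TrimSuffix(r.Question[0].Name, ".")
		if !strings.Contains(name, ".") {
			upstream = dockerDNS
		}
	}
	resp, err := dns.Exchange(r, upstream)
	if err != nil {
		log.Printf("exchange with %s failed: %v", upstream, err)
		dns.HandleFailed(w, r)
		return
	}
	_ = w.WriteMsg(resp)
}

func main() {
	dns.HandleFunc(".", handle)
	server := &dns.Server{Addr: ":53", Net: "udp"}
	log.Fatal(server.ListenAndServe())
}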
@networkprogrammer Thanks for the suggestions! Let me change that interface Unbound is listening on to the default interface, having a DNS over TLS server through the tunnel is definitely interesting 😄
2. Run a DNS UDP proxy (coded from scratch in Go) listening on port 53 so that it can hook into the queries and:
Nice work! Very nice. Let me run a test and will let you know. Awesome work.
So I tested and everything seems to work as expected. To get this to work, I have to set the DNS_KEEP_NAMESERVER=on environment variable in the Gluetun service definition.
I'm ok with closing this issue.
Thank you again, for the resolution and also getting this awesome project going!
@qdm12, btw where is the code you did for the DNS server? I am interested in learning Go so wanted to see what the code looks like.
I'm ok with closing this issue.
Let me finish (and start haha) that DNS proxy to solve the issue properly. It's good we have workarounds for now, but I would definitely like to fix it properly.
btw where is the code you did for the DNS server
Nowhere yet! I'll get to it in the coming days, I'll tag you and comment here once I have a start of a branch going if you want to pull request review/ask questions 😉 Although that will likely just be a UDP proxy inspecting DNS queries and routing them accordingly (I did a UDP proxy but never fiddled with DNS either).
This is blocked by #289 I think. Do you guys manage to reach other containers from Gluetun in the same Docker network using their IP addresses?
I did a quick test. My setup involves Gluetun and qBittorrent(qbt) sharing the network stack. All other containers are on the same network, but do not share network stacks.
From sonarr/radarr etc I can connect to qbt as expected. From qbt I could not connect to jackett
So I got on the Gluetun container and, as a quick test, flushed iptables and then set the default policy to accept. This let me talk from Jackett to qbt, so iptables is stopping communications.
So Gluetun/qbt -> other containers is not working; iptables is blocking it. Other containers -> qbt is working.
OK, so the problem is with Chain OUTPUT (policy DROP). I understand that we want this to block traffic if there is no VPN and we should keep it that way.
I added the following line: iptables -A OUTPUT -d 172.18.0.0/16 -j ACCEPT, since 172.18.0.0/16 is my local Docker network. That fixed my issue.
Now Gluetun/qbt can talk to other containers on the network. So we need to allow traffic to the local network.
Nice thanks! That did the trick. I'll get to the DNS proxy this weekend, will let you know.
I asked on Reddit's HomeNetworking subreddit here to see if there is a way to do this natively. Let's wait and see if a solution comes up in the next few hours/days before adding (yet another) server to Gluetun haha (we have 5 so far: HTTP proxy, Shadowsocks, control server, unbound and the healthcheck server).
So I pulled the latest image and see that DNS has stopped working. Something must have changed.
2020-11-07T13:46:36.191-0700 INFO dns configurator: downloading root hints from https://raw.githubusercontent.com/qdm12/files/master/named.root.updated
2020-11-07T13:46:41.193-0700 ERROR port forwarding: cannot bind port: Get "https://10.63.112.1:19999/bindPort?payload=eyJ0b2tlbiI6Ild3eFZqVExMSnhhYlAwVkVGdzgraFEzMnVsNXZQZjVmOWwwNG1kUGpjNkJidDVUTzlia0JHYkdieXFTRW84UmtUaFBOSXhxSjZ2ZDdKdmV5bzFkVUFDMUNubGk0Z0VNVzhHeDJSRWlPdnF1QjVobThNMFd4VmEyMXdnTT0iLCJwb3J0Ijo1Mjk2MywiZXhwaXJlc19hdCI6IjIwMjEtMDEtMDNUMDk6NDQ6MTAuMjg0MDM3MTU5WiJ9&signature=slra9KxY4fyEBgWJKYxGT3841HdSgNdCDDQQB%2BdFRzcNyC4GHY%2FYElap8kxvFQ5CkYXjaMaROdkURatI28L8Dg%3D%3D": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:46:41.193-0700 INFO port forwarding: Trying again in 10s
2020-11-07T13:46:51.192-0700 WARN dns over tls: Get "https://raw.githubusercontent.com/qdm12/files/master/named.root.updated": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:46:51.192-0700 INFO dns over tls: attempting restart in 10 seconds
2020-11-07T13:47:01.195-0700 INFO dns configurator: downloading root hints from https://raw.githubusercontent.com/qdm12/files/master/named.root.updated
2020-11-07T13:47:16.198-0700 WARN dns over tls: Get "https://raw.githubusercontent.com/qdm12/files/master/named.root.updated": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:47:16.198-0700 INFO dns over tls: attempting restart in 10 seconds
2020-11-07T13:47:21.199-0700 ERROR port forwarding: cannot bind port: Get "https://10.63.112.1:19999/bindPort?payload=eyJ0b2tlbiI6Ild3eFZqVExMSnhhYlAwVkVGdzgraFEzMnVsNXZQZjVmOWwwNG1kUGpjNkJidDVUTzlia0JHYkdieXFTRW84UmtUaFBOSXhxSjZ2ZDdKdmV5bzFkVUFDMUNubGk0Z0VNVzhHeDJSRWlPdnF1QjVobThNMFd4VmEyMXdnTT0iLCJwb3J0Ijo1Mjk2MywiZXhwaXJlc19hdCI6IjIwMjEtMDEtMDNUMDk6NDQ6MTAuMjg0MDM3MTU5WiJ9&signature=slra9KxY4fyEBgWJKYxGT3841HdSgNdCDDQQB%2BdFRzcNyC4GHY%2FYElap8kxvFQ5CkYXjaMaROdkURatI28L8Dg%3D%3D": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:47:21.199-0700 INFO port forwarding: Trying again in 10s
2020-11-07T13:47:26.199-0700 INFO dns configurator: downloading root hints from https://raw.githubusercontent.com/qdm12/files/master/named.root.updated
2020-11-07T13:47:41.201-0700 WARN dns over tls: Get "https://raw.githubusercontent.com/qdm12/files/master/named.root.updated": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:47:41.201-0700 INFO dns over tls: attempting restart in 10 seconds
2020-11-07T13:47:51.204-0700 INFO dns configurator: downloading root hints from https://raw.githubusercontent.com/qdm12/files/master/named.root.updated
2020-11-07T13:48:01.207-0700 ERROR port forwarding: cannot bind port: Get "https://10.63.112.1:19999/bindPort?payload=eyJ0b2tlbiI6Ild3eFZqVExMSnhhYlAwVkVGdzgraFEzMnVsNXZQZjVmOWwwNG1kUGpjNkJidDVUTzlia0JHYkdieXFTRW84UmtUaFBOSXhxSjZ2ZDdKdmV5bzFkVUFDMUNubGk0Z0VNVzhHeDJSRWlPdnF1QjVobThNMFd4VmEyMXdnTT0iLCJwb3J0Ijo1Mjk2MywiZXhwaXJlc19hdCI6IjIwMjEtMDEtMDNUMDk6NDQ6MTAuMjg0MDM3MTU5WiJ9&signature=slra9KxY4fyEBgWJKYxGT3841HdSgNdCDDQQB%2BdFRzcNyC4GHY%2FYElap8kxvFQ5CkYXjaMaROdkURatI28L8Dg%3D%3D": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:48:01.207-0700 INFO port forwarding: Trying again in 10s
2020-11-07T13:48:06.210-0700 WARN dns over tls: Get "https://raw.githubusercontent.com/qdm12/files/master/named.root.updated": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:48:06.210-0700 INFO dns over tls: attempting restart in 10 seconds
2020-11-07T13:48:16.213-0700 INFO dns configurator: downloading root hints from https://raw.githubusercontent.com/qdm12/files/master/named.root.updated
2020-11-07T13:48:31.219-0700 WARN dns over tls: Get "https://raw.githubusercontent.com/qdm12/files/master/named.root.updated": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:48:31.219-0700 INFO dns over tls: attempting restart in 10 seconds
2020-11-07T13:48:41.226-0700 INFO dns configurator: downloading root hints from https://raw.githubusercontent.com/qdm12/files/master/named.root.updated
2020-11-07T13:48:41.226-0700 ERROR port forwarding: cannot bind port: Get "https://10.63.112.1:19999/bindPort?payload=eyJ0b2tlbiI6Ild3eFZqVExMSnhhYlAwVkVGdzgraFEzMnVsNXZQZjVmOWwwNG1kUGpjNkJidDVUTzlia0JHYkdieXFTRW84UmtUaFBOSXhxSjZ2ZDdKdmV5bzFkVUFDMUNubGk0Z0VNVzhHeDJSRWlPdnF1QjVobThNMFd4VmEyMXdnTT0iLCJwb3J0Ijo1Mjk2MywiZXhwaXJlc19hdCI6IjIwMjEtMDEtMDNUMDk6NDQ6MTAuMjg0MDM3MTU5WiJ9&signature=slra9KxY4fyEBgWJKYxGT3841HdSgNdCDDQQB%2BdFRzcNyC4GHY%2FYElap8kxvFQ5CkYXjaMaROdkURatI28L8Dg%3D%3D": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:48:41.226-0700 INFO port forwarding: Trying again in 10s
2020-11-07T13:48:56.229-0700 WARN dns over tls: Get "https://raw.githubusercontent.com/qdm12/files/master/named.root.updated": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:48:56.230-0700 INFO dns over tls: attempting restart in 10 seconds
Top used to show unbound, so it looks like unbound stopped running.
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
1 0 root S 699m 18% 4 0% /entrypoint
32 1 nonrootu S 5048 0% 7 0% openvpn --config /etc/openvpn/target.ovpn
50 0 root S 1648 0% 6 0% /bin/sh
55 50 root R 1576 0% 7 0% top
Did I pull a docker image that was in development?
Running version unknown built on an unknown date (commit unknown)
I get this when I try running unbound manually.
/ # unbound
[1604782560] unbound[79:0] error: Could not open /etc/unbound/unbound.conf: No such file or directory
[1604782560] unbound[79:0] warning: Continuing with default config settings
[1604782560] unbound[79:0] error: can't bind socket: Address not available for ::1 port 53
[1604782560] unbound[79:0] fatal error: could not open ports
~Strange, I also pulled the Docker image and it shows an unknown commit / build date, I got scared for a minute someone manually pushed it to Docker Hub. Looking at the Github actions logs and Docker Hub, the digest matches so this is the genuine Docker image 😅 I'm investigating why the build date etc wasn't set correctly by the Github docker build pipeline.~
EDIT: Found it, fixed in ~https://github.com/qdm12/gluetun/commit/b708d10cade80843b4a665349942549a62df48a9~ 0423388b524fe5213a3c96b4c1474412f7c8368a
On my end, DNS still works fine though. Maybe a problem on the VPN side? Try another region? Can you create another issue and share the full logs? If DNS over TLS fails the first time, it should still use the plaintext DNS (you can check with cat /etc/resolv.conf) I think.
Running unbound manually will most likely conflict with Gluetun trying to start Unbound; that seems to be what happens: Address not available for ::1 port 53
Hey @qdm12 , Wanted to provide an update. I pulled the latest image this morning and everything works as expected. Local containers can be resolved and connected to from the gluetun network stack.
I also validated, using Wireshark on my local LAN, that DNS does not seem to leak over the local LAN.
Wait what? But you use KEEP_NAMESERVER right? If so your DNS requests aren't going through Unbound. Let me know, maybe I'm missing something 😅
Yeah, so my local LAN is 192.168.1.0/24 and 192.168.1.119 is my local dhcp server. But based on the routing table, that packet gets put on the vpn tunnel. I'm not sure what happens at that point, cause you can see that the next packet is re-written to point to 1.1.1.1. So is Unbound hijacking the packet?
Maybe things are working by mistake. LOL. Here is some output.
/ # route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.63.112.1 128.0.0.0 UG 0 0 0 tun0
0.0.0.0 172.18.0.1 0.0.0.0 UG 0 0 0 eth0
10.63.112.0 0.0.0.0 255.255.255.0 U 0 0 0 tun0
128.0.0.0 10.63.112.1 128.0.0.0 UG 0 0 0 tun0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
172.98.89.70 172.18.0.1 255.255.255.255 UGH 0 0 0 eth0
/ # tcpdump -nnnei tun0 "udp and port 53"
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tun0, link-type RAW (Raw IP), capture size 262144 bytes
10:39:25.643103 ip: 10.63.112.4.53217 > 192.168.1.119.53: 49066+ A? google.com. (28)
10:39:26.640893 ip: 10.63.112.4.56270 > 1.1.1.1.53: 49066+ A? google.com. (28)
10:39:26.691034 ip: 1.1.1.1.53 > 10.63.112.4.56270: 49066 1/0/0 A 172.217.14.238 (44)
10:39:26.691958 ip: 10.63.112.4.38660 > 192.168.1.119.53: 60842+ AAAA? google.com. (28)
10:39:27.689391 ip: 10.63.112.4.55918 > 1.1.1.1.53: 60842+ AAAA? google.com. (28)
10:39:27.739308 ip: 1.1.1.1.53 > 10.63.112.4.55918: 60842 1/0/0 AAAA 2607:f8b0:400a:801::200e (56)
10:39:27.740149 ip: 10.63.112.4.55308 > 192.168.1.119.53: 62709+ MX? google.com. (28)
10:39:28.738886 ip: 10.63.112.4.37955 > 1.1.1.1.53: 62709+ MX? google.com. (28)
10:39:28.789013 ip: 1.1.1.1.53 > 10.63.112.4.37955: 62709 5/0/0 MX alt3.aspmx.l.google.com. 40, MX alt4.aspmx.l.google.com. 50, MX alt1.aspmx.l.google.com. 20, MX alt2.aspmx.l.google.com. 30, MX aspmx.l.google.com. 10 (136)
^C
version: "3.7"
services:
  gluetun:
    image: qmcgaw/private-internet-access
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    networks:
      - frontend
    ports:
      - 8000:8000/tcp # Built-in HTTP control server
      - 8080:8080/tcp #Qbittorrent
    # command:
    volumes:
      - /configs/vpn:/gluetun
    environment:
      # More variables are available, see the readme table
      - VPNSP=private internet access
      # Timezone for accurate logs times
      - TZ=America/Los_Angeles
      # All VPN providers
      - USER=aaaaaa
      # All VPN providers but Mullvad
      - PASSWORD=bbbbbbbb
      # All VPN providers but Mullvad
      - REGION=CA Vancouver
      - PORT_FORWARDING=on
      - PORT_FORWARDING_STATUS_FILE="/gluetun/forwarded_port"
      - PIA_ENCRYPTION=normal
      - GID=1000
      - UDI=1000
      - DNS_KEEP_NAMESERVER=on
    restart: always
networks:
  frontend:
    name: custom_net
    ipam:
      config:
        - subnet: "172.18.0.0/16"
That also brings me to the other point: is DoT working? Because traffic to 1.1.1.1 is UDP/53.
So DOT is not working, but DNS goes out the VPN tunnel.
That ~makes~ might make sense. If you set DNS_KEEP_NAMESERVER=on, as it states, the nameserver in /etc/resolv.conf is kept (code).
Because by default (since a recent commit indeed) gluetun is allowed to communicate with its local Docker network subnet, it is allowed to reach the Docker network DNS.
Maybe your host DNS is set to 1.1.1.1? You can also exec in the container and check with cat /etc/resolv.conf what's in there 👀
Back to the topic, someone on the Reddit post replied: it's apparently possible to do what I wanted to do with bind, so I'll dig into that... Not in the coming 2-3 days though, as my day job is getting intense this week and I have some Ikea drawers to assemble, which is far more complex than networking and routing 😅
So I figured this would help in understanding the overall situation. I should add that "Can resolve internet names using UDP/53 – NO" is a good thing.
Hey, I wanted to check to see if you can make 1 change that can get all this resolved. See this link.
Currently unbound does not let you query from remote hosts i.e. containers that are on the same network as gluetun. If that is allowed, I can close this issue as that is the only thing pending.
Currently unbound listens on all interfaces but will refuse queries from other containers.
I wanted to check to see if you can make 1 change that can get all this resolved. See this link.
I think I've done that, it listens on all interfaces now.
For the main issue, I just thought about it (in my bed haha), isn't DOT_PRIVATE_ADDRESSES causing the issue? Try setting it to a blank string? That prevents Unbound from resolving private IPs... which is what we want here! I'll check it out later today.
Oh, take your time.
So in that link, what I wanted to point out was to add the access-control allow x.x.x.x/x rule. That does not exist. Can you see if you can add allow from localhost (I think this is allowed by default) and also allow from the local Docker network? That should resolve my issues.
Hey @networkprogrammer, I just pushed an image tagged :access-control with what you requested, please let me know if it solves it. Although I am doubtful to be honest 😅
If that doesn't solve it, I'll get to the DNS proxy thing quite soon now as I noticed #188 is also blocked by this, so this is a long lasting problem. I'll let you know once I have some minimal starting code for it if you want to review the pull request / ask question / contribute etc. 👍
EDIT: Note to my future self, that looks like a good base to start from
Hey @qdm12 , Thank you. I am not seeing that config if I pull the docker hub image or build from source. Am I missing something?
Hello, sorry for the delay. The latest Docker image should now have access-control: 0.0.0.0/0 allow, so any IP address (that can go through the firewall) can use the DNS server. You can try to see if it resolves local hostnames, but I don't think it will 😢
Hey @qdm12 , Happy new year.
Thank you for this. So this seems to work. However, I did find that in the latest Docker image the DNS_KEEP_NAMESERVER setting does not seem to work. My /etc/resolv.conf is stuck at 127.0.0.1, which is where unbound is listening. So I manually edited the file to use 127.0.0.11, which is Docker's internal DNS server, and everything works.
$ docker exec -it gluetun /bin/sh
/ # host sonarr
sonarr has address 172.18.0.8
/ # host radarr
radarr has address 172.18.0.7
/ # host ebay.com
ebay.com has address 66.135.196.249
ebay.com has address 66.135.195.175
ebay.com mail is handled by 10 mx2.hc2186-24.iphmx.com.
ebay.com mail is handled by 10 mx1.hc2186-24.iphmx.com.
I also verified that none of the DNS traffic goes out my local LAN. I'll perform some more testing to see if there are leaks anywhere in this setup, but I don't think there are.
Thanks!
Also, here are the key bits of the docker-compose.yml config. The only thing that does not seem to work is DNS_KEEP_NAMESERVER=on, so I have to edit the /etc/resolv.conf file manually.
version: "3.7"
services:
  gluetun:
    image: qmcgaw/private-internet-access
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    networks:
      frontend:
        ipv4_address: 172.18.0.100
    dns: 172.18.0.100
    ports:
      - 8000:8000/tcp # Built-in HTTP control server
      - 8080:8080/tcp #Qbittorrent
    # command:
    volumes:
      - /home/bobin/configs/vpn:/gluetun
    environment:
      # More variables are available, see the readme table
      - VPNSP=private internet access
      # Timezone for accurate logs times
      - TZ=America/Los_Angeles
      # All VPN providers
      - USER=12345
      # All VPN providers but Mullvad
      - PASSWORD=12345
      # All VPN providers but Mullvad
      - REGION=CA Vancouver
      - PORT_FORWARDING=on
      - PORT_FORWARDING_STATUS_FILE="/gluetun/forwarded_port"
      - PIA_ENCRYPTION=normal
      - GID=1000
      - UDI=1000
      - DNS_KEEP_NAMESERVER=on
      #- FIREWALL_OUTBOUND_SUBNETS=172.18.0.0/16
    restart: always
networks:
  frontend:
    name: custom_net
    ipam:
      config:
        - subnet: "172.18.0.0/16"
I'll have a look at DNS_KEEP_NAMESERVER, probably make yet another patch 😄 Is this just happening on latest or for qmcgaw/gluetun:v3.10.1 as well? (I can try it myself, no worries if you're afk 👍)
If you add the Docker DNS to /etc/resolv.conf, queries will indeed go to the first DNS specified (unbound). However, if it fails, they'll go through the second (Docker). I believe that means a blocked hostname (malicious/ads/surveillance) will be resolved through the Docker DNS, and even worse, that's likely going out of the VPN.
As a side note, now that there is access control allowed to all IPs, that means you can use the container as a DNS over TLS over VPN (lol) for other hosts in your network. It can get slow depending on the VPN server location though 😄
Anyway, I'm not forgetting this issue, I'm quite deep in the DNS rabbit hole recently, so I should get to it soon.
I am using the latest pulled image from docker hub.
In my config I am specifying gluetun itself as the DNS server. When I set /etc/resolv.conf to use 127.0.0.11, DNS queries first go to the Docker DNS, which then redirects them to gluetun based on the config below.
networks:
  frontend:
    ipv4_address: 172.18.0.100
dns: 172.18.0.100
So docker resolves internal hostnames and gluetun/DOT resolves the rest.
The DNS_KEEP_NAMESERVER got fixed in v3.10.2 and latest, thanks for the tip.
~Edit: CI Build for 3.10.2 is failing, I'll fix it tomorrow!~
I'll perform some more testing to see if there are leaks anywhere in this setup
Have a go at it! Maybe try with block malicious on and try resolving some hostnames from https://github.com/qdm12/files/blob/master/malicious-hostnames.updated to see if it is still blocked or is allowed through the Docker DNS. If I'm wrong, I'm happy to be wrong, less work for me 😄
So docker resolves internal hostnames and gluetun/DOT resolves the rest.
Interesting! I'll have a go myself 👍 That would be nice. Although you still have to assign a fixed IP address to gluetun, so it requires an extra setup step. A DNS proxy inside gluetun could handle that for the user, although it might be quite some code to add 😕
A DNS proxy inside gluetun could handle that for the user, although it might be quite some code to add 😕
Yup. For my setup, the static IP works. I agree that it is not ideal for all deployments. The DNS proxy would need to figure out internal vs external names.
Hey @qdm12, I just pulled the latest Docker image and it looks like the DNS_KEEP_NAMESERVER=on env variable is still not honored. The DNS server in /etc/resolv.conf is set to 127.0.0.1 and not 127.0.0.11, as it would be if the nameserver were kept.
Indeed, re-fixed for the nth time haha! v3.10.3, v3.11.1 and latest should all contain the fix 😉
Hey @qdm12 , Long time....
I just pulled the latest image. I have set the option DNS_KEEP_NAMESERVER=on, but my resolv.conf looks like this.
Any way to ensure that 127.0.0.11 stays on top?
$ docker exec -it gluetun /bin/sh
/ # cat /etc/resolv.conf
nameserver 127.0.0.1
nameserver 1.1.1.1
search local
nameserver 127.0.0.11
options edns0 trust-ad ndots:0
Just to update. I worked around this issue by mapping a file locally.
volumes:
  - /home/ubuntu/docker/gluetun-resolv.conf:/etc/resolv.conf:ro
This is my file
nameserver 127.0.0.11
nameserver 127.0.0.1
nameserver 1.1.1.1
search local
options edns0 trust-ad ndots:0
Hello @networkprogrammer! You should prefer to use Unbound (127.0.0.1) first and then use 1.1.1.1, 127.0.0.11 if the resolution doesn't work with Unbound. That way you use DNS over TLS through the VPN for most queries, and only rely on other DNS servers if the query fails (= local Docker hostnames, blocked hostnames). Is there a use case for having Unbound below your Docker DNS?
On a side note, I'm experimenting with DNS/DNS over HTTPS on a branch, I'll tag you as reviewer once I have something (unstable) ready. That way it can be adjusted to fix this issue. Might take a few days still though!
Just a tiny update, I fiddled with miekg/dns this weekend and am now writing a DNS over HTTPS upstream server for another repository, but it should be imported into Gluetun and replace Unbound soon. When that's done, I should be able to 'hack into it' and capture container hostnames somehow and send them to another DNS as we wanted to do (so a bit of a DNS proxy). Might still be a few weeks away but it's in progress.
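For anyone following along, the wire-format part of DNS over HTTPS (RFC 8484) is quite small with miekg/dns. Here is a self-contained sketch against Cloudflare's public endpoint, unrelated to the actual gluetun/dns implementation:

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"

	"github.com/miekg/dns"
)

func main() {
	// Build a plain DNS query for the A records of github.com.
	query := new(dns.Msg)
	query.SetQuestion(dns.Fqdn("github.com"), dns.TypeA)
	packed, err := query.Pack()
	if err != nil {
		panic(err)
	}

	// POST the wire-format message over HTTPS (RFC 8484).
	resp, err := http.Post("https://cloudflare-dns.com/dns-query",
		"application/dns-message", bytes.NewReader(packed))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	// Unpack the DNS answer carried back in the HTTP body.
	answer := new(dns.Msg)
	if err := answer.Unpack(body); err != nil {
		panic(err)
	}
	for _, rr := range answer.Answer {
		fmt.Println(rr)
	}
}

A real server of course adds caching, blocking and DNSSEC on top, but the exchange itself boils down to pack, POST, unpack.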
@networkprogrammer if you want to ask questions and/or propose changes, I have https://github.com/qdm12/dns/pull/58 which implements a DoT and DoH DNS servers for a bunch of providers to replace Unbound completely. It's working and can already be imported from a Go project to another, but I want to finish a few things first like caching, dns blocking and add more unit tests.
For a more gentle introduction, I propose to you my < 300 lines DoH DNS server gist with its Reddit post
Anyway, I'll ping you once I do another PR to address this issue here. I'm thinking of an Option to map 0-dot record queries (example: qbitorrent instead of github.com) to the default DNS server. Possibly also add a check on the result to verify it's a private IP address, otherwise fall back on the DoT/DoH server route.
This is awesome! I'll take a look at it. Great work
Slightly related to this issue, FIREWALL_OUTBOUND_SUBNETS operation is now fixed (with added IP rules), so you can now reach subnets of your choice and not just the Docker network.
I have that Go DoT/DoH server ready, but it's missing DNSSEC validation, which is required to be on par with unbound. I'll integrate it into gluetun sometime soon when I'm done adding DNSSEC. And I'll also add that option I mentioned above.
Really good and interesting thread, I have a similar issue but have solved it in a slightly different way. Sorry if this is slightly off topic but figure it would solve this problem in a roundabout way.
Initially I wanted to split some services out, but my VPN provider is so fast that it doesn't seem I needed to do that. I put them all behind gluetun and all seemed good, except I had an issue with NZBHydra in my case. When it points to indexers on Jackett, these are on http://localhost:port, but when you click download from a different machine it goes to http://localhost:port, which fails as it's not my computer's localhost. Hence, for this to work, I need NZBHydra to hit Jackett on the same jackett.domain.com that I'm using from my client.
So now with this approach I have a Traefik reverse proxy (others would work as well, of course) and am adding host entries on the gluetun container for things like jackett.domain.com to point to that reverse proxy. I then dual-home the Traefik container into my network so I only have to publish the HTTPS endpoints into the LAN. Basically this means both inside the gluetun container and on my LAN I can hit jackett.domain.com and it works. So no need for different URLs; this solves the issue where some services forward you onto a URL they are configured with, which fails if it's localhost.
I've done it via the following process; the full configuration isn't here, but it gives an idea of how this works. This might help you @networkprogrammer as another way to do what you've been doing.
version: "3.5"
services:
  traefik:
    container_name: downloads-traefik
    image: traefik:2.5
    restart: always
    networks:
      qnet-static-eth4-59aaa9:
        ipv4_address: 172.28.28.101
      download:
        ipv4_address: 172.18.0.200
    #ports:
    #- 10080:80
    #- 10443:443
    #- 18080:8080
    volumes:
      - ./traefik.toml:/traefik.toml
      - ./traefik-dynamic.toml:/traefik-dynamic.toml
      - ./cert.pem:/certs/cert.crt
      - ./cert.key:/certs/cert.key
      - /var/run/docker.sock:/var/run/docker.sock:ro
    labels:
      # Enable Traefik
      - "my.zone=downloads"
      - "traefik.enable=true"
      # Dashboard
      - "traefik.http.services.traefik.loadbalancer.server.port=8080"
      - "traefik.http.routers.api.rule=Host(`downloads-proxy.domain.com`)"
      - "traefik.http.routers.api.service=api@internal"
      - "traefik.http.routers.api.tls"
      - "traefik.http.routers.api.tls.options=default"
      - "traefik.http.routers.api.entrypoints=https"
      - "traefik.http.routers.api.middlewares=authapi"
      # Middleware Redirect
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
      # Turn on watchtower
      - "com.centurylinklabs.watchtower.enable=true"
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    depends_on:
      - traefik
    restart: always
    networks:
      download:
        ipv4_address: 172.18.0.201
    cap_add:
      - NET_ADMIN
    ports:
      - 8888:8888/tcp # HTTP proxy
      - 8388:8388/tcp # Shadowsocks
      - 8388:8388/udp # Shadowsocks
      #- 9117:9117/tcp # Jackett
      #- 5076:5076/tcp # NZBHydra2
    expose:
      - 9117/tcp # Jackett
      - 5076/tcp # NZBHydra2
    extra_hosts:
      - "jackett.domain.com:172.18.0.200"
      - "nzbhydra.domain.com:172.18.0.200"
    volumes:
      - /share/docker/Data/gluetun:/gluetun
    environment:
      - VPNSP=mullvad
      - VPN_TYPE=wireguard
      - OWNED=no
      # OpenVPN:
      # - OPENVPN_USER=
      # - OPENVPN_PASSWORD=
      # - OPENVPN_IPV6=
      # Wireguard:
      - WIREGUARD_PRIVATE_KEY=
      - WIREGUARD_ADDRESS=
      - WIREGUARD_ENDPOINT_PORT=
      # Country to connect to
      - COUNTRY=
      - CITY=
      # DNS Configuration
      - DOT_PROVIDERS=cloudflare,google,quad9
      - DOT_PRIVATE_ADDRESS=
      # Allow connectivity to the rest of the docker network
      - FIREWALL_OUTBOUND_SUBNETS=172.18.0.0/24
      # Healthcheck
      - HEALTH_ADDRESS_TO_PING=193.78.240.12
      # Timezone for accurate log times
      - TZ=Europe/London
    labels:
      # Traefik configuration
      - "my.zone=downloads"
      - "traefik.enable=true"
      # Traefik Jackett
      - "traefik.http.services.jackett.loadbalancer.server.port=9117"
      - "traefik.http.routers.jackett.service=jackett"
      - "traefik.http.routers.jackett.entrypoints=https"
      - "traefik.http.routers.jackett.rule=Host(`jackett.domain.com`)"
      - "traefik.http.routers.jackett.tls"
      - "traefik.http.routers.jackett.tls.options=default"
      - "traefik.http.routers.jackett.middlewares=security-headers@file"
      # Traefik NZBHydra2
      - "traefik.http.services.nzbhydra2.loadbalancer.server.port=5076"
      - "traefik.http.routers.nzbhydra2.service=nzbhydra2"
      - "traefik.http.routers.nzbhydra2.entrypoints=https"
      - "traefik.http.routers.nzbhydra2.rule=Host(`nzbhydra.domain.com`)"
      - "traefik.http.routers.nzbhydra2.tls"
      - "traefik.http.routers.nzbhydra2.tls.options=default"
      - "traefik.http.routers.nzbhydra2.middlewares=security-headers@file"
      # Turn on watchtower
      - "com.centurylinklabs.watchtower.enable=true"
  jackett:
    image: linuxserver/jackett
    container_name: jackett
    depends_on:
      - gluetun
    restart: always
    network_mode: "service:gluetun"
    volumes:
      - /share/docker/Data/jackett:/config
      - /share/downloads/Drop/Torrent:/downloads
    environment:
      - PUID=1000
      - PGID=100
      - TZ=Europe/London
    labels:
      # Turn on watchtower
      - "com.centurylinklabs.watchtower.enable=true"
  nzbhydra2:
    image: linuxserver/nzbhydra2
    container_name: nzbhydra2
    depends_on:
      - gluetun
      - jackett
    restart: always
    network_mode: "service:gluetun"
    volumes:
      - /share/docker/Data/nzbhydra:/config
      - /share/downloads/Drop:/downloads
    environment:
      - PUID=1000
      - PGID=100
      - TZ=Europe/London
    labels:
      # Turn on watchtower
      - "com.centurylinklabs.watchtower.enable=true"
networks:
  download:
    name: download-stack-network
    ipam:
      config:
        - subnet: "172.18.0.0/24"
  qnet-static-eth4-59aaa9:
    external: true
Soooo... how do I do this :-)
I have container SECRET using the Gluetun container as its network. (It works GREAT, thanks for the AWESOME work!) Container SECRET needs to talk to Container HELLO, which is on the same Docker host, but using the default network.
Is there a simple guide to configure this setup? (to allow SECRET to contact HELLO)
Such a good thread, learned so much. Fantastic work @qdm12
@bhullIT:
- Your gluetun container (not the secret one) and the hello container need to be on the same bridge network (default or otherwise)
- You need to open the gluetun firewall to connections to the bridge network, e.g. FIREWALL_OUTBOUND_SUBNETS=172.18.0.0/24
- DNS_KEEP_NAMESERVER needs to be set to on in gluetun. This sets the Docker DNS (127.0.0.11) as backup for the one integrated in gluetun (Unbound). You keep Unbound's main feature (DNS over TLS) but you lose its secondary feature (DNS blocking for malicious hosts, as specified in the wiki)
There are a few other alternatives to still have DNS blocking:
- Provided the host has a static local IP, e.g. 192.168.0.1, you could open that IP on the gluetun firewall (FIREWALL_OUTBOUND_SUBNETS=192.168.0.1) and export the port you need in HELLO onto the host (-p 1234:1234). This exposes that endpoint to everything inside your host, but you can then reach it from within SECRET with your host IP, e.g. 192.168.0.1:1234
- Similar but different, you can use the IP HELLO has on the bridge network, but you'll need to first create a Docker bridge network (you can't use the default one for that), then assign a static IP for HELLO on that bridge, e.g. 172.18.0.10 (see here). You then need to open that IP in the gluetun firewall (FIREWALL_OUTBOUND_SUBNETS=172.18.0.10) and you can reach it from SECRET using 172.18.0.10
- You can wait for this feature to be complete
I suspect there are some more inventive solutions out there running your own DNS container tied to your gluetun container, e.g. Pi-hole, but let's not go there.
For the life of me I can't get any of this to work. I feel like I've tried every combination that supposedly should work, but it just doesn't. Maybe I misunderstood the problem entirely?
I can't seem to ping the shady container from the clean container. The other way around works. To be honest, I'm not even sure how this could even work.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker-compose up -d
Creating network "frontend" with the default driver
Creating vpn ... done
Creating clean ... done
Creating shady ... done
$ docker exec -it shady sh -c "ping clean"
PING clean (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.112 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.214 ms
64 bytes from 172.18.0.2: seq=2 ttl=64 time=0.265 ms
^C
--- clean ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.112/0.197/0.265 ms
$ docker exec -it clean sh -c "ping shady"
ping: bad address 'shady'
version: "3.7"
services:
  vpn:
    image: qmcgaw/gluetun:latest
    container_name: vpn
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=REDACTED
      - WIREGUARD_ADDRESSES=REDACTED
      - SERVER_CITIES=REDACTED
      - FIREWALL_OUTBOUND_SUBNETS=172.18.0.0/16
      - DNS_KEEP_NAMESERVER=on
      - DOT=off
    restart: unless-stopped
    networks:
      - frontend
  shady:
    image: alpine
    container_name: shady
    command: tail -f /dev/null
    network_mode: "service:vpn"
  clean:
    image: alpine
    container_name: clean
    command: tail -f /dev/null
    networks:
      - frontend
networks:
  frontend:
    name: frontend
    ipam:
      config:
        - subnet: "172.18.0.0/16"
Hi @denizdogan, so your shady container does not really have a network of its own. It is sharing the network of the vpn container. vpn and shady can talk to each other on localhost. clean should use the name vpn to ping shady.
Here is what I mean. vpn and shady have the same IP. (I changed the network since I have 172.18.0.0/16 in use)
$ docker exec -it vpn ip a show dev eth0
119: eth0@if120: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:14:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.20.0.3/16 brd 172.20.255.255 scope global eth0
valid_lft forever preferred_lft forever
$ docker exec -it shady ip a show dev eth0
119: eth0@if120: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:14:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.3/16 brd 172.20.255.255 scope global eth0
valid_lft forever preferred_lft forever
clean has a different IP
$ docker exec -it clean ip a show dev eth0
117: eth0@if118: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:14:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.2/16 brd 172.20.255.255 scope global eth0
valid_lft forever preferred_lft forever
$ docker exec -it clean ping vpn
PING vpn (172.20.0.3): 56 data bytes
64 bytes from 172.20.0.3: seq=0 ttl=63 time=0.123 ms
64 bytes from 172.20.0.3: seq=1 ttl=63 time=0.078 ms
To further validate this, I installed curl on clean and ran a shady-nginx webserver.
$ docker exec -it clean sh -c "apk update && apk add curl"
shady-nginx:
  image: nginx:alpine
  container_name: shady-nginx
  network_mode: "service:vpn"
Then from clean I can get to the webserver using the name vpn
$ docker exec -it clean curl -Ik vpn
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Sat, 11 Jun 2022 04:33:54 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:26:06 GMT
Connection: keep-alive
ETag: "61f0168e-267"
Accept-Ranges: bytes
Hope that helps
Thank you so much for the explanation, @networkprogrammer, that clarifies a lot.
Closing this issue as I feel it has served its purpose. Will open a new case if needed.
Thanks for closing that dusty issue!
Although actually I'm working actively on this at https://github.com/qdm12/dns/tree/v2.0.0-beta which will soon replace Unbound and allow for such resolution 😉 It should close a bunch of related issues as well, let's keep it opened!
Hi @qdm12.
I don't really understand how I can achieve container name resolution with the dns container. Is it possible to do so already?
Sorry if I'm asking this too early and thanks for your great work.
I still don't get it 😅 I have two containers with network_mode: "service:vpn", in the same stack as gluetun. How can I reach container1 from container2? I tried with DNS_KEEP_NAMESERVER and also with their own network, but no luck…
Hi @Loader23, since both are using service:vpn, you should be able to just use localhost as the name. They all share the same localhost network.
Ah, yes of course. I think I understand it better now 😅 Thanks a lot, working as expected now. 😊