
Unable To Access Endpoint With VPN Enabled

Open jason-idk opened this issue 5 years ago • 7 comments

Hello, just trying to get this setup properly and running into an odd issue...

I am unable to access the application UI when VPN_ENABLED is set to yes. I may be misunderstanding, but I thought a route was added for LAN_NETWORK that would let me reach the UI over my LAN while all other traffic goes out over the VPN.
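
(A sketch of what I assumed the startup script does with LAN_NETWORK, based on similar VPN containers; I have not checked this image's scripts, so the exact commands are my guess:)

# assumed behaviour: send LAN traffic via the normal gateway instead of the tunnel
DEFAULT_GATEWAY=$(ip -4 route list 0/0 | cut -d ' ' -f 3)
ip route add 192.168.1.0/24 via "$DEFAULT_GATEWAY" dev eth0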

Additional information -

I use docker-compose and I have the following environment variables:

environment:
      - LAN_NETWORK=192.168.1.0/24
      - VPN_ENABLED=yes
      - VPN_USERNAME=${OPENVPN_USERNAME}
      - VPN_PASSWORD=${OPENVPN_PASSWORD}
      - NAME_SERVERS=8.8.8.8,8.8.4.4
      - HEALTH_CHECK_HOST=<ip-of-jackett-container>
      - DISABLE_IPV6=1
      - UMASK=002
      - PUID
      - PGID
      - TZ

Those two are picked up from the .env file and work just fine. If I want to access the UI, I have to change VPN_ENABLED to no. I also notice that the DISABLE_IPV6 setting doesn't seem to work as expected. I have mine set to 1 and I see the following:

With VPN_ENABLED=no:

root@juniper:/opt# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.11:45929        0.0.0.0:*               LISTEN      -                   
tcp6       0      0 :::9117                 :::*                    LISTEN      57/jackett          
udp        0      0 127.0.0.11:52076        0.0.0.0:*                           -                   
root@juniper:/opt# env | grep -i ipv6
DISABLE_IPV6=1
root@juniper:/opt# grep disable_ipv6 /etc/sysctl.conf 
root@juniper:/opt# ip route
default via 192.168.1.1 dev eth0 
192.168.1.0/24 dev eth0 proto kernel scope link src <ip-of-jackett-container>

With VPN_ENABLED=yes

root@juniper:/opt# netstat -tupln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.11:45241        0.0.0.0:*               LISTEN      -                   
tcp6       0      0 :::9117                 :::*                    LISTEN      216/jackett         
udp        0      0 127.0.0.11:41072        0.0.0.0:*                           -                   
udp        0      0 0.0.0.0:48698           0.0.0.0:*                           91/openvpn          
root@juniper:/opt# env | grep -i ipv6
DISABLE_IPV6=1
root@juniper:/opt# grep disable_ipv6 /etc/sysctl.conf
root@juniper:/opt# ip route
0.0.0.0/1 via 10.x.x.5 dev tun0 
default via 192.168.1.1 dev eth0 
10.x.x.1 via 10.x.x.5 dev tun0 
10.x.x.5 dev tun0 proto kernel scope link src 10.x.x.6 
128.0.0.0/1 via 10.x.x.5 dev tun0 
1xx.xxx.xx.xxx via 192.168.1.1 dev eth0 
192.168.1.0/24 dev eth0 proto kernel scope link src <ip-of-jackett-container>

Just thought I would reach out to see if I am doing something wrong here. Let me know if you need any more information and I am glad to get it for you.

Thanks!

jason-idk avatar Jul 08 '20 22:07 jason-idk

I guess it would be beneficial to have this as well. 👍

root@juniper:/opt# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
6: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
8: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq state UNKNOWN group default qlen 100
    link/none 
    inet 10.x.x.6 peer 10.x.x.5/32 scope global tun0
       valid_lft forever preferred_lft forever
2161: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:c0:a8:01:28 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet <ip-of-jackett-container>/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever

Also a quick note - I am using bridged networking and using an IP on my local network (same as LAN).
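
(For anyone reproducing this, a rough sketch of one way to get that kind of setup, using a macvlan-style network where the container takes a LAN address; the parent NIC, names and addresses are placeholders and may differ from my actual setup:)

# network that hands out addresses from the LAN itself
docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 lan_net
# attach the container with a fixed LAN IP
docker run -d --name jackettvpn --network lan_net --ip 192.168.1.40 --cap-add=NET_ADMIN --device=/dev/net/tun <image>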

Thanks

jason-idk avatar Jul 08 '20 23:07 jason-idk

Your issue is caused by bridging your network and letting the container get an IP from within your local network. If the container has an IP from your local network, it indeed does not work. I think this can be fixed with some iptables rules; I looked at it for 20 minutes but wasn't able to figure it out that quickly. I will look at it more in depth when I have more time. I recommend running the container on the Docker network and accessing it via the Docker host IP with the exposed port. Example:

Docker Host IP: 192.168.0.200
Container IP: 172.17.0.5
Container Port: 9117

Access it via http://192.168.0.200:9117/
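
A minimal sketch of that recommended setup (the image name and published port are examples, not verified against this repo):

# publish the UI on the docker host and let the container live on the default bridge
docker run -d --name jackettvpn \
  --cap-add=NET_ADMIN --device=/dev/net/tun \
  -p 9117:9117 \
  -e VPN_ENABLED=yes \
  -e LAN_NETWORK=192.168.0.0/24 \
  dyonr/jackettvpn   # assumed image name
# then browse to http://192.168.0.200:9117/ (the docker host IP)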

DyonR avatar Jul 08 '20 23:07 DyonR

Gotcha - Makes sense.

It would be nice to have, but not necessarily a need to have. I would assume most people don't use bridged networking for Docker containers the way I do in this environment.

If this comes up again, I might suggest adding a NETWORK_TYPE variable that accepts values like bridged and changes how the iptables rules are set up. I might play around with it and see what I can come up with later on; if I find a solution I will reach back out and let you know.

Thanks for taking a look!

jason-idk avatar Jul 08 '20 23:07 jason-idk

If I ever decide to update the container again, I will for sure look into the ability to use an IP from the same network 😄 Also, DISABLE_IPV6 is an experimental feature, trying to combat an issue people have in #14 and #19. I do not have IPv6 myself, so I am not able to test it properly. It sets net.ipv6.conf.all.disable_ipv6 to 1 (with 1 disabling IPv6), but it does not really work as expected. The value can be verified by running sysctl net.ipv6.conf.all.disable_ipv6.
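
(A quick way to check what the flag actually did inside a running container; the container name here is just an example:)

docker exec -it jackettvpn sysctl net.ipv6.conf.all.disable_ipv6
# expected when IPv6 is disabled:
# net.ipv6.conf.all.disable_ipv6 = 1

If needed, the same sysctl can also be set from the Docker side with --sysctl net.ipv6.conf.all.disable_ipv6=1 on docker run (or the sysctls: key in compose), since it is a per-namespace setting.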

DyonR avatar Jul 08 '20 23:07 DyonR

Looked into this because I use bridged interfaces to isolate Docker containers behind specific firewall subnets. The reason the web interface does not work in any of these containers (I am trying sabnzbdvpn) is the mangle table in iptables.

The outbound traffic is being marked by the rules below. I'll diagnose it further tomorrow and work out exactly what those rules are for and how to keep them working in a bridged interface environment. I flushed the mangle table (iptables -t mangle -F) to confirm that as a temporary fix, and will work on something more permanent when I get a chance and assist; the workaround commands are repeated below the ruleset.

Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   72 12979 CONNMARK   udp  --  any    any     anywhere             anywhere             /* wg-quick(8) rule for wg0 */ CONNMARK restore

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    6   336 MARK       tcp  --  any    any     anywhere             anywhere             tcp dpt:http-alt MARK set 0x1
    4   192 MARK       tcp  --  any    any     anywhere             anywhere             tcp spt:http-alt MARK set 0x1
    6   336 MARK       tcp  --  any    any     anywhere             anywhere             tcp dpt:8443 MARK set 0x1
   30  1752 MARK       tcp  --  any    any     anywhere             anywhere             tcp spt:8443 MARK set 0x1

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   30  4428 CONNMARK   udp  --  any    any     anywhere             anywhere             mark match 0xca6c /* wg-quick(8) rule for wg0 */ CONNMARK save

https://github.com/DyonR/docker-sabnzbdvpn/blob/540a7336fa6b0f1a33a346016de5d07f1bf708b5/sabnzbd/iptables.sh#L161
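
For anyone who wants to try the same temporary workaround from the Docker host, roughly (container name is just an example):

# inspect the mangle table inside the running container
docker exec -it jackettvpn iptables -t mangle -L -v -n
# flush it; temporary only, the rules come back when the container restarts
docker exec -it jackettvpn iptables -t mangle -F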

tknz avatar Sep 07 '20 10:09 tknz

Just to confirm, the problem with bridged networking is due to the marking of packets; it should be okay to drop this.

Is this something you'd be interested in fixing, or should I fork?

tknz avatar Sep 14 '20 04:09 tknz

> Just to confirm, the problem with bridged networking is due to the marking of packets; it should be okay to drop this.
>
> Is this something you'd be interested in fixing, or should I fork?

Indeed, removing everything related to the mangle table makes it possible to connect to the container from a bridged network while using a custom IP. Well figured out! I reached out to Binhex (the original creator of the iptables.sh script) to confirm, and that is indeed true.
However, Binhex told me that if you remove the mangle rules, it is no longer possible to connect to the container from outside the LAN network.
To me personally it wouldn't be a big deal to remove this, but I do not know how many people port-forward to access the containers from outside, and changing the container so that this is no longer possible might not be a welcome change for them. A VPN into your LAN to then access your containers would obviously be better, but that aside: unless you have another fix for this, I would gladly take it.
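
If it helps, one way the mangle section of iptables.sh could be made optional is something along these lines; the variable name is purely a placeholder, not an existing option:

# hypothetical opt-out around the existing MARK rules
if [[ "${ENABLE_MANGLE}" != "no" ]]; then
    # needed for access from outside the LAN via a port-forward, but breaks bridged/LAN-IP setups
    iptables -t mangle -A OUTPUT -p tcp --dport 9117 -j MARK --set-mark 1
    iptables -t mangle -A OUTPUT -p tcp --sport 9117 -j MARK --set-mark 1
fi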

DyonR avatar Sep 14 '20 16:09 DyonR