synology-docker
Cannot reach containers attached to user-defined bridge network
Containers attached to the default bridge network work as expected. The following command spins up Portainer correctly:
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer-ce
However, attaching it to a user-defined bridge network doesn't work correctly yet. Steps to reproduce:
docker network create my-net
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --network my-net --restart=always -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer-ce
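To tell an actually unreachable container apart from one that is merely slow to start, a couple of quick checks help. This is just a sketch matching the reproduction steps above; `check_port` is a hypothetical helper, and the probe falls back to "unreachable" if `curl` is missing:

```shell
#!/bin/sh
# Hypothetical helper: probe a published port and report the result.
check_port() {
  # $1 = host, $2 = port; prints "reachable" or "unreachable"
  curl -fsS -m 5 "http://$1:$2/" >/dev/null 2>&1 && echo "reachable" || echo "unreachable"
}

# Confirm the container joined the user-defined bridge and note its subnet:
docker network inspect my-net --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null

# Probe Portainer's UI port from the host:
check_port localhost 9000
```

On a working setup the probe prints "reachable"; with the broken user-defined bridge it keeps printing "unreachable" even though the same container works on the default bridge.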
Docker provides extensive documentation about bridge networking. The section Enable forwarding from Docker containers to the outside world seems especially relevant:
- Configure the Linux kernel to allow IP forwarding.
sysctl net.ipv4.conf.all.forwarding=1
- Change the default policy of the iptables FORWARD chain from DROP to ACCEPT.
sudo iptables -P FORWARD ACCEPT
Step 2 has been addressed in version v1.2.0 of the script. The first step doesn't work on Synology DSM yet.
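For reference, a small sketch of how one might check the kernel setting before applying the two steps above. The procfs path is the standard location behind `net.ipv4.conf.all.forwarding`; the privileged commands are only echoed here, since on DSM they need root (and `sysctl -w` does not persist across reboots):

```shell
#!/bin/sh
# Report whether IPv4 forwarding is already enabled on this host.
forwarding_enabled() {
  # prints "yes" when net.ipv4.conf.all.forwarding is 1, otherwise "no"
  if [ "$(cat /proc/sys/net/ipv4/conf/all/forwarding 2>/dev/null)" = "1" ]; then
    echo "yes"
  else
    echo "no"
  fi
}

if [ "$(forwarding_enabled)" = "no" ]; then
  echo "to apply: sudo sysctl -w net.ipv4.conf.all.forwarding=1"
  echo "to apply: sudo iptables -P FORWARD ACCEPT"
fi
forwarding_enabled
```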
Hey @markdumay, I appreciate your tenacity 👍
Yesterday I spent some time getting everything "back to normal". Today I got your email, so I re-tried the update and a simple Portainer launch in bridge mode.
I can confirm it doesn't work for me, not even with the sysctl and the iptables policy changes. By the way, I attach below an extract of my iptables in case that helps:
Rui@DiskStation:/var/packages/Docker/scripts$ sudo iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination
DOS_PROTECT all -- anywhere anywhere
INPUT_FIREWALL all -- anywhere anywhere
Chain FORWARD (policy ACCEPT)
target prot opt source destination
FORWARD_FIREWALL all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (0 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:9000
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:8000
Chain DOCKER-ISOLATION-STAGE-1 (0 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-OVERLAY (0 references)
target prot opt source destination
vkgtsukawk22 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Restoring the previous version allows me to reconnect to Portainer and all my containers.
I was able to use the iptables command you mentioned above to get connectivity on my custom Docker network. I'm not sure whether it matters that I created the bridge beforehand, but I wouldn't think it should.
That said, my bigger question/concern: is the impact of that policy change really something we want here? Especially when hosting something public-facing, such as a web page or a front-end load balancer like nginx or Traefik?
I agree with your concern; it wouldn't be great to suddenly have your NAS fully exposed to the public internet. It's quite a challenging subject, as Docker messes with iptables quite a bit under the hood. I found this discussion on Stack Overflow with some in-depth observations about Docker's networking configuration.
For now I'm reverting to the officially supported Docker. I followed a tutorial around a Traefik setup that highlighted being able to drop-in replace just the docker-compose binary, which is the core of what I needed. Would it be possible to modify this script to do that work instead? If left to my own devices, I might be able to augment your script and/or make a PR, but I figure if you're intimately familiar with it, it might only take a few moments to plug in an option to ONLY modify the docker-compose binary. I like your script overall, so an elegant option to replace only the compose binary would be fantastic.
Just my .02 - thanks for the script as is though. The backup alone is a nice function.
Hi @1activegeek, I added preliminary support for your request in the develop branch. I haven't thoroughly tested it yet (and I'm about to go offline for a few days ;-) ) - especially the execute_restore_bin() function requires testing. Care to give it a spin? Curious to hear your thoughts!
The general approach I followed was to support a new flag --target. If set, you can target either engine, compose, or driver updates only. This acts as a global flag, and as such affects all commands except backup. For now, I decided to always back up all relevant files, regardless of the target.
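For anyone trying the develop branch: invocations of the new flag would look roughly like the lines echoed below. The script name `syno_docker_update.sh` and the `update` command are assumptions on my part; the commands are echoed rather than executed so nothing is modified:

```shell
#!/bin/sh
# Echo candidate invocations for each supported target (script name assumed).
for target in engine compose driver; do
  echo "sudo ./syno_docker_update.sh --target $target update"
done
```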
For those interested, I ran into this gist by @pedrolamas. I haven't tested it myself yet, but it seems worth looking into.
Hey @markdumay, not sure if this will help but I wrote a post on the reason behind that gist and what it helps to fix: https://www.pedrolamas.com/2020/11/04/exposing-the-client-ips-to-docker-containers-on-synology-nas/
Thanks for the heads up @pedrolamas! I was indeed curious about the reason behind your script. I'm running Pi-Hole on my NAS - eager to find out if your script fixes the missing client IPs. I like the WOMM certification by the way. ;-)
Ah, if the problem you are experiencing is the missing client IPs in Pi-hole, then yes, that script will indeed fix that! 🙂
Cool! It probably won't solve the networking issues mentioned in this issue thread then - I'll create a new issue in this repository instead ;-).
Wow. I wish I had read my mail before trying to trailblaze around. I just spent about the last 4 hours troubleshooting this exact issue, and JUST stumbled upon your script @pedrolamas. I'm pulling up man iptables now to decipher for myself what the rule is actually doing in the grand scheme. If it doesn't seem too terrible, I'll roll with it. So thank you in advance for the script; I only wish I had found it earlier.
UPDATE: Yeah, I probably need to dive into the iptables rules on my Syno, but my brain hurts too much for that right now. It does seem to be working now, though. Proper IPs all around on my containers. Thank you SO SO SO MUCH!
Shawn, did @pedrolamas' script solve the "bridge" issue for you?
It is working for me for this specific use case around the source IP being passed properly (i.e. the x-forwarded-for header being present and showing the correct source IP). I have not tested it as a resolution to the bridge issue faced when replacing the Docker binary on the Syno. I'm hesitant to test that again, as it was a major pain last time; I think I forgot to manually spin down a bunch of docker-compose stacks I had running, so some sort of overwrite conflict went unnoticed in the Docker setup.
So, to answer your question: no, this has not been tested as a fix for the "no internet" issue faced with bridge networks.
Hello, I have the same issue on a DS716+II; it works on my other NAS, a DS718+. I haven't managed to find a solution yet. While debugging I found something strange: pinging a container fails, but as soon as I start a tcpdump on the interface, it starts working. So to get it to work I put the interfaces into promiscuous mode (I use hassio):
ifconfig docker0 promisc
ifconfig hassio promisc
I have another issue though :( with the hassio interface I have no NAT, so no internet.
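On the missing NAT: Docker normally installs a per-network MASQUERADE rule in the nat table, and a bridge without one has no outbound internet. A hedged sketch of that standard rule is below. It assumes the hassio bridge is also a Docker network named `hassio` (I haven't verified that), so the rule is echoed rather than applied:

```shell
#!/bin/sh
# Look up the bridge's subnet and echo the standard Docker-style
# masquerade rule that would restore outbound NAT for it.
BRIDGE=hassio
SUBNET=$(docker network inspect "$BRIDGE" \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null)
echo "to apply: sudo iptables -t nat -A POSTROUTING -s ${SUBNET:-<subnet>} ! -o $BRIDGE -j MASQUERADE"
```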
I know this is an older post and an ongoing issue. Has anyone figured out a working fix?
I don't know if this is 'noise' or if it's worth trying for the folks having an issue with custom bridge networks... On my DS218+, I have run into issues even in the stock Docker/ContainerManager package with bridge mode if you have "Enable Multiple Gateways" enabled in the Advanced Network Settings on the Synology. You might try disabling that setting (I think it might be enabled by default) and see if it changes how things are working with custom networks.
Again... not sure if this is noise or if it's something worth trying, but... might be worth a shot.
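If anyone wants to check whether that setting is actually in play before toggling it in the UI, a read-only inspection like this should show it; more than one default route is what "Enable Multiple Gateways" typically produces:

```shell
#!/bin/sh
# Count default routes; asymmetric replies caused by multiple gateways
# can look exactly like "unreachable container" symptoms.
default_routes=$(ip route show default 2>/dev/null | wc -l)
echo "default routes: $default_routes"
```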