lukemarsden
To see what was going on, I moved `/opt/cni/bin/bridge` to `/opt/cni/bin/bridge.real` and dropped this debug script into `/opt/cni/bin/bridge`:
```
ubuntu@ns1003380:/opt/cni/bin$ cat bridge
#!/bin/bash
myvar=`cat`
(echo "Run with $@:"
env |grep...
```
This seems to be operating correctly, so my assumption now is that ignite itself is doing something with iptables rules that it fails to clean up. I'm not sure, though...
Possibly related: I am running `ignite run --runtime docker`, i.e. using the legacy docker runtime (so that I can use docker images built locally by docker).
I guess we are actually using the firewall plugin in CNI to create the iptables rules that aren't being cleaned up, and the host-local plugin for IPAM?
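For context, bridge, firewall, and host-local would typically be chained in a single conflist. Here's a hedged sketch of what that shape looks like; the file path is the standard CNI config dir, but the network name, bridge name, subnet, and options are illustrative assumptions, not ignite's actual defaults:
```
# Hedged sketch: a chained conflist where bridge creates the veth/bridge,
# host-local hands out IPs from its subnet, and firewall adds the iptables
# rules (the ones that seem to be leaking here).
cat <<'EOF' > /etc/cni/net.d/10-ignite.conflist
{
  "cniVersion": "0.4.0",
  "name": "ignite-cni-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "ignite0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.61.0.0/16" }
    },
    { "type": "firewall" }
  ]
}
EOF
```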
After adding instrumentation to `firewall` and `host-local`, it looks like they all think they are succeeding, so why are we leaking IPs and iptables rules??
```
firewall:
CNI_CONTAINERID=ignite-4b79b9c398095e46
CNI_IFNAME=eth0
CNI_NETNS=/proc/743882/ns/net
CNI_COMMAND=DEL
CNI_PATH=/opt/cni/bin...
```
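(To double-check the DEL path by hand, the same call the wrapper logged can be replayed against the real plugin. A hedged sketch, where `captured-stdin.json` is a placeholder for the JSON payload the wrapper logged:)
```
# Replay the logged DEL against the real plugin: same env vars, same stdin.
# captured-stdin.json is a placeholder for the config the wrapper recorded.
CNI_COMMAND=DEL \
CNI_CONTAINERID=ignite-4b79b9c398095e46 \
CNI_IFNAME=eth0 \
CNI_NETNS=/proc/743882/ns/net \
CNI_PATH=/opt/cni/bin \
  /opt/cni/bin/firewall.real < captured-stdin.json
```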
For reference:
```
ubuntu@ns1003380:/opt/cni/bin$ cat bridge
#!/bin/bash
myvar=`cat`
me=`basename "$0"`
(echo "$me:"
env |grep CNI
echo "$myvar"
) >> /tmp/log
ret=$(echo "$myvar" | /opt/cni/bin/$me.real "$@" 2>&1)
exitcode=$?
(echo "exit $exitcode"...
```
I've worked around this for now by writing my own code which interacts with `iptables` and `/var/lib/cni` to do the cleanup that ignite + docker + CNI fails to do.
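In case it helps anyone else, the shape of that cleanup is roughly the following. This is a hedged sketch of the approach, not the exact code from our project: the network name, lease-file layout, and rule matching are assumptions about a default bridge + host-local + firewall setup.
```
#!/bin/bash
# Best-effort cleanup of state left behind when the CNI DEL doesn't stick.
# Assumption: host-local keeps one file per allocated IP (named after the
# IP, containing the owning container ID) under /var/lib/cni/networks/<net>.
set -euo pipefail

NETWORK="ignite-cni-bridge"   # placeholder network name
CONTAINERID="$1"              # e.g. ignite-4b79b9c398095e46

LEASE=$(grep -l "$CONTAINERID" /var/lib/cni/networks/"$NETWORK"/* 2>/dev/null | head -1 || true)
if [ -n "$LEASE" ]; then
  IP=$(basename "$LEASE")
  # Drop any iptables rules still referencing the leaked IP (best-effort,
  # whole-word match), then release the IPAM reservation itself.
  iptables-save | grep -vwF -- "$IP" | iptables-restore
  rm -f "$LEASE"
fi
```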
`--runtime docker` together with `--network-plugin docker-bridge` worked when I ran it manually, but mysteriously failed (couldn't ping VMs) when run from the code that wraps ignite in our project...
Thanks @networkop, good spot. Any chance we could get this fix into a release, please?
I guess this made it into https://github.com/weaveworks/ignite/releases/tag/v0.10.0?