docker-consul
ARP Cache purging documentation
The README.md mentions "You can also manually reset the cache." If there is a reproducible solution, can you please supply the exact command line that will perform this? Is this something that only needs to be run on the host that went through a Consul restart, or also on other hosts in the cluster? Does it need to be run inside the Docker container's network namespace (e.g. using nsenter --net), or simply on the host? I've been running into this issue, and trying the following things on the host doesn't seem to help (either after stopping docker-consul and/or the Docker daemon, or while either one of them is still running). The only thing that works is waiting a few minutes, but that's not an acceptable solution for my environment at the moment.
ip -s -s neigh flush all
or
nsenter --target ${CONSUL_DOCKER_PID} --net ip -s -s neigh flush all
or
arp -i docker0 -d <docker0_ip_addr_of_consul_container>
I'm experiencing the same issue. Running Ubuntu on EC2 and nothing other than waiting seems to do the trick. Some pointers on what you found that worked would be great.
The documentation refers to a gratuitous ARP reply sent to the local network segment (of Consul). That would allow all neighbouring hosts to update their ARP tables with the new MAC, restoring connectivity.
Try running that from the container (through docker exec, or as per the OP, using nsenter):
arping -U -c2 -A -s {IP_OF_YOUR_CONTAINER} -I eth0 0.0.0.0
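For reference, a rough sketch of running that from the host via nsenter, in the spirit of the OP's command (the container name "consul" is a placeholder, and arping must be installed on the host):
CONSUL_PID=$(docker inspect --format '{{.State.Pid}}' consul)
CONSUL_IP=$(docker inspect --format '{{.NetworkSettings.IPAddress}}' consul)
nsenter --target ${CONSUL_PID} --net arping -U -c2 -A -s ${CONSUL_IP} -I eth0 0.0.0.0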
Also, it seems this ARP cache issue has been addressed directly upstream in Docker:
- https://github.com/docker/docker/issues/5737
- https://github.com/docker/docker/pull/8371
FWIW, I just ran into this in our deployed system and unfortunately the arping didn't do the trick. Of course there could be other factors, but just thought I'd let folks know.
So...what's the right way to work around this?
I can't seem to make any of the above workarounds work. The only thing that works for me is to shut the container down for 5 minutes and then start it again.
Docker: 1.5.0 Progrium/Consul: 53a7b829dd6f
@johnrengelman Wondering, can you try to hard-code the MAC of the container with --mac-address, or maybe use --net=host in your docker run for Consul?
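Something along these lines, as a rough example (the MAC address and the Consul arguments are placeholders, not tested values):
docker run -d --name consul --mac-address 02:42:ac:11:00:10 progrium/consul -server -bootstrap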
I'll give those a try hopefully today...off on something else at the moment.
@johnrengelman were you able to try the suggested mitigations? @pfcarrier to clear up the issues you referenced, were those fixes part of Docker 1.5.0? Meaning @johnrengelman's issue still persists despite them?
Seems like we've been running into this issue, initially opened a ticket on consul: https://github.com/hashicorp/consul/issues/738
What I really need to know is whether there is any valid workaround, or if this issue has been addressed in a newer version of Docker (we're running 1.4.1). This has been a persistent problem for us, making it difficult to use our Consul cluster for some more interesting applications where uptime is more critical. In the end we're probably going to be forced to go another route.
I am still seeing this, even with docker 1.5. Using --net=host, while not ideal, does appear to sidestep this issue.
This works for me as a temporary workaround: just remove all the containers with:
docker rm $(docker ps -a -q)
then rebuild and run.
I spent a large chunk of the day digging into this. I flushed every ARP cache from here to Timbuktu with no improvement.
However, this worked for me: https://github.com/hashicorp/consul/issues/352#issuecomment-57216840
Try conntrack -F on the docker host where you want to quickly bounce a consul container (after docker stop but before the next docker run). The new container synced up with the cluster after that.
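In other words, roughly this sequence on the Docker host (the container name and run arguments are placeholders for whatever you already use):
docker stop consul
conntrack -F
docker run -d --name consul progrium/consul -server -bootstrap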
If anyone else is on CoreOS like me, you can use my Docker image to do this:
docker run --net=host --privileged --rm cap10morgan/conntrack -F
Upvote. This thing is really annoying. conntrack -F is not helping at all on Ubuntu 14.04 LTS hosts.
:+1: conntrack -F doesn't work for me either, on Ubuntu 12.04 LTS...
+1, had the same issue running CentOS. Ended up dropping docker-consul for now and installing things the old way. Happy to help fix the issue, although *nix networking is not my cup of tea.
I'm on docker 1.7.
--net=host --privileged=true works for me on CentOS7.
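For anyone wanting to try the same thing, a minimal sketch (the Consul arguments are placeholders for whatever your existing run command passes):
docker run -d --name consul --net=host --privileged=true progrium/consul -server -bootstrap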
I'm on Docker 1.9.0 with CentOS 7; running Consul with --net=host --privileged=true hasn't helped. Also, running conntrack -F hasn't helped.
I'd like to point out that the best working solution I could find was clearing Docker out completely.
Kill all containers
sudo docker kill $(sudo docker ps -a -q)
Delete all containers
sudo docker rm $(sudo docker ps -a -q)
Delete all images
sudo docker rmi $(sudo docker images -q)
Then I run sudo conntrack -F, sleep 60,
then bring Docker back up.
Hope this helps
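Put together as one rough cleanup script (destructive, since it wipes every container and image on the host; adjust to your environment):
#!/bin/sh
# kill and remove every container, then remove every image (destructive!)
sudo docker kill $(sudo docker ps -a -q)
sudo docker rm $(sudo docker ps -a -q)
sudo docker rmi $(sudo docker images -q)
# flush conntrack and give the network a minute to settle
sudo conntrack -F
sleep 60
# then start the Consul container again with your usual docker run command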
Can someone explain what the actionable item is for this issue (is it still an issue?) and give an updated way to reproduce the issue (docker-compose.yml)?
Removing the data dir on the host worked for me... it also deleted all my Vault secrets :joy: Which I think might be why @twhart got it working.
This is still relevant. The best (and about the only working) workaround so far is to manually clear the UDP cache of conntrack before re-starting (or re-running) the Consul container after it was stopped (for redeploy, for example).
I've added the following to my deployment script after tearing down the old Consul server container and before starting the new container:
sudo docker run --net=host --privileged --rm cap10morgan/conntrack -D -p udp
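So the redeploy sequence ends up looking roughly like this (the container name and run arguments are placeholders):
docker stop consul && docker rm consul
docker run --net=host --privileged --rm cap10morgan/conntrack -D -p udp
docker run -d --name consul progrium/consul -server -bootstrap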
I've faced this issue running a small cluster of Ambari/HDP services. Ambari uses Consul as the nameserver to coordinate the members of the cluster. Even after cluster members are shut down, they can't start up correctly, because Consul either has a different IP or is unable to register new members. To make it possible for all cluster members to find it, I run Consul exposing the DNS port on the IP of docker0 (-p 172.17.0.1:53:53) and set the nameserver in resolv.conf on the Ambari agents to this IP. And to restore Consul's normal behaviour, I just stop, remove, and start it again before restarting all cluster members. It's not the best solution, but it's working.
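A rough sketch of that setup (the arguments are illustrative; 172.17.0.1 is the docker0 address mentioned above):
# expose Consul's DNS port on the docker0 bridge address
docker run -d --name consul -p 172.17.0.1:53:53/udp progrium/consul -server -bootstrap
# and point the Ambari agents at it in /etc/resolv.conf:
nameserver 172.17.0.1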