weave
add support for `ips` in ipam config
This PR adds an additional field to the IPAM config for Weave: `ips`.
It allows specifying an IP address through the CNI plugin, similar to `weave attach`, and claims that IP.
I am not sure whether it makes sense to also make `checkAlive` configurable, or to set it to `true` at all times; currently it is set to `false`.
Since the IPAM configuration is specific to each plugin, I think this is allowed per the CNI spec. I am not sure whether the choice of the `ips` type is right, though - this type is used in the `Result`, but it seemed appropriate for the IPAM input values as well.
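For illustration, a netconf using the new field might look like the following. This is a sketch only: the `ipam.type` name and the address value are assumptions, and the entry shape mirrors the `Result` type mentioned above.

```json
{
    "cniVersion": "0.3.0",
    "name": "weave",
    "type": "weave-net",
    "ipam": {
        "type": "weave-ipam",
        "ips": [
            { "version": "4", "address": "10.32.1.30/32" }
        ]
    }
}
```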
TODO:
- [x] fix `880_cni_plugin_ip_address_assignment_test.sh` tests to test the new functionality
Thanks for the PR! Would you consider adding a test, e.g. extending https://github.com/weaveworks/weave/blob/master/test/830_cni_plugin_test.sh
(the integration tests at CircleCI always fail for 3rd-party branches, but I've pushed your branch to this repo so they will run this time)
> I am not sure if it makes sense to also configure `checkAlive`
In a Kubernetes context it won't make any difference - that parameter is only used when Weave Net is configured to talk to Docker.
@bboreham thank you for the review. I will make sure I add a test and address all your comments and questions in the coming days!
Thanks for the test; there appear to be a couple of points still outstanding from my first review - could you respond to those?
@bboreham sorry for the delay :( I am trying to prioritise this alongside other work. I will definitely address the rest of the issues, and ping you, when this is ready for another review!
Hey, no apology required! I've re-pushed the branch so your new test should run in CI.
Thanks for this; sorry I didn't get back to you. Some small notes below.
No worries!
I think I addressed all comments. Let me know if you think this PR should include anything else.
I'd be very happy if this gets into Weave at some point, so that we can stop using a fork on Testground!
It looks like your `test/831_cni_plugin_ip_address_assignment_test.sh` is identical to the existing test 830; it should exercise the new feature.
Any chance we can allow this PR to run these tests on CI? I didn't manage to get the test setup to run locally last time, and it seems like it will be fairly easy to update the code and test on CI.
We have no automated system to allow 3rd-party PRs access to the secrets; what I generally do is manually pull your branch and push it to the weaveworks/weave repo.
Note that your test doesn't need to duplicate all the twists and turns of test 830; it just needs to make one or two calls with the IPs set and check the result.
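A stripped-down check could look something like this sketch. It only shows the shape of the assertion; the sample result JSON is an assumption standing in for the plugin's real output, and in the actual test the JSON would come from the plugin (e.g. via `cni_connect` from `test/config.sh`):

```shell
#!/bin/sh
# Sketch: verify that the address handed back by the plugin is the one we
# asked for. A sample result stands in for the plugin's real output here.
WANT_IP="10.32.1.30"
RESULT='{"ips":[{"version":"4","address":"10.32.1.30/12"}]}'
# Same IP-extraction idiom as test 830:
GOT_IP=$(echo "$RESULT" | grep -o -E '[0-9]{1,3}(\.[0-9]{1,3}){3}' | head -n1)
[ "$GOT_IP" = "$WANT_IP" ] && echo "allocated IP matches"
```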
The tests do assume they can ssh into another host where Docker is running with an open TCP socket. To run completely locally you can edit out that assumption:
diff --git test/830_cni_plugin_test.sh test/830_cni_plugin_test.sh
index e53e988c..a8d93cc0 100755
--- test/830_cni_plugin_test.sh
+++ test/830_cni_plugin_test.sh
@@ -131,9 +131,9 @@ assert_raises "exec_on $HOST1 c1 $PING $C3IP"
# CH0_HAIRPIN=$($SSH $HOST1 cat /sys/devices/virtual/net/weave/brif/$CH0_PEER/hairpin_mode)
# assert_raises "[ $CH0_HAIRPIN == 1 ]"
#
-assert "$SSH $HOST1 cat /sys/devices/virtual/net/weave/brif/vethwepl${CH0:0:7}/hairpin_mode" "1"
-assert "$SSH $HOST1 cat /sys/devices/virtual/net/weave/brif/vethwepl${CH1:0:7}/hairpin_mode" "0"
-assert "$SSH $HOST1 cat /sys/devices/virtual/net/weave/brif/vethwepl${CH2:0:7}/hairpin_mode" "1"
+assert "cat /sys/devices/virtual/net/weave/brif/vethwepl${CH0:0:7}/hairpin_mode" "1"
+assert "cat /sys/devices/virtual/net/weave/brif/vethwepl${CH1:0:7}/hairpin_mode" "0"
+assert "cat /sys/devices/virtual/net/weave/brif/vethwepl${CH2:0:7}/hairpin_mode" "1"
@@ -141,13 +141,13 @@ assert "$SSH $HOST1 cat /sys/devices/virtual/net/weave/brif/vethwepl${CH2:0:7}/h
stop_weave_on $HOST1
# Ensure no warning is printed to the standard error:
-ACTUAL_OUTPUT=$(CHECKPOINT_DISABLE="$CHECKPOINT_DISABLE" DOCKER_HOST=tcp://$HOST1:$DOCKER_PORT $WEAVE launch 2>&1)
-EXPECTED_OUTPUT=$($SSH $HOST1 docker inspect --format="{{.Id}}" weave)
+ACTUAL_OUTPUT=$(CHECKPOINT_DISABLE="$CHECKPOINT_DISABLE" $WEAVE launch 2>&1)
+EXPECTED_OUTPUT=$(docker inspect --format="{{.Id}}" weave)
assert_raises "[ $EXPECTED_OUTPUT == $ACTUAL_OUTPUT ]"
-assert "$SSH $HOST1 \"curl -s -X GET 127.0.0.1:6784/ip/$C1 | grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'\"" "$C1IP"
-assert "$SSH $HOST1 \"curl -s -X GET 127.0.0.1:6784/ip/$C3 | grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'\"" "$C3IP"
+assert "curl -s -X GET 127.0.0.1:6784/ip/$C1 | grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'" "$C1IP"
+assert "curl -s -X GET 127.0.0.1:6784/ip/$C3 | grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'" "$C3IP"
end_suite
diff --git test/config.sh test/config.sh
index 3b599ffa..76483877 100644
--- test/config.sh
+++ test/config.sh
@@ -101,7 +101,7 @@ run_on() {
host=$1
shift 1
[ -z "$DEBUG" ] || greyly echo "Running on $host: $@" >&2
- remote $host $SSH $host "$@"
+ "$@"
}
get_command_output_on() {
@@ -115,7 +115,7 @@ docker_on() {
host=$1
shift 1
[ -z "$DEBUG" ] || greyly echo "Docker on $host:$DOCKER_PORT: $@" >&2
- docker -H tcp://$host:$DOCKER_PORT "$@"
+ docker "$@"
}
docker_api_on() {
@@ -136,7 +136,7 @@ weave_on() {
host=$1
shift 1
[ -z "$DEBUG" ] || greyly echo "Weave on $host:$DOCKER_PORT: $@" >&2
- CHECKPOINT_DISABLE="$CHECKPOINT_DISABLE" DOCKER_HOST=tcp://$host:$DOCKER_PORT $WEAVE "$@"
+ CHECKPOINT_DISABLE="$CHECKPOINT_DISABLE" $WEAVE "$@"
}
stop_weave_on() {
@@ -151,7 +151,7 @@ exec_on() {
host=$1
container=$2
shift 2
- docker -H tcp://$host:$DOCKER_PORT exec $container "$@"
+ docker exec $container "$@"
}
# Look through 'docker run' args and try to make the hostname match the name
Then the run looks a little untidy because the CNI results are going to stdout, but you can see it worked:
$ HOSTS=localhost ./test/830_cni_plugin_test.sh -v
Cleaning up on localhost: removing all containers and resetting weave
Docker on localhost:2375: rm -f -v 8adc08a80d77 097012a50dad 5bb8f188cfbf 87388592180a 2505ab650754 00ae9ed80765 fa9597b14449 2dbfc8cf44e3 e8c6965ffe8f
Running on localhost: sudo rm -f /opt/cni/bin/weave-plugin-latest /opt/cni/bin/weave-net /opt/cni/bin/weave-ipam /etc/cni/net.d/10-weave.conflist
Test CNI plugin
Running on localhost: sudo mkdir -p /opt/cni/bin
Weave on localhost:2375: setup-cni
Weave on localhost:2375: launch
7ddcd31e8a63473b58d83c09a8918c5bbc0feee7f64040e9a6700631257dbba2
Docker on localhost:2375: run --net=none --name=c0 -dt alpine /bin/sh
Docker on localhost:2375: run --net=none --privileged --name=c1 -dt alpine /bin/sh
Docker on localhost:2375: run --net=none --name=c2 -dt alpine /bin/sh
Docker on localhost:2375: run --net=none --name=ch0 -dt alpine /bin/sh
Docker on localhost:2375: run --net=none --name=ch1 -dt alpine /bin/sh
Docker on localhost:2375: run --net=none --name=ch2 -dt alpine /bin/sh
net.ipv4.conf.all.arp_accept = 1
Docker on localhost:2375: inspect -f {{.State.Pid}} c0
Docker on localhost:2375: inspect -f {{.Id}} c0
Running on localhost: sudo CNI_COMMAND=ADD CNI_CONTAINERID=84f55149129a23980253b5b7e34ca7070be7611d24e264d6178461848a0d9b17 CNI_IFNAME=eth0 CNI_NETNS=/proc/3320277/ns/net CNI_PATH=/opt/cni/bin /opt/cni/bin/weave-net
{
"ip4": {
"ip": "10.32.0.1/12",
"gateway": "10.32.0.2"
},
"dns": {}
}Docker on localhost:2375: inspect -f {{.State.Pid}} c1
Docker on localhost:2375: inspect -f {{.Id}} c1
Running on localhost: sudo CNI_COMMAND=ADD CNI_CONTAINERID=d36df517afbe134c6b3b16a54c5bee33482825f939b5168fb560d289a8a60fdd CNI_IFNAME=eth0 CNI_NETNS=/proc/3320334/ns/net CNI_PATH=/opt/cni/bin /opt/cni/bin/weave-net
{
"ip4": {
"ip": "10.32.0.3/12",
"gateway": "10.32.0.2"
},
"dns": {}
}Docker on localhost:2375: inspect -f {{.State.Pid}} c2
Docker on localhost:2375: inspect -f {{.Id}} c2
Running on localhost: sudo CNI_COMMAND=ADD CNI_CONTAINERID=2421f061c5d7cd8c1165affa9cd6de1a4934cad2e5a30aab47381044a10083bf CNI_IFNAME=eth0 CNI_NETNS=/proc/3320402/ns/net CNI_PATH=/opt/cni/bin /opt/cni/bin/weave-net
{
"ips": [
{
"version": "4",
"address": "10.32.0.4/12"
}
],
"routes": [
{
"dst": "10.32.0.0/12"
}
],
"dns": {}
}Docker on localhost:2375: inspect -f {{.State.Pid}} ch0
Docker on localhost:2375: inspect -f {{.Id}} ch0
Running on localhost: sudo CNI_COMMAND=ADD CNI_CONTAINERID=2cb4dba09753e6169a7f2abdf96ebefadad83cabb4e462c45fd5b931cbee9a05 CNI_IFNAME=eth0 CNI_NETNS=/proc/3320463/ns/net CNI_PATH=/opt/cni/bin /opt/cni/bin/weave-net
{
"ip4": {
"ip": "10.32.0.5/12",
"gateway": "10.32.0.2"
},
"dns": {}
}Docker on localhost:2375: inspect -f {{.State.Pid}} ch1
Docker on localhost:2375: inspect -f {{.Id}} ch1
Running on localhost: sudo CNI_COMMAND=ADD CNI_CONTAINERID=86bc28f08ca70653c81bb2b86c598999a0f2fddf062ad76af5696df0ce404974 CNI_IFNAME=eth0 CNI_NETNS=/proc/3320520/ns/net CNI_PATH=/opt/cni/bin /opt/cni/bin/weave-net
{
"ip4": {
"ip": "10.32.0.6/12",
"gateway": "10.32.0.2"
},
"dns": {}
}Docker on localhost:2375: inspect -f {{.State.Pid}} ch2
Docker on localhost:2375: inspect -f {{.Id}} ch2
Running on localhost: sudo CNI_COMMAND=ADD CNI_CONTAINERID=a0f9b4b1a3580cbb6025cb48d549a067803c1f57f7df7b45a32f9f4d770a1e1c CNI_IFNAME=eth0 CNI_NETNS=/proc/3320577/ns/net CNI_PATH=/opt/cni/bin /opt/cni/bin/weave-net
{
"ip4": {
"ip": "10.32.0.7/12",
"gateway": "10.32.0.2"
},
"dns": {}
}Weave on localhost:2375: ps c0
Weave on localhost:2375: ps c1
Weave on localhost:2375: ps c2
Weave on localhost:2375: ps weave:expose
.......Docker on localhost:2375: rm -f c2
c2
Docker on localhost:2375: run --net=none --name=c3 -dt alpine /bin/sh
Docker on localhost:2375: inspect -f {{.State.Pid}} c3
Docker on localhost:2375: inspect -f {{.Id}} c3
Running on localhost: sudo CNI_COMMAND=ADD CNI_CONTAINERID=e21ed597f48ba3aa3f323fea344621d594c502c84f341dadecdb8966811d52b6 CNI_IFNAME=eth0 CNI_NETNS=/proc/3321708/ns/net CNI_PATH=/opt/cni/bin /opt/cni/bin/weave-net
{
"ip4": {
"ip": "10.32.0.8/12",
"gateway": "10.32.0.2"
},
"dns": {}
}Weave on localhost:2375: ps c3
..........
all 17 tests passed in 14.710s.
Also note the first thing each test suite does is `rm -f` every container on the host.
@bboreham OK, thanks for the pointers, will do that and ping you when ready.
@bboreham thanks to the tips on how to run tests locally, I am able to run the test suite now! I started adding a test for this functionality, but when running `HOSTS=localhost ./test/880_cni_plugin_ip_address_assignment_test.sh -v`, it errors with:
Running on localhost: sudo CNI_COMMAND=ADD CNI_CONTAINERID=d682738721b9f73e39482be6486164bb3a42d4b7d219fe1d5d91bf8d26a8ae18 CNI_IFNAME=eth0 CNI_NETNS=/proc/145243/ns/net CNI_PATH=/opt/cni/bin /opt/cni/bin/weave-net
{
"code": 100,
"msg": "unable to allocate IP address for bridge: 400 Bad Request: Unable to claim: address 10.32.1.30/32 is already owned by d682738721b9f73e39482be6486164bb3a42d4b7d219fe1d5d91bf8d26a8ae18\n"
}
Not sure why we are not able to claim that address. I checked the following places:
- `docker logs weave` doesn't explain the error
- `c0`, `c1` and `c2` have only the `lo` iface
- `weave status ipam` shows all IPs are available:
➜ weave git:(3d2524c6) ✗ ./weave status ipam
32:7d:23:67:8a:ff(ip-172-31-6-138) 1048576 IPs (100.0% of total) (1 active)
I also rebased the PR onto latest master as it was quite outdated.
> `weave status ipam` shows all IPs are available:
It shows that all but 1 are available ("1 active").
`weave ps` should tell you where that IP is in use, or `curl 127.0.0.1:6784/ip`
I've added a test and it is passing locally. @bboreham could you also trigger the remote CI, just to make sure that I haven't messed up the setup phase while testing locally?
The test creates 3 containers with static IPs, and makes sure that these containers can see each other, and that they can't access the open internet.
For some reason `weave:expose` is calling `claim.Try()` (discussed above in a comment), and I imagine there is a proper way to handle that case, so this might need adjustment.
I think your main problem is that you left in all sorts of logic from the other test, including `BRIP=$(container_ip $HOST1 weave:expose)`, which is allocating an address for the bridge.
Strip it down to just what you mean to test.
Sorry, changed my mind, that line wouldn't do a PUT with a specific IP.
I do recommend you strip down the test, but you'd need to find out where that PUT is coming from.
Could you check if you have a pre-existing address on the `weave` bridge?
Like `ip addr show dev weave`.
You may want to do `weave reset --force` before running the test, to clear all that down.
(This is probably a bug in `test/config.sh`)
I am getting the same behaviour after doing a `weave reset --force` - `weave:expose` still tries to claim the IP that is owned by a container.
This happens at:
cni_connect $HOST1 c0 <<EOF
...
EOF
It is reproducible only with one container and no bridge.
I suspect the problem is happening here: https://github.com/weaveworks/weave/blob/34de0b10a69c2fa11f2314bfe0e449f739a96cd8/plugin/net/cni.go#L98
This part of the CNI plugin wants to set up a gateway address on the bridge so that egress traffic can be routed back into the Weave network. So it calls the IP allocator with the same CNI config, so that it will get an address in the correct subnet. Your new code sees the `ips` field and hands back the IP that you wanted to be allocated for the first container.
I cannot see a simple fix that retains all existing behaviour; that code deliberately does not parse the config because it wants to be independent of whatever IPAM the caller has configured. Maybe we could add another IPAM config to be used for gateway address allocation?
I can get your test to otherwise pass by adding `weave_on $HOST1 expose` before any CNI calls, so the bridge already has an IP when the plugin looks for it.
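To make that ordering concrete, here is a sketch. The helper names `weave_on`/`cni_connect` come from `test/config.sh` and the netconf contents are an assumption; only the config-writing part actually runs here, with the intended test order shown in comments:

```shell
#!/bin/sh
# Write a netconf with a static IP; the harness calls are shown as comments
# because weave_on/cni_connect need the full test environment to run.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{
    "cniVersion": "0.3.0",
    "name": "weave",
    "type": "weave-net",
    "ipam": {
        "type": "weave-ipam",
        "ips": [ { "version": "4", "address": "10.32.1.30/32" } ]
    }
}
EOF
# Intended order in the test:
#   weave_on $HOST1 expose           # bridge gets its address first
#   cni_connect $HOST1 c0 < "$CONF"  # then the static-IP allocation
grep -q '"ips"' "$CONF" && OK=yes
echo "netconf ready: ${OK:-no}"
rm -f "$CONF"
```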