Same IP address for different pods
I am getting the same issue that was 'fixed' in #97
I have tried using the two suggested images that have the latest fix:
- quay.io/s1061123/whereabouts:fix_size
- ctrahey/whereabouts:fix-97
Here are my logs from two separate nodes; you can see two pod IDs reserving the same IP address.
Node 1:
litec@litec-vm-thinkpad2:~$ tail -f /tmp/whereabouts.log
2021-04-22T04:51:14Z [debug] Used defaults from parsed flat file config @ /var/lib/rancher/k3s/agent/etc/cni/net.d/whereabouts.d/whereabouts.conf
2021-04-22T04:51:14Z [debug] ADD - IPAM configuration successfully read: {Name:public-nad Type:whereabouts Routes:[] Datastore:kubernetes Addresses:[] OmitRanges:[] DNS:{Nameservers:[] Domain: Search:[] Options:[]} Range:192.168.1.0/24 RangeStart:192.168.1.0 RangeEnd:<nil> GatewayStr: EtcdHost: EtcdUsername: EtcdPassword:********* EtcdKeyFile: EtcdCertFile: EtcdCACertFile: LogFile:/tmp/whereabouts.log LogLevel: Gateway:<nil> Kubernetes:{KubeConfigPath:/var/lib/rancher/k3s/agent/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig K8sAPIRoot:} ConfigurationPath:/var/lib/rancher/k3s/agent/etc/cni/net.d/whereabouts.d/whereabouts.conf}
2021-04-22T04:51:14Z [debug] Beginning IPAM for ContainerID: f2dbae8598c5a34eeae2e911837edd5a1ec636a0f7bc711bbc1d9c56f1b96a96
2021-04-22T04:51:14Z [debug] IPManagement -- mode: 0 / host: / containerID: f2dbae8598c5a34eeae2e911837edd5a1ec636a0f7bc711bbc1d9c56f1b96a96 / podRef: rook-ceph/samplepod2
2021-04-22T04:51:15Z [debug] IterateForAssignment input >> ip: 192.168.1.0 | ipnet: {192.168.1.0 ffffff00} | first IP: 192.168.1.1 | last IP: 192.168.1.254
2021-04-22T04:51:15Z [debug] Reserving IP: |192.168.1.1 f2dbae8598c5a34eeae2e911837edd5a1ec636a0f7bc711bbc1d9c56f1b96a96|
Node 2:
litec@litec-vm-thinkpad2:~$ tail -f /tmp/whereabouts.log
2021-04-22T04:51:14Z [debug] Used defaults from parsed flat file config @ /var/lib/rancher/k3s/agent/etc/cni/net.d/whereabouts.d/whereabouts.conf
2021-04-22T04:51:14Z [debug] ADD - IPAM configuration successfully read: {Name:public-nad Type:whereabouts Routes:[] Datastore:kubernetes Addresses:[] OmitRanges:[] DNS:{Nameservers:[] Domain: Search:[] Options:[]} Range:192.168.1.0/24 RangeStart:192.168.1.0 RangeEnd:<nil> GatewayStr: EtcdHost: EtcdUsername: EtcdPassword:********* EtcdKeyFile: EtcdCertFile: EtcdCACertFile: LogFile:/tmp/whereabouts.log LogLevel: Gateway:<nil> Kubernetes:{KubeConfigPath:/var/lib/rancher/k3s/agent/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig K8sAPIRoot:} ConfigurationPath:/var/lib/rancher/k3s/agent/etc/cni/net.d/whereabouts.d/whereabouts.conf}
2021-04-22T04:51:14Z [debug] Beginning IPAM for ContainerID: f2dbae8598c5a34eeae2e911837edd5a1ec636a0f7bc711bbc1d9c56f1b96a96
2021-04-22T04:51:14Z [debug] IPManagement -- mode: 0 / host: / containerID: f2dbae8598c5a34eeae2e911837edd5a1ec636a0f7bc711bbc1d9c56f1b96a96 / podRef: rook-ceph/samplepod2
2021-04-22T04:51:15Z [debug] IterateForAssignment input >> ip: 192.168.1.0 | ipnet: {192.168.1.0 ffffff00} | first IP: 192.168.1.1 | last IP: 192.168.1.254
2021-04-22T04:51:15Z [debug] Reserving IP: |192.168.1.1 f2dbae8598c5a34eeae2e911837edd5a1ec636a0f7bc711bbc1d9c56f1b96a96|
The IPPool is still being clobbered and only shows one allocation:
Spec:
  Allocations:
    0:
      Id:       f2dbae8598c5a34eeae2e911837edd5a1ec636a0f7bc711bbc1d9c56f1b96a96
      Podref:   rook-ceph/samplepod2
  Range:        192.168.1.0/24
Events:         <none>
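The output above is a `kubectl describe` of the whereabouts IPPool custom resource; roughly like this (the pool name, which whereabouts derives from the range, and the namespace are assumptions and may differ on your cluster):

```sh
# List the whereabouts IPPool CRs, then describe the pool for the 192.168.1.0/24 range.
# The pool name ("192.168.1.0-24") and the kube-system namespace are assumptions; adjust for your cluster.
kubectl get ippools.whereabouts.cni.cncf.io --all-namespaces
kubectl describe ippools.whereabouts.cni.cncf.io 192.168.1.0-24 -n kube-system
```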
I have also just rebuilt from master and am still having the same problem.
Here is my NetworkAttachmentDefinition:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: public-conf
namespace: rook-multus
spec:
config: '{
"cniVersion": "0.3.1",
"name": "public-nad",
"type": "macvlan",
"master": "enp0s3",
"mode": "bridge",
"ipam": {
"type": "whereabouts",
"configuration_path": "/var/lib/rancher/k3s/agent/etc/cni/net.d/whereabouts.d/whereabouts.conf",
"log_file" : "/tmp/whereabouts.log",
"range": "192.168.1.1/24"
}
}'
My whereabouts conf:

{
  "datastore": "kubernetes",
  "kubernetes": {
    "kubeconfig": "/var/lib/rancher/k3s/agent/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig"
  }
}
and my sample pod (I'm creating two of these):

apiVersion: v1
kind: Pod
metadata:
  name: samplepod1
  annotations:
    k8s.v1.cni.cncf.io/networks: rook-multus/public-conf
spec:
  containers:
    - name: samplepod1c
      command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
      image: alpine
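For completeness, one way to create the two pods (assuming the second manifest only differs in name and that the manifest above is saved as `samplepod1.yaml`):

```sh
# Create samplepod1, then a copy renamed to samplepod2.
# The file name and the rename-only assumption are mine; adjust if your second manifest differs in other ways.
kubectl apply -f samplepod1.yaml
sed 's/samplepod1/samplepod2/g' samplepod1.yaml | kubectl apply -f -
```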
WELL - I have fixed it, although I feel like it is still a bug...
I have added `range_start` and `range_end` to my config (otherwise `range_end` is nil), making it:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: public-conf
namespace: rook-multus
spec:
config: '{
"cniVersion": "0.3.1",
"name": "public-nad",
"type": "macvlan",
"master": "enp0s3",
"mode": "bridge",
"ipam": {
"type": "whereabouts",
"configuration_path": "/var/lib/rancher/k3s/agent/etc/cni/net.d/whereabouts.d/whereabouts.conf",
"log_file" : "/tmp/whereabouts.log",
"range": "192.168.1.1/24",
"range_start": "192.168.1.10",
"range_end": "192.168.1.100"
}
}'
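Note that the CNI config is only read when a pod's network is set up, so the pods have to be recreated after updating the NAD for the change to take effect; roughly (file names here are assumptions):

```sh
# Re-apply the updated NetworkAttachmentDefinition, then recreate the pods so
# whereabouts runs ADD again with range_start/range_end (file names are assumptions).
kubectl apply -f public-conf-nad.yaml
kubectl delete pod samplepod1 samplepod2
kubectl apply -f samplepod1.yaml -f samplepod2.yaml
```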
Resulting in:
Spec:
  Allocations:
    10:
      Id:       d68937f38aed1f0dad36bba8b0164e7d49b0d36fb12bffb8b1ccfbdf6fbda489
      Podref:   rook-ceph/samplepod1
    11:
      Id:       e1bca4fb86730b7101fe2341f91dc0602b33230ff9a722369edd959810473a79
      Podref:   rook-ceph/samplepod2
  Range:        192.168.1.0/24
Events:         <none>
@kevinglasson -- major thanks for the report and all the details!
...I think there's a likelihood that a fix we implemented for an IPv6 problem caused some regressions. We started to address those in #97, but apparently it has caused some other woes.
As a stop-gap measure, I've gone ahead and tagged a version prior to those fixes: v0.4 -- https://github.com/k8snetworkplumbingwg/whereabouts/releases/tag/v0.4
If you're not using IPv6, this should have some stability compared to master.
I also pushed a ~~github~~ dockerhub* image; you can find it at `dougbtv/whereabouts:v0.4`
We're looking into the regression, major thanks!
@kevinglasson -- looks like there's a possibility that our `:latest` tagged image wasn't built with the most recent fixes in the repository. Thanks to a review of our images from @s1061123 (thanks Tomo!), we found that when we changed our GH repository namespace (from dougbtv to k8snetworkplumbingwg), the docker build process was apparently broken.
I have an updated image available at `latest`, so you should be able to retest with the `dougbtv/whereabouts:latest` image.
If you wouldn't mind testing with that, it'd be much appreciated. Thanks!
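In case it saves a step when retesting, swapping the image on an existing whereabouts daemonset can be done roughly like this (the daemonset/container name and the kube-system namespace are assumptions based on the reference manifests; adjust to your deployment):

```sh
# Point the whereabouts daemonset at the rebuilt :latest image and wait for the rollout.
# daemonset/container name "whereabouts" and namespace "kube-system" are assumptions.
kubectl set image daemonset/whereabouts whereabouts=dougbtv/whereabouts:latest -n kube-system
kubectl rollout status daemonset/whereabouts -n kube-system
```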
@dougbtv Yep, I can arrange that - I will let you know the outcome when I've tested it.