sfc-controller
The sfc-controller example in the Contiv-VPP project does not work as described
Hi, I am doing some work with the sfc-controller on K8s with Contiv-VPP. I deployed an sfc-controller environment for testing based on the following instructions: https://github.com/contiv/vpp/tree/master/k8s/examples/sfc-controller. But it does not work. Can anyone help me with it?
$ sudo vppctl show interface
[sudo] password for jingzhao:
Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count
TenGigabitEthernet8d/0/0 1 up 9000/0/0/0 rx packets 65703
rx bytes 14742727
tx packets 14309
tx bytes 1998286
drops 51608
punt 151
ip4 43323
ip6 8321
tx-error 1
local0 0 down 0/0/0/0
loop0 2 up 9000/0/0/0
loop1 6 up 9000/0/0/0 rx packets 13897
rx bytes 999350
tx packets 28424
tx bytes 2565708
drops 1
ip4 13897
tap0 3 up 1450/0/0/0 rx packets 102237
rx bytes 18021778
tx packets 107254
tx bytes 11483714
drops 142
ip4 102124
ip6 39
tap2 7 up 1450/0/0/0 rx packets 11861
rx bytes 958503
tx packets 13513
tx bytes 5306964
drops 57
ip4 11844
ip6 17
tap3 8 up 1450/0/0/0 rx packets 11845
rx bytes 957094
tx packets 13639
tx bytes 5319322
drops 57
ip4 11828
ip6 17
tap4 9 up 1450/0/0/0 rx packets 17
rx bytes 1286
drops 17
ip6 17
tap5 10 up 1450/0/0/0 rx packets 34795
rx bytes 4190643
tx packets 30314
tx bytes 3049807
drops 17
ip4 34778
ip6 17
tap6 11 up 1450/0/0/0 rx packets 34927
rx bytes 4193220
tx packets 30413
tx bytes 3051571
drops 18
ip4 34909
ip6 18
vxlan_tunnel0 5 up 0/0/0/0 rx packets 13897
rx bytes 1193908
tx packets 14212
tx bytes 1794486
$
Firstly, I do not find any newly created VXLAN tunnel on the host network. Secondly, I also find that there is no memif interface created in the vnf pod.
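(A minimal set of checks for this, assuming vppctl is available on the host vswitch and inside the vnf pod images, and using vnf1 as an example pod name:)

# list VXLAN tunnels and memif interfaces on the host vswitch VPP
sudo vppctl show vxlan tunnel
sudo vppctl show memif

# check whether the memif interface shows up inside a vnf pod
kubectl exec -it vnf1 -- vppctl show memif
kubectl exec -it vnf1 -- vppctl show interface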
@Jingzhao123 Hi, did you try the master or dev branch?
On master branch
Please try to combine the contiv/vpp master branch with the ligato/sfc-controller dev branch.
But please wait for PR https://github.com/ligato/sfc-controller/pull/31 to be merged.
Or change the Dockerfiles as you did before for the alpine versions...
@Jingzhao123 Hi, I am trying to examine the situation. I found out that you need compatible versions of sfc-controller and vpp-agent. I am going to try these:
https://cloud.docker.com/u/ligato/repository/docker/ligato/vpp-agent-arm64 – I will take the docker image v2.0.0-beta-190-g19f3215d
https://cloud.docker.com/u/ligato/repository/docker/ligato/sfc-controller-arm64 – I will take the docker image built on the dev branch: docker pull ligato/sfc-controller-arm64:dev
About these two, I know that they work together:
http://147.75.83.101:8080/view/05SFC/job/05SFCIPV6_____vswitch_1x_vpp_1x_novppIPv6_job/22/
http://147.75.83.101:8080/view/05SFC/job/05SFCIPV6_____vswitch_1x_vpp_1x_novppIPv6_job/22/parameters/
http://147.75.83.101:8080/view/05SFC/job/05SFCIPV6_____vswitch_1x_vpp_1x_novppIPv6_job/22/console
Actually, this is only a prerequisite for our goal of making the Contiv k8s SFC example work; other problems may still turn up.
@stanislav-chlebec Hi, is it OK on the dev version? Actually, I have not verified it on the dev version yet. When I used the latest version, I found that the vnf pod cannot create a memif interface. Is your issue the same as mine? Thanks.
Hi, what I tried:
1. I prepared contiv/vpp images from the dev branch (including https://github.com/contiv/vpp/pull/1464): https://cloud.docker.com/u/contivvpp/repository/list?name=arm64&namespace=contivvpp&page=1
docker pull contivvpp/ui-arm64:v2.1.3-98-g1e8734b51
docker pull contivvpp/stn-arm64:v2.1.3-98-g1e8734b51
docker pull contivvpp/ksr-arm64:v2.1.3-98-g1e8734b51
docker pull contivvpp/crd-arm64:v2.1.3-98-g1e8734b51
docker pull contivvpp/cni-arm64:v2.1.3-98-g1e8734b51
docker pull contivvpp/vswitch-arm64:v2.1.3-98-g1e8734b51
docker pull contivvpp/dev-vswitch-arm64:v2.1.3-98-g1e8734b51
docker pull contivvpp/dev-vswitch-arm64:v2.1.3-98-g1e8734b51-13f5dcf9152287e06b9b5d67774b9f4b576ebaa7
docker pull contivvpp/vpp-binaries-arm64:13f5dcf9152287e06b9b5d67774b9f4b576ebaa7
2. I prepared manifest-arm64.yaml3_v2.1.3-98-g1e8734b51 (in the file k8s/contiv-vpp/values-latest.yaml the tags were changed from latest to v2.1.3-98-g1e8734b51):
helm template --name my-release ../contiv-vpp -f ./values-latest.yaml,./values-arm64.yaml,./values.yaml --set vswitch.defineMemoryLimits=true --set vswitch.hugePages1giLimit=8Gi --set vswitch.memoryLimit=8Gi > manifest-arm64.yaml3
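(A quick sanity check of the rendered manifest, to confirm the image tags were actually substituted:)

grep 'image:' manifest-arm64.yaml3 | sort -u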
3. I fixed the configuration of /etc/vpp/contiv-vswitch.conf on both of my servers – added the clause: socksvr { default }
4. I started kubernetes with the contiv/vpp network plugin – two nodes:
sudo kubeadm init --token-ttl 0 --pod-network-cidr=10.1.0.0/16
...
kubectl apply -f manifest-arm64.yaml3_v2.1.3-98-g1e8734b51
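(Standard checks that both nodes joined and the Contiv pods came up:)

kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide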
5. I modified the k8s/examples/sfc-controller files according to my setup.
6. I followed the README guide at https://github.com/contiv/vpp/tree/master/k8s/examples/sfc-controller:
set-node-labels
kubectl apply -f sfc-controller.yaml
kubectl apply -f configMaps.yaml
kubectl apply -f vnf1.yaml
kubectl apply -f vnf2.yaml
kubectl apply -f vnf3.yaml
kubectl apply -f vnf4.yaml
stanislav@contivvpp:~/contivppnetwork/vpp$ kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
contiv-crd-4lrs7                    1/1     Running   0          3h51m
contiv-etcd-0                       1/1     Running   0          3h51m
contiv-ksr-hkjph                    1/1     Running   0          3h51m
contiv-sfc-controller-6rb4x         1/1     Running   0          72m
contiv-vswitch-sr8ck                1/1     Running   0          3h51m
contiv-vswitch-v9jsb                1/1     Running   0          3h51m
coredns-86c58d9df4-4cbgp            1/1     Running   0          11h
coredns-86c58d9df4-pp6q6            1/1     Running   0          11h
etcd-contivvpp                      1/1     Running   0          11h
kube-apiserver-contivvpp            1/1     Running   0          11h
kube-controller-manager-contivvpp   1/1     Running   0          11h
kube-proxy-c68pg                    1/1     Running   0          11h
kube-proxy-hcmqv                    1/1     Running   0          11h
kube-scheduler-contivvpp            1/1     Running   0          11h
stanislav@contivvpp:~/contivppnetwork/vpp$
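(One more check – whether the sfc-controller processed the applied config; the grep pattern for the pod name mirrors the etcd lookup used further down in this thread:)

sfcpod=`kubectl get pods -n kube-system --no-headers=true | grep contiv-sfc-controller | awk '{print $1}'`
kubectl logs -n kube-system $sfcpod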
7. I had an issue where the vnfX pods were pending. I had to issue these commands to fix it:
kubectl label nodes contivvpp role=affinity --overwrite=true
kubectl label nodes vppagent role=no-affinity --overwrite=true
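(To confirm the labels took effect:)

kubectl get nodes --show-labels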
8. In spite of this, the vnf pods are crashing:
stanislav@contivvpp:~/contivppnetwork/vpp$ kubectl get pods
NAME   READY   STATUS             RESTARTS   AGE
vnf1   0/1     Pending            0          45m
vnf2   0/1     CrashLoopBackOff   13         45m
vnf3   0/1     Pending            0          45m
vnf4   0/1     CrashLoopBackOff   13         44m
stanislav@contivvpp:~/contivppnetwork/vpp$
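(For debugging the crash-looping pods, the usual kubectl inspection applies – e.g. for vnf2:)

kubectl describe pod vnf2
kubectl logs vnf2 --previous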
stanislav@contivvpp:~/contivppnetwork/vpp$ cat /etc/vpp/contiv-vswitch.conf
unix {
  nodaemon
  cli-listen /run/vpp/cli.sock
  cli-no-pager
  coredump-size unlimited
  full-coredump
}
nat {
  endpoint-dependent
  translation hash buckets 1048576
  translation hash memory 268435456
  user hash buckets 1024
  max translations per user 10000
}
api-trace {
  on
  nitems 500
}
dpdk {
  dev 0002:01:00.2
  uio-driver vfio-pci
}
acl-plugin {
  use tuple merge 0
}
socksvr {
  default
}
stanislav@contivvpp:~/contivppnetwork/vpp$
stanislav@contivvpp:~/contivppnetwork/vpp$ cat k8s/contiv-vpp/values-latest.yaml
vswitch:
  image:
    tag: v2.1.3-98-g1e8734b51
cni:
  image:
    tag: v2.1.3-98-g1e8734b51
ksr:
  image:
    tag: v2.1.3-98-g1e8734b51
crd:
  image:
    tag: v2.1.3-98-g1e8734b51
stanislav@contivvpp:~/contivppnetwork/vpp$
I used ligato/sfc-controller-arm64 built on the dev branch and ligato/vpp-agent-arm64 built on the dev branch.
The reason why the VXLAN tunnels are not created is probably here: https://github.com/ligato/sfc-controller/issues/33
FYI https://github.com/contiv/vpp/tree/master/k8s/examples/sfc-controller/arm64 – I fixed the erroneous output to etcd manually. Original content in etcd:
etcdcontainer=`kubectl get pods -o wide --no-headers=true --include-uninitialized -n kube-system | grep "contiv-etcd" | awk '{print $1}' | grep 'contiv-etcd'`
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl get --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem --prefix="true" ""
...
...
/vnf-agent/contivvpp/config/vpp/l2/v2/xconnect/IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain1_CONN_1_FROM_contivvpp_vnf2_port1_TO_vppagent_vnf1_port1_VNI_5002
{"receive_interface":"IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain1_CONN_1_FROM_contivvpp_vnf2_port1_TO_vppagent_vnf1_port1_VNI_5002","transmit_interface":"IF_MEMIF_VSWITCH_vnf2_port1"}
/vnf-agent/contivvpp/config/vpp/l2/v2/xconnect/IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain2_CONN_1_FROM_contivvpp_vnf4_port1_TO_vppagent_vnf3_port1_VNI_5003
{"receive_interface":"IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain2_CONN_1_FROM_contivvpp_vnf4_port1_TO_vppagent_vnf3_port1_VNI_5003","transmit_interface":"IF_MEMIF_VSWITCH_vnf4_port1"}
/vnf-agent/contivvpp/config/vpp/v2/interfaces/IF_MEMIF_VSWITCH_vnf2_port1
{"name":"IF_MEMIF_VSWITCH_vnf2_port1","type":"MEMIF","enabled":true,"mtu":1500,"memif":{"master":true,"id":2,"socket_filename":"/var/run/contiv/memif_contivvpp.sock"}}
/vnf-agent/contivvpp/config/vpp/v2/interfaces/IF_MEMIF_VSWITCH_vnf4_port1
{"name":"IF_MEMIF_VSWITCH_vnf4_port1","type":"MEMIF","enabled":true,"mtu":1500,"memif":{"master":true,"id":4,"socket_filename":"/var/run/contiv/memif_contivvpp.sock"}}
/vnf-agent/contivvpp/config/vpp/v2/interfaces/IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain1_CONN_1_FROM_contivvpp_vnf2_port1_TO_vppagent_vnf1_port1_VNI_5002
{"name":"IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain1_CONN_1_FROM_contivvpp_vnf2_port1_TO_vppagent_vnf1_port1_VNI_5002","type":"VXLAN_TUNNEL","enabled":true,"vxlan":{"src_address":"192.168.40.31","dst_address":"192.168.40.30","vni":5002}}
/vnf-agent/contivvpp/config/vpp/v2/interfaces/IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain2_CONN_1_FROM_contivvpp_vnf4_port1_TO_vppagent_vnf3_port1_VNI_5003
{"name":"IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain2_CONN_1_FROM_contivvpp_vnf4_port1_TO_vppagent_vnf3_port1_VNI_5003","type":"VXLAN_TUNNEL","enabled":true,"vxlan":{"src_address":"192.168.40.31","dst_address":"192.168.40.30","vni":5003}}
/vnf-agent/contivvpp/config/vpp/v2/interfaces/IF_VXLAN_LOOPBACK_contivvpp
{"name":"IF_VXLAN_LOOPBACK_contivvpp","type":"SOFTWARE_LOOPBACK","enabled":true,"ip_addresses":["192.168.40.31/24"],"mtu":1500}
/vnf-agent/contivvpp/config/vpp/v2/route/vrf/0/dst/192.168.40.30/32/gw/192.168.16.1
{"dst_network":"192.168.40.30/32","next_hop_addr":"192.168.16.1","outgoing_interface":"VirtualFunctionEthernet1/0/2","preference":5}
/vnf-agent/vnf1/config/vpp/v2/interfaces/port1
{"name":"port1","type":"MEMIF","enabled":true,"phys_address":"02:00:00:00:00:01","ip_addresses":["10.0.1.1/24"],"mtu":1500,"memif":{"id":1,"socket_filename":"/var/run/contiv/memif_vppagent.sock"}}
/vnf-agent/vnf2/config/vpp/v2/interfaces/port1
{"name":"port1","type":"MEMIF","enabled":true,"phys_address":"02:00:00:00:00:02","ip_addresses":["10.0.1.2/24"],"mtu":1500,"memif":{"id":2,"socket_filename":"/var/run/contiv/memif_contivvpp.sock"}}
/vnf-agent/vnf3/config/vpp/v2/interfaces/port1
{"name":"port1","type":"MEMIF","enabled":true,"phys_address":"02:00:00:00:00:03","ip_addresses":["10.0.1.1/24"],"mtu":1500,"memif":{"id":3,"socket_filename":"/var/run/contiv/memif_vppagent.sock"}}
/vnf-agent/vnf4/config/vpp/v2/interfaces/port1
{"name":"port1","type":"MEMIF","enabled":true,"phys_address":"02:00:00:00:00:04","ip_addresses":["10.0.1.2/24"],"mtu":1500,"memif":{"id":4,"socket_filename":"/var/run/contiv/memif_contivvpp.sock"}}
/vnf-agent/vppagent/config/vpp/l2/v2/xconnect/IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain1_CONN_1_FROM_vppagent_vnf1_port1_TO_contivvpp_vnf2_port1_VNI_5002
{"receive_interface":"IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain1_CONN_1_FROM_vppagent_vnf1_port1_TO_contivvpp_vnf2_port1_VNI_5002","transmit_interface":"IF_MEMIF_VSWITCH_vnf1_port1"}
/vnf-agent/vppagent/config/vpp/l2/v2/xconnect/IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain2_CONN_1_FROM_vppagent_vnf3_port1_TO_contivvpp_vnf4_port1_VNI_5003
{"receive_interface":"IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain2_CONN_1_FROM_vppagent_vnf3_port1_TO_contivvpp_vnf4_port1_VNI_5003","transmit_interface":"IF_MEMIF_VSWITCH_vnf3_port1"}
/vnf-agent/vppagent/config/vpp/v2/interfaces/IF_MEMIF_VSWITCH_vnf1_port1
{"name":"IF_MEMIF_VSWITCH_vnf1_port1","type":"MEMIF","enabled":true,"mtu":1500,"memif":{"master":true,"id":1,"socket_filename":"/var/run/contiv/memif_vppagent.sock"}}
/vnf-agent/vppagent/config/vpp/v2/interfaces/IF_MEMIF_VSWITCH_vnf3_port1
{"name":"IF_MEMIF_VSWITCH_vnf3_port1","type":"MEMIF","enabled":true,"mtu":1500,"memif":{"master":true,"id":3,"socket_filename":"/var/run/contiv/memif_vppagent.sock"}}
/vnf-agent/vppagent/config/vpp/v2/interfaces/IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain1_CONN_1_FROM_vppagent_vnf1_port1_TO_contivvpp_vnf2_port1_VNI_5002
{"name":"IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain1_CONN_1_FROM_vppagent_vnf1_port1_TO_contivvpp_vnf2_port1_VNI_5002","type":"VXLAN_TUNNEL","enabled":true,"vxlan":{"src_address":"192.168.40.30","dst_address":"192.168.40.31","vni":5002}}
/vnf-agent/vppagent/config/vpp/v2/interfaces/IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain2_CONN_1_FROM_vppagent_vnf3_port1_TO_contivvpp_vnf4_port1_VNI_5003
{"name":"IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain2_CONN_1_FROM_vppagent_vnf3_port1_TO_contivvpp_vnf4_port1_VNI_5003","type":"VXLAN_TUNNEL","enabled":true,"vxlan":{"src_address":"192.168.40.30","dst_address":"192.168.40.31","vni":5003}}
/vnf-agent/vppagent/config/vpp/v2/interfaces/IF_VXLAN_LOOPBACK_vppagent
{"name":"IF_VXLAN_LOOPBACK_vppagent","type":"SOFTWARE_LOOPBACK","enabled":true,"ip_addresses":["192.168.40.30/24"],"mtu":1500}
/vnf-agent/vppagent/config/vpp/v2/route/vrf/0/dst/192.168.40.31/32/gw/192.168.16.2
{"dst_network":"192.168.40.31/32","next_hop_addr":"192.168.16.2","outgoing_interface":"VirtualFunctionEthernet1/0/2","preference":5}
...
...
And replaced it with the correct content:
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/contivvpp/config/vpp/l2/v2/xconnect/IF_MEMIF_VSWITCH_vnf2_port1 '{"receive_interface":"IF_MEMIF_VSWITCH_vnf2_port1","transmit_interface":"C1FW21TO11_5002"}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/contivvpp/config/vpp/l2/v2/xconnect/IF_MEMIF_VSWITCH_vnf4_port1 '{"receive_interface":"IF_MEMIF_VSWITCH_vnf4_port1","transmit_interface":"C1FW41TO31_5003"}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/contivvpp/config/vpp/l2/v2/xconnect/C1FW21TO11_5002 '{"receive_interface":"C1FW21TO11_5002","transmit_interface":"IF_MEMIF_VSWITCH_vnf2_port1"}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/contivvpp/config/vpp/l2/v2/xconnect/C1FW41TO31_5003 '{"receive_interface":"C1FW41TO31_5003","transmit_interface":"IF_MEMIF_VSWITCH_vnf4_port1"}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/contivvpp/config/vpp/v2/interfaces/IF_MEMIF_VSWITCH_vnf2_port1 '{"name":"IF_MEMIF_VSWITCH_vnf2_port1","type":"MEMIF","enabled":true,"mtu":1500,"memif":{"master":true,"id":2,"socket_filename":"/var/run/contiv/memif_contivvpp.sock"}}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/contivvpp/config/vpp/v2/interfaces/IF_MEMIF_VSWITCH_vnf4_port1 '{"name":"IF_MEMIF_VSWITCH_vnf4_port1","type":"MEMIF","enabled":true,"mtu":1500,"memif":{"master":true,"id":4,"socket_filename":"/var/run/contiv/memif_contivvpp.sock"}}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/contivvpp/config/vpp/v2/interfaces/C1FW21TO11_5002 '{"name":"C1FW21TO11_5002","type":"VXLAN_TUNNEL","enabled":true,"vxlan":{"src_address":"192.168.40.31","dst_address":"192.168.40.30","vni":5002}}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/contivvpp/config/vpp/v2/interfaces/C1FW41TO31_5003 '{"name":"C1FW41TO31_5003","type":"VXLAN_TUNNEL","enabled":true,"vxlan":{"src_address":"192.168.40.31","dst_address":"192.168.40.30","vni":5003}}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/contivvpp/config/vpp/v2/interfaces/IF_VXLAN_LOOPBACK_contivvpp '{"name":"IF_VXLAN_LOOPBACK_contivvpp","type":"SOFTWARE_LOOPBACK","enabled":true,"ip_addresses":["192.168.40.31/24"],"mtu":1500}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/contivvpp/config/vpp/v2/route/vrf/0/dst/192.168.40.30/32/gw/192.168.16.1 '{"dst_network":"192.168.40.30/32","next_hop_addr":"192.168.16.1","outgoing_interface":"VirtualFunctionEthernet1/0/2","preference":5}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/vnf1/config/vpp/v2/interfaces/port1 '{"name":"port1","type":"MEMIF","enabled":true,"phys_address":"02:00:00:00:00:01","ip_addresses":["10.0.1.1/24"],"mtu":1500,"memif":{"id":1,"socket_filename":"/var/run/contiv/memif_vppagent.sock"}}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/vnf2/config/vpp/v2/interfaces/port1 '{"name":"port1","type":"MEMIF","enabled":true,"phys_address":"02:00:00:00:00:02","ip_addresses":["10.0.1.2/24"],"mtu":1500,"memif":{"id":2,"socket_filename":"/var/run/contiv/memif_contivvpp.sock"}}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/vnf3/config/vpp/v2/interfaces/port1 '{"name":"port1","type":"MEMIF","enabled":true,"phys_address":"02:00:00:00:00:03","ip_addresses":["10.0.1.1/24"],"mtu":1500,"memif":{"id":3,"socket_filename":"/var/run/contiv/memif_vppagent.sock"}}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/vnf4/config/vpp/v2/interfaces/port1 '{"name":"port1","type":"MEMIF","enabled":true,"phys_address":"02:00:00:00:00:04","ip_addresses":["10.0.1.2/24"],"mtu":1500,"memif":{"id":4,"socket_filename":"/var/run/contiv/memif_contivvpp.sock"}}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/vppagent/config/vpp/l2/v2/xconnect/IF_MEMIF_VSWITCH_vnf1_port1 '{"receive_interface":"IF_MEMIF_VSWITCH_vnf1_port1","transmit_interface":"C1FM11TO21_5002"}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/vppagent/config/vpp/l2/v2/xconnect/IF_MEMIF_VSWITCH_vnf3_port1 '{"receive_interface":"IF_MEMIF_VSWITCH_vnf3_port1","transmit_interface":"C1FM31TO41_5003"}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/vppagent/config/vpp/l2/v2/xconnect/C1FM11TO21_5002 '{"receive_interface":"C1FM11TO21_5002","transmit_interface":"IF_MEMIF_VSWITCH_vnf1_port1"}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/vppagent/config/vpp/l2/v2/xconnect/C1FM31TO41_5003 '{"receive_interface":"C1FM31TO41_5003","transmit_interface":"IF_MEMIF_VSWITCH_vnf3_port1"}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/vppagent/config/vpp/v2/interfaces/IF_MEMIF_VSWITCH_vnf1_port1 '{"name":"IF_MEMIF_VSWITCH_vnf1_port1","type":"MEMIF","enabled":true,"mtu":1500,"memif":{"master":true,"id":1,"socket_filename":"/var/run/contiv/memif_vppagent.sock"}}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/vppagent/config/vpp/v2/interfaces/IF_MEMIF_VSWITCH_vnf3_port1 '{"name":"IF_MEMIF_VSWITCH_vnf3_port1","type":"MEMIF","enabled":true,"mtu":1500,"memif":{"master":true,"id":3,"socket_filename":"/var/run/contiv/memif_vppagent.sock"}}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/vppagent/config/vpp/v2/interfaces/C1FM11TO21_5002 '{"name":"C1FM11TO21_5002","type":"VXLAN_TUNNEL","enabled":true,"vxlan":{"src_address":"192.168.40.30","dst_address":"192.168.40.31","vni":5002}}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/vppagent/config/vpp/v2/interfaces/C1FM31TO41_5003 '{"name":"C1FM31TO41_5003","type":"VXLAN_TUNNEL","enabled":true,"vxlan":{"src_address":"192.168.40.30","dst_address":"192.168.40.31","vni":5003}}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/vppagent/config/vpp/v2/interfaces/IF_VXLAN_LOOPBACK_vppagent '{"name":"IF_VXLAN_LOOPBACK_vppagent","type":"SOFTWARE_LOOPBACK","enabled":true,"ip_addresses":["192.168.40.30/24"],"mtu":1500}'
kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl put --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem /vnf-agent/vppagent/config/vpp/v2/route/vrf/0/dst/192.168.40.31/32/gw/192.168.16.2 '{"dst_network":"192.168.40.31/32","next_hop_addr":"192.168.16.2","outgoing_interface":"VirtualFunctionEthernet1/0/2","preference":5}'
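(After re-writing the keys, one way to confirm the change took effect – reusing the $etcdcontainer variable from above – is to dump the relevant prefix again and check the interfaces on the vswitch:)

kubectl exec -i --tty -n kube-system $etcdcontainer -- etcdctl get --endpoints=127.0.0.1:12379 --key /var/contiv/etcd-secrets/client-key.pem --cert /var/contiv/etcd-secrets/client.pem --cacert=/var/contiv/etcd-secrets/ca.pem --prefix="true" "/vnf-agent/contivvpp/config/vpp/v2/interfaces/"
sudo vppctl show interface
sudo vppctl show vxlan tunnel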
Corrected errors:
1. Long VXLAN names such as IF_VXLAN_L2PP_NET_SRVC_l2pp_service_chain2_CONN_1_FROM_vppagent_vnf3_port1_TO_contivvpp_vnf4_port1_VNI_5003 were replaced by C1FM31TO41_5003 (where FM=master, FW=worker, 31=vnf3_port1, etc.).
2. Prefixes were omitted in the definitions – see the values under these keys:
/vnf-agent/vppagent/config/vpp/v2/route/vrf/0/dst/192.168.40.31/32/gw/192.168.16.2
/vnf-agent/vppagent/config/vpp/v2/interfaces/IF_VXLAN_LOOPBACK_vppagent
/vnf-agent/contivvpp/config/vpp/v2/interfaces/IF_VXLAN_LOOPBACK_contivvpp
/vnf-agent/contivvpp/config/vpp/v2/route/vrf/0/dst/192.168.40.30/32/gw/192.168.16.1
"dst_network":"192.168.40.31" should be "dst_network":"192.168.40.31/32" "ip_addresses":["192.168.40.30"], should be "ip_addresses":["192.168.40.30/24"],
I have the same problem as @Jingzhao123. Do you have any suggestions? I have changed the sfc-controller.yaml as indicated in https://github.com/ligato/sfc-controller/issues/33, but with no good result.
Is there any solution to fix this problem in the SFC controller source? I have applied the manual fix from @stanislav-chlebec, but I still get the long VXLAN names and the VNFs crash (CrashLoopBackOff). Which file in the SFC source code is responsible for the generated VXLAN names, please?
Hi, we are slowly moving towards implementing service chaining for CNFs directly in Contiv-VPP, which means that an external SFC Controller will no longer be needed. If you would like to contribute, feel free to join the Contiv slack channel and ping me there:
https://join.slack.com/t/contivvpp/shared_invite/enQtNTc3OTE5ODkwODk3LWQxZDQ1MGQ3MzE4MDI3OGVkMDU4MjliMDcxODYwYjliMDZhMGFlY2MxMDA5MWQwZDRlMzJjZDBlMWYzNWJhNWY
The SFC controller demo is now superseded by the SFC functionality built into the Contiv-VPP CNI itself. It is still a work in progress, but it already does something. Take a look at https://github.com/contiv/vpp/tree/master/k8s/examples/sfc