deployments-k8s
Having an issue while trying the nse-composition example (using NSM v1.5.0 and kind version 0.14.0)
I am trying to run the nse-composition example (v1.5.0), but the alpine pod is stuck in the Init state. It looks like a permission issue. Please let me know if I'm missing anything.
test@test-virtual-machine:~/source/nsm/tests/compos/1$ kubectl apply -k .
configmap/nginx-config-b9f75kh6cm created
configmap/vppagent-firewall-config-file created
deployment.apps/nse-firewall-vpp created
deployment.apps/nse-kernel created
deployment.apps/nse-passthrough-1 created
deployment.apps/nse-passthrough-2 created
deployment.apps/nse-passthrough-3 created
networkservice.networkservicemesh.io/nse-composition created
pod/alpine created
test@test-virtual-machine:~/source/nsm/tests/compos/1$ ls
client.yaml kustomization.yaml nse-composition-ns.yaml patch-nse.yaml
test@test-virtual-machine:~/source/nsm/tests/compos/1$ ls -la
total 24
drwxrwxr-x 2 test test 4096 Oct 5 00:43 .
drwxrwxr-x 3 test test 4096 Oct 4 23:33 ..
-rw-rw-r-- 1 test test 307 Oct 5 00:44 client.yaml
-rw-rw-r-- 1 test test 1201 Oct 5 00:44 kustomization.yaml
-rw-rw-r-- 1 test test 702 Oct 4 23:40 nse-composition-ns.yaml
-rw-rw-r-- 1 test test 892 Oct 5 00:44 patch-nse.yaml
test@test-virtual-machine:~/source/nsm/tests/compos/1$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-84xqw 1/1 Running 0 104m
kube-system coredns-6d4b75cb6d-mvkr2 1/1 Running 0 104m
kube-system etcd-kind-control-plane 1/1 Running 0 104m
kube-system kindnet-2bmnk 1/1 Running 0 104m
kube-system kindnet-645l4 1/1 Running 0 104m
kube-system kindnet-gvd8b 1/1 Running 0 104m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 104m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 104m
kube-system kube-proxy-2tmq7 1/1 Running 0 104m
kube-system kube-proxy-8fwv5 1/1 Running 0 104m
kube-system kube-proxy-mmstk 1/1 Running 0 104m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 104m
local-path-storage local-path-provisioner-9cd9bd544-xkx5p 1/1 Running 0 104m
ns-k88mw alpine 0/2 Init:0/1 0 73s
ns-k88mw nse-firewall-vpp-5c78d5c885-c4p95 1/1 Running 0 74s
ns-k88mw nse-kernel-55c487ff76-gllsf 2/2 Running 0 74s
ns-k88mw nse-passthrough-1-85d46bf654-47jft 1/1 Running 0 73s
ns-k88mw nse-passthrough-2-845775f7c5-4fw2v 1/1 Running 0 73s
ns-k88mw nse-passthrough-3-86c7fff64-gj89k 1/1 Running 0 73s
nsm-system admission-webhook-k8s-8595d6df88-72jp4 1/1 Running 0 74m
nsm-system forwarder-vpp-68whc 1/1 Running 0 74m
nsm-system forwarder-vpp-zjhk4 1/1 Running 0 74m
nsm-system nsmgr-9762d 2/2 Running 0 74m
nsm-system nsmgr-mvjmc 2/2 Running 0 74m
nsm-system registry-8477565b8d-6rcq7 1/1 Running 0 74m
spire spire-agent-65x5c 1/1 Running 0 103m
spire spire-agent-nhnzc 1/1 Running 0 103m
spire spire-server-0 2/2 Running 0 103m
test@test-virtual-machine:~/source/nsm/tests/compos/1$ kind --version
kind version 0.14.0
test@test-virtual-machine:~/source/nsm/tests/compos/1$ kubectl describe pods alpine -n ns-k88mw
Name: alpine
Namespace: ns-k88mw
Priority: 0
Node: kind-worker/172.18.0.4
Start Time: Wed, 05 Oct 2022 00:45:10 +0530
Labels: app=alpine
spiffe.io/spiffe-id=true
Annotations: networkservicemesh.io: kernel://nse-composition/nsm-1
Status: Pending
IP: 10.244.2.16
IPs:
IP: 10.244.2.16
Init Containers:
cmd-nsc-init:
Container ID: containerd://31086f02b4239da6da7d419662c4282328f7b7402908292d1fc9d1768b93388d
Image: ghcr.io/networkservicemesh/cmd-nsc-init:v1.5.0
Image ID: ghcr.io/networkservicemesh/cmd-nsc-init@sha256:b0e9e2b40a82a68a0c510e7865817ee66913a54b14b56c758cac31c545abd264
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 05 Oct 2022 00:45:12 +0530
Ready: False
Restart Count: 0
Environment:
NSM_LOG_LEVEL: TRACE
SPIFFE_ENDPOINT_SOCKET: unix:///run/spire/sockets/agent.sock
NSM_NAME: alpine (v1:metadata.name)
POD_NAME: alpine (v1:metadata.name)
NSM_NETWORK_SERVICES: kernel://nse-composition/nsm-1
Mounts:
/run/spire/sockets from spire-agent-socket (ro)
/var/lib/networkservicemesh from nsm-socket (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9mb6 (ro)
Containers:
alpine:
Container ID:
Image: alpine:3.15.0
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9mb6 (ro)
cmd-nsc:
Container ID:
Image: ghcr.io/networkservicemesh/cmd-nsc:v1.5.0
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
NSM_LOG_LEVEL: TRACE
SPIFFE_ENDPOINT_SOCKET: unix:///run/spire/sockets/agent.sock
NSM_NAME: alpine (v1:metadata.name)
POD_NAME: alpine (v1:metadata.name)
NSM_NETWORK_SERVICES: kernel://nse-composition/nsm-1
Mounts:
/run/spire/sockets from spire-agent-socket (ro)
/var/lib/networkservicemesh from nsm-socket (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9mb6 (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-f9mb6:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
spire-agent-socket:
Type: HostPath (bare host directory volume)
Path: /run/spire/sockets
HostPathType: Directory
nsm-socket:
Type: HostPath (bare host directory volume)
Path: /var/lib/networkservicemesh
HostPathType: Directory
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 14m kubelet Container image "ghcr.io/networkservicemesh/cmd-nsc-init:v1.5.0" already present on machine
Normal Created 14m kubelet Created container cmd-nsc-init
Normal Started 14m kubelet Started container cmd-nsc-init
test@test-virtual-machine:~/source/nsm/tests/compos/1$ kubectl get services -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 143m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 143m
nsm-system admission-webhook-svc ClusterIP 10.96.166.209 <none> 443/TCP 112m
nsm-system registry LoadBalancer 10.96.237.47 <pending> 5002:30040/TCP 112m
spire k8s-workload-registrar ClusterIP 10.96.135.118 <none> 443/TCP 141m
spire spire-server LoadBalancer 10.96.69.82 <pending> 8081:30360/TCP,8443:30287/TCP 141m
test@test-virtual-machine:~/source/nsm/tests/compos/1$ kubectl logs alpine -c cmd-nsc-init -n ns-k88mw | more
Oct 5 02:24:43.726^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(1) ⎆ sdk/pkg/networkservice/common/updatepath/updatePathClient.Request()
Oct 5 02:24:43.726^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(1.1) request={"connection":{"id":"alpine-0","network_service":"nse-composition"},"mechanism_preferences":[{"cls":"LOCAL","type":"KERNEL","parameters":{"name":"nsm-1"}}]}
Oct 5 02:24:43.726^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(1.2) request-diff={"connection":{"path":{"path_segments":{"+0":{"name":"alpine","id":"alpine-0"}}}}}
Oct 5 02:24:43.727^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(2) ⎆ sdk/pkg/networkservice/common/begin/beginClient.Request()
Oct 5 02:24:43.727^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(3) ⎆ sdk/pkg/networkservice/utils/metadata/metaDataClient.Request()
Oct 5 02:24:43.727^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(4) ⎆ sdk/pkg/networkservice/common/clientinfo/clientInfo.Request()
Oct 5 02:24:43.727^[[33m [WARN] [id:alpine-0] [type:networkService] ^[[0m(4.1) Environment variable CLUSTER_NAME is not set. Skipping.
Oct 5 02:24:43.728^[[33m [WARN] [id:alpine-0] [type:networkService] ^[[0m(4.2) Environment variable NODE_NAME is not set. Skipping.
Oct 5 02:24:43.728^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(4.3) request-diff={"connection":{"labels":{"+podName":"alpine"}}}
Oct 5 02:24:43.728^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(5) ⎆ sdk-sriov/pkg/networkservice/common/token/multitoken/tokenClient.Request()
Oct 5 02:24:43.728^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(6) ⎆ sdk/pkg/networkservice/common/mechanisms/mechanismsClient.Request()
Oct 5 02:24:43.728^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(7) ⎆ sdk/pkg/networkservice/core/next/nextClient.Request()
Oct 5 02:24:43.729^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(8) ⎆ sdk/pkg/networkservice/common/mechanisms/kernel/kernelMechanismClient.Request()
Oct 5 02:24:43.729^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(8.1) request-diff={"mechanism_preferences":{"0":{"parameters":{"+inodeURL":"file:///proc/thread-self/ns/net"}}}}
Oct 5 02:24:43.729^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(9) ⎆ sdk/pkg/networkservice/common/authorize/authorizeClient.Request()
Oct 5 02:24:43.729^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(10) ⎆ sdk/pkg/networkservice/common/mechanisms/sendfd/sendFDClient.Request()
Oct 5 02:24:43.730^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(10.1) request-diff={"mechanism_preferences":{"0":{"parameters":{"inodeURL":"inode://4/4026534295"}}}}
Oct 5 02:24:43.730^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(11) ⎆ sdk/pkg/networkservice/common/excludedprefixes/excludedPrefixesClient.Request()
Oct 5 02:24:43.730^[[37m [TRAC] [id:alpine-0] [type:networkService] ^[[0m(12) ⎆ api/pkg/api/networkservice/networkServiceClient.Request()
Oct 5 02:24:43.738^[[31m [ERRO] [id:alpine-0] [type:networkService] ^[[0m(12.1) rpc error: code = Unknown desc = Error returned from sdk/pkg/networkservice/common/authorize/authorizeServer.Request: rpc error: code = PermissionDenied desc = no sufficient privileges; Error returned from api/pkg/api/networkservice/networkServiceClient.Request; github.com/networkservicemesh/sdk/pkg/networkservice/core/trace.logError; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/trace/common.go:206; github.com/networkservicemesh/sdk/pkg/networkservice/core/trace.(*beginTraceClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/trace/client.go:57; github.com/networkservicemesh/sdk/pkg/networkservice/core/next.(*nextClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/next/client.go:60; github.com/networkservicemesh/sdk/pkg/networkservice/core/next.(*nextClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/next/client.go:60; github.com/networkservicemesh/sdk/pkg/networkservice/core/trace.(*endTraceClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/trace/client.go:81; github.com/networkservicemesh/sdk/pkg/networkservice/core/next.(*nextClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/next/client.go:60; github.com/networkservicemesh/sdk/pkg/networkservice/common/excludedprefixes.(*excludedPrefixesClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/common/excludedprefixes/client.go:113; github.com/networkservicemesh/sdk/pkg/networkservice/core/trace.(*beginTraceClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/trace/client.go:55; github.com/networkservicemesh/sdk/pkg/networkservice/core/next.(*nextClient).Request; 
/go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/next/client.go:60; github.com/networkservicemesh/sdk/pkg/networkservice/core/next.(*nextClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/next/client.go:60; github.com/networkservicemesh/sdk/pkg/networkservice/core/trace.(*endTraceClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/trace/client.go:81; github.com/networkservicemesh/sdk/pkg/networkservice/core/next.(*nextClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/next/client.go:60; github.com/networkservicemesh/sdk/pkg/networkservice/common/mechanisms/sendfd.(*sendFDClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/common/mechanisms/sendfd/client.go:55; github.com/networkservicemesh/sdk/pkg/networkservice/core/trace.(*beginTraceClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/trace/client.go:55; github.com/networkservicemesh/sdk/pkg/networkservice/core/next.(*nextClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/next/client.go:60; github.com/networkservicemesh/sdk/pkg/networkservice/core/next.(*nextClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/next/client.go:60; github.com/networkservicemesh/sdk/pkg/networkservice/core/trace.(*endTraceClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/trace/client.go:81; github.com/networkservicemesh/sdk/pkg/networkservice/core/next.(*nextClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/next/client.go:60; github.com/networkservicemesh/sdk/pkg/networkservice/common/authorize.(*authorizeClient).Request; 
/go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/common/authorize/client.go:68; github.com/networkservicemesh/sdk/pkg/networkservice/core/trace.(*beginTraceClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/trace/client.go:55; github.com/networkservicemesh/sdk/pkg/networkservice/core/next.(*nextClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/next/client.go:60; github.com/networkservicemesh/sdk/pkg/networkservice/core/next.(*nextClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/next/client.go:60; github.com/networkservicemesh/sdk/pkg/networkservice/core/trace.(*endTraceClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/trace/client.go:81; github.com/networkservicemesh/sdk/pkg/networkservice/core/next.(*nextClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/next/client.go:60; github.com/networkservicemesh/sdk/pkg/networkservice/core/trace.(*endTraceClient).Request; /go/pkg/mod/github.com/networkservicemesh/[email protected]/pkg/networkservice/core/trace/client.go:81; github.c
test@test-virtual-machine:~/source/nsm/tests/compos/1$ cat client.yaml
---
apiVersion: v1
kind: Pod
metadata:
name: alpine
labels:
app: alpine
annotations:
networkservicemesh.io: kernel://nse-composition/nsm-1
spec:
containers:
- name: alpine
image: alpine:3.15.0
imagePullPolicy: IfNotPresent
stdin: true
tty: true
nodeName: kind-worker
@srini38 Thanks for reporting the issue! A couple of questions:
- How often do you see it? Had you run any other tests before?
- If you have a chance to repeat this again, could you also attach the logs from the forwarders and nsmgrs?
@glazychev-art Thank you for the response. I always see this issue; I even tried deleting the kind cluster and recreating it, without success. I had run basic tests before (e.g. https://github.com/networkservicemesh/deployments-k8s/blob/main/examples/use-cases/Memif2Memif).
Attaching the logs from the forwarder (kind-worker), alpine, and the nsmgr (kind-worker) as requested: forwarder-vpp-zjhk4.log nsmgr-mvjmc.log alpine.log
Looks like there is something wrong with spire.
Could you also add a describe of the alpine pod? And the kubectl version output.
Thanks
Attaching the describe of the alpine pod (kubectl describe pods alpine -n ns-k88mw) as requested: alpine-describe.log
test@test-virtual-machine:~/source/nsm/tests/compos/1$ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.4
Kustomize Version: v4.5.4
Server Version: v1.24.0
test@test-virtual-machine:~/source/nsm/tests/compos/1$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.4", GitCommit:"95ee5ab382d64cfe6c28967f36b53970b8374491", GitTreeState:"clean", BuildDate:"2022-08-17T18:54:23Z", GoVersion:"go1.18.5", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-19T15:39:43Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
I see strange expires values in the nsmgr logs:
Oct 6 12:08:24.301[37m [TRAC] [id:e111404c-e189-4ad7-85ea-83cc391bd05c] [type:networkService] [0m(3) ⎆ sdk/pkg/networkservice/common/updatetoken/updateTokenServer.Request()
Oct 6 12:08:24.301[37m [TRAC] [id:e111404c-e189-4ad7-85ea-83cc391bd05c] [type:networkService] [0m(3.1) request-diff={"connection":{"path":{"path_segments":{"0":{"expires":{"seconds":1664915538},"token":"eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJzcGlmZmU6Ly9leGFtcGxlLm9yZy9ucy9ucy1rODhtdy9wb2QvYWxwaW5lIiwiYXVkIjpbInNwaWZmZTovL2V4YW1wbGUub3JnL25zL25zbS1zeXN0ZW0vcG9kL25zbWdyLW12am1jIl0sImV4cCI6MTY2NTA1ODcwNH0.nYWrRc5OuCy_14H35hWyaKMyEhbTH_CpHOrH3keRPLd43zK3c-pH8mxNRqKeTLktyUma_z33AtUH7ZAs4XVebQ"},"1":{"expires":{"seconds":1664916368},"token":"eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJzcGlmZmU6Ly9leGFtcGxlLm9yZy9ucy9uc20tc3lzdGVtL3BvZC9uc21nci1tdmptYyIsImF1ZCI6WyJzcGlmZmU6Ly9leGFtcGxlLm9yZy9ucy9ucy1rODhtdy9wb2QvYWxwaW5lIl0sImV4cCI6MTY2NTA1ODcwNH0.tpopN56_r1LJ3N4d88zucK-OMlcW7ji1XTwQowxbJzP-gfyjzBdutjow1T6A6b_-gVwksoC2qtQ2dArH6whlyw"}}}}}
...
Oct 6 12:09:53.858[37m [TRAC] [id:a4022b63-0127-43b8-9841-571a8895d5e8] [type:networkService] [0m(3) ⎆ sdk/pkg/networkservice/common/updatetoken/updateTokenServer.Request()
Oct 6 12:09:53.858[37m [TRAC] [id:a4022b63-0127-43b8-9841-571a8895d5e8] [type:networkService] [0m(3.1) request-diff={"connection":{"path":{"path_segments":{"0":{"expires":{"seconds":1664915538},"token":"eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJzcGlmZmU6Ly9leGFtcGxlLm9yZy9ucy9ucy1rODhtdy9wb2QvYWxwaW5lIiwiYXVkIjpbInNwaWZmZTovL2V4YW1wbGUub3JnL25zL25zbS1zeXN0ZW0vcG9kL25zbWdyLW12am1jIl0sImV4cCI6MTY2NTA1ODc5M30.pPx7i9B8yGIqlVzeXwuOIQPI9uGuJFANh42ypfCvzqdGlsQBhkhyGPvrwDTBHMvHFIpcbFczOmdchv7PcqP2rQ"},"1":{"expires":{"seconds":1664916368},"token":"eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJzcGlmZmU6Ly9leGFtcGxlLm9yZy9ucy9uc20tc3lzdGVtL3BvZC9uc21nci1tdmptYyIsImF1ZCI6WyJzcGlmZmU6Ly9leGFtcGxlLm9yZy9ucy9ucy1rODhtdy9wb2QvYWxwaW5lIl0sImV4cCI6MTY2NTA1ODc5M30.RCU9itI_Hj2jTduJ3SkEqOyd9CTp3zodrG04deUtWfGlZohZmTRvSPwwHpUzkFYlmCmc9PRMR4S6Cnm6WjA1zA"}}}}}
...
- They don't change on every retry.
- They are too different for the client and the manager in the Path (1664915538 and 1664916368).
Most likely this is due to the spire certificates.
@denis-tingaikin Do you think this parameter can affect it? https://github.com/networkservicemesh/deployments-k8s/blob/main/examples/spire/server.conf#L9
Any thoughts?
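One quick way to sanity-check those tokens is to decode the JWT payload by hand and compare the exp claims on both sides. A minimal sketch in plain POSIX shell (jwt_payload is my own helper name, not part of NSM; the token itself would be pasted from the nsmgr log):

```shell
#!/bin/sh
# Print the payload (second dot-separated segment) of a JWT.
# JWT segments are base64url-encoded without padding, so convert the
# alphabet back to standard base64 and re-pad before decoding.
jwt_payload() {
    seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
    while [ $(( ${#seg} % 4 )) -ne 0 ]; do
        seg="${seg}="
    done
    printf '%s' "$seg" | base64 -d
}

# Usage: paste a token from the request-diff above and look at "exp":
#   jwt_payload "eyJhbGciOiJFUzI1NiIs..."
```

Running this over the path-segment tokens from both the client and the nsmgr should make any TTL/expiry skew between the two sides obvious.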
@srini38
Could you also recreate the kind cluster and deploy NSM v1.6.0?
If the problem persists, please attach a cluster-info dump and pods describe:
kubectl cluster-info dump -A --output-directory=/path/to/cluster-state
kubectl describe pods -A > describePods.log
@glazychev-art
Looks like tag v1.6.0 does not exist yet for the integration-k8s-kind repository:
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ curl https://raw.githubusercontent.com/networkservicemesh/integration-k8s-kind/v1.6.0/cluster-config.yaml
404: Not Found
@srini38
Thanks for pointing this out. For now, you can use the main branch.
@glazychev-art The alpine pod is no longer stuck in the Init state when NSM v1.6.0 is used. But now a ping between the NSC and the NSE is not working while testing the nse-composition example.
The nse-compos-* interface on the NSE side frequently disappears, and the NSC does not even have an nse-compos-* interface.
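A tiny helper makes it easy to poll for the interface and timestamp when it vanishes. This is only a sketch (check_nsm_if and the polling loop are my own, not part of the example; it assumes ip is available in the target container, as it is in the alpine image via busybox):

```shell
#!/bin/sh
# check_nsm_if NAMESPACE POD PREFIX
# Succeeds (exit 0) if the pod currently has an interface whose name
# starts with PREFIX, e.g. the nse-compos-* interface injected by NSM.
check_nsm_if() {
    kubectl exec -n "$1" "$2" -- ip -o link show 2>/dev/null \
        | awk -F': ' '{print $2}' \
        | grep -q "^$3"
}

# Example: watch the NSC once a second to catch the interface disappearing.
#   while :; do
#       check_nsm_if ns-nse-composition alpine nse-compos \
#           && echo "$(date +%T) present" || echo "$(date +%T) MISSING"
#       sleep 1
#   done
```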
Full list of steps used:
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ curl -L https://raw.githubusercontent.com/networkservicemesh/integration-k8s-kind/main/cluster-config.yaml | kind create cluster --config -
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 118 100 118 0 0 637 0 --:--:-- --:--:-- --:--:-- 637
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.24.0) 🖼
✓ Preparing nodes 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a nice day! 👋
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-frwww 1/1 Running 0 21m
kube-system coredns-6d4b75cb6d-lr8t4 1/1 Running 0 21m
kube-system etcd-kind-control-plane 1/1 Running 0 21m
kube-system kindnet-6kr67 1/1 Running 0 20m
kube-system kindnet-jmsrx 1/1 Running 0 21m
kube-system kindnet-tbx7c 1/1 Running 0 20m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 21m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 21m
kube-system kube-proxy-49vhc 1/1 Running 0 21m
kube-system kube-proxy-g6xb8 1/1 Running 0 20m
kube-system kube-proxy-w89sj 1/1 Running 0 20m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 21m
local-path-storage local-path-provisioner-9cd9bd544-xcnjw 1/1 Running 0 20m
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/spire?ref=v1.6.0
namespace/spire created
customresourcedefinition.apiextensions.k8s.io/spiffeids.spiffeid.spiffe.io created
serviceaccount/spire-agent created
serviceaccount/spire-server created
clusterrole.rbac.authorization.k8s.io/k8s-workload-registrar-role created
clusterrole.rbac.authorization.k8s.io/spire-agent-cluster-role created
clusterrole.rbac.authorization.k8s.io/spire-server-trust-role created
clusterrolebinding.rbac.authorization.k8s.io/k8s-workload-registrar-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/spire-agent-cluster-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/spire-server-trust-role-binding created
configmap/k8s-workload-registrar created
configmap/spire-agent created
configmap/spire-bundle created
configmap/spire-server created
service/k8s-workload-registrar created
service/spire-server created
statefulset.apps/spire-server created
daemonset.apps/spire-agent created
validatingwebhookconfiguration.admissionregistration.k8s.io/k8s-workload-registrar created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-frwww 1/1 Running 0 32m
kube-system coredns-6d4b75cb6d-lr8t4 1/1 Running 0 32m
kube-system etcd-kind-control-plane 1/1 Running 0 32m
kube-system kindnet-6kr67 1/1 Running 0 31m
kube-system kindnet-jmsrx 1/1 Running 0 32m
kube-system kindnet-tbx7c 1/1 Running 0 31m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 32m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 32m
kube-system kube-proxy-49vhc 1/1 Running 0 32m
kube-system kube-proxy-g6xb8 1/1 Running 0 31m
kube-system kube-proxy-w89sj 1/1 Running 0 31m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 32m
local-path-storage local-path-provisioner-9cd9bd544-xcnjw 1/1 Running 0 31m
spire spire-agent-7rrmh 1/1 Running 0 65s
spire spire-agent-p8lw6 1/1 Running 0 65s
spire spire-server-0 2/2 Running 0 65s
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl create ns nsm-system
namespace/nsm-system created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/basic?ref=v1.6.0
customresourcedefinition.apiextensions.k8s.io/networkserviceendpoints.networkservicemesh.io created
customresourcedefinition.apiextensions.k8s.io/networkservices.networkservicemesh.io created
serviceaccount/admission-webhook-sa created
serviceaccount/nsmgr-sa created
serviceaccount/registry-k8s-sa created
clusterrole.rbac.authorization.k8s.io/admission-webhook-role created
clusterrole.rbac.authorization.k8s.io/nsmgr-binding-role created
clusterrole.rbac.authorization.k8s.io/registry-k8s-role created
clusterrolebinding.rbac.authorization.k8s.io/admission-webhook-binding created
clusterrolebinding.rbac.authorization.k8s.io/nsmgr-binding created
clusterrolebinding.rbac.authorization.k8s.io/registry-k8s-role-binding created
service/admission-webhook-svc created
service/registry created
deployment.apps/admission-webhook-k8s created
deployment.apps/registry-k8s created
daemonset.apps/forwarder-vpp created
daemonset.apps/nsmgr created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-frwww 1/1 Running 0 37m
kube-system coredns-6d4b75cb6d-lr8t4 1/1 Running 0 37m
kube-system etcd-kind-control-plane 1/1 Running 0 37m
kube-system kindnet-6kr67 1/1 Running 0 36m
kube-system kindnet-jmsrx 1/1 Running 0 37m
kube-system kindnet-tbx7c 1/1 Running 0 36m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 37m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 37m
kube-system kube-proxy-49vhc 1/1 Running 0 37m
kube-system kube-proxy-g6xb8 1/1 Running 0 36m
kube-system kube-proxy-w89sj 1/1 Running 0 36m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 37m
local-path-storage local-path-provisioner-9cd9bd544-xcnjw 1/1 Running 0 37m
nsm-system admission-webhook-k8s-665d7dcd85-l6lw4 1/1 Running 0 3m54s
nsm-system forwarder-vpp-tvkch 1/1 Running 0 3m54s
nsm-system forwarder-vpp-z5cjq 1/1 Running 0 3m54s
nsm-system nsmgr-jwnz6 2/2 Running 0 3m54s
nsm-system nsmgr-q4qb5 2/2 Running 0 3m54s
nsm-system registry-k8s-77945786d5-zxqgk 1/1 Running 0 3m54s
spire spire-agent-7rrmh 1/1 Running 0 6m17s
spire spire-agent-p8lw6 1/1 Running 0 6m17s
spire spire-server-0 2/2 Running 0 6m17s
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl create ns ns-nse-composition
namespace/ns-nse-composition created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/features/nse-composition?ref=v1.6.0
configmap/nginx-config-b9f75kh6cm created
configmap/vppagent-firewall-config-file created
deployment.apps/nse-firewall-vpp created
deployment.apps/nse-kernel created
deployment.apps/nse-passthrough-1 created
deployment.apps/nse-passthrough-2 created
deployment.apps/nse-passthrough-3 created
networkservice.networkservicemesh.io/nse-composition created
pod/alpine created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-frwww 1/1 Running 0 41m
kube-system coredns-6d4b75cb6d-lr8t4 1/1 Running 0 41m
kube-system etcd-kind-control-plane 1/1 Running 0 41m
kube-system kindnet-6kr67 1/1 Running 0 40m
kube-system kindnet-jmsrx 1/1 Running 0 41m
kube-system kindnet-tbx7c 1/1 Running 0 40m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 41m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 41m
kube-system kube-proxy-49vhc 1/1 Running 0 41m
kube-system kube-proxy-g6xb8 1/1 Running 0 40m
kube-system kube-proxy-w89sj 1/1 Running 0 40m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 41m
local-path-storage local-path-provisioner-9cd9bd544-xcnjw 1/1 Running 0 41m
ns-nse-composition alpine 0/2 Init:0/1 0 105s
ns-nse-composition nse-firewall-vpp-54ccb78894-fbwv5 0/1 ContainerCreating 0 107s
ns-nse-composition nse-kernel-9fff9b7bd-9tnmj 0/2 ContainerCreating 0 107s
ns-nse-composition nse-passthrough-1-565d79c99f-94wqj 0/1 ContainerCreating 0 106s
ns-nse-composition nse-passthrough-2-59b9d8f9c8-7hhc2 0/1 ContainerCreating 0 106s
ns-nse-composition nse-passthrough-3-6b4b7bc445-pbzjm 0/1 ContainerCreating 0 106s
nsm-system admission-webhook-k8s-665d7dcd85-l6lw4 1/1 Running 0 7m52s
nsm-system forwarder-vpp-tvkch 1/1 Running 0 7m52s
nsm-system forwarder-vpp-z5cjq 1/1 Running 0 7m52s
nsm-system nsmgr-jwnz6 2/2 Running 0 7m52s
nsm-system nsmgr-q4qb5 2/2 Running 0 7m52s
nsm-system registry-k8s-77945786d5-zxqgk 1/1 Running 0 7m52s
spire spire-agent-7rrmh 1/1 Running 0 10m
spire spire-agent-p8lw6 1/1 Running 0 10m
spire spire-server-0 2/2 Running 0 10m
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-frwww 1/1 Running 0 43m
kube-system coredns-6d4b75cb6d-lr8t4 1/1 Running 0 43m
kube-system etcd-kind-control-plane 1/1 Running 0 43m
kube-system kindnet-6kr67 1/1 Running 0 43m
kube-system kindnet-jmsrx 1/1 Running 0 43m
kube-system kindnet-tbx7c 1/1 Running 0 43m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 43m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 43m
kube-system kube-proxy-49vhc 1/1 Running 0 43m
kube-system kube-proxy-g6xb8 1/1 Running 0 43m
kube-system kube-proxy-w89sj 1/1 Running 0 43m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 43m
local-path-storage local-path-provisioner-9cd9bd544-xcnjw 1/1 Running 0 43m
ns-nse-composition alpine 0/2 Init:0/1 0 4m15s
ns-nse-composition nse-firewall-vpp-54ccb78894-fbwv5 1/1 Running 0 4m17s
ns-nse-composition nse-kernel-9fff9b7bd-9tnmj 0/2 ContainerCreating 0 4m17s
ns-nse-composition nse-passthrough-1-565d79c99f-94wqj 1/1 Running 0 4m16s
ns-nse-composition nse-passthrough-2-59b9d8f9c8-7hhc2 1/1 Running 0 4m16s
ns-nse-composition nse-passthrough-3-6b4b7bc445-pbzjm 1/1 Running 0 4m16s
nsm-system admission-webhook-k8s-665d7dcd85-l6lw4 1/1 Running 0 10m
nsm-system forwarder-vpp-tvkch 1/1 Running 0 10m
nsm-system forwarder-vpp-z5cjq 1/1 Running 0 10m
nsm-system nsmgr-jwnz6 2/2 Running 0 10m
nsm-system nsmgr-q4qb5 2/2 Running 0 10m
nsm-system registry-k8s-77945786d5-zxqgk 1/1 Running 0 10m
spire spire-agent-7rrmh 1/1 Running 0 12m
spire spire-agent-p8lw6 1/1 Running 0 12m
spire spire-server-0 2/2 Running 0 12m
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-frwww 1/1 Running 0 44m
kube-system coredns-6d4b75cb6d-lr8t4 1/1 Running 0 44m
kube-system etcd-kind-control-plane 1/1 Running 0 44m
kube-system kindnet-6kr67 1/1 Running 0 44m
kube-system kindnet-jmsrx 1/1 Running 0 44m
kube-system kindnet-tbx7c 1/1 Running 0 44m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 44m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 44m
kube-system kube-proxy-49vhc 1/1 Running 0 44m
kube-system kube-proxy-g6xb8 1/1 Running 0 44m
kube-system kube-proxy-w89sj 1/1 Running 0 44m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 44m
local-path-storage local-path-provisioner-9cd9bd544-xcnjw 1/1 Running 0 44m
ns-nse-composition alpine 0/2 Init:0/1 0 5m19s
ns-nse-composition nse-firewall-vpp-54ccb78894-fbwv5 1/1 Running 0 5m21s
ns-nse-composition nse-kernel-9fff9b7bd-9tnmj 2/2 Running 0 5m21s
ns-nse-composition nse-passthrough-1-565d79c99f-94wqj 1/1 Running 0 5m20s
ns-nse-composition nse-passthrough-2-59b9d8f9c8-7hhc2 1/1 Running 0 5m20s
ns-nse-composition nse-passthrough-3-6b4b7bc445-pbzjm 1/1 Running 0 5m20s
nsm-system admission-webhook-k8s-665d7dcd85-l6lw4 1/1 Running 0 11m
nsm-system forwarder-vpp-tvkch 1/1 Running 0 11m
nsm-system forwarder-vpp-z5cjq 1/1 Running 0 11m
nsm-system nsmgr-jwnz6 2/2 Running 0 11m
nsm-system nsmgr-q4qb5 2/2 Running 0 11m
nsm-system registry-k8s-77945786d5-zxqgk 1/1 Running 0 11m
spire spire-agent-7rrmh 1/1 Running 0 13m
spire spire-agent-p8lw6 1/1 Running 0 13m
spire spire-server-0 2/2 Running 0 13m
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-frwww 1/1 Running 0 48m
kube-system coredns-6d4b75cb6d-lr8t4 1/1 Running 0 48m
kube-system etcd-kind-control-plane 1/1 Running 0 48m
kube-system kindnet-6kr67 1/1 Running 0 47m
kube-system kindnet-jmsrx 1/1 Running 0 48m
kube-system kindnet-tbx7c 1/1 Running 0 47m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 48m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 48m
kube-system kube-proxy-49vhc 1/1 Running 0 48m
kube-system kube-proxy-g6xb8 1/1 Running 0 47m
kube-system kube-proxy-w89sj 1/1 Running 0 47m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 48m
local-path-storage local-path-provisioner-9cd9bd544-xcnjw 1/1 Running 0 48m
ns-nse-composition alpine 2/2 Running 0 8m40s
ns-nse-composition nse-firewall-vpp-54ccb78894-fbwv5 1/1 Running 0 8m42s
ns-nse-composition nse-kernel-9fff9b7bd-9tnmj 2/2 Running 0 8m42s
ns-nse-composition nse-passthrough-1-565d79c99f-94wqj 1/1 Running 0 8m41s
ns-nse-composition nse-passthrough-2-59b9d8f9c8-7hhc2 1/1 Running 0 8m41s
ns-nse-composition nse-passthrough-3-6b4b7bc445-pbzjm 1/1 Running 0 8m41s
nsm-system admission-webhook-k8s-665d7dcd85-l6lw4 1/1 Running 0 14m
nsm-system forwarder-vpp-tvkch 1/1 Running 0 14m
nsm-system forwarder-vpp-z5cjq 1/1 Running 0 14m
nsm-system nsmgr-jwnz6 2/2 Running 0 14m
nsm-system nsmgr-q4qb5 2/2 Running 0 14m
nsm-system registry-k8s-77945786d5-zxqgk 1/1 Running 0 14m
spire spire-agent-7rrmh 1/1 Running 0 17m
spire spire-agent-p8lw6 1/1 Running 0 17m
spire spire-server-0 2/2 Running 0 17m
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ NSC=$(kubectl get pods -l app=alpine -n ns-nse-composition --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ NSE=$(kubectl get pods -l app=nse-kernel -n ns-nse-composition --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl exec ${NSC} -n ns-nse-composition -- ping -c 4 172.16.1.100
Defaulted container "alpine" out of: alpine, cmd-nsc, cmd-nsc-init (init)
PING 172.16.1.100 (172.16.1.100): 56 data bytes
--- 172.16.1.100 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ echo $NSC
alpine
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ echo $NSE
nse-kernel-9fff9b7bd-9tnmj
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl exec ${NSC} -n ns-nse-composition -- wget -O /dev/null --timeout 5 "172.16.1.100:8080"
Defaulted container "alpine" out of: alpine, cmd-nsc, cmd-nsc-init (init)
Connecting to 172.16.1.100:8080 (172.16.1.100:8080)
wget: download timed out
command terminated with exit code 1
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl exec ${NSE} -n ns-nse-composition -- ping -c 4 172.16.1.101
Defaulted container "nse" out of: nse, nginx
PING 172.16.1.101 (172.16.1.101): 56 data bytes
--- 172.16.1.101 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl exec ${NSC} -n ns-nse-composition -- wget -O /dev/null --timeout 5 "172.16.1.100:80"
Defaulted container "alpine" out of: alpine, cmd-nsc, cmd-nsc-init (init)
Connecting to 172.16.1.100:80 (172.16.1.100:80)
wget: download timed out
command terminated with exit code 1
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ if [ 0 -eq $? ]; then
> echo "error: port :80 is available" >&2
> false
> else
> echo "success: port :80 is unavailable"
> fi
success: port :80 is unavailable
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-frwww 1/1 Running 0 72m
kube-system coredns-6d4b75cb6d-lr8t4 1/1 Running 0 72m
kube-system etcd-kind-control-plane 1/1 Running 0 72m
kube-system kindnet-6kr67 1/1 Running 0 71m
kube-system kindnet-jmsrx 1/1 Running 0 72m
kube-system kindnet-tbx7c 1/1 Running 0 71m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 72m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 72m
kube-system kube-proxy-49vhc 1/1 Running 0 72m
kube-system kube-proxy-g6xb8 1/1 Running 0 71m
kube-system kube-proxy-w89sj 1/1 Running 0 71m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 72m
local-path-storage local-path-provisioner-9cd9bd544-xcnjw 1/1 Running 0 72m
ns-nse-composition alpine 2/2 Running 0 32m
ns-nse-composition nse-firewall-vpp-54ccb78894-fbwv5 1/1 Running 0 32m
ns-nse-composition nse-kernel-9fff9b7bd-9tnmj 2/2 Running 0 32m
ns-nse-composition nse-passthrough-1-565d79c99f-94wqj 1/1 Running 0 32m
ns-nse-composition nse-passthrough-2-59b9d8f9c8-7hhc2 1/1 Running 1 (28s ago) 32m
ns-nse-composition nse-passthrough-3-6b4b7bc445-pbzjm 1/1 Running 0 32m
nsm-system admission-webhook-k8s-665d7dcd85-l6lw4 1/1 Running 0 38m
nsm-system forwarder-vpp-tvkch 1/1 Running 0 38m
nsm-system forwarder-vpp-z5cjq 1/1 Running 0 38m
nsm-system nsmgr-jwnz6 2/2 Running 0 38m
nsm-system nsmgr-q4qb5 2/2 Running 0 38m
nsm-system registry-k8s-77945786d5-zxqgk 1/1 Running 0 38m
spire spire-agent-7rrmh 1/1 Running 0 41m
spire spire-agent-p8lw6 1/1 Running 0 41m
spire spire-server-0 2/2 Running 0 41m
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl exec ${NSE} -n ns-nse-composition -- ping -c 4 172.16.1.101
Defaulted container "nse" out of: nse, nginx
PING 172.16.1.101 (172.16.1.101): 56 data bytes
--- 172.16.1.101 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl exec ${NSC} -n ns-nse-composition -- ping -c 4 172.16.1.100
Defaulted container "alpine" out of: alpine, cmd-nsc, cmd-nsc-init (init)
PING 172.16.1.100 (172.16.1.100): 56 data bytes
--- 172.16.1.100 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl exec ${NSC} -n ns-nse-composition -- wget -O /dev/null --timeout 5 "172.16.1.100:8080"
Defaulted container "alpine" out of: alpine, cmd-nsc, cmd-nsc-init (init)
Connecting to 172.16.1.100:8080 (172.16.1.100:8080)
wget: download timed out
command terminated with exit code 1
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl exec "alpine" -n "ns-nse-composition" -- ifconfig
Defaulted container "alpine" out of: alpine, cmd-nsc, cmd-nsc-init (init)
eth0 Link encap:Ethernet HWaddr B6:96:6C:2D:9D:CF
inet addr:10.244.1.7 Bcast:10.244.1.255 Mask:255.255.255.0
inet6 addr: fe80::b496:6cff:fe2d:9dcf/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4 errors:0 dropped:0 overruns:0 frame:0
TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:168 (168.0 B) TX bytes:2764 (2.6 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl exec "nse-kernel-9fff9b7bd-9tnmj" -n "ns-nse-composition" -- ifconfig
Defaulted container "nse" out of: nse, nginx
eth0 Link encap:Ethernet HWaddr 1E:64:17:AF:FF:0B
inet addr:10.244.1.5 Bcast:10.244.1.255 Mask:255.255.255.0
inet6 addr: fe80::1c64:17ff:feaf:ff0b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:42 (42.0 B) TX bytes:1384 (1.3 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:4 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:448 (448.0 B) TX bytes:448 (448.0 B)
nse-compos-75d3 Link encap:Ethernet HWaddr 02:FE:1E:C7:68:06
inet addr:172.16.1.100 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: fe80::fe:1eff:fec7:6806/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1446 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:270 (270.0 B)
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$ kubectl exec "nse-kernel-9fff9b7bd-9tnmj" -n "ns-nse-composition" -- ifconfig
Defaulted container "nse" out of: nse, nginx
eth0 Link encap:Ethernet HWaddr 1E:64:17:AF:FF:0B
inet addr:10.244.1.5 Bcast:10.244.1.255 Mask:255.255.255.0
inet6 addr: fe80::1c64:17ff:feaf:ff0b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:42 (42.0 B) TX bytes:1384 (1.3 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:4 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:448 (448.0 B) TX bytes:448 (448.0 B)
test@test-virtual-machine:~/source/nsm/tests/nsm1.6$
@srini38 That is interesting.
The NSC will have an nsm-1 interface if you use this configuration.
I have a suspicion about what is going on. Apparently the ping latency on your cluster is high enough that the NSC decides the ping is not working at all and tries to heal the connection.
To test this, you need to change the webhook configuration a bit:
...
- name: NSM_ENVS
value: NSM_LOG_LEVEL=TRACE,NSM_LIVENESSCHECKENABLED=false
...
https://github.com/networkservicemesh/deployments-k8s/blob/main/apps/admission-webhook-k8s/admission-webhook.yaml#L48-L49
Could you please check it out?
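For what it's worth, the same change could also be expressed as a small kustomize strategic-merge patch instead of editing admission-webhook.yaml by hand. This is only a sketch, assuming the deployment and its container are both named admission-webhook-k8s as in the linked manifest:

```yaml
# patch-webhook-envs.yaml - kustomize strategic-merge patch (sketch).
# Assumes the container name matches the deployment name; adjust if needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admission-webhook-k8s
spec:
  template:
    spec:
      containers:
        - name: admission-webhook-k8s
          env:
            - name: NSM_ENVS
              value: NSM_LOG_LEVEL=TRACE,NSM_LIVENESSCHECKENABLED=false
```

Referenced from kustomization.yaml via patchesStrategicMerge, this overrides only the NSM_ENVS entry and leaves the rest of the deployment untouched.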
So, you can:
1. Deploy spire
2. Deploy nsm-system
3. Download this directory and:
   - Add NSM_ENVS (as described above)
   - Change namespace: default -> namespace: nsm-system here
   - kubectl apply -k . (admission-webhook-k8s will be configured after that)
4. Deploy nse-composition
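Put together, the suggested order can be sketched as the following command sequence. This is only a sketch under the assumptions above: the local admission-webhook-k8s directory has already been edited (NSM_ENVS extended, namespace changed to nsm-system), and it sits next to the working directory.

```shell
# 1-2. Deploy spire and nsm-system (v1.6.0 manifests, as used in this thread)
kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/spire?ref=v1.6.0
kubectl create ns nsm-system
kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/basic?ref=v1.6.0

# 3. Reconfigure the webhook from the locally edited directory
#    (assumes NSM_LIVENESSCHECKENABLED=false and namespace: nsm-system are already set)
kubectl apply -k ./admission-webhook-k8s

# 4. Deploy the nse-composition example
kubectl create ns ns-nse-composition
kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/features/nse-composition?ref=v1.6.0
```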
@glazychev-art
Tried with a fresh kind cluster, but hit the error below while applying the recommended changes:
Error from server (InternalError): error when creating "https://github.com/networkservicemesh/deployments-k8s/examples/features/nse-composition?ref=v1.6.0": Internal error occurred: failed calling webhook "admission-webhook-k8s-665d7dcd85-c55t7.networkservicemesh.io": failed to call webhook: Post "https://admission-webhook-svc.nsm-system.svc:443/mutate?timeout=10s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "networkservicemesh.admission-webhook-svc-ca")
Error from server (InternalError): error when creating "https://github.com/networkservicemesh/deployments-k8s/examples/features/nse-composition?ref=v1.6.0": Internal error occurred: failed calling webhook "admission-webhook-k8s-665d7dcd85-c55t7.networkservicemesh.io": failed to call webhook: Post "https://admission-webhook-svc.nsm-system.svc:443/mutate?timeout=10s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "networkservicemesh.admission-webhook-svc-ca")
Error from server (Invalid): error when creating "https://github.com/networkservicemesh/deployments-k8s/examples/features/nse-composition?ref=v1.6.0": Pod "alpine" is invalid: [spec.volumes[3].name: Duplicate value: "spire-agent-socket", spec.volumes[4].name: Duplicate value: "nsm-socket", spec.containers[2].name: Duplicate value: "cmd-nsc", spec.initContainers[1].name: Duplicate value: "cmd-nsc-init"]
Detailed steps:
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ curl -L https://raw.githubusercontent.com/networkservicemesh/integration-k8s-kind/main/cluster-config.yaml | kind create cluster --config -
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 118 100 118 0 0 205 0 --:--:-- --:--:-- --:--:-- 205
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.24.0) 🖼
✓ Preparing nodes 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Thanks for using kind! 😊
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-g55p6 1/1 Running 0 7m31s
kube-system coredns-6d4b75cb6d-z97rc 1/1 Running 0 7m31s
kube-system etcd-kind-control-plane 1/1 Running 0 7m44s
kube-system kindnet-dsjxc 1/1 Running 0 7m8s
kube-system kindnet-f2ghw 1/1 Running 0 7m8s
kube-system kindnet-ntlk4 1/1 Running 0 7m30s
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 7m44s
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 7m44s
kube-system kube-proxy-278t2 1/1 Running 0 7m8s
kube-system kube-proxy-jckt2 1/1 Running 0 7m8s
kube-system kube-proxy-ms52c 1/1 Running 0 7m31s
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 7m44s
local-path-storage local-path-provisioner-9cd9bd544-h6mw8 1/1 Running 0 7m26s
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/spire?ref=v1.6.0
namespace/spire created
customresourcedefinition.apiextensions.k8s.io/spiffeids.spiffeid.spiffe.io created
serviceaccount/spire-agent created
serviceaccount/spire-server created
clusterrole.rbac.authorization.k8s.io/k8s-workload-registrar-role created
clusterrole.rbac.authorization.k8s.io/spire-agent-cluster-role created
clusterrole.rbac.authorization.k8s.io/spire-server-trust-role created
clusterrolebinding.rbac.authorization.k8s.io/k8s-workload-registrar-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/spire-agent-cluster-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/spire-server-trust-role-binding created
configmap/k8s-workload-registrar created
configmap/spire-agent created
configmap/spire-bundle created
configmap/spire-server created
service/k8s-workload-registrar created
service/spire-server created
statefulset.apps/spire-server created
daemonset.apps/spire-agent created
validatingwebhookconfiguration.admissionregistration.k8s.io/k8s-workload-registrar created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-g55p6 1/1 Running 0 9m21s
kube-system coredns-6d4b75cb6d-z97rc 1/1 Running 0 9m21s
kube-system etcd-kind-control-plane 1/1 Running 0 9m34s
kube-system kindnet-dsjxc 1/1 Running 0 8m58s
kube-system kindnet-f2ghw 1/1 Running 0 8m58s
kube-system kindnet-ntlk4 1/1 Running 0 9m20s
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 9m34s
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 9m34s
kube-system kube-proxy-278t2 1/1 Running 0 8m58s
kube-system kube-proxy-jckt2 1/1 Running 0 8m58s
kube-system kube-proxy-ms52c 1/1 Running 0 9m21s
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 9m34s
local-path-storage local-path-provisioner-9cd9bd544-h6mw8 1/1 Running 0 9m16s
spire spire-agent-6vl6q 0/1 Running 0 54s
spire spire-agent-qmpwt 0/1 Running 0 54s
spire spire-server-0 2/2 Running 0 54s
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ kubectl create ns nsm-system
namespace/nsm-system created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/basic?ref=v1.6.0
customresourcedefinition.apiextensions.k8s.io/networkserviceendpoints.networkservicemesh.io created
customresourcedefinition.apiextensions.k8s.io/networkservices.networkservicemesh.io created
serviceaccount/admission-webhook-sa created
serviceaccount/nsmgr-sa created
serviceaccount/registry-k8s-sa created
clusterrole.rbac.authorization.k8s.io/admission-webhook-role created
clusterrole.rbac.authorization.k8s.io/nsmgr-binding-role created
clusterrole.rbac.authorization.k8s.io/registry-k8s-role created
clusterrolebinding.rbac.authorization.k8s.io/admission-webhook-binding created
clusterrolebinding.rbac.authorization.k8s.io/nsmgr-binding created
clusterrolebinding.rbac.authorization.k8s.io/registry-k8s-role-binding created
service/admission-webhook-svc created
service/registry created
deployment.apps/admission-webhook-k8s created
deployment.apps/registry-k8s created
daemonset.apps/forwarder-vpp created
daemonset.apps/nsmgr created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-g55p6 1/1 Running 0 12m
kube-system coredns-6d4b75cb6d-z97rc 1/1 Running 0 12m
kube-system etcd-kind-control-plane 1/1 Running 0 12m
kube-system kindnet-dsjxc 1/1 Running 0 11m
kube-system kindnet-f2ghw 1/1 Running 0 11m
kube-system kindnet-ntlk4 1/1 Running 0 12m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 12m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 12m
kube-system kube-proxy-278t2 1/1 Running 0 11m
kube-system kube-proxy-jckt2 1/1 Running 0 11m
kube-system kube-proxy-ms52c 1/1 Running 0 12m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 12m
local-path-storage local-path-provisioner-9cd9bd544-h6mw8 1/1 Running 0 12m
nsm-system admission-webhook-k8s-665d7dcd85-c55t7 1/1 Running 0 2m4s
nsm-system forwarder-vpp-jwbbb 1/1 Running 0 2m3s
nsm-system forwarder-vpp-zt8cj 1/1 Running 0 2m3s
nsm-system nsmgr-72fj9 2/2 Running 0 2m3s
nsm-system nsmgr-lpvbt 2/2 Running 0 2m3s
nsm-system registry-k8s-77945786d5-hljxs 1/1 Running 0 2m4s
spire spire-agent-6vl6q 1/1 Running 0 3m54s
spire spire-agent-qmpwt 1/1 Running 0 3m54s
spire spire-server-0 2/2 Running 0 3m54s
Downloaded the https://github.com/networkservicemesh/deployments-k8s/tree/v1.6.0/apps/admission-webhook-k8s folder:
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ ls -ltrh
total 24K
-rw-rw-r-- 1 test test 210 Oct 19 16:24 service.yaml
-rw-rw-r-- 1 test test 79 Oct 19 16:24 sa.yaml
-rw-rw-r-- 1 test test 423 Oct 19 16:24 role.yaml
-rw-rw-r-- 1 test test 177 Oct 19 16:24 kustomization.yaml
-rw-rw-r-- 1 test test 278 Oct 19 16:24 binding.yaml
-rw-rw-r-- 1 test test 1.5K Oct 19 16:24 admission-webhook.yaml
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ diff -dub admission-webhook.yaml.orig admission-webhook.yaml
--- admission-webhook.yaml.orig 2022-10-19 21:57:19.884561417 +0530
+++ admission-webhook.yaml 2022-10-19 21:57:40.435996397 +0530
@@ -46,4 +46,4 @@
- name: NSM_LABELS
value: spiffe.io/spiffe-id:true
- name: NSM_ENVS
- value: NSM_LOG_LEVEL=TRACE
+ value: NSM_LOG_LEVEL=TRACE,NSM_LIVENESSCHECKENABLED=false
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ diff -dub kustomization.yaml.orig kustomization.yaml
--- kustomization.yaml.orig 2022-10-19 21:58:57.346015478 +0530
+++ kustomization.yaml 2022-10-19 21:59:20.337461145 +0530
@@ -9,4 +9,4 @@
- binding.yaml
- role.yaml
-namespace: default
+namespace: nsm-system
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ ls
admission-webhook.yaml binding.yaml kustomization.yaml.orig sa.yaml
admission-webhook.yaml.orig kustomization.yaml role.yaml service.yaml
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ rm *.orig
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ ls
admission-webhook.yaml binding.yaml kustomization.yaml role.yaml sa.yaml service.yaml
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl apply -k .
serviceaccount/admission-webhook-sa unchanged
clusterrole.rbac.authorization.k8s.io/admission-webhook-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/admission-webhook-binding unchanged
service/admission-webhook-svc unchanged
deployment.apps/admission-webhook-k8s configured
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-g55p6 1/1 Running 0 19m
kube-system coredns-6d4b75cb6d-z97rc 1/1 Running 0 19m
kube-system etcd-kind-control-plane 1/1 Running 0 19m
kube-system kindnet-dsjxc 1/1 Running 0 19m
kube-system kindnet-f2ghw 1/1 Running 0 19m
kube-system kindnet-ntlk4 1/1 Running 0 19m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 19m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 19m
kube-system kube-proxy-278t2 1/1 Running 0 19m
kube-system kube-proxy-jckt2 1/1 Running 0 19m
kube-system kube-proxy-ms52c 1/1 Running 0 19m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 19m
local-path-storage local-path-provisioner-9cd9bd544-h6mw8 1/1 Running 0 19m
nsm-system admission-webhook-k8s-665d7dcd85-c55t7 1/1 Running 0 9m20s
nsm-system admission-webhook-k8s-6f674f4fc4-v6md8 1/1 Running 0 28s
nsm-system forwarder-vpp-jwbbb 1/1 Running 0 9m19s
nsm-system forwarder-vpp-zt8cj 1/1 Running 0 9m19s
nsm-system nsmgr-72fj9 2/2 Running 0 9m19s
nsm-system nsmgr-lpvbt 2/2 Running 0 9m19s
nsm-system registry-k8s-77945786d5-hljxs 1/1 Running 0 9m20s
spire spire-agent-6vl6q 1/1 Running 0 11m
spire spire-agent-qmpwt 1/1 Running 0 11m
spire spire-server-0 2/2 Running 0 11m
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ kubectl create ns ns-nse-composition
namespace/ns-nse-composition created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/features/nse-composition?ref=v1.6.0
configmap/nginx-config-b9f75kh6cm created
configmap/vppagent-firewall-config-file created
deployment.apps/nse-firewall-vpp created
deployment.apps/nse-passthrough-1 created
deployment.apps/nse-passthrough-3 created
networkservice.networkservicemesh.io/nse-composition created
Error from server (InternalError): error when creating "https://github.com/networkservicemesh/deployments-k8s/examples/features/nse-composition?ref=v1.6.0": Internal error occurred: failed calling webhook "admission-webhook-k8s-665d7dcd85-c55t7.networkservicemesh.io": failed to call webhook: Post "https://admission-webhook-svc.nsm-system.svc:443/mutate?timeout=10s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "networkservicemesh.admission-webhook-svc-ca")
Error from server (InternalError): error when creating "https://github.com/networkservicemesh/deployments-k8s/examples/features/nse-composition?ref=v1.6.0": Internal error occurred: failed calling webhook "admission-webhook-k8s-665d7dcd85-c55t7.networkservicemesh.io": failed to call webhook: Post "https://admission-webhook-svc.nsm-system.svc:443/mutate?timeout=10s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "networkservicemesh.admission-webhook-svc-ca")
Error from server (Invalid): error when creating "https://github.com/networkservicemesh/deployments-k8s/examples/features/nse-composition?ref=v1.6.0": Pod "alpine" is invalid: [spec.volumes[3].name: Duplicate value: "spire-agent-socket", spec.volumes[4].name: Duplicate value: "nsm-socket", spec.containers[2].name: Duplicate value: "cmd-nsc", spec.initContainers[1].name: Duplicate value: "cmd-nsc-init"]
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$
@srini38 Sorry to keep you waiting. Thanks for the detailed information, it helps a lot.
OK, as I can see from the logs, you have 2 admission-webhooks. Could you delete the previous webhook before applying the new one?
So, please do one more thing:
...
2.A. Deploy nsm-system
2.B. kubectl delete deploy -n nsm-system admission-webhook-k8s
3. Download this directory and:
...
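Concretely, the amended middle steps could look something like this (a sketch, assuming the same locally edited admission-webhook-k8s directory as before):

```shell
# 2.A. Deploy nsm-system (v1.6.0 basic example)
kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/basic?ref=v1.6.0

# 2.B. Remove the webhook deployed by the basic example, so that only one
#      admission-webhook-k8s instance is registered against the cluster
kubectl delete deploy -n nsm-system admission-webhook-k8s

# 3. Apply the locally edited webhook (NSM_ENVS extended, namespace: nsm-system)
kubectl apply -k ./admission-webhook-k8s
```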
@glazychev-art Thank you for the support and patience, and sorry for the delay on my side. After deleting admission-webhook-k8s and redeploying it, the alpine pod is up. But the ping between the NSE and NSC still fails when testing the nse-composition example. The NSM interface is not seen in the alpine pod, and in the nse-kernel pod the interface is still flapping.
Full list of steps used:
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ curl -L https://raw.githubusercontent.com/networkservicemesh/integration-k8s-kind/main/cluster-config.yaml | kind create cluster --config -
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 118 100 118 0 0 230 0 --:--:-- --:--:-- --:--:-- 230
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.24.0) 🖼
✓ Preparing nodes 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a nice day! 👋
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-g4gjf 1/1 Running 0 65s
kube-system coredns-6d4b75cb6d-n8n4r 1/1 Running 0 65s
kube-system etcd-kind-control-plane 1/1 Running 0 79s
kube-system kindnet-cs562 1/1 Running 0 44s
kube-system kindnet-kcj4m 1/1 Running 0 45s
kube-system kindnet-lz2qg 1/1 Running 0 65s
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 79s
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 78s
kube-system kube-proxy-2lbqv 1/1 Running 0 65s
kube-system kube-proxy-bpcx6 1/1 Running 0 45s
kube-system kube-proxy-txjnm 1/1 Running 0 44s
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 79s
local-path-storage local-path-provisioner-9cd9bd544-spv77 1/1 Running 0 63s
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/spire?ref=v1.6.0
namespace/spire created
customresourcedefinition.apiextensions.k8s.io/spiffeids.spiffeid.spiffe.io created
serviceaccount/spire-agent created
serviceaccount/spire-server created
clusterrole.rbac.authorization.k8s.io/k8s-workload-registrar-role created
clusterrole.rbac.authorization.k8s.io/spire-agent-cluster-role created
clusterrole.rbac.authorization.k8s.io/spire-server-trust-role created
clusterrolebinding.rbac.authorization.k8s.io/k8s-workload-registrar-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/spire-agent-cluster-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/spire-server-trust-role-binding created
configmap/k8s-workload-registrar created
configmap/spire-agent created
configmap/spire-bundle created
configmap/spire-server created
service/k8s-workload-registrar created
service/spire-server created
statefulset.apps/spire-server created
daemonset.apps/spire-agent created
validatingwebhookconfiguration.admissionregistration.k8s.io/k8s-workload-registrar created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ namespace/spire created
bash: namespace/spire: No such file or directory
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-g4gjf 1/1 Running 0 3m27s
kube-system coredns-6d4b75cb6d-n8n4r 1/1 Running 0 3m27s
kube-system etcd-kind-control-plane 1/1 Running 0 3m41s
kube-system kindnet-cs562 1/1 Running 0 3m6s
kube-system kindnet-kcj4m 1/1 Running 0 3m7s
kube-system kindnet-lz2qg 1/1 Running 0 3m27s
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 3m41s
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 3m40s
kube-system kube-proxy-2lbqv 1/1 Running 0 3m27s
kube-system kube-proxy-bpcx6 1/1 Running 0 3m7s
kube-system kube-proxy-txjnm 1/1 Running 0 3m6s
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 3m41s
local-path-storage local-path-provisioner-9cd9bd544-spv77 1/1 Running 0 3m25s
spire spire-agent-2528l 1/1 Running 0 112s
spire spire-agent-mn99x 1/1 Running 0 112s
spire spire-server-0 2/2 Running 0 112s
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ kubectl create ns nsm-system
namespace/nsm-system created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/basic?ref=v1.6.0
customresourcedefinition.apiextensions.k8s.io/networkserviceendpoints.networkservicemesh.io created
customresourcedefinition.apiextensions.k8s.io/networkservices.networkservicemesh.io created
serviceaccount/admission-webhook-sa created
serviceaccount/nsmgr-sa created
serviceaccount/registry-k8s-sa created
clusterrole.rbac.authorization.k8s.io/admission-webhook-role created
clusterrole.rbac.authorization.k8s.io/nsmgr-binding-role created
clusterrole.rbac.authorization.k8s.io/registry-k8s-role created
clusterrolebinding.rbac.authorization.k8s.io/admission-webhook-binding created
clusterrolebinding.rbac.authorization.k8s.io/nsmgr-binding created
clusterrolebinding.rbac.authorization.k8s.io/registry-k8s-role-binding created
service/admission-webhook-svc created
service/registry created
deployment.apps/admission-webhook-k8s created
deployment.apps/registry-k8s created
daemonset.apps/forwarder-vpp created
daemonset.apps/nsmgr created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-g4gjf 1/1 Running 0 36m
kube-system coredns-6d4b75cb6d-n8n4r 1/1 Running 0 36m
kube-system etcd-kind-control-plane 1/1 Running 0 36m
kube-system kindnet-cs562 1/1 Running 0 35m
kube-system kindnet-kcj4m 1/1 Running 0 35m
kube-system kindnet-lz2qg 1/1 Running 0 36m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 36m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 36m
kube-system kube-proxy-2lbqv 1/1 Running 0 36m
kube-system kube-proxy-bpcx6 1/1 Running 0 35m
kube-system kube-proxy-txjnm 1/1 Running 0 35m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 36m
local-path-storage local-path-provisioner-9cd9bd544-spv77 1/1 Running 0 36m
nsm-system admission-webhook-k8s-665d7dcd85-7kxqr 1/1 Running 0 97s
nsm-system forwarder-vpp-f7gkt 1/1 Running 0 97s
nsm-system forwarder-vpp-smdgx 1/1 Running 0 97s
nsm-system nsmgr-qgqgv 2/2 Running 0 97s
nsm-system nsmgr-xl8sp 2/2 Running 0 97s
nsm-system registry-k8s-77945786d5-kqwh8 1/1 Running 0 97s
spire spire-agent-2528l 1/1 Running 0 34m
spire spire-agent-mn99x 1/1 Running 0 34m
spire spire-server-0 2/2 Running 0 34m
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test$ cd admission-webhook-k8s/
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ ls
admission-webhook.yaml binding.yaml kustomization.yaml role.yaml sa.yaml service.yaml
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl delete deploy -n nsm-system admission-webhook-k8s
deployment.apps "admission-webhook-k8s" deleted
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ cat admission-webhook.yaml | grep LEVEL
value: NSM_LOG_LEVEL=TRACE,NSM_LIVENESSCHECKENABLED=false
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ cat kustomization.yaml | grep names
namespace: nsm-system
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl apply -k .
serviceaccount/admission-webhook-sa unchanged
clusterrole.rbac.authorization.k8s.io/admission-webhook-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/admission-webhook-binding unchanged
service/admission-webhook-svc unchanged
deployment.apps/admission-webhook-k8s created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-g4gjf 1/1 Running 0 37m
kube-system coredns-6d4b75cb6d-n8n4r 1/1 Running 0 37m
kube-system etcd-kind-control-plane 1/1 Running 0 38m
kube-system kindnet-cs562 1/1 Running 0 37m
kube-system kindnet-kcj4m 1/1 Running 0 37m
kube-system kindnet-lz2qg 1/1 Running 0 37m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 38m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 37m
kube-system kube-proxy-2lbqv 1/1 Running 0 37m
kube-system kube-proxy-bpcx6 1/1 Running 0 37m
kube-system kube-proxy-txjnm 1/1 Running 0 37m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 38m
local-path-storage local-path-provisioner-9cd9bd544-spv77 1/1 Running 0 37m
nsm-system admission-webhook-k8s-6f674f4fc4-tm72f 1/1 Running 0 6s
nsm-system forwarder-vpp-f7gkt 1/1 Running 0 3m15s
nsm-system forwarder-vpp-smdgx 1/1 Running 0 3m15s
nsm-system nsmgr-qgqgv 2/2 Running 0 3m15s
nsm-system nsmgr-xl8sp 2/2 Running 0 3m15s
nsm-system registry-k8s-77945786d5-kqwh8 1/1 Running 0 3m15s
spire spire-agent-2528l 1/1 Running 0 36m
spire spire-agent-mn99x 1/1 Running 0 36m
spire spire-server-0 2/2 Running 0 36m
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl create ns ns-nse-composition
namespace/ns-nse-composition created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/features/nse-composition?ref=v1.6.0
configmap/nginx-config-b9f75kh6cm created
configmap/vppagent-firewall-config-file created
deployment.apps/nse-firewall-vpp created
deployment.apps/nse-kernel created
deployment.apps/nse-passthrough-1 created
deployment.apps/nse-passthrough-2 created
deployment.apps/nse-passthrough-3 created
networkservice.networkservicemesh.io/nse-composition created
pod/alpine created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-g4gjf 1/1 Running 0 40m
kube-system coredns-6d4b75cb6d-n8n4r 1/1 Running 0 40m
kube-system etcd-kind-control-plane 1/1 Running 0 40m
kube-system kindnet-cs562 1/1 Running 0 40m
kube-system kindnet-kcj4m 1/1 Running 0 40m
kube-system kindnet-lz2qg 1/1 Running 0 40m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 40m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 40m
kube-system kube-proxy-2lbqv 1/1 Running 0 40m
kube-system kube-proxy-bpcx6 1/1 Running 0 40m
kube-system kube-proxy-txjnm 1/1 Running 0 40m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 40m
local-path-storage local-path-provisioner-9cd9bd544-spv77 1/1 Running 0 40m
ns-nse-composition alpine 2/2 Running 0 2m
ns-nse-composition nse-firewall-vpp-54ccb78894-8rr5g 1/1 Running 0 2m2s
ns-nse-composition nse-kernel-9fff9b7bd-chnfp 2/2 Running 0 2m2s
ns-nse-composition nse-passthrough-1-565d79c99f-2rg9p 1/1 Running 0 2m1s
ns-nse-composition nse-passthrough-2-59b9d8f9c8-rj4g2 1/1 Running 0 2m1s
ns-nse-composition nse-passthrough-3-6b4b7bc445-crdwf 1/1 Running 0 2m1s
nsm-system admission-webhook-k8s-6f674f4fc4-tm72f 1/1 Running 0 2m54s
nsm-system forwarder-vpp-f7gkt 1/1 Running 0 6m3s
nsm-system forwarder-vpp-smdgx 1/1 Running 0 6m3s
nsm-system nsmgr-qgqgv 2/2 Running 0 6m3s
nsm-system nsmgr-xl8sp 2/2 Running 0 6m3s
nsm-system registry-k8s-77945786d5-kqwh8 1/1 Running 0 6m3s
spire spire-agent-2528l 1/1 Running 0 38m
spire spire-agent-mn99x 1/1 Running 0 38m
spire spire-server-0 2/2 Running 0 38m
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ NSC=$(kubectl get pods -l app=alpine -n ns-nse-composition --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ NSE=$(kubectl get pods -l app=nse-kernel -n ns-nse-composition --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ echo $NSC
alpine
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ echo $NSE
nse-kernel-9fff9b7bd-chnfp
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl exec ${NSC} -n ns-nse-composition -- ping -c 4 172.16.1.100
Defaulted container "alpine" out of: alpine, cmd-nsc, cmd-nsc-init (init)
PING 172.16.1.100 (172.16.1.100): 56 data bytes
--- 172.16.1.100 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl exec ${NSE} -n ns-nse-composition -- ping -c 4 172.16.1.101
Defaulted container "nse" out of: nse, nginx
PING 172.16.1.101 (172.16.1.101): 56 data bytes
--- 172.16.1.101 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl exec "alpine" -n "ns-nse-composition" -- ifconfig
Defaulted container "alpine" out of: alpine, cmd-nsc, cmd-nsc-init (init)
eth0 Link encap:Ethernet HWaddr F2:27:D5:41:C1:77
inet addr:10.244.2.7 Bcast:10.244.2.255 Mask:255.255.255.0
inet6 addr: fe80::f027:d5ff:fe41:c177/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2 errors:0 dropped:0 overruns:0 frame:0
TX packets:22 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:84 (84.0 B) TX bytes:1804 (1.7 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl exec "nse-kernel-9fff9b7bd-chnfp" -n "ns-nse-composition" -- ifconfig
Defaulted container "nse" out of: nse, nginx
eth0 Link encap:Ethernet HWaddr 86:78:BD:A3:8B:CE
inet addr:10.244.2.4 Bcast:10.244.2.255 Mask:255.255.255.0
inet6 addr: fe80::8478:bdff:fea3:8bce/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:42 (42.0 B) TX bytes:1370 (1.3 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl exec "nse-kernel-9fff9b7bd-chnfp" -n "ns-nse-composition" -- ifconfig
Defaulted container "nse" out of: nse, nginx
eth0 Link encap:Ethernet HWaddr 86:78:BD:A3:8B:CE
inet addr:10.244.2.4 Bcast:10.244.2.255 Mask:255.255.255.0
inet6 addr: fe80::8478:bdff:fea3:8bce/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:42 (42.0 B) TX bytes:1440 (1.4 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
nse-compos-bc59 Link encap:Ethernet HWaddr 02:FE:F5:FD:BE:F6
inet addr:172.16.1.100 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: fe80::fe:f5ff:fefd:bef6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1446 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:266 (266.0 B)
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
Stepping: 2
CPU MHz: 2297.339
BogoMIPS: 4594.67
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm cpuid_fault invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid xsaveopt arat
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ free -h
total used free shared buff/cache available
Mem: 6.8G 2.8G 147M 811M 3.8G 2.9G
Swap: 1.7G 170M 1.5G
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic
@srini38 Could you attach the logs from the alpine cmd-nsc container?
@glazychev-art alpine-cmd-nsc.log alpine-cmd-nsc-init.log nse-kernel-9fff9b7bd-chnfp-nse.log nse-kernel-9fff9b7bd-chnfp-describe.log alpine-describe.log
Command used to collect the logs
kubectl logs alpine -c cmd-nsc-init -n ns-nse-composition > /tmp/alpine-cmd-nsc-init.log
kubectl logs alpine -c cmd-nsc -n ns-nse-composition > /tmp/alpine-cmd-nsc.log
kubectl describe pods alpine -n ns-nse-composition > /tmp/alpine-describe.log
kubectl describe pods nse-kernel-9fff9b7bd-chnfp -n ns-nse-composition > /tmp/nse-kernel-9fff9b7bd-chnfp-describe.log
kubectl logs nse-kernel-9fff9b7bd-chnfp -c nse -n ns-nse-composition > /tmp/nse-kernel-9fff9b7bd-chnfp-nse.log
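The per-pod commands above can be generalized. A sketch (assuming the same namespace and `kubectl` access as in the transcript; the output directory is illustrative) that dumps logs and descriptions for every container of every pod in the namespace:

```shell
#!/bin/sh
# Collect describe output and per-container logs for every pod in the
# example namespace. Paths and the namespace name mirror the commands above.
NS=ns-nse-composition
OUT=/tmp/${NS}-logs
mkdir -p "${OUT}"
for POD in $(kubectl get pods -n "${NS}" -o jsonpath='{.items[*].metadata.name}'); do
  kubectl describe pod "${POD}" -n "${NS}" > "${OUT}/${POD}-describe.log"
  for C in $(kubectl get pod "${POD}" -n "${NS}" -o jsonpath='{.spec.containers[*].name}'); do
    kubectl logs "${POD}" -c "${C}" -n "${NS}" > "${OUT}/${POD}-${C}.log"
  done
done
```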
@srini38 As far as I can see, requests are slow for some reason. Could you add another env to the admission-webhook? It will look like this:
...
        - name: NSM_ENVS
          value: NSM_LOG_LEVEL=TRACE,NSM_LIVENESSCHECKENABLED=false,NSM_REQUEST_TIMEOUT=60s
...
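For reference, a sketch of one way to apply the same env change without editing admission-webhook.yaml by hand (the deployment name and namespace match the transcript above; `kubectl set env` recreates the pod with the new value):

```shell
# Set NSM_ENVS on the running admission-webhook deployment directly.
kubectl set env deployment/admission-webhook-k8s -n nsm-system \
  NSM_ENVS="NSM_LOG_LEVEL=TRACE,NSM_LIVENESSCHECKENABLED=false,NSM_REQUEST_TIMEOUT=60s"
```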
Either way, please dump the cluster afterwards and attach it here:
kubectl cluster-info dump -A --output-directory=/path/to/cluster-state
Thank you!
@glazychev-art Thank you!! It's working now with NSM_REQUEST_TIMEOUT=60s.
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ curl -L https://raw.githubusercontent.com/networkservicemesh/integration-k8s-kind/main/cluster-config.yaml | kind create cluster --config -
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 118 100 118 0 0 212 0 --:--:-- --:--:-- --:--:-- 212
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.24.0) 🖼
✓ Preparing nodes 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Thanks for using kind! 😊
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-7m7mp 1/1 Running 0 6m2s
kube-system coredns-6d4b75cb6d-s8nd6 1/1 Running 0 6m2s
kube-system etcd-kind-control-plane 1/1 Running 0 6m14s
kube-system kindnet-t8wch 1/1 Running 0 5m40s
kube-system kindnet-x9vlr 1/1 Running 0 5m39s
kube-system kindnet-xnlwv 1/1 Running 0 6m1s
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 6m14s
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 6m14s
kube-system kube-proxy-7txbk 1/1 Running 0 5m40s
kube-system kube-proxy-9kmrr 1/1 Running 0 6m1s
kube-system kube-proxy-j5rts 1/1 Running 0 5m39s
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 6m14s
local-path-storage local-path-provisioner-9cd9bd544-6zldn 1/1 Running 0 5m58s
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/spire?ref=v1.6.0
namespace/spire created
customresourcedefinition.apiextensions.k8s.io/spiffeids.spiffeid.spiffe.io created
serviceaccount/spire-agent created
serviceaccount/spire-server created
clusterrole.rbac.authorization.k8s.io/k8s-workload-registrar-role created
clusterrole.rbac.authorization.k8s.io/spire-agent-cluster-role created
clusterrole.rbac.authorization.k8s.io/spire-server-trust-role created
clusterrolebinding.rbac.authorization.k8s.io/k8s-workload-registrar-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/spire-agent-cluster-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/spire-server-trust-role-binding created
configmap/k8s-workload-registrar created
configmap/spire-agent created
configmap/spire-bundle created
configmap/spire-server created
service/k8s-workload-registrar created
service/spire-server created
statefulset.apps/spire-server created
daemonset.apps/spire-agent created
validatingwebhookconfiguration.admissionregistration.k8s.io/k8s-workload-registrar created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-7m7mp 1/1 Running 0 8m6s
kube-system coredns-6d4b75cb6d-s8nd6 1/1 Running 0 8m6s
kube-system etcd-kind-control-plane 1/1 Running 0 8m18s
kube-system kindnet-t8wch 1/1 Running 0 7m44s
kube-system kindnet-x9vlr 1/1 Running 0 7m43s
kube-system kindnet-xnlwv 1/1 Running 0 8m5s
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 8m18s
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 8m18s
kube-system kube-proxy-7txbk 1/1 Running 0 7m44s
kube-system kube-proxy-9kmrr 1/1 Running 0 8m5s
kube-system kube-proxy-j5rts 1/1 Running 0 7m43s
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 8m18s
local-path-storage local-path-provisioner-9cd9bd544-6zldn 1/1 Running 0 8m2s
spire spire-agent-9d5b9 1/1 Running 0 95s
spire spire-agent-jfr2g 1/1 Running 0 95s
spire spire-server-0 2/2 Running 0 95s
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl create ns nsm-system
namespace/nsm-system created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/basic?ref=v1.6.0
customresourcedefinition.apiextensions.k8s.io/networkserviceendpoints.networkservicemesh.io created
customresourcedefinition.apiextensions.k8s.io/networkservices.networkservicemesh.io created
serviceaccount/admission-webhook-sa created
serviceaccount/nsmgr-sa created
serviceaccount/registry-k8s-sa created
clusterrole.rbac.authorization.k8s.io/admission-webhook-role created
clusterrole.rbac.authorization.k8s.io/nsmgr-binding-role created
clusterrole.rbac.authorization.k8s.io/registry-k8s-role created
clusterrolebinding.rbac.authorization.k8s.io/admission-webhook-binding created
clusterrolebinding.rbac.authorization.k8s.io/nsmgr-binding created
clusterrolebinding.rbac.authorization.k8s.io/registry-k8s-role-binding created
service/admission-webhook-svc created
service/registry created
deployment.apps/admission-webhook-k8s created
deployment.apps/registry-k8s created
daemonset.apps/forwarder-vpp created
daemonset.apps/nsmgr created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-7m7mp 1/1 Running 0 17m
kube-system coredns-6d4b75cb6d-s8nd6 1/1 Running 0 17m
kube-system etcd-kind-control-plane 1/1 Running 0 17m
kube-system kindnet-t8wch 1/1 Running 0 16m
kube-system kindnet-x9vlr 1/1 Running 0 16m
kube-system kindnet-xnlwv 1/1 Running 0 17m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 17m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 17m
kube-system kube-proxy-7txbk 1/1 Running 0 16m
kube-system kube-proxy-9kmrr 1/1 Running 0 17m
kube-system kube-proxy-j5rts 1/1 Running 0 16m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 17m
local-path-storage local-path-provisioner-9cd9bd544-6zldn 1/1 Running 0 17m
nsm-system admission-webhook-k8s-665d7dcd85-sbr2s 1/1 Running 0 8m3s
nsm-system forwarder-vpp-2hcbr 1/1 Running 0 8m2s
nsm-system forwarder-vpp-h2lwk 1/1 Running 0 8m2s
nsm-system nsmgr-8wrtk 2/2 Running 0 8m2s
nsm-system nsmgr-knqjx 2/2 Running 0 8m2s
nsm-system registry-k8s-77945786d5-4p57k 1/1 Running 0 8m3s
spire spire-agent-9d5b9 1/1 Running 0 10m
spire spire-agent-jfr2g 1/1 Running 0 10m
spire spire-server-0 2/2 Running 0 10m
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl delete deploy -n nsm-system admission-webhook-k8s
deployment.apps "admission-webhook-k8s" deleted
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ ls
admission-webhook.yaml binding.yaml kustomization.yaml role.yaml sa.yaml service.yaml
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ cat admission-webhook.yaml | grep LEVEL
value: NSM_LOG_LEVEL=TRACE,NSM_LIVENESSCHECKENABLED=false,NSM_REQUEST_TIMEOUT=60s
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ cat kustomization.yaml | grep names
namespace: nsm-system
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl apply -k .
serviceaccount/admission-webhook-sa unchanged
clusterrole.rbac.authorization.k8s.io/admission-webhook-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/admission-webhook-binding unchanged
service/admission-webhook-svc unchanged
deployment.apps/admission-webhook-k8s created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-7m7mp 1/1 Running 0 19m
kube-system coredns-6d4b75cb6d-s8nd6 1/1 Running 0 19m
kube-system etcd-kind-control-plane 1/1 Running 0 20m
kube-system kindnet-t8wch 1/1 Running 0 19m
kube-system kindnet-x9vlr 1/1 Running 0 19m
kube-system kindnet-xnlwv 1/1 Running 0 19m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 20m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 20m
kube-system kube-proxy-7txbk 1/1 Running 0 19m
kube-system kube-proxy-9kmrr 1/1 Running 0 19m
kube-system kube-proxy-j5rts 1/1 Running 0 19m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 20m
local-path-storage local-path-provisioner-9cd9bd544-6zldn 1/1 Running 0 19m
nsm-system admission-webhook-k8s-6577f656d4-f45sr 1/1 Running 0 34s
nsm-system forwarder-vpp-2hcbr 1/1 Running 0 10m
nsm-system forwarder-vpp-h2lwk 1/1 Running 0 10m
nsm-system nsmgr-8wrtk 2/2 Running 0 10m
nsm-system nsmgr-knqjx 2/2 Running 0 10m
nsm-system registry-k8s-77945786d5-4p57k 1/1 Running 0 10m
spire spire-agent-9d5b9 1/1 Running 0 13m
spire spire-agent-jfr2g 1/1 Running 0 13m
spire spire-server-0 2/2 Running 0 13m
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl create ns ns-nse-composition
namespace/ns-nse-composition created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/features/nse-composition?ref=v1.6.0
configmap/nginx-config-b9f75kh6cm created
configmap/vppagent-firewall-config-file created
deployment.apps/nse-firewall-vpp created
deployment.apps/nse-kernel created
deployment.apps/nse-passthrough-1 created
deployment.apps/nse-passthrough-2 created
deployment.apps/nse-passthrough-3 created
networkservice.networkservicemesh.io/nse-composition created
pod/alpine created
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-7m7mp 1/1 Running 0 22m
kube-system coredns-6d4b75cb6d-s8nd6 1/1 Running 0 22m
kube-system etcd-kind-control-plane 1/1 Running 0 22m
kube-system kindnet-t8wch 1/1 Running 0 22m
kube-system kindnet-x9vlr 1/1 Running 0 22m
kube-system kindnet-xnlwv 1/1 Running 0 22m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 22m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 22m
kube-system kube-proxy-7txbk 1/1 Running 0 22m
kube-system kube-proxy-9kmrr 1/1 Running 0 22m
kube-system kube-proxy-j5rts 1/1 Running 0 22m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 22m
local-path-storage local-path-provisioner-9cd9bd544-6zldn 1/1 Running 0 22m
ns-nse-composition alpine 2/2 Running 0 104s
ns-nse-composition nse-firewall-vpp-54ccb78894-ngzl2 1/1 Running 0 105s
ns-nse-composition nse-kernel-9fff9b7bd-ktfql 2/2 Running 0 105s
ns-nse-composition nse-passthrough-1-565d79c99f-z65bh 1/1 Running 0 105s
ns-nse-composition nse-passthrough-2-59b9d8f9c8-54c5h 1/1 Running 0 105s
ns-nse-composition nse-passthrough-3-6b4b7bc445-2tbmg 1/1 Running 0 104s
nsm-system admission-webhook-k8s-6577f656d4-f45sr 1/1 Running 0 3m31s
nsm-system forwarder-vpp-2hcbr 1/1 Running 0 13m
nsm-system forwarder-vpp-h2lwk 1/1 Running 0 13m
nsm-system nsmgr-8wrtk 2/2 Running 0 13m
nsm-system nsmgr-knqjx 2/2 Running 0 13m
nsm-system registry-k8s-77945786d5-4p57k 1/1 Running 0 13m
spire spire-agent-9d5b9 1/1 Running 0 16m
spire spire-agent-jfr2g 1/1 Running 0 16m
spire spire-server-0 2/2 Running 0 16m
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ NSC=$(kubectl get pods -l app=alpine -n ns-nse-composition --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ NSE=$(kubectl get pods -l app=nse-kernel -n ns-nse-composition --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ echo $NSC
alpine
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ echo $NSE
nse-kernel-9fff9b7bd-ktfql
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl exec ${NSC} -n ns-nse-composition -- ping -c 4 172.16.1.100
Defaulted container "alpine" out of: alpine, cmd-nsc, cmd-nsc-init (init)
PING 172.16.1.100 (172.16.1.100): 56 data bytes
64 bytes from 172.16.1.100: seq=0 ttl=64 time=834.355 ms
64 bytes from 172.16.1.100: seq=1 ttl=64 time=332.796 ms
64 bytes from 172.16.1.100: seq=2 ttl=64 time=426.067 ms
64 bytes from 172.16.1.100: seq=3 ttl=64 time=317.402 ms
--- 172.16.1.100 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 317.402/477.655/834.355 ms
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl exec ${NSE} -n ns-nse-composition -- ping -c 4 172.16.1.101
Defaulted container "nse" out of: nse, nginx
PING 172.16.1.101 (172.16.1.101): 56 data bytes
64 bytes from 172.16.1.101: seq=0 ttl=64 time=521.490 ms
64 bytes from 172.16.1.101: seq=1 ttl=64 time=553.423 ms
64 bytes from 172.16.1.101: seq=2 ttl=64 time=529.107 ms
64 bytes from 172.16.1.101: seq=3 ttl=64 time=328.905 ms
--- 172.16.1.101 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 328.905/483.231/553.423 ms
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl exec ${NSC} -n ns-nse-composition -- wget -O /dev/null --timeout 5 "172.16.1.100:8080"
Defaulted container "alpine" out of: alpine, cmd-nsc, cmd-nsc-init (init)
Connecting to 172.16.1.100:8080 (172.16.1.100:8080)
saving to '/dev/null'
'/dev/null' saved
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl exec ${NSC} -n ns-nse-composition -- wget -O /dev/null --timeout 5 "172.16.1.100:80"
Defaulted container "alpine" out of: alpine, cmd-nsc, cmd-nsc-init (init)
Connecting to 172.16.1.100:80 (172.16.1.100:80)
wget: download timed out
command terminated with exit code 1
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ if [ 0 -eq $? ]; then
> echo "error: port :80 is available" >&2
> false
> else
> echo "success: port :80 is unavailable"
> fi
success: port :80 is unavailable
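The two `wget` probes and the exit-code check above can be combined into a single script that verifies the firewall policy of the composed service (:8080 allowed, :80 blocked). A sketch, assuming `${NSC}` is set to the alpine pod name as earlier in the transcript:

```shell
#!/bin/sh
# Verify the nse-composition firewall from the client pod:
# port 8080 must be reachable, port 80 must be filtered.
if kubectl exec "${NSC}" -n ns-nse-composition -- \
    wget -q -O /dev/null --timeout 5 "172.16.1.100:8080"; then
  echo "success: port :8080 is available"
else
  echo "error: port :8080 is unavailable" >&2
  exit 1
fi
if kubectl exec "${NSC}" -n ns-nse-composition -- \
    wget -q -O /dev/null --timeout 5 "172.16.1.100:80"; then
  echo "error: port :80 is available" >&2
  exit 1
else
  echo "success: port :80 is unavailable"
fi
```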
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl exec ${NSC} -n ns-nse-composition -- wget -O /dev/null --timeout 5 "172.16.1.100:80"
Defaulted container "alpine" out of: alpine, cmd-nsc, cmd-nsc-init (init)
Connecting to 172.16.1.100:80 (172.16.1.100:80)
wget: download timed out
command terminated with exit code 1
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl exec ${NSC} -n ns-nse-composition -- wget -O /dev/null --timeout 5 "172.16.1.100:8080"
Defaulted container "alpine" out of: alpine, cmd-nsc, cmd-nsc-init (init)
Connecting to 172.16.1.100:8080 (172.16.1.100:8080)
saving to '/dev/null'
'/dev/null' saved
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$
test@test-virtual-machine:~/source/nsm/tests/nsm1.6/test/admission-webhook-k8s$ kubectl exec "alpine" -n "ns-nse-composition" -- ifconfig
Defaulted container "alpine" out of: alpine, cmd-nsc, cmd-nsc-init (init)
eth0 Link encap:Ethernet HWaddr 5A:DC:EC:E7:82:E2
inet addr:10.244.2.7 Bcast:10.244.2.255 Mask:255.255.255.0
inet6 addr: fe80::58dc:ecff:fee7:82e2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:1006 (1006.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
nsm-1 Link encap:Ethernet HWaddr 02:FE:B1:65:2C:8C
inet addr:172.16.1.101 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: fe80::fe:b1ff:fe65:2c8c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1446 Metric:1
RX packets:31 errors:0 dropped:0 overruns:0 frame:0
TX packets:51 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2826 (2.7 KiB) TX bytes:4138 (4.0 KiB)
top - 18:34:22 up 56 days, 4 min, 5 users, load average: 6.85, 6.67, 4.96
Threads: 1463 total, 7 running, 1387 sleeping, 0 stopped, 0 zombie
%Cpu0 : 15.3 us, 11.9 sy, 0.0 ni, 68.1 id, 3.1 wa, 0.0 hi, 1.7 si, 0.0 st
%Cpu1 : 99.7 us, 0.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 19.8 us, 12.9 sy, 0.0 ni, 66.7 id, 0.0 wa, 0.0 hi, 0.7 si, 0.0 st
%Cpu3 : 10.8 us, 13.5 sy, 0.0 ni, 75.1 id, 0.7 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu4 : 12.9 us, 12.9 sy, 0.0 ni, 72.4 id, 0.7 wa, 0.0 hi, 1.0 si, 0.0 st
%Cpu5 : 14.6 us, 12.9 sy, 0.0 ni, 71.5 id, 0.3 wa, 0.0 hi, 0.7 si, 0.0 st
%Cpu6 : 16.6 us, 10.5 sy, 0.0 ni, 72.5 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
%Cpu7 : 16.3 us, 15.6 sy, 0.0 ni, 67.4 id, 0.0 wa, 0.0 hi, 0.7 si, 0.0 st
KiB Mem : 7115140 total, 160972 free, 2896488 used, 4057680 buff/cache
KiB Swap: 1746956 total, 1572620 free, 174336 used. 3090164 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND P
1597299 root 20 0 17.306g 264268 173908 R 18.6 3.7 1:55.16 vpp_main 1
1596856 root 20 0 17.306g 262680 172344 R 18.2 3.7 1:54.35 vpp_main 1
1594605 root 20 0 17.368g 272252 172628 R 17.6 3.8 1:55.88 vpp_main 1
1595009 root 20 0 17.306g 262236 171896 R 17.3 3.7 1:57.18 vpp_main 1
1506252 root 20 0 17.512g 292908 193180 R 13.8 4.1 1:47.62 vpp_main 1
1506188 root 20 0 17.512g 293092 193240 R 13.5 4.1 1:45.20 vpp_main 1
1497545 root 20 0 1191756 413720 58308 S 7.5 5.8 0:33.13 kube-apiserver 3
1673548 test 20 0 45584 5456 3300 R 4.7 0.1 0:02.39 top 7
@srini38 Very happy to hear it!
If you can collect the logs, that would be very helpful. We are especially interested in the logs from the nsm-system and ns-nse-composition pods. It would be very interesting to know why the request takes so long.
The
kubectl cluster-info dump ...
command can help; I described it above.
@glazychev-art Glad to share the logs. One thing I observe is that the pings from the NSC/NSE take hundreds of milliseconds. Not sure if this is due to all the vpp_main instances contending for one CPU. cluster-state.zip
Seems to be resolved.
@srini38 Feel free to reopen it if the problem persists.