edgemesh
Help please: fatal error "unsupported loadBalance policy PASSTHROUGH", which causes edgemesh to reboot periodically
- edgemesh: 1.13.2
- kubeedge: 1.13.0
Error logs:
I0426 18:13:53.781935 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
E0426 18:14:48.424862 6 loadbalancer.go:578] unsupported loadBalance policy PASSTHROUGH
E0426 18:14:48.425229 6 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 325 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x2079c20, 0x3d612e0})
/code/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x7d
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00020f450})
/code/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75
panic({0x2079c20, 0x3d612e0})
/usr/local/go/src/runtime/panic.go:1038 +0x215
github.com/kubeedge/edgemesh/pkg/loadbalancer.(*LoadBalancer).OnDestinationRuleUpdate(0xc0003f4280, 0xc00102fc70, 0xc0018be000)
/code/pkg/loadbalancer/loadbalancer.go:544 +0x497
github.com/kubeedge/edgemesh/pkg/loadbalancer.(*LoadBalancer).handleUpdateDestinationRule(0xc0001fba00, {0x2386f40, 0xc0018be000}, {0x2386f40, 0xc0018be000})
/code/pkg/loadbalancer/loadbalancer.go:267 +0x57
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnUpdate(...)
/code/vendor/k8s.io/client-go/tools/cache/controller.go:238
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/code/vendor/k8s.io/client-go/tools/cache/shared_informer.go:785 +0x127
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7f845f807338)
/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013d0f38, {0x2935640, 0xc00101c810}, 0x1, 0xc000541500)
/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0000e0000, 0x3b9aca00, 0x0, 0xa0, 0xc0013d0f88)
/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc000e71100)
/code/vendor/k8s.io/client-go/tools/cache/shared_informer.go:781 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x88
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0x1d7ee77]
goroutine 325 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00020f450})
/code/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0xd8
panic({0x2079c20, 0x3d612e0})
/usr/local/go/src/runtime/panic.go:1038 +0x215
github.com/kubeedge/edgemesh/pkg/loadbalancer.(*LoadBalancer).OnDestinationRuleUpdate(0xc0003f4280, 0xc00102fc70, 0xc0018be000)
/code/pkg/loadbalancer/loadbalancer.go:544 +0x497
github.com/kubeedge/edgemesh/pkg/loadbalancer.(*LoadBalancer).handleUpdateDestinationRule(0xc0001fba00, {0x2386f40, 0xc0018be000}, {0x2386f40, 0xc0018be000})
/code/pkg/loadbalancer/loadbalancer.go:267 +0x57
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnUpdate(...)
/code/vendor/k8s.io/client-go/tools/cache/controller.go:238
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/code/vendor/k8s.io/client-go/tools/cache/shared_informer.go:785 +0x127
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7f845f807338)
/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013d0f38, {0x2935640, 0xc00101c810}, 0x1, 0xc000541500)
/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0000e0000, 0x3b9aca00, 0x0, 0xa0, 0xc0013d0f88)
/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc000e71100)
/code/vendor/k8s.io/client-go/tools/cache/shared_informer.go:781 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/code/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x88
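For reference, since the panic kills the container, the trace above disappears from plain `kubectl logs` once the pod restarts. It can be re-captured from the crashed container with the `--previous` flag (the pod name here is the cloud-side agent from the pod listing further down; adjust to your own):

```shell
# Read the logs of the previously crashed edgemesh-agent container and
# confirm the "unsupported loadBalance policy" error precedes the panic.
kubectl -n kubeedge logs edgemesh-agent-jvrql --previous \
  | grep -B 1 "unsupported loadBalance policy"
```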
Then edgemesh restarts:
I0426 18:14:49.221017 6 server.go:56] Version: v1.13.2-dirty
I0426 18:14:49.221106 6 server.go:92] [1] Prepare agent to run
I0426 18:14:49.221405 6 netif.go:96] bridge device edgemesh0 already exists
I0426 18:14:49.221548 6 server.go:96] edgemesh-agent running on CloudMode
I0426 18:14:49.221583 6 server.go:99] [2] New clients
W0426 18:14:49.221601 6 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0426 18:14:49.222541 6 server.go:106] [3] Register beehive modules
W0426 18:14:49.222585 6 module.go:37] Module EdgeDNS is disabled, do not register
I0426 18:14:49.224298 6 server.go:68] Using userspace Proxier.
I0426 18:14:49.924203 6 module.go:34] Module EdgeProxy registered successfully
I0426 18:14:50.019520 6 util.go:255] Listening to docker0 is meaningless, skip it.
I0426 18:14:50.019557 6 util.go:255] Listening to flannel.1 is meaningless, skip it.
I0426 18:14:50.019566 6 util.go:255] Listening to cni0 is meaningless, skip it.
I0426 18:14:50.019574 6 util.go:255] Listening to edgemesh0 is meaningless, skip it.
I0426 18:14:50.224567 6 module.go:181] I'm {12D3KooWJyV6bxmHv4RU36ddDCMqMnXbB9zy9jZ2tXsUH3FBK9PB: [/ip4/127.0.0.1/tcp/20006 /ip4/10.210.31.12/tcp/20006 /ip4/10.210.31.12/tcp/20006]}
I0426 18:14:50.224679 6 module.go:190] Run as a relay node
I0426 18:14:50.224746 6 module.go:203] Bootstrapping the DHT
I0426 18:14:50.520342 6 tunnel.go:80] Starting MDNS discovery service
I0426 18:14:50.718554 6 tunnel.go:93] Starting DHT discovery service
I0426 18:14:50.718802 6 module.go:34] Module EdgeTunnel registered successfully
I0426 18:14:50.718837 6 server.go:112] [4] Cache beehive modules
I0426 18:14:50.718854 6 server.go:119] [5] Start all modules
I0426 18:14:50.719025 6 core.go:24] Starting module EdgeProxy
I0426 18:14:50.719136 6 tunnel.go:491] Starting relay finder
I0426 18:14:50.719149 6 core.go:24] Starting module EdgeTunnel
I0426 18:14:50.719601 6 config.go:317] "Starting service config controller"
I0426 18:14:50.719659 6 shared_informer.go:240] Waiting for caches to sync for service config
I0426 18:14:50.719692 6 config.go:135] "Starting endpoints config controller"
I0426 18:14:50.719699 6 shared_informer.go:240] Waiting for caches to sync for endpoints config
I0426 18:14:50.721885 6 loadbalancer.go:239] "Starting loadBalancer destinationRule controller"
I0426 18:14:50.721912 6 shared_informer.go:240] Waiting for caches to sync for loadBalancer destinationRule
I0426 18:14:50.919770 6 shared_informer.go:247] Caches are synced for endpoints config
I0426 18:14:50.919777 6 shared_informer.go:247] Caches are synced for service config
I0426 18:14:51.018523 6 shared_informer.go:247] Caches are synced for loadBalancer destinationRule
I0426 18:14:51.726449 6 proxier.go:895] "Opened iptables from-containers public port for service" servicePortName="default/emqx-cloud-service:http-18083" protocol=TCP nodePort=30007
I0426 18:14:51.820191 6 proxier.go:906] "Opened iptables from-host public port for service" servicePortName="default/emqx-cloud-service:http-18083" protocol=TCP nodePort=30007
I0426 18:14:51.823897 6 proxier.go:916] "Opened iptables from-non-local public port for service" servicePortName="default/emqx-cloud-service:http-18083" protocol=TCP nodePort=30007
I0426 18:14:52.020528 6 proxier.go:895] "Opened iptables from-containers public port for service" servicePortName="default/emqx-cloud-service:tcp-1883" protocol=TCP nodePort=30005
I0426 18:14:52.023733 6 proxier.go:906] "Opened iptables from-host public port for service" servicePortName="default/emqx-cloud-service:tcp-1883" protocol=TCP nodePort=30005
I0426 18:14:52.119892 6 proxier.go:916] "Opened iptables from-non-local public port for service" servicePortName="default/emqx-cloud-service:tcp-1883" protocol=TCP nodePort=30005
I0426 18:14:52.618528 6 proxier.go:895] "Opened iptables from-containers public port for service" servicePortName="kubesphere-system/ks-apiserver" protocol=TCP nodePort=30000
I0426 18:14:52.821767 6 proxier.go:906] "Opened iptables from-host public port for service" servicePortName="kubesphere-system/ks-apiserver" protocol=TCP nodePort=30000
I0426 18:14:52.922818 6 proxier.go:916] "Opened iptables from-non-local public port for service" servicePortName="kubesphere-system/ks-apiserver" protocol=TCP nodePort=30000
I0426 18:14:53.820305 6 proxier.go:895] "Opened iptables from-containers public port for service" servicePortName="default/emqx-service:http-18083" protocol=TCP nodePort=30009
I0426 18:14:53.826071 6 proxier.go:906] "Opened iptables from-host public port for service" servicePortName="default/emqx-service:http-18083" protocol=TCP nodePort=30009
I0426 18:14:53.924653 6 proxier.go:916] "Opened iptables from-non-local public port for service" servicePortName="default/emqx-service:http-18083" protocol=TCP nodePort=30009
I0426 18:14:54.222615 6 proxier.go:895] "Opened iptables from-containers public port for service" servicePortName="default/emqx-service:tcp-1883" protocol=TCP nodePort=30025
I0426 18:14:54.321503 6 proxier.go:906] "Opened iptables from-host public port for service" servicePortName="default/emqx-service:tcp-1883" protocol=TCP nodePort=30025
I0426 18:14:54.422868 6 proxier.go:916] "Opened iptables from-non-local public port for service" servicePortName="default/emqx-service:tcp-1883" protocol=TCP nodePort=30025
I0426 18:14:55.122680 6 proxier.go:895] "Opened iptables from-containers public port for service" servicePortName="kubesphere-system/ks-console:nginx" protocol=TCP nodePort=30021
I0426 18:14:55.126046 6 proxier.go:906] "Opened iptables from-host public port for service" servicePortName="kubesphere-system/ks-console:nginx" protocol=TCP nodePort=30021
I0426 18:14:55.128982 6 proxier.go:916] "Opened iptables from-non-local public port for service" servicePortName="kubesphere-system/ks-console:nginx" protocol=TCP nodePort=30021
I0426 18:14:55.318744 6 proxier.go:895] "Opened iptables from-containers public port for service" servicePortName="default/speed-svc:http-0" protocol=TCP nodePort=30008
I0426 18:14:55.322449 6 proxier.go:906] "Opened iptables from-host public port for service" servicePortName="default/speed-svc:http-0" protocol=TCP nodePort=30008
I0426 18:14:55.325463 6 proxier.go:916] "Opened iptables from-non-local public port for service" servicePortName="default/speed-svc:http-0" protocol=TCP nodePort=30008
I0426 18:14:55.422006 6 proxier.go:895] "Opened iptables from-containers public port for service" servicePortName="default/speed-svc:http-1" protocol=TCP nodePort=30049
I0426 18:14:55.425289 6 proxier.go:906] "Opened iptables from-host public port for service" servicePortName="default/speed-svc:http-1" protocol=TCP nodePort=30049
I0426 18:14:55.428235 6 proxier.go:916] "Opened iptables from-non-local public port for service" servicePortName="default/speed-svc:http-1" protocol=TCP nodePort=30049
I0426 18:14:56.222213 6 proxier.go:895] "Opened iptables from-containers public port for service" servicePortName="kube-system/kubernetes-dashboard:https" protocol=TCP nodePort=30043
I0426 18:14:56.228422 6 proxier.go:906] "Opened iptables from-host public port for service" servicePortName="kube-system/kubernetes-dashboard:https" protocol=TCP nodePort=30043
I0426 18:14:56.320236 6 proxier.go:916] "Opened iptables from-non-local public port for service" servicePortName="kube-system/kubernetes-dashboard:https" protocol=TCP nodePort=30043
I0426 18:14:56.424879 6 proxier.go:895] "Opened iptables from-containers public port for service" servicePortName="kube-system/kubernetes-dashboard:http" protocol=TCP nodePort=30020
I0426 18:14:56.431608 6 proxier.go:906] "Opened iptables from-host public port for service" servicePortName="kube-system/kubernetes-dashboard:http" protocol=TCP nodePort=30020
I0426 18:14:56.523600 6 proxier.go:916] "Opened iptables from-non-local public port for service" servicePortName="kube-system/kubernetes-dashboard:http" protocol=TCP nodePort=30020
I0426 18:15:42.164136 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:16:03.968584 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:16:36.394297 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:16:52.894374 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:16:53.782193 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:17:50.930108 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:19:03.969294 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:19:52.893881 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:19:53.781739 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:20:50.930373 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:22:03.968856 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:22:41.692331 6 tunnel.go:205] Discovery service got a new stream from {12D3KooWSNe2kYGni4a2pVrmrmhsbtLVEgVK73FHBMatX4GmeuQw: [/ip4/172.29.50.53/tcp/20006]}
I0426 18:22:41.692619 6 tunnel.go:234] [DHT] Discovery from ido02-172-29-50-53.kedge : {12D3KooWSNe2kYGni4a2pVrmrmhsbtLVEgVK73FHBMatX4GmeuQw: [/ip4/172.29.50.53/tcp/20006]}
I0426 18:22:52.893726 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:22:53.781601 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:22:55.241128 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:22:55.241486 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:22:55.244360 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:22:55.244596 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:23:50.930114 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:25:03.968194 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:25:52.894166 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
I0426 18:25:53.782029 6 loadbalancer.go:720] Dial legacy network between EMPTY_POD_NAME - {tcp EMPTY_NODE_NAME 10.210.31.12:6443}
^C
[root@server data-logs]#
@Poorunga Help please.
Environment (svc, pod):
- restarts: 178 (6m39s ago) @ edge node
- restarts: 99 (13m ago) @ cloud node
[root@server data-logs]# kubectl get svc -A -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default emqx-cloud-service NodePort 10.43.59.24 <none> 18083:30007/TCP,1883:30005/TCP 15d app=emqx-cloud
default emqx-service NodePort 10.43.15.18 <none> 18083:30009/TCP,1883:30025/TCP 19d app=emqx-test
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 36d <none>
default lb-svc-test ClusterIP 10.43.175.184 <none> 12345/TCP 9d app=edge-http-test
default speed-svc NodePort 10.43.215.24 <none> 80:30008/TCP,8080:30049/TCP 19d app=dpnginx-app
kube-system dashboard-metrics-scraper ClusterIP 10.43.122.96 <none> 8000/TCP 36d k8s-app=dashboard-metrics-scraper
kube-system kube-controller-manager-svc ClusterIP None <none> 10257/TCP 36d <none>
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 36d k8s-app=kube-dns
kube-system kube-scheduler-svc ClusterIP None <none> 10259/TCP 36d <none>
kube-system kubelet ClusterIP None <none> 10250/TCP,10255/TCP,4194/TCP 36d <none>
kube-system kubernetes-dashboard NodePort 10.43.59.168 <none> 443:30043/TCP,80:30020/TCP 36d k8s-app=kubernetes-dashboard
kube-system metrics-server ClusterIP 10.43.53.90 <none> 443/TCP 36d k8s-app=metrics-server
kubesphere-controls-system default-http-backend ClusterIP 10.43.148.1 <none> 80/TCP 36d app=kubesphere,component=kubesphere-router
kubesphere-monitoring-system kube-state-metrics ClusterIP None <none> 8443/TCP,9443/TCP 36d app.kubernetes.io/component=exporter,app.kubernetes.io/name=kube-state-metrics,app.kubernetes.io/part-of=kube-prometheus
kubesphere-monitoring-system node-exporter ClusterIP None <none> 9100/TCP 36d app.kubernetes.io/component=exporter,app.kubernetes.io/name=node-exporter,app.kubernetes.io/part-of=kube-prometheus
kubesphere-monitoring-system prometheus-k8s ClusterIP 10.43.4.199 <none> 9090/TCP,8080/TCP 36d app.kubernetes.io/component=prometheus,app.kubernetes.io/instance=k8s,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=kube-prometheus
kubesphere-monitoring-system prometheus-operated ClusterIP None <none> 9090/TCP 36d app.kubernetes.io/name=prometheus
kubesphere-monitoring-system prometheus-operator ClusterIP None <none> 8443/TCP 36d app.kubernetes.io/component=controller,app.kubernetes.io/name=prometheus-operator,app.kubernetes.io/part-of=kube-prometheus
kubesphere-system ks-apiserver NodePort 10.43.233.200 <none> 80:30000/TCP 36d app=ks-apiserver,tier=backend
kubesphere-system ks-console NodePort 10.43.234.184 <none> 80:30021/TCP 36d app=ks-console,tier=frontend
kubesphere-system ks-controller-manager ClusterIP 10.43.78.84 <none> 443/TCP 36d app=ks-controller-manager,tier=backend
[root@server data-logs]#
[root@server data-logs]#
[root@server data-logs]# kubectl -n kubeedge get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
edgemesh-agent-26l9h 1/1 Running 178 (6m39s ago) 24h 172.0.0.1 xxx02-172-29-50-53.kedge <none> <none>
edgemesh-agent-jvrql 1/1 Running 99 (13m ago) 25h 10.210.31.12 10.210.31.12 <none> <none>
[root@server data-logs]#
loadBalancer does not support PASSTHROUGH. I will fix the panic soon. In the meantime, you need to delete the DestinationRule that uses PASSTHROUGH.
Thanks a lot.
> loadBalancer does not support PASSTHROUGH. I will fix the panic soon.
@Poorunga May I ask how this is progressing? Or, if you could point me in the right direction, I can fix and verify it myself and submit a PR. Thanks!