kubernetes-kargo-logging-monitoring
CrashLoopBackOff for fluentd pods
I have deployed a Kubernetes cluster without EFK enabled:
kubectl get node
NAME    STATUS   ROLES         AGE   VERSION
node1   Ready    master,node   5d    v1.9.2+coreos.0
node2   Ready    master,node   5d    v1.9.2+coreos.0
node3   Ready    node          5d    v1.9.2+coreos.0
node4   Ready    node          5d    v1.9.2+coreos.0
Then I installed EFK (kubectl apply -f logging).
The Elasticsearch problem I solved as described here.
But I still have a problem with the fluentd pods.
Their status changes between "CrashLoopBackOff", "Running", and "Completed" for reasons I can't determine:
[root@tdmkube-kube-master logging]# kubectl get pods -n logging|grep fluentd
fluentd-dwx7q 0/1 CrashLoopBackOff 4 2m
fluentd-qt9r7 0/1 CrashLoopBackOff 4 2m
fluentd-wfp56 0/1 CrashLoopBackOff 4 2m
fluentd-wj8wg 0/1 CrashLoopBackOff 3 1m
[root@tdmkube-kube-master logging]# kubectl get pods -n logging|grep fluentd
fluentd-dwx7q 0/1 CrashLoopBackOff 4 2m
fluentd-qt9r7 0/1 CrashLoopBackOff 4 2m
fluentd-wfp56 0/1 CrashLoopBackOff 4 2m
fluentd-wj8wg 0/1 Completed 4 1m
[root@tdmkube-kube-master logging]# kubectl get pods -n logging|grep fluentd
fluentd-dwx7q 0/1 CrashLoopBackOff 4 2m
fluentd-qt9r7 0/1 CrashLoopBackOff 4 2m
fluentd-wfp56 0/1 Completed 5 3m
fluentd-wj8wg 0/1 Completed 4 1m
[root@tdmkube-kube-master logging]# kubectl get pods -n logging|grep fluentd
fluentd-dwx7q 0/1 CrashLoopBackOff 4 3m
fluentd-qt9r7 0/1 CrashLoopBackOff 4 2m
fluentd-wfp56 0/1 Completed 5 3m
fluentd-wj8wg 0/1 CrashLoopBackOff 4 2m
[root@tdmkube-kube-master logging]# kubectl get pods -n logging|grep fluentd
fluentd-dwx7q 0/1 CrashLoopBackOff 5 3m
fluentd-qt9r7 0/1 CrashLoopBackOff 5 3m
fluentd-wfp56 0/1 CrashLoopBackOff 5 4m
fluentd-wj8wg 0/1 CrashLoopBackOff 4 2m
They restart continuously and keep changing status.
kubectl logs fluentd-wj8wg shows nothing (empty output)
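(For the record, the last output of the crashed container instance, if it wrote any, can also be requested with the --previous flag; a minimal example with the pod name from above:)

# Fetch logs of the previous, crashed container instance
kubectl logs fluentd-wj8wg -n logging --previous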
kubectl describe pod fluentd-wj8wg -n logging shows this:
Name: fluentd-wj8wg
Namespace: logging
Node: node2/10.28.79.148
Start Time: Wed, 07 Feb 2018 15:46:08 +0300
Labels: app=fluentd
controller-revision-hash=1676977040
pod-template-generation=1
Annotations:
Events:
Type     Reason                 Age               From            Message
Normal   SuccessfulMountVolume  7m                kubelet, node2  MountVolume.SetUp succeeded for volume "varlibdockercontainers"
Normal   SuccessfulMountVolume  7m                kubelet, node2  MountVolume.SetUp succeeded for volume "varlog"
Normal   SuccessfulMountVolume  7m                kubelet, node2  MountVolume.SetUp succeeded for volume "fluentd-conf"
Normal   SuccessfulMountVolume  7m                kubelet, node2  MountVolume.SetUp succeeded for volume "default-token-j2sfr"
Normal   Created                6m (x4 over 7m)   kubelet, node2  Created container
Normal   Started                6m (x4 over 7m)   kubelet, node2  Started container
Normal   Pulled                 5m (x5 over 7m)   kubelet, node2  Container image "gcr.io/google_containers/fluentd-elasticsearch:1.20" already present on machine
Warning  BackOff                1m (x24 over 7m)  kubelet, node2  Back-off restarting failed container
What is the problem here?
Did you ever figure out this issue? @tedam
Hi, I'm encountering exactly the same issue. Did anyone solve it?
I found this in the log file /var/log/fluentd.log:
2018-05-09 15:56:00 +0000 [error]: fluent/supervisor.rb:369:rescue in main_process: config error file="/etc/td-agent/td-agent.conf" error="Exception encountered fetching metadata from Kubernetes API endpoint: pods is forbidden: User "system:serviceaccount:logging:default" cannot list pods at the cluster scope"
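In other words, the default service account in the logging namespace isn't allowed to list pods cluster-wide, which fluentd needs in order to fetch pod metadata from the API. A quick way to confirm it (this should print "no" before the fix):

# Check the service account's permissions by impersonating it
kubectl auth can-i list pods --as=system:serviceaccount:logging:default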
I managed to start the fluentd DaemonSet by creating a ServiceAccount for EFK and adding:
serviceAccountName: efk
to fluentd-daemonset.yaml
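For reference, a minimal sketch of the objects involved; the ClusterRole/ClusterRoleBinding name (fluentd-read) and the exact rule list are my own assumptions, scoped to what the error message above complains about:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efk
  namespace: logging
---
# Cluster-wide read access to pod metadata (assumed minimal rules)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd-read
rules:
- apiGroups: [""]
  resources: ["pods", "namespaces"]
  verbs: ["get", "list", "watch"]
---
# Grant the ClusterRole to the efk ServiceAccount in the logging namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd-read
subjects:
- kind: ServiceAccount
  name: efk
  namespace: logging
EOF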
I hope it will help.