kubernetes-kargo-logging-monitoring
Unable to mount volumes for pod logging/elasticsearch
Hey there,
Thanks for putting all this together!! Was exactly what I was looking for!
Originally, I used kubespray's efk_enabled flag (btw, you may want to do a "replace all" here, after the recent kargo → kubespray rename), just as you suggest in section 2 of the README. That worked fine, but:
a. I had an issue with the KIBANA_BASE_URL which I probably need to raise there,
b. they're still using the old 2.4.x versions of ES/Kibana
So, I wanted to give your kubectl apply -f logging approach a go, but I ran into an issue with the PVC you have there.
Here's the error message:
Unable to mount volumes for pod "elasticsearch-1832401789-f41vb_logging(5cddff81-75fa-11e7-ba5a-0019994e86b3)": timeout expired waiting for volumes to attach/mount for pod "logging"/"elasticsearch-1832401789-f41vb". list of unattached/unmounted volumes=[es-data]
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "logging"/"elasticsearch-1832401789-f41vb". list of unattached/unmounted volumes=[es-data]
Was I supposed to have set up some dynamic volume provisioning for this to work?
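For anyone hitting the same timeout, the quickest way to see whether the claim ever bound is something like this (a sketch, assuming a claim named es-pv-claim in the logging namespace; substitute whatever name the manifest uses):
kubectl get pvc -n logging                    # a Pending claim means nothing could satisfy it
kubectl describe pvc es-pv-claim -n logging   # the Events explain why, e.g. no PV or no provisioner
kubectl get pv                                # pre-created volumes that could back the claim
kubectl get storageclass                      # check whether a dynamic provisioner is configured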
Hi @gsaslis, you're welcome! Seems like your volume claim didn't work out.
- In the file https://github.com/gregbkr/kubernetes-kargo-logging-monitoring/blob/master/logging/elasticsearch-deployment.yaml, try commenting out the volumeMounts and volumes sections and running elasticsearch without persistence.
- Check your volume with kubectl get pv or kubectl get pvc, and try to add it as a separate file:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-pv-claim
  labels:
    app: elasticsearch
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
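Apply it and make sure the claim actually binds before re-enabling the volume in the deployment (es-pvc.yaml is just an example file name; the claim stays Pending unless a PV or a dynamic provisioner can satisfy the 2Gi request):
kubectl apply -f es-pvc.yaml -n logging   # the file containing the claim above
kubectl get pvc es-pv-claim -n logging    # STATUS should be Bound, not Pending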
Good luck!
Same issue here. There is no PV created for the PVC in the playbooks.
@gsaslis @Techs-Y Did the tip above help fix the PV/PVC issue for you?
@selvik I think my problem back then was that I didn't have dynamic provisioning set up, so I ended up having to manually add the StorageClass myself.
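Roughly something like this, for the record (a sketch from an AWS setup; the provisioner and parameters depend on your cloud, and the PVC then needs storageClassName: standard under spec):
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs   # assumption: AWS with EBS-backed volumes
parameters:
  type: gp2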
@gregbkr do you think it would make sense adding an example like this to your repo?
@gsaslis : sure, please make a pull request with the documentation addition, I will merge it. I don't have the environment to test at the moment, sorry if I couldn't help much. Thank you for your help!
Hello,
I too ran into the same issue (volumes failed to mount), but as you suggested I commented out the volume section of the elasticsearch-deployment.yaml file.
# kubectl apply -f logging
worked fine. Got access to the Kibana dashboard and ES on :30200.
From the Kibana dashboard I am unable to do this (as you suggested):
Check logs coming in Kibana; you just need to refresh, select Time-field name: @timestamp + create
What if my cluster is on the cloud, then how would I load this file (Management > Saved Objects > Import > logging/dashboards/elk-v1.json)?
Any hints on this?
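One workaround that should work for a cloud cluster: tunnel the Kibana UI to your machine with kubectl port-forward and do the import through the browser as usual. A sketch, assuming the pods carry an app=kibana label in the logging namespace (adjust both to the actual manifests):
kubectl get pods -n logging -l app=kibana          # find the kibana pod name
kubectl port-forward <kibana-pod> -n logging 5601:5601
# then browse http://localhost:5601 and use Management > Saved Objects > Import
# to upload logging/dashboards/elk-v1.json from your local checkout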
I moved on to next step: Monitoring
First thing: there are two folders in your repo named monitoring and monitoring2. What's the difference?
When I ran this command:
# kubectl apply -f monitoring
I got an error related to the node-exporter image: the manifest references an image that is no longer available.
I updated it to image: node-exporter:v0.15.2 and it worked.
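For reference, the same fix can be applied without editing the manifest, using kubectl set image (a sketch; it assumes node-exporter runs as a DaemonSet of that name, with a container of that name, in the monitoring namespace, and that you want the upstream prom/node-exporter build):
kubectl set image daemonset/node-exporter node-exporter=prom/node-exporter:v0.15.2 -n monitoring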
When I try to access the Grafana page there are no logs, and more surprising to me, the fluentd pods are not running and are failing with CrashLoopBackOff.
Running kubectl describe on a fluentd pod I got this message:
ERROR: Back-off restarting failed container
I don't know what is happening.
Can anyone suggest?
Thanks in advance!
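For the CrashLoopBackOff, the usual first step is to read the logs of the container that crashed rather than only the pod events; a sketch, assuming the fluentd pods sit in the logging namespace (the app=fluentd selector is a guess, match it to the fluentd manifest):
kubectl get pods -n logging -l app=fluentd        # list the fluentd pods
kubectl logs <fluentd-pod> -n logging --previous  # output of the crashed container
kubectl describe pod <fluentd-pod> -n logging     # Events show the restart/backoff history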