elasticsearch_exporter
I'm not familiar with the ES exporter data, but there is a gap problem: the collected metrics show gaps.
My exporter configuration:
- --es.uri=http://elasticsearch-logging.kube-system:9200
- --es.all
- --es.indices
- --es.timeout=30s
- --es.clusterinfo.interval=30s
- --es.shards
- --es.snapshots
- --web.listen-address=:9108
- --web.telemetry-path=/metrics
ES version: 6.6.1
ES is deployed in Kubernetes.
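For context, this is roughly how those flags would sit in the exporter's container spec in a Kubernetes Deployment. This is only a sketch; the container name and image are placeholders, and only the flags listed above are assumed.

```yaml
# Hypothetical Deployment excerpt; container name and image are placeholders.
containers:
  - name: elasticsearch-exporter
    image: <elasticsearch-exporter-image>
    args:
      - --es.uri=http://elasticsearch-logging.kube-system:9200
      - --es.timeout=30s
      # ... remaining flags exactly as listed above ...
      - --web.listen-address=:9108
    ports:
      - containerPort: 9108   # matches --web.listen-address
        name: metrics
```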
I don't know what is causing this.
Hi @guleng, I am running into the same issue. Were you able to resolve it?
Hello,
I have the same issue. Any news on this, or any advice on how to troubleshoot it? The exporter logs look fine, and there are no issues with the stability of the Elasticsearch cluster.
level=info ts=2021-12-09T09:11:45.262705759Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T09:16:45.262761608Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T09:21:45.262684745Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T09:26:45.262707773Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T09:31:45.262754214Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T09:36:45.262657195Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T09:41:45.262754009Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T09:46:45.262706589Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T09:51:45.262658594Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T09:56:45.262689506Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T10:01:45.262625854Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T10:06:45.26276671Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T10:11:45.262754474Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T10:16:45.262677078Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T10:21:45.262632685Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T10:26:45.262713592Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T10:31:45.262894646Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
level=info ts=2021-12-09T10:36:45.262632434Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
In case anyone is interested, it seems I found a solution: just increase the timeout parameter of the exporter. The default value is 5s, which appears to be insufficient when the cluster is under heavy load. I increased it to 30s and everything works fine :)
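As a minimal sketch of that change, assuming the exporter runs with container args like those shown earlier in this thread (only the `--es.timeout` flag is adjusted):

```yaml
# Hypothetical args snippet: only the timeout flag changes, other flags stay as before.
args:
  - --es.uri=http://elasticsearch-logging.kube-system:9200
  - --es.timeout=30s   # default is 5s; raised so slow responses under load don't time out
```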
That would make sense to me. I think the missing data must be related to the metrics not being scraped in a timely manner by Prometheus (for example, when a scrape times out).
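A minimal sketch of the Prometheus side, assuming the exporter is scraped as a static target; the job name and target address are placeholders, and the point is that the scrape timeout should comfortably cover the exporter's `--es.timeout`:

```yaml
# Hypothetical scrape job; the target port matches --web.listen-address=:9108 above.
scrape_configs:
  - job_name: elasticsearch-exporter
    scrape_interval: 60s
    scrape_timeout: 45s   # should exceed the exporter's --es.timeout, or slow scrapes are cut off and gaps appear
    static_configs:
      - targets: ['elasticsearch-exporter.kube-system:9108']
```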