fluent-plugin-prometheus

metric value not coming

prasenforu opened this issue on Dec 13, 2018 · 0 comments

I want to capture strings from the Prometheus logs and send them as metrics to Prometheus.

The Prometheus logs are as below:

level=info ts=2018-12-13T01:22:34.889490476Z caller=main.go:491 msg="Server is ready to receive web requests."
level=info ts=2018-12-13T01:22:43.606719182Z caller=compact.go:393 component=tsdb msg="compact blocks" count=1 mint=1544623200000 maxt=1544630400000
level=info ts=2018-12-13T01:22:44.867840961Z caller=head.go:348 component=tsdb msg="head GC completed" duration=146.85719ms
level=info ts=2018-12-13T01:22:45.048967385Z caller=head.go:357 component=tsdb msg="WAL truncation completed" duration=180.821539ms
level=info ts=2018-12-13T01:22:45.096764245Z caller=compact.go:393 component=tsdb msg="compact blocks" count=1 mint=1544630400000 maxt=1544637600000
level=info ts=2018-12-13T01:22:45.159380212Z caller=head.go:348 component=tsdb msg="head GC completed" duration=1.044081ms
level=info ts=2018-12-13T01:22:45.164312043Z caller=head.go:357 component=tsdb msg="WAL truncation completed" duration=4.840133ms
level=info ts=2018-12-13T01:22:45.187817457Z caller=compact.go:393 component=tsdb msg="compact blocks" count=1 mint=1544637600000 maxt=1544644800000
level=info ts=2018-12-13T01:22:45.221853437Z caller=head.go:348 component=tsdb msg="head GC completed" duration=1.164832ms
level=info ts=2018-12-13T01:22:45.225348238Z caller=head.go:357 component=tsdb msg="WAL truncation completed" duration=3.380607ms
level=info ts=2018-12-13T01:22:45.243914874Z caller=compact.go:393 component=tsdb msg="compact blocks" count=1 mint=1544644800000 maxt=1544652000000
level=info ts=2018-12-13T01:22:45.274733718Z caller=head.go:348 component=tsdb msg="head GC completed" duration=997.842µs
level=info ts=2018-12-13T01:22:45.278033979Z caller=head.go:357 component=tsdb msg="WAL truncation completed" duration=3.204118ms
level=info ts=2018-12-13T01:22:45.297322384Z caller=compact.go:393 component=tsdb msg="compact blocks" count=1 mint=1544652000000 maxt=1544659200000
level=info ts=2018-12-13T01:22:45.327136611Z caller=head.go:348 component=tsdb msg="head GC completed" duration=1.000777ms
level=info ts=2018-12-13T01:22:45.330860266Z caller=head.go:357 component=tsdb msg="WAL truncation completed" duration=3.64782ms
level=info ts=2018-12-13T03:00:00.562411676Z caller=compact.go:393 component=tsdb msg="compact blocks" count=1 mint=1544659200000 maxt=1544666400000
level=info ts=2018-12-13T03:00:01.033471567Z caller=head.go:348 component=tsdb msg="head GC completed" duration=32.736958ms
level=info ts=2018-12-13T03:00:01.037832792Z caller=head.go:357 component=tsdb msg="WAL truncation completed" duration=4.224238ms

Now I want to capture the string "WAL truncation completed" and count how many times it appears in the Prometheus logs.

Along with your plugin I am using the fluent-plugin-datacounter plugin.
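For reference, I tail /var/log/containers/*.log with format json (see the config below), so my understanding is that each line becomes a record with the usual Docker json-file keys log, stream and time (plus whatever the kubernetes_metadata filter adds). A hand-written illustration of such a record, not actual output from my cluster:

{"log":"level=info ts=2018-12-13T01:22:45.278033979Z caller=head.go:357 component=tsdb msg=\"WAL truncation completed\" duration=3.204118ms\n","stream":"stderr","time":"2018-12-13T01:22:45.278033979Z"}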

My config file is as below:

    # Do not collect fluentd's own logs, to avoid an infinite loop.
    <match fluent.**>
      @type null
    </match>
    # input plugin that exports metrics
    <source>
      @type prometheus
      bind 0.0.0.0
      port 24231
      metrics_path /metrics
    </source>
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      tag kubernetes.*
      format json
      read_from_head true
    </source>
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>
    # Drop the logs from other namespaces
    <match kubernetes.var.log.containers.**fluentd**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.**kube-system**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.**default**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.**openshift-infra**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.prometheus-0_openshift-metrics_prometheus-node-exporter**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.prometheus-0_openshift-metrics_alert**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.prometheus-0_openshift-metrics_fluentd**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.prometheus-0_openshift-metrics_prom-proxy**.log>
      @type null
    </match>

    <match kubernetes.var.log.containers.prometheus-0_openshift-metrics_prometheus-**.log>
      @type datacounter
      tag prom.log.counter
      count_interval 10
      aggregate all
      count_key msg
      pattern1 msg ^2\d\d$
      pattern2 compact compact
    </match>

    <filter prom.log.counter>
      @type prometheus
      <metric>
        name prom_log_counter_compact
        type counter
        desc prom log counter compact
        key compact_count
        <labels>
           host ${hostname}
        </labels>
      </metric>
      <metric>
        name prom_log_counter_wal
        type counter
        desc prom log counter wal
        key msg_count
        <labels>
           host ${hostname}
        </labels>
      </metric>
    </filter>
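If I understand fluent-plugin-datacounter correctly, with aggregate all it should emit one record tagged prom.log.counter every count_interval (10s), with keys named after the patterns, roughly like this (a hand-written sketch based on the plugin's README, not real output from my pods; I left out the _rate and _percentage fields):

{"unmatched_count":0,"msg_count":0,"compact_count":3}

The prometheus filter above is then supposed to read the metric values from the compact_count and msg_count keys of that record.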

The metrics come out as below:

[root@masterb PK]# curl http://10.130.0.218:24231/metrics
# TYPE prom_log_counter_compact counter
# HELP prom_log_counter_compact prom log counter compact
prom_log_counter_compact{host="fluentd-ztk6x"} 0.0
# TYPE prom_log_counter_wal counter
# HELP prom_log_counter_wal prom log counter wal
prom_log_counter_wal{host="fluentd-ztk6x"} 0.0

So the metrics are exposed, but the values stay at 0.0.

Please help ....

Logs from fluentd container ...

2018-12-13 06:03:53 +0000 [info]: #0 following tail of /var/log/containers/prometheus-0_openshift-metrics_alerts-proxy-f22d4108d41f820918b2761cbe68976c8b56052e62848246c771f5bf29b3815d.log
2018-12-13 06:03:53 +0000 [info]: #0 following tail of /var/log/containers/prometheus-0_openshift-metrics_alert-buffer-a7989a84ed4ab1085c2b70aa0ea53f299aeca537cac23054d912c3c23a811848.log
2018-12-13 06:03:53 +0000 [info]: #0 following tail of /var/log/containers/prometheus-0_openshift-metrics_alertmanager-proxy-cf15cd2d92b2267506ba9dbe64c835126b8e628f69b0195daa41fc651f09ac4d.log
2018-12-13 06:03:53 +0000 [info]: #0 following tail of /var/log/containers/prometheus-0_openshift-metrics_alertmanager-8fe323033b66b078d04a31540cb8ec673b76d8f5fa62207fca34dcf8ba0eb312.log
2018-12-13 06:03:53 +0000 [info]: #0 following tail of /var/log/containers/fluentd-cd6j2_openshift-metrics_fluentd-935cc54c5993386daa737284211daf17e7efec4fb43b9366dcfc751fdfb17729.log
2018-12-13 06:03:53 +0000 [info]: #0 fluentd worker is now running worker=0
2018-12-13 06:04:03 +0000 [warn]: #0 no patterns matched tag="prom.log.counter"
2018-12-13 06:04:13 +0000 [warn]: #0 no patterns matched tag="prom.log.counter"
2018-12-13 06:04:24 +0000 [info]: #0 stats - namespace_cache_size: 3, pod_cache_size: 4, namespace_cache_api_updates: 4, pod_cache_api_updates: 4, id_cache_miss: 4
2018-12-13 06:04:33 +0000 [warn]: #0 no patterns matched tag="prom.log.counter"
2018-12-13 06:04:54 +0000 [info]: #0 stats - namespace_cache_size: 3, pod_cache_size: 4, namespace_cache_api_updates: 4, pod_cache_api_updates: 4, id_cache_miss: 4
2018-12-13 06:05:13 +0000 [warn]: #0 no patterns matched tag="prom.log.counter"
2018-12-13 06:05:24 +0000 [info]: #0 stats - namespace_cache_size: 3, pod_cache_size: 4, pod_cache_watch_misses: 2, namespace_cache_api_updates: 4, pod_cache_api_updates: 4, id_cache_miss: 4
2018-12-13 06:05:54 +0000 [info]: #0 stats - namespace_cache_size: 3, pod_cache_size: 4, pod_cache_watch_misses: 3, namespace_cache_api_updates: 4, pod_cache_api_updates: 4, id_cache_miss: 4
2018-12-13 06:06:24 +0000 [info]: #0 stats - namespace_cache_size: 3, pod_cache_size: 4, pod_cache_watch_misses: 3, namespace_cache_api_updates: 4, pod_cache_api_updates: 4, id_cache_miss: 4
2018-12-13 06:06:33 +0000 [warn]: #0 no patterns matched tag="prom.log.counter"
2018-12-13 06:06:54 +0000 [info]: #0 stats - namespace_cache_size: 3, pod_cache_size: 4, pod_cache_watch_misses: 3, namespace_cache_api_updates: 4, pod_cache_api_updates: 4, id_cache_miss: 4
2018-12-13 06:07:24 +0000 [info]: #0 stats - namespace_cache_size: 3, pod_cache_size: 4, pod_cache_watch_misses: 3, namespace_cache_api_updates: 4, pod_cache_api_updates: 4, id_cache_miss: 4
2018-12-13 06:07:54 +0000 [info]: #0 stats - namespace_cache_size: 3, pod_cache_size: 5, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5, pod_cache_watch_misses: 3

prasenforu · Dec 13 '18