
Making use of free memory to speed up parsing.

Yagyansh opened this issue 3 years ago · 4 comments

Hi. So, I am using wildcards to read the new log files that are written every 5 minutes, and each file has at least about 6 lakh (600,000) log lines. The problem is that parsing is quite slow and the buffer is not getting cleared, while the memory usage of grok_exporter is only about 3-4% of my system. I want grok_exporter to use more memory; I am okay with it consuming 80-85% of memory to increase the parsing speed. Because of the slow parsing I am not able to get real-time analysis from grok_exporter: even though new files are being picked up, it hasn't even finished the 1st file by the time the 3rd or 4th file has been written.


Config file:
global:
    config_version: 2
input:
    type: file
    path: /home/Service_Logs/elblogs/api/51*
    readall: true
    fail_on_missing_logfile: false
grok:
    patterns_dir: /home/Exporters/grok_exporter/patterns
metrics:
    - type: counter
      name: pattern_count
      help: Count for every pattern received, along-with Response Code and Method.
      match: '%{ALB}'
      labels:
          protocol: '{{.request_type}}'
          method: '{{.type}}'
          path: '{{.request_url}}'
          elb_code: '{{.elb_status_code}}'
          target_code: '{{.target_status_code}}'
server:
    host: localhost
    port: 9144

Is there any configuration that can be done to achieve this?

Yagyansh avatar Sep 27 '20 05:09 Yagyansh

Also, I have noticed that the speed decreases gradually. Parsing is very fast in the first 2-3 minutes (around 1 lakh / 100,000 lines in the first 1-1.5 minutes), but then it starts to drop (as low as some 20-30k lines per 1-1.5 minutes).

Yagyansh avatar Sep 27 '20 06:09 Yagyansh

grok_exporter should read log lines into an internal buffer as fast as it can. Processing takes log data from that buffer. Processing should not slow down the reading of new log data. If processing is slower than reading, the buffer will grow and consume more memory. The buffer load can be monitored with the built-in grok_exporter_line_buffer_peak_load metric.

You should not expect too much memory usage. For example, if you have 600,000 lines and each line has 120 bytes, the entire file should be under 70 MB. That's not much compared to typical available system memory.
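As a back-of-the-envelope calculation with those numbers:

600,000 lines × 120 bytes/line = 72,000,000 bytes ≈ 69 MiB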

Could you comment how grok_exporter_line_buffer_peak_load behaves over time? This should be a good indicator of the internal buffer usage.
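If you scrape grok_exporter with Prometheus, a minimal scrape job pointing at the server section of your config (localhost:9144) is enough to graph that metric over time. This is just a sketch of the Prometheus side (not part of the grok_exporter config), adapt it to your setup:

scrape_configs:
  - job_name: 'grok_exporter'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:9144']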

Moreover, processing speed should not decrease over time. The reason might be the readall: true configuration, because it might be that reading existing log data is faster than reading new log data. Please set readall: false and observe if the processing speed still decreases.
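For reference, only the input section of your config needs to change, everything else stays the same:

input:
    type: file
    path: /home/Service_Logs/elblogs/api/51*
    readall: false
    fail_on_missing_logfile: false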

fstab avatar Sep 27 '20 20:09 fstab

Hi. Sorry for the very late response. I am trying a new use case now. I have the last 1 day of data stored (S3 server access logs), around 60GB. I have given a wildcard to read all the files (10420 log files). I am using the default S3 access log grok pattern that ships in the patterns directory with grok_exporter. Here is the config file.

global:
  config_version: 3
input:
  type: file
  path: /home/yagyansh.kumar/S3Logs/*
  readall: true
  fail_on_missing_logfile: false
imports:
- type: grok_patterns
  dir: /home/yagyansh.kumar/patterns
metrics:
- type: counter
  name: pattern_count
  help: Count for every pattern received, along-with Response Code and Method.
  match: '%{S3_ACCESS_LOG}'
  labels:
    clientip: '{{.clientip}}'
    method: '{{.verb}}'
    uri: '{{.request}}'

Here is the behaviour of the buffer load in the last 15 minutes:

[screenshot: buffer load over the last 15 minutes]

The ingestion rate of logs into grok_exporter is gradually decreasing. It was on the order of tens of thousands of lines at the start and has gradually dropped to the order of thousands now.

The average processing time is increasing continuously. Am I doing something wrong here?

[screenshot: average processing time]

Oh, and by the way, I am running this on a machine with 128GB of RAM, and memory usage is constant at around 101GB. That is totally fine, but the increase in processing time is worrying.

Yagyansh avatar Dec 18 '20 13:12 Yagyansh

Okay, so I changed one thing here: instead of making grok_exporter read all 9 crore+ (90 million+) log lines and fill its buffer, I set up a cron job that adds 6000 lines at a time, each time grok_exporter finishes parsing the previously sent 6000 lines. Now the buffer stays small, but here is my problem: only 1% of memory is being used by grok_exporter while the buffer holds around 4000 lines. Why isn't grok_exporter using the free memory of my machine to clear the buffer faster? With this cron approach the processing time per log line should not increase, but it is still increasing steadily because the buffer (even though small) is not being cleared as fast as it should be.

Yagyansh avatar Dec 18 '20 18:12 Yagyansh