
Multiple harvesters started for the same log file path within a span of 7 seconds

Open vikas271019 opened this issue 1 year ago • 7 comments

We are using Filebeat 8.4.3. The problem we are facing is that multiple harvesters are being created for a single log file within a span of 7 seconds, as a result of which Filebeat emits duplicate logs.

For example, please find below the flow of a log line from the log producer to Filebeat.

Log producer microservice logs - test-controller

A single log line is printed in the log generator microservice's container logs, with message = "auto,1.64":

{"timestamp":"2024-05-10T05:54:51.350+02:00","service_id":"test-controller","message":"auto,1.64","metadata":{"category":"HA-in-service-performance","namespace":"test","pod_name":"test-controller-6f565dd5b9-qv2w2","application_id":"test884"},"severity":"info","version":"1.1.0","facility":"security/authorization messages","extra_data":{"test_stream":"dsp","in_service_performance":{"version": "1.0.0","originating_service_name": "test-controller","originating_service_version": "1.239.7-1","originating_pod_name": "test-controller-6f565dd5b9-qv2w2","event_type":"small-local-restart","reporting_service_version": "1.239.7-1"}}}

But in the configured output we see the same log twice, sometimes four times. While analysing the Filebeat logs we can see two harvesters being created. Please find the Filebeat logs below, which show the harvester being started twice for the same file path (test-controller-6f565dd5b9-qv2w2-d76e6f05091b6747ecac5660ab65b32c51368729a48640afbc86484cf3c05d15.log):

{"log.level":"info","@timestamp":"2024-05-10T04:56:47.874+0200","log.logger":"input.harvester","log.origin":{"file.name":"log/harvester.go","file.line":310},"message":"Harvester started for paths: [/var/log/containers/test-controller-6f565dd5b9-qv2w2-d76e6f05091b6747ecac5660ab65b32c51368729a48640afbc86484cf3c05d15.log]","service.name":"filebeat","input_id":"e271da1d-390e-48ac-be82-4be7d7e644bc","source_file":"/var/log/containers/test-controller-6f565dd5b9-qv2w2-d76e6f05091b6747ecac5660ab65b32c51368729a48640afbc86484cf3c05d15.log","state_id":"native::143563266-64515","finished":false,"os_id":"14266-64515","harvester_id":"e9f36c6a-d5c1-4ddc-958f-d254da5b6ea6","ecs.version":"1.6.0"} {"log.level":"info","@timestamp":"2024-05-10T04:56:54.160+0200","log.logger":"input.harvester","log.origin":{"file.name":"log/harvester.go","file.line":310},"message":"Harvester started for paths: [/var/log/containers/test-controller-6f565dd5b9-qv2w2-d76e6f05091b6747ecac5660ab65b32c51368729a48640afbc86484cf3c05d15.log]","service.name":"filebeat","input_id":"71aa7599-3723-4e6c-a3d2-f9ac0a1007ec","source_file":"/var/log/containers/test-controller-6f565dd5b9-qv2w2-d76e6f05091b6747ecac5660ab65b32c51368729a48640afbc86484cf3c05d15.log","state_id":"native::14266-64515","finished":false,"os_id":"14266-64515","harvester_id":"2e86644e-e9c7-4f68-b859-b225711f9220","ecs.version":"1.6.0"} {"log.level":"info","@timestamp":"2024-05-10T04:56:54.281+0200","log.logger":"input.harvester","log.origin":{"file.name":"log/harvester.go","file.line":337},"message":"Reader was closed. Closing.","service.name":"filebeat","input_id":"71aa7599-3723-4e6c-a3d2-f9ac0a1007ec","source_file":"/var/log/containers/test-controller-6f565dd5b9-qv2w2-d76e6f05091b6747ecac5660ab65b32c51368729a48640afbc86484cf3c05d15.log","state_id":"native::14266-64515","finished":false,"os_id":"143563266-64515","harvester_id":"2e86644e-e9c7-4f68-b859-b225711f9220","ecs.version":"1.6.0"}

Hence we wanted to understand: is this what causes the duplicate logs being sent to the configured output? Why are multiple harvesters being created, and is there any way to avoid it?

NOTE: there were no Filebeat restarts and no disturbances to the registry file.

vikas271019 avatar May 15 '24 07:05 vikas271019

@vikas271019 Please share your Filebeat configuration. And please enclose it in triple backticks (```) so the formatting is correctly preserved.

ycombinator avatar May 15 '24 21:05 ycombinator

Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)

elasticmachine avatar May 15 '24 21:05 elasticmachine

@vikas271019 Please share your Filebeat configuration. And please enclose it in triple backticks (```) so the formatting is correctly preserved. As requested, please find the logshipper config below.

Name:         test-log-shipper-cfg
Namespace:    abc
Labels:       app.kubernetes.io/instance=the-test-ab-controller
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=test-log-shipper
              app.kubernetes.io/version=19.1.0_16
Annotations:  testsson.com/product-name: Log Shipper
              testsson.com/product-number: C 201 1464
              testsson.com/product-revision: 19.1.0
              meta.helm.sh/release-name: the-test-ab-controller
              meta.helm.sh/release-namespace: abc

Data
====
filebeat.yml:
----

filebeat.autodiscover:
  providers:
  - type: kubernetes
    namespace: "abc"
    hints.enabled: false
    add_resource_metadata:
      deployment: false
      cronjob: false
      namespace:
        enabled: false
      node:
        enabled: false
    templates:
      - config:
        - type: container
          paths:
          - /var/log/containers/${data.kubernetes.pod.name}_${data.kubernetes.namespace}_${data.kubernetes.container.name}-${data.kubernetes.container.id}.log
    appenders:
      - type: config
        config:
          fields:
            logplane: "test-app-logs"
          fields_under_root: true
          close_timeout: "5m"
          ignore_older: "24h"
          clean_inactive: "25h"
          close_removed: false
          clean_removed: false
output.logstash:
  hosts: "test-log-transformer:5044"
  ssl.certificate_authorities: "${TRUSTED_INTERNAL_ROOT_CA_PATH}/ca.crt"
  ssl.certificate: "${LT_CLIENT_CERT_PATH}/${CERT}"
  ssl.key: "${LT_CLIENT_CERT_PATH}/${KEY}"
  ssl.verification_mode: "full"
  ssl.renegotiation: "freely"
  ssl.supported_protocols: ["TLSv1.2", "TLSv1.3"]
  ssl.cipher_suites: []
  bulk_max_size: 2048
  worker: 1
  pipelining: 0
  ttl: 30
filebeat.registry.flush: 5s
logging.level: "info"
logging.metrics.enabled: false
http.enabled: true
http.host: localhost
http.port: 5066

Events:

vikas271019 avatar May 16 '24 05:05 vikas271019
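For reference (an aside, not from the original reporter): the appender above sets close_timeout: "5m". Per the log input documentation, close_timeout stops a harvester after the given duration even if the file is still being written, and Filebeat starts a new harvester for the same file after the next scan (scan_frequency, 10s by default). A hedged sketch of that portion of the config, with the documented behaviour spelled out in comments (the values are the ones from the config above):

    # Sketch only: how the close/clean options above behave per the log input docs.
    close_timeout: "5m"     # harvester is force-closed every 5m even while the file grows;
                            # a new harvester (new harvester_id) starts after the next scan
    close_removed: false    # keep the harvester open even if the container log file is removed
    clean_removed: false    # keep registry state for removed files until clean_inactive expires

This restart cycle is expected behaviour and by itself should not resend already-acknowledged events, since the new harvester resumes from the offset stored in the registry.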

Hi Team, as requested I have shared the logshipper ConfigMap above. Kindly confirm whether setting ignore_older to 24h could be causing these duplicates.

vikas271019 avatar May 20 '24 11:05 vikas271019
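Not an authoritative answer, but for context: the log input documentation requires clean_inactive to be greater than ignore_older + scan_frequency; if registry state is cleaned while a file still matches the configured paths, the file is picked up again from the beginning, which does produce duplicates. The values in the shared config already satisfy the constraint, as this sketch illustrates (scan_frequency is shown only to make the inequality explicit; the config relies on the 10s default):

    # Documented constraint: clean_inactive > ignore_older + scan_frequency
    ignore_older: "24h"      # stop harvesting files not modified in the last 24h
    clean_inactive: "25h"    # remove registry state 25h after the last harvest (25h > 24h + 10s)
    # scan_frequency: 10s    # default value, listed here only for the comparison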

Hi all, please provide an update on this issue.

vikas271019 avatar May 22 '24 13:05 vikas271019

We did a lot of improvements to the K8s environment in the last versions. Would it be possible for you to upgrade to a more up-to-date version and confirm whether you are still facing the same problem?

pierrehilbert avatar May 22 '24 13:05 pierrehilbert
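For readers landing here later: one of the changes in recent 8.x releases is the filestream input, which supersedes the older log/container input and can identify files by a content fingerprint instead of inode and device numbers. A hedged sketch of what the autodiscover template from the config above might look like on filestream (key names are taken from the Filebeat documentation; the id and paths are illustrative, and filestream keeps its own registry state, so this is not a drop-in swap):

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          namespace: "abc"
          hints.enabled: false
          templates:
            - config:
                - type: filestream
                  # every filestream input needs a unique, stable id
                  id: container-${data.kubernetes.container.id}
                  paths:
                    - /var/log/containers/*-${data.kubernetes.container.id}.log
                  # parse the container log format and follow the /var/log/containers symlinks
                  parsers:
                    - container: ~
                  prospector.scanner.symlinks: true
                  # identify files by content fingerprint rather than inode/device
                  file_identity.fingerprint: ~
                  prospector.scanner.fingerprint.enabled: true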

We did a lot of improvements to the K8s environment in the last versions. Would it be possible for you to upgrade to a more up-to-date version and confirm whether you are still facing the same problem?

--Thanks for your reply. Since you specifically mention a lot of improvements to the K8s environment in the latest versions of Filebeat, kindly confirm which factors make you suspect this issue is related to the K8s environment.

vikas271019 avatar May 23 '24 05:05 vikas271019

We did a lot of improvements to the K8s environment in the last versions. Would it be possible for you to upgrade to a more up-to-date version and confirm whether you are still facing the same problem?

--Thanks for your reply. Since you specifically mention a lot of improvements to the K8s environment in the latest versions of Filebeat, kindly confirm which factors make you suspect this issue is related to the K8s environment.

Hi Team, kindly provide an update on the statement above.

vikas271019 avatar May 27 '24 14:05 vikas271019

Hi @pierrehilbert, we have upgraded to Filebeat version 8.12.1. Kindly confirm whether this version includes the latest K8s-related changes.

vikas271019 avatar Jun 03 '24 06:06 vikas271019

Hi! We just realized that we haven't looked into this issue in a while. We're sorry!

We're labeling this issue as Stale to make it hit our filters and make sure we get back to it as soon as possible. In the meantime, it'd be extremely helpful if you could take a look at it as well and confirm its relevance. A simple comment with a nice emoji will be enough :+1. Thank you for your contribution!

botelastic[bot] avatar Jun 03 '25 07:06 botelastic[bot]