
[EKS] [request]: EKS fargate logging by aws-for-fluent-bit issues

Open Jason-AUS opened this issue 2 years ago • 9 comments

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Tell us about your request

  • support modify, nest filters after kubernetes filter in aws-for-fluent-bit logger filters.conf file
  • support pipeline in es output plugin

Which service(s) is this request for?

EKS Fargate aws-for-fluent-bit

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?

Hi there, I'm running a CronJob on EKS Fargate and using the aws-for-fluent-bit log router to send the job logs to an Elasticsearch cluster.

After trying many combinations of filters in the aws-for-fluent-bit log router, as illustrated here and here,

I found that if a modify or nest filter is placed after the kubernetes filter in the filters.conf file, aws-for-fluent-bit will not start;

modify and nest filters can only be placed before the kubernetes filter.

Here is my aws-logging ConfigMap, including the filters.conf file:

kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  flb_log_cw: "true" #ships fluent-bit process logs to CloudWatch

  parsers.conf: |
    [PARSER]
        Name crio
        Format Regex
        Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>P|F) (?<log>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z

  filters.conf: |
    [FILTER]
        Name parser
        Match kube.*
        Key_name log
        Parser crio

    [FILTER]
        Name modify
        Match kube.*
        Copy log message

    [FILTER]
        Name modify
        Match kube.*
        Condition Key_does_not_exist level
        Condition Key_value_matches stream stderr
        Add level error

    [FILTER]
        Name modify
        Match kube.*
        Condition Key_does_not_exist level
        Condition Key_value_matches stream stdout
        Add level info

    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Buffer_Size 0
        Kube_Meta_Cache_TTL 300s

  output.conf: |
    [OUTPUT]
        Name es
        Match  kube.*
        Host eshost
        Port esport
        HTTP_User username
        HTTP_Passwd password
        Index fluentbit-%Y-%m-%d
        tls On
        tls.verify Off

In the previous filters.conf, I want to add an extra nest filter after the kubernetes filter to lift the Kubernetes metadata fields to the root level:

    [FILTER]
        Name         nest
        Match        *
        Operation    lift
        Nested_under kubernetes
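
For context, the lift operation promotes the keys nested under kubernetes to the record root; with purely illustrative field values, the transformation looks roughly like this:

    # before the nest/lift filter (illustrative record)
    {"log": "job finished", "kubernetes": {"pod_name": "my-cronjob-abcde", "namespace_name": "default"}}

    # after the nest/lift filter
    {"log": "job finished", "pod_name": "my-cronjob-abcde", "namespace_name": "default"}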

Unfortunately, after I added the filter, aws-for-fluent-bit simply doesn't start, and I cannot find any logs from Fluent Bit in CloudWatch.

The second issue is that the Elasticsearch output plugin does not support the Pipeline parameter, even though it is defined in the Fluent Bit documentation here.
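
For reference, a minimal sketch of the output section I would like to be able to use, based on the Pipeline option documented for the upstream Fluent Bit es plugin (host, credentials, and pipeline name are placeholders):

    [OUTPUT]
        Name es
        Match kube.*
        Host eshost
        Port esport
        HTTP_User username
        HTTP_Passwd password
        Index fluentbit-%Y-%m-%d
        # Pipeline targets an Elasticsearch ingest pipeline; documented upstream but not accepted here
        Pipeline my-ingest-pipeline
        tls On
        tls.verify Off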

Are you currently working around this issue? How are you currently solving this problem?

The problem is not solved yet.

Additional context Anything else we should know?

Attachments If you think you might have additional information that you'd like to include via an attachment, please do - we'll take a look. (Remember to remove any personally-identifiable information.)

Jason-AUS avatar Jan 14 '22 21:01 Jason-AUS

Thanks for this info @Jason-AUS and I apologize for the issues using other filters (namely, nest and modify) with the newly supported kubernetes filter. Let me have the team look into this and I'll get back to you shortly.

akestner avatar Jan 21 '22 20:01 akestner

No problem, thank you Alex.

Looking forward to hearing from you soon.

Kind regards,

Jason

Jason-AUS avatar Jan 22 '22 06:01 Jason-AUS

Is there any workaround for this at the moment?

Fodoj avatar Mar 25 '22 19:03 Fodoj

@Fodoj It seems the EKS team is working on this issue, but it is not solved yet. Another possible workaround might be to run a native Fluent Bit sidecar container in the same Fargate pod (see the sketch below), which I think is not a good option.
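
A rough sketch of that sidecar workaround, assuming the application writes its logs to a shared emptyDir volume instead of relying on the Fargate log router (image names, paths, and the ConfigMap name are placeholders; in practice this would be the pod template of the CronJob):

    apiVersion: v1
    kind: Pod
    metadata:
      name: job-pod-with-log-sidecar
    spec:
      containers:
        - name: app
          image: my-app:latest              # the job workload; writes logs to /var/log/app
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app
        - name: fluent-bit
          image: fluent/fluent-bit:1.9      # plain upstream Fluent Bit, not the Fargate log router
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app
              readOnly: true
            - name: fluent-bit-config
              mountPath: /fluent-bit/etc/   # custom fluent-bit.conf with a tail input and es output
      volumes:
        - name: app-logs
          emptyDir: {}
        - name: fluent-bit-config
          configMap:
            name: fluent-bit-sidecar-config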

Jason-AUS avatar Mar 26 '22 00:03 Jason-AUS

Hello, any progress here? We want to add additional fields based on the Kubernetes annotations, but because of that bug it's currently impossible.

comutrex avatar Apr 19 '22 14:04 comutrex

@akestner any updates on this issue ??

Pixis-Akshay-Gopani avatar Jun 20 '22 17:06 Pixis-Akshay-Gopani

I was having the same issue with any filter that I added after the Kubernetes one. Adding filters before the Kubernetes one works, though. Does anyone know which Fluent Bit version AWS is using for Fargate?

gugacavalieri avatar Jun 29 '22 17:06 gugacavalieri

Fargate validates against the following supported filters: grep, parser, record_modifier, rewrite_tag, throttle, nest, modify, and kubernetes.

https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html

PettitWesley avatar Jul 22 '22 16:07 PettitWesley

We have the same issue when trying to send to Splunk HEC. The link above makes it sound like Fargate logging only works when going to CloudWatch or Kinesis.
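
For context, this is roughly the output section we are trying to use, based on the upstream Fluent Bit splunk plugin (host, port, and token are placeholders):

    [OUTPUT]
        Name splunk
        Match kube.*
        Host splunk-hec.example.com
        Port 8088
        Splunk_Token hec-token-placeholder
        tls On
        tls.verify Off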

toiye avatar Jul 27 '22 23:07 toiye

Closing this issue as we recently finished patching all EKS clusters using Fargate to resolve the issue described here caused by additional, nested filters after the Kubernetes filter. If you were impacted, please restart the pods that ran into this issue to pick up the changes.

akestner avatar Aug 30 '22 00:08 akestner