
I have installed Fluent Bit in my Kubernetes cluster, but I am still not getting any data in my Kibana dashboard. I have also checked the Fluent Bit pod logs and I am not seeing any errors.

Open kishorpacefin opened this issue 2 years ago • 14 comments

I have checked the Fluent Bit pods and I am not getting any errors. One thing I did notice: I see a successful connection to Kubernetes, but nothing about a connection to Elasticsearch.

Here are my details. fluent-bit.conf:

[SERVICE]
    Daemon Off
    Flush 1
    Log_Level info
    Parsers_File /fluent-bit/etc/parsers.conf
    Parsers_File /fluent-bit/etc/conf/custom_parsers.conf
    HTTP_Server On
    HTTP_Listen 0.0.0.0
    HTTP_Port 2020
    Health_Check On

[INPUT]
    Name tail
    Path /var/log/containers/*.log
    multiline.parser docker, cri
    Tag kube.*
    Mem_Buf_Limit 512MB
    Skip_Long_Lines On

[INPUT]
    Name systemd
    Tag host.*
    Systemd_Filter _SYSTEMD_UNIT=kubelet.service
    Read_From_Tail On

[FILTER]
    Name kubernetes
    Match kube.*
    Merge_Log On
    Keep_Log Off
    K8S-Logging.Parser On 
    K8S-Logging.Exclude On

[OUTPUT]
    Name            es
    Match           kube.*
    Host            my_app.es.centralindia.azure.elastic-cloud.com
    Port            443
    Logstash_Format On
    HTTP_User       elastic
    HTTP_Passwd     password
    Suppress_Type_Name On
    Retry_Limit     False
    tls             On
    tls.verify      Off
    Replace_Dots    On


[OUTPUT]
    Name            es
    Match           host.*
    Host            my_app.es.centralindia.azure.elastic-cloud.com
    Port            443
    Logstash_Format On
    HTTP_User       elastic
    HTTP_Passwd    password
    Suppress_Type_Name On
    Logstash_Prefix node
    Retry_Limit     False
    tls             On
    tls.verify      Off
    Replace_Dots    On                            

And here are the logs I am getting:
Fluent Bit v2.2.0

  • Copyright (C) 2015-2023 The Fluent Bit Authors
  • Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
  • https://fluentbit.io

[2023/11/21 08:56:59] [ info] [fluent bit] version=2.2.0, commit=db8487d123, pid=1 [2023/11/21 08:56:59] [ info] [storage] ver=1.5.1, type=memory, sync=normal, checksum=off, max_chunks_up=128 [2023/11/21 08:56:59] [ info] [cmetrics] version=0.6.4 [2023/11/21 08:56:59] [ info] [ctraces ] version=0.3.1 [2023/11/21 08:56:59] [ info] [input:tail:tail.0] initializing [2023/11/21 08:56:59] [ info] [input:tail:tail.0] storage_strategy='memory' (memory only) [2023/11/21 08:56:59] [ info] [input:tail:tail.0] multiline core started [2023/11/21 08:56:59] [ info] [input:systemd:systemd.1] initializing [2023/11/21 08:56:59] [ info] [input:systemd:systemd.1] storage_strategy='memory' (memory only) [2023/11/21 08:56:59] [ info] [filter:kubernetes:kubernetes.0] https=1 host=kubernetes.default.svc port=443 [2023/11/21 08:56:59] [ info] [filter:kubernetes:kubernetes.0] token updated [2023/11/21 08:56:59] [ info] [filter:kubernetes:kubernetes.0] local POD info OK [2023/11/21 08:56:59] [ info] [filter:kubernetes:kubernetes.0] testing connectivity with API server... [2023/11/21 08:56:59] [ info] [filter:kubernetes:kubernetes.0] connectivity OK [2023/11/21 08:56:59] [ info] [output:es:es.0] worker #1 started [2023/11/21 08:56:59] [ info] [output:es:es.0] worker #0 started [2023/11/21 08:56:59] [ info] [output:es:es.1] worker #0 started [2023/11/21 08:56:59] [ info] [output:es:es.1] worker #1 started [2023/11/21 08:56:59] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020 [2023/11/21 08:56:59] [ info] [sp] stream processor started [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1829472 watch_fd=1 name=/var/log/containers/argocd-redis-7d8d46cc7f-sx7x8_argocd_redis-e8de5686da06e1d7e478a7bb2f621c2c175c8b9f40e6f6355ef56bcf94d862c7.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4200490 watch_fd=2 name=/var/log/containers/argocd-server-5986f74c99-xs52l_argocd_argocd-server-9fda83cf12b5f7d96f8073759d2f188e88b01751de577f34dfa49eace8f25856.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1827648 watch_fd=3 name=/var/log/containers/calico-kube-controllers-7c8c45649-kdn9q_calico-system_calico-kube-controllers-493e2b61885f1e044b1e067f38c1cd81dbbd2c922d51e75d09192a4553397ba1.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4200818 watch_fd=4 name=/var/log/containers/calico-node-n7jn9_calico-system_calico-node-30536f2af8a8f745daad02a95f5dad31f80223a3b96b6d80d997580e4c0da013.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4200725 watch_fd=5 name=/var/log/containers/calico-node-n7jn9_calico-system_flexvol-driver-0551b09373ac01ff6ae2dbf22a749d0d32ff766a0ba2eb764660ed898447312a.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4200737 watch_fd=6 name=/var/log/containers/calico-node-n7jn9_calico-system_install-cni-c9c3d04c27d360fe0e18d8028a5d3ce278b735d679b038049b226eb05fc46fe9.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1827504 watch_fd=7 name=/var/log/containers/calico-typha-566f69464-7lxvj_calico-system_calico-typha-88c3c7bdab1ae9af0c72497fcc6b9230bf8320ab21dbef5dda096befaa74193a.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1806722 watch_fd=8 name=/var/log/containers/cloud-node-manager-729ct_kube-system_cloud-node-manager-d747502990e72749de6f4d47f97618988b846d60b7ffec58abe26c407bc51d07.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4200588 watch_fd=9 
name=/var/log/containers/cm-ftp-automation-84957cdf45-ngm6w_pocketful-dev_cm-ftp-automation-3211560af05b0ee7fc4450c46d332bef279641a2358220c4e2980cd5d79d8ea0.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1827681 watch_fd=10 name=/var/log/containers/coredns-76b9877f49-v8fgw_kube-system_coredns-3b65111c78f1b0766220a5d7e9bcc54c12b986895f635cc41d1e4e45a4e52708.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1829519 watch_fd=11 name=/var/log/containers/coredns-autoscaler-85f7d6b75d-896lh_kube-system_autoscaler-cd107e3cf9ae23313806d5b2ae9bd9d9d538a42bcd02914f99f08e0a33a9e508.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1806856 watch_fd=12 name=/var/log/containers/csi-azuredisk-node-gfg9t_kube-system_azuredisk-6340e691d437f3f76f7c35d7ea88989ece1fba5dac1d0f91b33453c84e9eb348.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1806724 watch_fd=13 name=/var/log/containers/csi-azuredisk-node-gfg9t_kube-system_liveness-probe-e397803789dd1aad24328dcfbfaccbc9e2ce0cac8579649f426abc345d4d026b.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1806820 watch_fd=14 name=/var/log/containers/csi-azuredisk-node-gfg9t_kube-system_node-driver-registrar-b47b5c4e35aa21503651231c10fbd055cad607cb19d71ff5f8fc9585095fbb33.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1806851 watch_fd=15 name=/var/log/containers/csi-azurefile-node-dv2c4_kube-system_azurefile-b3ce6864f6bef638b67e4c6cd060c6722f81e357fa8fc0aa7d4945afd2609efa.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1806704 watch_fd=16 name=/var/log/containers/csi-azurefile-node-dv2c4_kube-system_liveness-probe-7c306ae32bf275e33de7bc40513abeb3ec2a7a2dc779317c8b14264ad94fa610.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1806797 watch_fd=17 name=/var/log/containers/csi-azurefile-node-dv2c4_kube-system_node-driver-registrar-5aaedb51b220b3ddf5fc4cc7c9aeabe5c232dd3d02fe4b0fce7d7c05b3d40e2f.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4176370 watch_fd=18 name=/var/log/containers/digiowebhook-app-65f76567fb-xrpl8_pocketful-dev_digiowebhook-app-da220796e260a489f7ee0b3fd48f28e1f7b7199f8a1b1c850682dfe463b2c2cb.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1806569 watch_fd=19 name=/var/log/containers/ftp-websocket-7c56cd6b5-vbmn2_pocketful-dev_ftp-websocket-eda5486a0e710f75a28b044e85c3a23149773e8736ada51229fe2ce671d0e5c3.log [2023/11/21 08:56:59] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1827263 watch_fd=20 name=/var/log/containers/konnectivity-agent-5df89d895c-67wlg_kube-system_konnectivity-agent-a0abb1271cbe365225023f812672381227a12f6b1b75dc58bd3b4b18852dfed4.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1806701 watch_fd=21 name=/var/log/containers/kube-proxy-vzmzp_kube-system_kube-proxy-bootstrap-1ca65abbccf52436fc20ec1fe173214f9f1fd44cec48a9971855b0da5fc26aaf.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=3899259 watch_fd=22 name=/var/log/containers/notifications-app-5d4c774bc5-grnms_pocketful-dev_notifications-app-ec0286586ee186aab9c279cc403bcd315871d94d69c2fbbe434e11b04d6d4c8d.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4161973 watch_fd=23 
name=/var/log/containers/pace-rbac-app-65cbc94798-fbw29_pocketful-dev_pace-rbac-app-2fa007b24c9d6e4ce8c6de7b5eaeef7e292c49ffd9299f2768c2610381f3f44e.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4200513 watch_fd=24 name=/var/log/containers/pacejobs-app-887d4f8f6-vgggp_pocketful-dev_pacejobs-app-71f8bddfff4fa919eee629f9f170e04c63367cb7a617b98e68a4055f95b0f383.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4200653 watch_fd=25 name=/var/log/containers/pacejobs-app-887d4f8f6-vgggp_pocketful-dev_pacejobs-app-aac87bdebcba4f55188c3f90196fef067047f6d841d40afada231947425b203a.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=3884256 watch_fd=26 name=/var/log/containers/paper-trading-app-7c94599c58-ch876_pocketful-dev_paper-trading-app-058474c0d7d6d2517790586bacfb66f2d1078ac086d2cf4b8001779438c4a678.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4200558 watch_fd=27 name=/var/log/containers/payment-pocketful-app-866b7857db-bprcr_pocketful-dev_payment-pocketful-app-73e6439fe5310b8f20652d95a6b3ba5c5f93540ee3efabdb4e80e7c0232cc96b.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4200845 watch_fd=28 name=/var/log/containers/pocketful-landing-page-app-547b477d99-vmwj2_pocketful-dev_pocketful-landing-page-app-ffe0bfcab81257f3683e196ff65a051aec1b9469945c87d91b7793c196a15241.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4200532 watch_fd=29 name=/var/log/containers/pocketful-web-app-84d4b9b8f8-gvtl8_pocketful-dev_pocketful-web-app-9cc74a50f12b4e341298e909fa03d4aa2223f5c5b5df374938671b6f64e9eec9.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4200743 watch_fd=30 name=/var/log/containers/pocketful-web-app-84d4b9b8f8-jdtwc_pocketful-dev_pocketful-web-app-429b70e18c68985d4621a6c0e4cbcd0aa0f09279ebc0052a1bc5edce00367062.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4200772 watch_fd=31 name=/var/log/containers/pocketful-web-app-84d4b9b8f8-szjsz_pocketful-dev_pocketful-web-app-d7857d0c2a47177ac080ed12f14ca8aa843f390bab82d68ae96c4794b96066aa.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4190841 watch_fd=32 name=/var/log/containers/qr-util-app-6c49979d87-s89w9_pocketful-dev_qr-util-app-7c80fc27453e4523ab05a4afef1b387cee4f50e0555bf27a55fc75e1f9b6a67c.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4200624 watch_fd=33 name=/var/log/containers/redis-3a-6c8fdfd647-vcctq_pocketful-dev_redis-3a-6bee588addb767ef4a2f70f0a167f2a04eb0bbaa677c8ac824b340d68fae33b5.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4200793 watch_fd=34 name=/var/log/containers/space-app-b8966d597-2k26r_pocketful-dev_space-app-5d28448abb3cb749368354a65483f745bdff8f81de420a1d7a57edd4ce660447.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1807112 watch_fd=35 name=/var/log/containers/kube-proxy-vzmzp_kube-system_kube-proxy-5371bb2ed4e1a036f29465f091e98f0a67ef86124af43fd7f149f69c015aa9a7.log [2023/11/21 08:57:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=4200512 watch_fd=36 name=/var/log/containers/fluent-bit-wq5dc_kube-logging_fluent-bit-b5649877da26dce225f086ce9660855a2c41cba56b7ad73672cfcea66005606d.log

Here you can see there is no error in my logs, but I am still not receiving any logs in my Kibana.

I have checked my Elastic endpoint and it is accessible from Kubernetes:

Test-NetConnection -ComputerName my_app.es.centralindia.azure.elastic-cloud.com -Port 443

ComputerName     : my_app.es.centralindia.azure.elastic-cloud.com
RemoteAddress    : // here I just hide my IP
RemotePort       : 443
InterfaceAlias   : Wi-Fi
SourceAddress    : // hide my source IP
TcpTestSucceeded : True

kishorpacefin avatar Nov 21 '23 09:11 kishorpacefin

I hope those are not valid credentials for your Elasticsearch instance - if so please revoke immediately.

patrick-stephens avatar Nov 21 '23 10:11 patrick-stephens

@patrick-stephens yes, those are not my valid credentials; I put a wrong credential here. In my actual file I have the right credentials, and Elasticsearch is reachable from the Kubernetes cluster with them. Can you ignore the credentials and tell me how I can resolve this issue?

kishorpacefin avatar Nov 21 '23 10:11 kishorpacefin

Are you running Windows containers? I ask because of your usage of Test-NetConnection. If so, are the Fluent Bit pods on Linux?

Add a stdout output and verify it is showing actual log output in your FB pod.
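Something like this, for example (Match is a wildcard here; narrow it to your tags if you prefer):

[OUTPUT]
    Name  stdout
    Match *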

I would suggest running a fluent/fluent-bit:2.2.0-debug image and shelling into it with exec then testing connection from there with curl or similar. That will ensure it is not a certs, DNS or other networking issue with the actual Fluent Bit pod.
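For example, something along these lines (the pod name is taken from your log output and will change whenever the pod restarts; a shell is only available in the -debug image):

kubectl exec -it -n kube-logging fluent-bit-wq5dc -- sh
# inside the container, test the Elasticsearch endpoint with your real credentials
curl -v -u elastic:<password> https://my_app.es.centralindia.azure.elastic-cloud.com:443/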

After that you can enable debug logging (log_level debug) and tracing (trace_X on) for Elasticsearch to diagnose further: https://docs.fluentbit.io/manual/pipeline/outputs/elasticsearch
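Roughly, something like this (option names as per the docs linked above):

# add to the existing [SERVICE] section
Log_Level debug

# add to each es [OUTPUT] section
Trace_Output On
Trace_Error  On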

patrick-stephens avatar Nov 21 '23 10:11 patrick-stephens

Actually, I am using Windows OS on my machine; that is why I am checking with Test-NetConnection.

And I am using the latest image:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: kube-logging
  labels:
    helm.sh/chart: fluent-bit-0.40.0
    app.kubernetes.io/name: fluent-bit
    app.kubernetes.io/instance: fluent-bit
    app.kubernetes.io/version: "2.2.0"
    app.kubernetes.io/managed-by: Helm
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: fluent-bit
      app.kubernetes.io/instance: fluent-bit
  template:
    metadata:
      labels:
        app.kubernetes.io/name: fluent-bit
        app.kubernetes.io/instance: fluent-bit
      annotations:
        checksum/config: 0e4b82a5a898bb1862b44f9c66bba94060bd524b2a62ea84ea6dbfed995a5636
    spec:
      serviceAccountName: fluent-bit
      hostNetwork: false
      dnsPolicy: ClusterFirst
      containers:
        - name: fluent-bit
          image: "cr.fluentbit.io/fluent/fluent-bit:2.2.0"
          imagePullPolicy: Always
          command:
            - /fluent-bit/bin/fluent-bit
          args:
            - --workdir=/fluent-bit/etc
            - --config=/fluent-bit/etc/conf/fluent-bit.conf
          ports:
            - name: http
              containerPort: 2020
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /api/v1/health
              port: http
          volumeMounts:
            - name: config
              mountPath: /fluent-bit/etc/conf
            - mountPath: /var/log
              name: varlog
            - mountPath: /var/lib/docker/containers
              name: varlibdockercontainers
              readOnly: true
            - mountPath: /etc/machine-id
              name: etcmachineid
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: fluent-bit
        - hostPath:
            path: /var/log
          name: varlog
        - hostPath:
            path: /var/lib/docker/containers
          name: varlibdockercontainers
        - hostPath:
            path: /etc/machine-id
            type: File
          name: etcmachineid

Here you can see this.

Here is my curl command (I have redacted the actual password):

curl -u elastic:password -v https://pocketful-new.es.centralindia.azure.elastic-cloud.com/

  • Trying 20.204.247.135:443...
  • Connected to pocketful-new.es.centralindia.azure.elastic-cloud.com (20.204.247.135) port 443
  • schannel: disabled automatic use of client certificate
  • ALPN: curl offers http/1.1
  • ALPN: server accepted http/1.1
  • using HTTP/1.1
  • Server auth using Basic with user 'elastic'

GET / HTTP/1.1
Host: pocketful-new.es.centralindia.azure.elastic-cloud.com
Authorization: Basic <redacted>
User-Agent: curl/8.4.0
Accept: */*

  • schannel: remote party requests renegotiation
  • schannel: renegotiating SSL/TLS connection
  • schannel: SSL/TLS connection renegotiated

< HTTP/1.1 200 OK
< Content-Length: 565
< Content-Type: application/json
< X-Cloud-Request-Id: by0HJP9sTpuBchY1mkgphQ
< X-Elastic-Product: Elasticsearch
< X-Found-Handling-Cluster: 8529b3cd192b4853bc121be3f4af109e
< X-Found-Handling-Instance: instance-0000000003
< Date: Tue, 21 Nov 2023 11:04:38 GMT
<
{
  "name" : "instance-0000000003",
  "cluster_name" : "8529b3cd192b4853bc121be3f4af109e",
  "cluster_uuid" : "6tlKHhRBQ7Wdh3U52EmojQ",
  "version" : {
    "number" : "8.11.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "6f9ff581fbcde658e6f69d6ce03050f060d1fd0c",
    "build_date" : "2023-11-11T10:05:59.421038163Z",
    "build_snapshot" : false,
    "lucene_version" : "9.8.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
  • Connection #0 to host pocketful-new.es.centralindia.azure.elastic-cloud.com left intact

Here you can see it successfully connects to my Elasticsearch, but I am still not able to see any logs in Kibana.

kishorpacefin avatar Nov 21 '23 11:11 kishorpacefin

Well, you're not using a Windows container, which is my point: the difference in OS may mean networking is not working.

image: "cr.fluentbit.io/fluent/fluent-bit:2.2.0"

You're also mounting a Linux path for the container logs.

Did you test from inside the Fluent Bit container itself? Use a debug image to shell in and test as I say. It doesn't really matter what your host is doing, you need to verify what the container sees from networking perspective.

patrick-stephens avatar Nov 21 '23 11:11 patrick-stephens

So you are saying I need to go inside the container and test from there?

Can you tell me briefly (each and every step) how I can do the testing from the container, so that I can test it?

kishorpacefin avatar Nov 21 '23 11:11 kishorpacefin

https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/ shows how to use kubectl exec (and plenty of other docs on it) - make sure you switch your deployment to the -debug image otherwise there is no shell and it will fail: https://github.com/fluent/fluent-bit/issues/8207#issuecomment-1820687715

Then you can use curl (again, well documented online) to check your endpoint.
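A rough sketch (untested; adjust the namespace, DaemonSet and container names to match your deployment, and swap the image back afterwards):

kubectl -n kube-logging set image daemonset/fluent-bit fluent-bit=fluent/fluent-bit:2.2.0-debug
kubectl -n kube-logging rollout status daemonset/fluent-bit
# then open a shell in one of the restarted pods and curl your endpoint from there
kubectl -n kube-logging exec -it <fluent-bit-pod-name> -- sh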

patrick-stephens avatar Nov 21 '23 14:11 patrick-stephens

Hello @patrick-stephens, I tried this. I used the -debug image, got the shell, and curled my Elasticsearch server, and it was successful. However, log_level debug gives more information on another issue: in my case, the debug logs show ...skip (invalid) entry=/var/log/containers/... messages when trying to read the log files of the pods. It skips them all and then prints the message 0 new files found on path '/var/log/containers/*.log'. Do you have any idea what could be causing the skipping?

MalchielUrias avatar Nov 22 '23 10:11 MalchielUrias

Check the logs and see: you can access the path from the debug image. Does the path exist and is it accessible? Are there files, are they accessible, and do they contain readable data?
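For example, from a shell in the debug container (the file name below is just a placeholder):

ls -la /var/log/containers/
# pick one of the skipped files and check whether its data is actually readable
tail -n 5 /var/log/containers/<some-container>.log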

Add a stdout output and verify it is showing actual log output in your FB pod.

This is also why I said to use stdout earlier rather than trying to debug via a large stack on top: it would show you the data it is reading directly (or, as in this case, show nothing, which explains why there is no data downstream).

patrick-stephens avatar Nov 22 '23 10:11 patrick-stephens

Looking at the code: https://github.com/fluent/fluent-bit/blob/2555bd45d2060c519fb278640da737ab2f31f7ea/plugins/in_tail/tail_scan_glob.c#L267

This shows the reason for skipping is that it is not a regular file: https://github.com/fluent/fluent-bit/blob/2555bd45d2060c519fb278640da737ab2f31f7ea/plugins/in_tail/tail_scan_glob.c#L234-L235C47

Is it a symlink, and is the destination of the symlink also mounted? It may be a dangling link. This happens fairly often with pod log -> container symlinks where only one side is mounted, so the link points to nothing.

Something is up with those files anyway: they're not being reported as regular files to Fluent Bit, so it must be something in your deployment (this is outside of FB's control, it just asks for the file and gets told it is invalid).
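One quick way to check from inside the (debug) container, assuming find is available in the image (ls -la also shows the link targets):

# with -L, find follows links, so anything still reported as type "l" is a dangling symlink
find -L /var/log/containers -type l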

patrick-stephens avatar Nov 22 '23 12:11 patrick-stephens

Thank you so much @patrick-stephens. I have been able to sort out the issue.

@kishorpacefin I hope this helps with your issue as well. The official Helm template creates a DaemonSet with volume mounts for /var/log/containers/.

However, the files in /var/log/containers/ are symlinks to files in /var/log/pods/.

The files in /var/log/pods/ are also symlinks to files in your $DOCKER_HOME/containers

So you need to add another volume mount that matches $DOCKER_HOME/containers, which in my case was /var/lib/containerd/container_logs since I was using containerd.

You can find the directory your symlinks are pointing to by running find . -type l -ls, which shows you the symlinks in your folder. Run this command on your node machine.

After you add the volume mount and the volume, everything should be alright.
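For reference, the extra volume and mount I added looked roughly like this (the hostPath depends on your container runtime and nodes; /var/lib/containerd/container_logs is simply what the find command above showed in my case, and the volume name is just illustrative):

# added to the fluent-bit container's volumeMounts:
- mountPath: /var/lib/containerd/container_logs
  name: containerlogs
  readOnly: true

# added to the pod's volumes:
- name: containerlogs
  hostPath:
    path: /var/lib/containerd/container_logs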

Read through this for more info: https://github.com/fluent/fluent-bit/issues/2676#issuecomment-709675214

MalchielUrias avatar Nov 22 '23 15:11 MalchielUrias

@MalchielUrias what are the steps he told you? My Fluent Bit pod uses the image cr.fluentbit.io/fluent/fluent-bit:2.2.0 and I am not able to get a shell into that container, so I am going inside a container using this command:

kubectl run -i --tty --rm debug-pod -n kube-logging --image=fluent/fluent-bit:2.2.0-debug -- /bin/sh

Then, inside this container, when I test with curl I get this result:

curl -u "elastic:password" -k "https://pocketful-new.es.centralindia.azure.elastic-cloud.com/"

{
  "name" : "instance-0000000001",
  "cluster_name" : "8529b3cd192b4853bc121be3f4af109e",
  "cluster_uuid" : "6tlKHhRBQ7Wdh3U52EmojQ",
  "version" : {
    "number" : "8.11.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "6f9ff581fbcde658e6f69d6ce03050f060d1fd0c",
    "build_date" : "2023-11-11T10:05:59.421038163Z",
    "build_snapshot" : false,
    "lucene_version" : "9.8.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

Can you tell me each step you did to solve this issue? My main issue is that my Fluent Bit is not able to send data to Elasticsearch, but inside my Fluent Bit pod I am not getting any error.

So what are the steps you followed? Can you explain them properly so I can apply them?

"Use a debug image to shell in and test as I say. It doesn't really matter what your host is doing, you need to verify what the container sees from a networking perspective." (Can you explain what you have done from this stage onwards?)

kishorpacefin avatar Nov 22 '23 15:11 kishorpacefin

Check your host logs: do you have symlinks from /var/log/containers to somewhere else? Check that the pod spec mounts both the source and the destination. It's not an FB issue in that case, it's just a misconfiguration in your pod spec.

More recently, container runtimes have been using symlinks for kubelet logs, Docker in particular.

patrick-stephens avatar Nov 22 '23 17:11 patrick-stephens

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days. Maintainers can add the exempt-stale label.

github-actions[bot] avatar Feb 21 '24 01:02 github-actions[bot]

This issue was closed because it has been stalled for 5 days with no activity.

github-actions[bot] avatar Feb 27 '24 01:02 github-actions[bot]