[fluentd-elasticsearch] Incorrect handling of very long log entries (>16K characters)
Describe the bug My application logs very long messages (more than 16K characters) in a single entry. The problem is that such entries are split into two separate records in Elasticsearch/Kibana.
Example Docker logs:
{"log":"09:25:23.626 very_long_message_that_is_cut_after_16_k_characters...","stream":"stdout","time":"2021-11-25T09:25:23.629585122Z"}
{"log":"09:25:23.629 rest_of_very_long_message\n","stream":"stdout","time":"2021-11-25T09:25:23.629585122Z"}
I assume the problem is related to the concat plugin configuration: https://github.com/kokuwaio/helm-charts/blob/main/charts/fluentd-elasticsearch/templates/configmaps.yaml#L158
The problem seems to be fixed when I change key message to key log in https://github.com/kokuwaio/helm-charts/blob/43fde9d95ba7e9f69a979fdeaeab4cc1badb3835/charts/fluentd-elasticsearch/templates/configmaps.yaml#L161.
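For reference, the concat filter in the chart's configmap looks roughly like the sketch below (field values approximated from the linked configmap, not copied verbatim). With key message, concat joins on a field that does not yet exist at this point in the pipeline; changing it to key log makes it join the raw Docker log field, which is where the partial >16K lines actually live:

```
<filter **>
  @id filter_concat
  @type concat
  # was: key message  -- the raw Docker JSON record only has a "log" field here
  key log
  multiline_end_regexp /\n$/
  separator ""
</filter>
```

This matches the Docker json-file behavior shown in the example above: only the final fragment of a split line ends with \n, so multiline_end_regexp /\n$/ is what tells concat where a logical entry ends.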
Version of Helm and Kubernetes:
Helm Version: 3.7.1
Kubernetes Version: 1.19
Which version of the chart: 13.1.0
What happened: A long log entry is split into two records in Elasticsearch/Kibana.
What you expected to happen: A long log entry should be stored as a single record in Elasticsearch/Kibana.
How to reproduce it (as minimally and precisely as possible): Run a container that generates a very long log entry, at least 16K characters.
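A minimal reproduction sketch: any program that prints a single line longer than Docker's 16KiB (16384-byte) per-line buffer will do, since the json-file logging driver splits such a line into multiple JSON records. For example, run this inside any pod:

```python
# Emit one log line longer than Docker's 16 KiB per-line buffer (16384 bytes).
# The container runtime will split it into two (or more) JSON records in the
# container log file, which is the input that fluentd's concat filter must rejoin.
line = "x" * 20000  # 20000 > 16384, so the runtime splits this line
print(line, flush=True)
```

After running it, the split can be confirmed by inspecting the container's log file on the node, or by searching for the line in Kibana and checking whether it arrives as one record or two.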