Fix for issue #9125: the gelf output plugin creates invalid GELF messages from JSON arrays parsed in the kubernetes filter when values contain characters that must be escaped.

Changed flb_sds_cat to flb_sds_cat_utf8 in the "else" case of flb_msgpack_gelf_value() in flb_pack_gelf.c. This produces valid JSON output when input arrays contain quotes. Fixes #9125.
Enter [N/A] in the box if an item is not applicable to your change.
Testing
Before we can approve your change, please submit the following in a comment:
- [x] Example configuration file for the change
- [x] Debug log output from testing the change
- [x] Attached Valgrind output that shows no leaks or memory corruption was found
If this is a change to packaging of containers or native binaries then please confirm it works for all targets.
- [N/A] Run local packaging test showing all targets (including any new ones) build.
- [N/A] Set `ok-package-test` label to test for all targets (requires maintainer to do).
Documentation
- [N/A] Documentation required for this feature
Backporting
- [ ] Backport to latest stable release.
Fluent Bit is licensed under Apache 2.0, by submitting this pull request I understand that this code will be released under the terms of that license.
Valgrind output:
$ valgrind bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf -R /etc/fluent-bit/parsers.conf
==2819754== Memcheck, a memory error detector
==2819754== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==2819754== Using Valgrind-3.18.1 and LibVEX; rerun with -h for copyright info
==2819754== Command: bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf -R /etc/fluent-bit/parsers.conf
==2819754==
Fluent Bit v3.1.4
* Copyright (C) 2015-2024 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io
______ _ _ ______ _ _ _____ __
| ___| | | | | ___ (_) | |____ |/ |
| |_ | |_ _ ___ _ __ | |_ | |_/ /_| |_ __ __ / /`| |
| _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / \ \ | |
| | | | |_| | __/ | | | |_ | |_/ / | |_ \ V /.___/ /_| |_
\_| |_|\__,_|\___|_| |_|\__| \____/|_|\__| \_/ \____(_)___/
[2024/07/24 19:11:59] [error] [parser] parser named 'apache' already exists, skip.
[2024/07/24 19:11:59] [ warn] [env] variable ${K8S_HTTP_BUF_LIMIT} is used but not set
[2024/07/24 19:11:59] [ info] Configuration:
[2024/07/24 19:11:59] [ info] flush time | 1.000000 seconds
[2024/07/24 19:11:59] [ info] grace | 5 seconds
[2024/07/24 19:11:59] [ info] daemon | 0
[2024/07/24 19:11:59] [ info] ___________
[2024/07/24 19:11:59] [ info] inputs:
[2024/07/24 19:11:59] [ info] tail
[2024/07/24 19:11:59] [ info] ___________
[2024/07/24 19:11:59] [ info] filters:
[2024/07/24 19:11:59] [ info] kubernetes.0
[2024/07/24 19:11:59] [ info] lua.1
[2024/07/24 19:11:59] [ info] lua.2
[2024/07/24 19:11:59] [ info] ___________
[2024/07/24 19:11:59] [ info] outputs:
[2024/07/24 19:11:59] [ info] gelf.0
[2024/07/24 19:11:59] [ info] stdout.1
[2024/07/24 19:11:59] [ info] ___________
[2024/07/24 19:11:59] [ info] collectors:
[2024/07/24 19:11:59] [ info] [fluent bit] version=3.1.4, commit=f6a9600b46, pid=2819754
[2024/07/24 19:11:59] [debug] [engine] coroutine stack size: 24576 bytes (24.0K)
[2024/07/24 19:11:59] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2024/07/24 19:11:59] [ info] [cmetrics] version=0.9.1
[2024/07/24 19:11:59] [ info] [ctraces ] version=0.5.2
[2024/07/24 19:11:59] [ info] [input:tail:tail.0] initializing
[2024/07/24 19:11:59] [ info] [input:tail:tail.0] storage_strategy='memory' (memory only)
[2024/07/24 19:11:59] [debug] [tail:tail.0] created event channels: read=21 write=22
[2024/07/24 19:11:59] [debug] [input:tail:tail.0] flb_tail_fs_inotify_init() initializing inotify tail input
[2024/07/24 19:11:59] [debug] [input:tail:tail.0] inotify watch fd=27
[2024/07/24 19:11:59] [debug] [input:tail:tail.0] scanning path /path/to/logs/logs/*.log
[2024/07/24 19:11:59] [debug] [input:tail:tail.0] file will be read in POSIX_FADV_DONTNEED mode /path/to/logs/logs/test.log
[2024/07/24 19:11:59] [debug] [input:tail:tail.0] inode=121378756 with offset=1869 appended as /path/to/logs/logs/test.log
[2024/07/24 19:11:59] [debug] [input:tail:tail.0] scan_glob add(): /path/to/logs/logs/test.log, inode 121378756
[2024/07/24 19:11:59] [debug] [input:tail:tail.0] file will be read in POSIX_FADV_DONTNEED mode /path/to/logs/logs/test2.log
[2024/07/24 19:11:59] [debug] [input:tail:tail.0] inode=121378760 with offset=5195 appended as /path/to/logs/logs/test2.log
[2024/07/24 19:11:59] [debug] [input:tail:tail.0] scan_glob add(): /path/to/logs/logs/test2.log, inode 121378760
[2024/07/24 19:11:59] [debug] [input:tail:tail.0] 2 new files found on path '/path/to/logs/logs/*.log'
[2024/07/24 19:11:59] [ info] [filter:kubernetes:kubernetes.0] https=1 host=kubernetes.default.svc port=443
[2024/07/24 19:11:59] [ warn] [filter:kubernetes:kubernetes.0] using Dummy Metadata
[2024/07/24 19:11:59] [debug] [gelf:gelf.0] created event channels: read=30 write=31
[2024/07/24 19:11:59] [debug] [stdout:stdout.1] created event channels: read=32 write=33
[2024/07/24 19:11:59] [trace] [router] input=tail.0 tag=tail.*
[2024/07/24 19:11:59] [debug] [router] match rule tail.0:gelf.0
[2024/07/24 19:11:59] [ info] [output:stdout:stdout.1] worker #0 started
[2024/07/24 19:11:59] [debug] [router] match rule tail.0:stdout.1
[2024/07/24 19:11:59] [ info] [sp] stream processor started
[2024/07/24 19:12:00] [debug] [input:tail:tail.0] inode=121378756 file=/path/to/logs/logs/test.log promote to TAIL_EVENT
[2024/07/24 19:12:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=121378756 watch_fd=1 name=/path/to/logs/logs/test.log
[2024/07/24 19:12:00] [debug] [input:tail:tail.0] inode=121378760 file=/path/to/logs/logs/test2.log promote to TAIL_EVENT
[2024/07/24 19:12:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=121378760 watch_fd=2 name=/path/to/logs/logs/test2.log
[2024/07/24 19:12:00] [debug] [input:tail:tail.0] [static files] processed 0b, done
[2024/07/24 19:12:08] [debug] [input:tail:tail.0] inode=121378760, /path/to/logs/logs/test2.log, events: IN_MODIFY
[2024/07/24 19:12:08] [trace] [input chunk] update output instances with new chunk size diff=324, records=1, input=tail.0
[2024/07/24 19:12:09] [trace] [task 0x5331e30] created (id=0)
[2024/07/24 19:12:09] [debug] [task] created task=0x5331e30 id=0 OK
[0] tail.home.scne59.OS.TelIT.Graylog.logs.test2.log: [[1720512679.507903575, {}], {"stream"=>"stderr", "log"=>"{"version": "1.0", "clients": ["\"a\"","'b'","c\n","d"], "string": "this is a \"test\"\n"}", "logtag"=>"F", "message"=>"(no message)", "data"=>{"string"=>"this is a "test"", "version"=>"1.0", "clients"=>[""a"", "'b'", "c
", "d"]}, "kubernetes"=>{"dummy"=>"Wed Jul 24 19:12:08 2024"}, "timestamp_message"=>"2024-07-09T08:11:19.507903521+00:00"}]
[2024/07/24 19:12:09] [debug] [output:stdout:stdout.1] task_id=0 assigned to thread #0
[2024/07/24 19:12:09] [trace] [upstream] get new connection for 127.0.0.1:1444, net setup:
net.connect_timeout = 10 seconds
net.source_address = any
net.keepalive = enabled
net.keepalive_idle_timeout = 30 seconds
net.max_worker_connections = 0
[2024/07/24 19:12:09] [debug] [out flush] cb_destroy coro_id=0
[2024/07/24 19:12:09] [trace] [coro] destroy coroutine=0x53479c0 data=0x53479e0
[2024/07/24 19:12:09] [trace] [net] connection #52 in process to 127.0.0.1:1444
[2024/07/24 19:12:09] [trace] [engine] resuming coroutine=0x5332160
[2024/07/24 19:12:09] [trace] [io] connection OK
[2024/07/24 19:12:09] [debug] [upstream] KA connection #52 to 127.0.0.1:1444 is connected
[2024/07/24 19:12:09] [trace] [io coro=0x5332160] [net_write] trying 448 bytes
[2024/07/24 19:12:09] [trace] [io coro=0x5332160] [fd 52] write_async(2)=448 (448/448)
[2024/07/24 19:12:09] [trace] [io coro=0x5332160] [net_write] ret=448 total=448/448
[2024/07/24 19:12:09] [debug] [upstream] KA connection #52 to 127.0.0.1:1444 is now available
[2024/07/24 19:12:09] [trace] [engine] [task event] task_id=0 out_id=1 return=OK
[2024/07/24 19:12:09] [trace] [engine] [task event] task_id=0 out_id=0 return=OK
[2024/07/24 19:12:09] [debug] [out flush] cb_destroy coro_id=0
[2024/07/24 19:12:09] [trace] [coro] destroy coroutine=0x5332160 data=0x5332180
[2024/07/24 19:12:09] [debug] [task] destroy task=0x5331e30 (task_id=0)
^C[2024/07/24 19:12:27] [engine] caught signal (SIGINT)
[2024/07/24 19:12:27] [trace] [engine] flush enqueued data
[2024/07/24 19:12:27] [ warn] [engine] service will shutdown in max 5 seconds
[2024/07/24 19:12:27] [ info] [input] pausing tail.0
[2024/07/24 19:12:27] [ info] [engine] service has stopped (0 pending tasks)
[2024/07/24 19:12:27] [ info] [input] pausing tail.0
[2024/07/24 19:12:27] [trace] [upstream] destroy connection #52 to 127.0.0.1:1444
[2024/07/24 19:12:27] [debug] [input:tail:tail.0] inode=121378756 removing file name /path/to/logs/logs/test.log
[2024/07/24 19:12:27] [ info] [output:stdout:stdout.1] thread worker #0 stopping...
[2024/07/24 19:12:27] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=121378756 watch_fd=1
[2024/07/24 19:12:27] [ info] [output:stdout:stdout.1] thread worker #0 stopped
==2819754==
==2819754== HEAP SUMMARY:
==2819754== in use at exit: 0 bytes in 0 blocks
==2819754== total heap usage: 7,069 allocs, 7,069 frees, 3,092,069 bytes allocated
==2819754==
==2819754== All heap blocks were freed -- no leaks are possible
==2819754==
==2819754== For lists of detected and suppressed errors, rerun with: -s
==2819754== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
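For reference, a minimal configuration matching the run above might look like the following. It is reconstructed from the debug log (tail input on `/path/to/logs/logs/*.log`, kubernetes filter with dummy metadata, gelf output to 127.0.0.1:1444, stdout output); the two lua filters and the exact parser are omitted or assumed, so treat this as a sketch rather than the tested configuration:

```
[SERVICE]
    Flush        1
    Log_Level    trace
    Parsers_File /etc/fluent-bit/parsers.conf

[INPUT]
    Name   tail
    Path   /path/to/logs/logs/*.log
    Tag    tail.*

[FILTER]
    Name       kubernetes
    Match      tail.*
    Dummy_Meta On

[OUTPUT]
    Name  gelf
    Match tail.*
    Host  127.0.0.1
    Port  1444
    Mode  tcp

[OUTPUT]
    Name  stdout
    Match tail.*
```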
Please sign off your commit (DCO error).
This PR is stale because it has been open 45 days with no activity. Remove stale label or comment or this will be closed in 10 days.
I'm facing this issue, too. I'd appreciate a fix very much.
Removed stale commits and signed off as required.
This PR is stale because it has been open 45 days with no activity. Remove stale label or comment or this will be closed in 10 days.