fluent-bit-go
Panic occurs when storage.type filesystem
I got a Go panic when I set storage.type filesystem with Fluent Bit v1.4.2 and fluent-bit-go-s3 v0.7.0.
panic: reflect: call of reflect.Value.Index on int64 Value
goroutine 17 [running, locked to thread]:
reflect.Value.Index(0x7f1a281177e0, 0xc0002fb0c0, 0x86, 0x0, 0x0, 0xcd8, 0x1)
/usr/local/go/src/reflect/value.go:956 +0x1b1
github.com/cosmo0920/fluent-bit-go-s3/vendor/github.com/fluent/fluent-bit-go/output.GetRecord(0xc000302b30, 0x3, 0x3, 0xc0000ccd80, 0xcd8)
/go/src/github.com/cosmo0920/fluent-bit-go-s3/vendor/github.com/fluent/fluent-bit-go/output/decoder.go:80 +0x106
main.(*fluentPlugin).GetRecord(0x7f1a287fbda0, 0xc000302b30, 0x627, 0xc0005cae00, 0x6b0, 0x7f1a27ea4029)
/go/src/github.com/cosmo0920/fluent-bit-go-s3/out_s3.go:68 +0x2d
main.FLBPluginFlushCtx(0x0, 0x7f1a26ae10b2, 0x7f1900000f4e, 0x7f1a28d35460, 0x28)
/go/src/github.com/cosmo0920/fluent-bit-go-s3/out_s3.go:306 +0x109
main._cgoexpwrap_6036375e9d10_FLBPluginFlushCtx(0x0, 0x7f1a26ae10b2, 0xf4e, 0x7f1a28d35460, 0x0)
_cgo_gotypes.go:88 +0x49
This doesn't always happen and I don't know how to reproduce it reliably. It seems most likely to occur when Fluent Bit is repeatedly restarted.
https://github.com/fluent/fluent-bit-go/blob/master/output/decoder.go#L79-L81
slice := reflect.ValueOf(m)
if slice.Kind() != reflect.Slice {
    return -1, 0, nil
}
t := slice.Index(0).Interface()
data := slice.Index(1)
This is just a suggestion: it might be better to check that the decoded value is actually a slice, to be type safe.
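To illustrate why the guard matters: a standalone sketch of the failure mode, assuming a hypothetical decodeEntry helper shaped like output.GetRecord (which expects each msgpack entry to decode as a two-element slice of [timestamp, record]). Without the Kind check, calling reflect.Value.Index on a non-slice value panics with exactly the message seen in the traces above.

```go
package main

import (
	"fmt"
	"reflect"
)

// decodeEntry is a hypothetical helper mimicking the shape of
// output.GetRecord: m is expected to be a slice [timestamp, record].
// The Kind guard turns a malformed entry into a recoverable error
// instead of "panic: reflect: call of reflect.Value.Index on int64 Value".
func decodeEntry(m interface{}) (interface{}, interface{}, error) {
	slice := reflect.ValueOf(m)
	if slice.Kind() != reflect.Slice {
		return nil, nil, fmt.Errorf("expected slice, got %s", slice.Kind())
	}
	if slice.Len() < 2 {
		return nil, nil, fmt.Errorf("expected 2 elements, got %d", slice.Len())
	}
	return slice.Index(0).Interface(), slice.Index(1).Interface(), nil
}

func main() {
	// Well-formed entry: [timestamp, record].
	ts, rec, err := decodeEntry([]interface{}{
		uint64(1589000000),
		map[string]string{"log": "hello"},
	})
	fmt.Println(ts, rec, err)

	// Malformed entry, as in the stack traces above: an int64 instead of
	// a slice. The guard reports an error rather than panicking.
	_, _, err = decodeEntry(int64(42))
	fmt.Println(err)
}
```

The real fix in GetRecord would return its usual (-1, 0, nil) error triple instead; the point is only that the reflect.Slice check must happen before any call to slice.Index.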
Same here: amazon/aws-for-fluent-bit:2.3.1, Fluent Bit 1.4.5.
panic: reflect: call of reflect.Value.Index on int64 Value
goroutine 17 [running, locked to thread]:
reflect.Value.Index(0x7f500bc36c00, 0x1c000991550, 0x86, 0x0, 0x0, 0x1c0014d8840, 0x194)
/usr/local/go/src/reflect/value.go:942 +0x1d2
github.com/fluent/fluent-bit-go/output.GetRecord(0x1c000dccc70, 0x1c0000a9d80, 0x80, 0x1c0014d8840, 0xd2f269)
/go/pkg/mod/github.com/fluent/[email protected]/output/decoder.go:80 +0x10b
main.FLBPluginFlushCtx(0x0, 0x7f5011e7a098, 0x1c000018f68, 0x7f4ff594b920, 0x28)
/cloudwatch/fluent-bit-cloudwatch.go:134 +0x1c9
main._cgoexpwrap_47a790186e23_FLBPluginFlushCtx(0x0, 0x7f5011e7a098, 0x76732e7400018f68, 0x7f4ff594b920, 0xb96c61636f6c2e72)
_cgo_gotypes.go:88 +0x49
[SERVICE]
Parsers_File /fluent-bit/parsers/parsers.conf
Parsers_File /fluent-bit/etc/parsers_file.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port ${FLUENT_HTTP_PORT}
storage.path /var/log/fluentbit
storage.backlog.mem_limit 5M
[INPUT]
Name tail
Tag kube.*
Path /var/log/containers/*.log
Parser docker
DB /var/log/flb_kube.db
Mem_Buf_Limit 1MB
Skip_Long_Lines On
Refresh_Interval 10
Docker_Mode On
storage.type filesystem
[FILTER]
Name kubernetes
Match kube.*
Merge_Log On
Merge_Log_Key data
K8S-Logging.Parser On
K8S-Logging.Exclude On
Labels On
[OUTPUT]
Name cloudwatch
Match kube.*
region ${FLUENT_CLOUDWATCH_REGION}
log_group_name /eks/xxx/${FLUENT_CLOUDWATCH_ENV}/k8s
log_stream_prefix fluentbit-
@fluent/contributors
Assigned this to me because I am a member of this repo; a few folks on my team help maintain the AWS Go plugins. I'll make sure that I or one of them spends some time troubleshooting this issue this month.
Note: identical failure, but under completely different circumstances. Might shed some light: https://github.com/fluent/fluent-bit-go/issues/34