Fluentd filter parser plugin bug
Describe the bug
Hi!
When I create a filter in Fluentd with the parser plugin, the "fluentd-config" Secret that is created doesn't contain the values correctly.
To Reproduce
My manifests:
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: Fluentd
metadata:
  name: fluentd
  namespace: kubesphere-logging-system
  labels:
    app.kubernetes.io/name: fluentd
spec:
  globalInputs:
    - forward:
        bind: 0.0.0.0
        port: 24224
  replicas: 3
  workers: 3
  image: kubesphere/fluentd:v1.14.4
  fluentdCfgSelector:
    matchLabels:
      config.fluentd.fluent.io/enabled: "true"
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: FluentdConfig
metadata:
  name: fluentd-config
  namespace: default
  labels:
    config.fluentd.fluent.io/enabled: "true"
spec:
  filterSelector:
    matchLabels:
      filter.fluentd.fluent.io/enabled: "true"
      filter.fluentd.fluent.io/app: ingress-nginx
  watchedContainers:
    - controller
  outputSelector:
    matchLabels:
      output.fluentd.fluent.io/enabled: "true"
      output.fluentd.fluent.io/app: ingress-nginx
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: Filter
metadata:
  name: fluentd-filter-json-parser
  namespace: default
  labels:
    filter.fluentd.fluent.io/enabled: "true"
    filter.fluentd.fluent.io/app: ingress-nginx
spec:
  filters:
    - parser:
        keyName: log
        parse:
          type: json
          logLevel: debug
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: Output
metadata:
  name: fluentd-output-es
  namespace: default
  labels:
    output.fluentd.fluent.io/enabled: "true"
    output.fluentd.fluent.io/app: ingress-nginx
spec:
  outputs:
    - elasticsearch:
        host: elasticsearch-logging-data.kubesphere-logging-system.svc
        port: 9200
        logstashFormat: true
        logstashPrefix: ks-ingress-log
        user:
          valueFrom:
            secretKeyRef:
              name: fluent-secret
              key: user
        password:
          valueFrom:
            secretKeyRef:
              name: fluent-secret
              key: password
Filter section of app.conf in the generated fluentd-config Secret:
...
<filter **>
  @id FluentdConfig-ingress-nginx-fluentd-config::ingress-nginx::filter::fluentd-filter-json-parser-0
  @type parser
  key_name 0xc000bde310
  <parse>
    @log_level debug
    @type json
  </parse>
</filter>
...
Expected behavior
...
<filter **>
  @id FluentdConfig-ingress-nginx-fluentd-config::ingress-nginx::filter::fluentd-filter-json-parser-0
  @type parser
  key_name log
  <parse>
    @log_level debug
    @type json
  </parse>
</filter>
...
Your Environment
- Fluent Operator version: release-1.0
- Container Runtime: docker
How did you install fluent operator?
With the YAML manifest: https://raw.githubusercontent.com/fluent/fluent-operator/release-1.0/manifests/setup/setup.yaml
Your Error Log
2022-04-27 15:46:38 +0000 [warn]: #0 dump an error event: error_class=ArgumentError error="0xc000b26450 does not exist" location=nil tag="kube.var.log.containers.ingress-nginx-controller-controller-75c5fbdc88-55b4s_ingress-nginx_controller-ecbb24b557da5a4565da2ab71f346389be3ce903f3ae2f8abe99d682377725a1.log"
Can this condition be reproduced? Can you show other configurations?
Apologies @wenchajun, I have added more configuration details.
I tested it a few times and found that if the configuration is not cluster-wide, you should add the namespace you want the configuration to belong to.
Also, you should install the new fluent operator version v1.0.2, which includes fixes for the fluentd config reload.
Hi! I've added the namespace to the configuration details.
I seem to be hitting the same bug. key_name 0xc000bde310 looks like a Go pointer address that is being printed instead of the string value it points to.
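For illustration, here is a minimal Go sketch of that failure mode (hypothetical struct and field names, not the operator's actual code): rendering a *string field with fmt's %v verb prints the pointer's hex address rather than the string it points to, which would produce exactly this kind of key_name value.

package main

import "fmt"

// Hypothetical parameter struct; optional CRD fields are commonly
// modeled as pointers in Kubernetes Go types.
type parserParams struct {
	KeyName *string
}

func main() {
	key := "log"
	p := parserParams{KeyName: &key}

	// Buggy rendering: %v on a *string prints the address,
	// e.g. "key_name 0xc000bde310".
	fmt.Printf("key_name %v\n", p.KeyName)

	// Correct rendering: nil-check, then dereference to get "key_name log".
	if p.KeyName != nil {
		fmt.Printf("key_name %s\n", *p.KeyName)
	}
}

If that is the root cause, the fix would be to dereference pointer-typed fields (with nil checks) wherever the operator serializes plugin parameters into app.conf.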
The same issue occurs with fluent operator v1.7 and the fluentd container v1.14.6.
I think I've run into this issue. I have two Fluentd ClusterFilters running:
apiVersion: fluentd.fluent.io/v1alpha1
kind: ClusterFilter
metadata:
  name: de-dot
  labels:
    filter.fluentd.fluent.io/enabled: "true"
    filter.fluentd.fluent.io/tenant: "core"
spec:
  filters:
    - customPlugin:
        config: |
          <filter **>
            @type dedot
            de_dot_separator _
            de_dot_nested ${FLUENTD_DEDOT_NESTED:=true}
          </filter>
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: ClusterFilter
metadata:
  name: parser-filter
  labels:
    filter.fluentd.fluent.io/enabled: "true"
    filter.fluentd.fluent.io/tenant: "core"
spec:
  filters:
    - customPlugin:
        config: |
          <filter **>
            @type parser
            key_name log
            reserve_data true
            <parse>
              @type json
            </parse>
          </filter>
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: ClusterFluentdConfig
metadata:
  labels:
    config.fluentd.fluent.io/enabled: "true"
  name: cluster-fluentd-config
spec:
  clusterFilterSelector:
    matchLabels:
      filter.fluentd.fluent.io/enabled: "true"
      filter.fluentd.fluent.io/tenant: "core"
  clusterOutputSelector:
    matchLabels:
      output.fluentd.fluent.io/enabled: "true"
      output.fluentd.fluent.io/tenant: "core"
  watchedNamespaces: ${FLUENT_WATCHED_NAMESPACES}
But only the de-dot one was added to the app.conf:
<ROOT>
  <system>
    rpc_endpoint "127.0.0.1:24444"
    log_level info
    workers 1
  </system>
  <source>
    @type forward
    bind "0.0.0.0"
    port 24224
  </source>
  <match **>
    @id main
    @type label_router
    <route>
      @label "@33b5ad9c15abdec648ede544d80f80f5"
      <match>
        namespaces ...
      </match>
    </route>
  </match>
  <label @33b5ad9c15abdec648ede544d80f80f5>
    <filter **>
      @type dedot
      de_dot_separator "_"
      de_dot_nested true
    </filter>
    <match **>
      @type opensearch
      host "XXXX"
      port 443
      logstash_format true
      logstash_prefix "logs-XXX-core"
      scheme https
      log_os_400_reason true
      @log_level "info"
      <buffer>
        @type "memory"
        path /buffers/opensearch/XXX-core
        flush_mode interval
        flush_interval 60s
        flush_thread_count 2
        flush_at_shutdown true
        retry_type exponential_backoff
        retry_max_times 10
        retry_wait 1s
        retry_max_interval 60s
        chunk_limit_size 8MB
        total_limit_size 512MB
        overflow_action throw_exception
        compress text
      </buffer>
      <endpoint>
        url https://XXXX
        region "us-west-2"
        assume_role_arn "XXXX"
        assume_role_web_identity_token_file "/var/run/secrets/eks.amazonaws.com/serviceaccount/token"
      </endpoint>
    </match>
  </label>
  <match **>
    @type null
    @id main-no-output
  </match>
  <label @FLUENT_LOG>
    <match fluent.*>
      @type null
      @id main-fluentd-log
    </match>
  </label>
</ROOT>
@kaiohenricunha Is it possible for you to find the root cause and create a PR for this?