Upgrading the Splunk helm chart from 1.4.9 to 1.4.10 fails
What happened:
We have been using splunk-connect-for-kubernetes with the 1.4.7 tag for a while and wanted to update to the latest release, 1.4.13. After the helm chart upgrade, logs stopped being pushed to Splunk Enterprise 8.1.4.
I then took each upgrade one version at a time and found that it fails when going from 1.4.9 to 1.4.10.
I enabled debug logging, and the logs do not report any issue.
What you expected to happen: The chart upgrade should succeed and logs should continue to flow to Splunk.
How to reproduce it (as minimally and precisely as possible):
- Install via the Splunk helm chart, tag 1.4.9
- Log format type: cri
- Splunk Enterprise version: 8.1.4
- Upgrade the chart to tag 1.4.10
Anything else we need to know?:
helm upgrade --install my-splunk splunk/helm-chart/splunk-connect-for-kubernetes/charts/splunk-kubernetes-logging \
--namespace splunk \
--values splunk/helm-chart/splunk-connect-for-kubernetes/charts/splunk-kubernetes-logging/values.yaml \
--set podSecurityPolicy.create=true \
--set podSecurityPolicy.apiGroup=policy \
--set podSecurityPolicy.apparmor_security=false \
--set containers.logFormatType="cri" \
--set global.logLevel="debug" \
--wait --timeout=15m
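
For reference, the rollout can be verified with kubectl after the upgrade (a minimal sketch; the `app` label selector is an assumption based on the chart name, adjust it to your release):

```bash
# Hypothetical verification steps for the release installed above.
kubectl -n splunk get pods -l app=splunk-kubernetes-logging   # all DaemonSet pods should be Running
kubectl -n splunk logs -l app=splunk-kubernetes-logging --tail=50
```

The chart values in effect for the release: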
affinity: {}
buffer:
  '@type': memory
  chunk_limit_records: 100000
  chunk_limit_size: 20m
  flush_interval: 5s
  flush_thread_count: 1
  overflow_action: block
  retry_max_times: 5
  retry_type: periodic
  total_limit_size: 600m
charEncodingUtf8: false
containers:
  logFormat: null
  logFormatType: cri
  path: /var/log
  pathDest: /var/lib/docker/containers
  refreshInterval: null
  removeBlankEvents: true
customFilters: {}
customMetadata: null
customMetadataAnnotations: null
environmentVar: null
extraVolumeMounts: []
extraVolumes: []
fluentd:
  exclude_path: null
  path: /var/log/containers/*.log
global:
  kubernetes:
    clusterName: sbx-car1
  logLevel: debug
  metrics:
    service:
      enabled: true
      headless: true
  monitoring_agent_enabled: true
  monitoring_agent_index_name: null
  prometheus_enabled: true
  serviceMonitor:
    additionalLabels: {}
    enabled: false
    interval: ""
    metricsPort: 24231
    scrapeTimeout: 10s
  splunk:
    hec:
      caFile: null
      clientCert: null
      clientKey: null
      host: 10.*.*.*
      indexName: kubernetes
      insecureSSL: false
      port: 8088
      protocol: https
      token: 6****1
image:
  name: splunk/fluentd-hec
  pullPolicy: IfNotPresent
  pullSecretName: null
  tag: 1.2.8
  usePullSecret: false
indexFields: []
journalLogPath: /run/log/journal
k8sMetadata:
  cache_ttl: 3600
  podLabels:
  - app
  - k8s-app
  - release
  watch: true
kubernetes:
  clusterName: null
  securityContext: false
logLevel: null
logs:
  dns-controller:
    from:
      pod: dns-controller
    multiline:
      firstline: /^\w[0-1]\d[0-3]\d/
    sourcetype: kube:dns-controller
    timestampExtraction:
      format: '%m%d %H:%M:%S.%N'
      regexp: \w(?<time>[0-1]\d[0-3]\d [^\s]*)
  dns-sidecar:
    from:
      container: sidecar
      pod: kube-dns
    multiline:
      firstline: /^\w[0-1]\d[0-3]\d/
    sourcetype: kube:kubedns-sidecar
    timestampExtraction:
      format: '%m%d %H:%M:%S.%N'
      regexp: \w(?<time>[0-1]\d[0-3]\d [^\s]*)
  dnsmasq:
    from:
      pod: kube-dns
    multiline:
      firstline: /^\w[0-1]\d[0-3]\d/
    sourcetype: kube:dnsmasq
    timestampExtraction:
      format: '%m%d %H:%M:%S.%N'
      regexp: \w(?<time>[0-1]\d[0-3]\d [^\s]*)
  docker:
    from:
      journald:
        unit: docker.service
    sourcetype: kube:docker
    timestampExtraction:
      format: '%Y-%m-%dT%H:%M:%S.%NZ'
      regexp: time="(?<time>\d{4}-\d{2}-\d{2}T[0-2]\d:[0-5]\d:[0-5]\d.\d{9}Z)"
  etcd:
    from:
      container: etcd-container
      pod: etcd-server
    timestampExtraction:
      format: '%Y-%m-%d %H:%M:%S.%N'
      regexp: (?<time>\d{4}-\d{2}-\d{2} [0-2]\d:[0-5]\d:[0-5]\d\.\d{6})
  etcd-events:
    from:
      container: etcd-container
      pod: etcd-server-events
    timestampExtraction:
      format: '%Y-%m-%d %H:%M:%S.%N'
      regexp: (?<time>\d{4}-[0-1]\d-[0-3]\d [0-2]\d:[0-5]\d:[0-5]\d\.\d{6})
  etcd-minikube:
    from:
      container: etcd
      pod: etcd-minikube
    timestampExtraction:
      format: '%Y-%m-%d %H:%M:%S.%N'
      regexp: (?<time>\d{4}-\d{2}-\d{2} [0-2]\d:[0-5]\d:[0-5]\d\.\d{6})
  kube-apiserver:
    from:
      pod: kube-apiserver
    multiline:
      firstline: /^\w[0-1]\d[0-3]\d/
    sourcetype: kube:kube-apiserver
    timestampExtraction:
      format: '%m%d %H:%M:%S.%N'
      regexp: \w(?<time>[0-1]\d[0-3]\d [^\s]*)
  kube-audit:
    from:
      file:
        path: /var/log/kube-apiserver-audit.log
    sourcetype: kube:apiserver-audit
    timestampExtraction:
      format: '%Y-%m-%dT%H:%M:%SZ'
  kube-controller-manager:
    from:
      pod: kube-controller-manager
    multiline:
      firstline: /^\w[0-1]\d[0-3]\d/
    sourcetype: kube:kube-controller-manager
    timestampExtraction:
      format: '%m%d %H:%M:%S.%N'
      regexp: \w(?<time>[0-1]\d[0-3]\d [^\s]*)
  kube-dns-autoscaler:
    from:
      container: autoscaler
      pod: kube-dns-autoscaler
    multiline:
      firstline: /^\w[0-1]\d[0-3]\d/
    sourcetype: kube:kube-dns-autoscaler
    timestampExtraction:
      format: '%m%d %H:%M:%S.%N'
      regexp: \w(?<time>[0-1]\d[0-3]\d [^\s]*)
  kube-proxy:
    from:
      pod: kube-proxy
    multiline:
      firstline: /^\w[0-1]\d[0-3]\d/
    sourcetype: kube:kube-proxy
    timestampExtraction:
      format: '%m%d %H:%M:%S.%N'
      regexp: \w(?<time>[0-1]\d[0-3]\d [^\s]*)
  kube-scheduler:
    from:
      pod: kube-scheduler
    multiline:
      firstline: /^\w[0-1]\d[0-3]\d/
    sourcetype: kube:kube-scheduler
    timestampExtraction:
      format: '%m%d %H:%M:%S.%N'
      regexp: \w(?<time>[0-1]\d[0-3]\d [^\s]*)
  kubedns:
    from:
      pod: kube-dns
    multiline:
      firstline: /^\w[0-1]\d[0-3]\d/
    sourcetype: kube:kubedns
    timestampExtraction:
      format: '%m%d %H:%M:%S.%N'
      regexp: \w(?<time>[0-1]\d[0-3]\d [^\s]*)
  kubelet:
    from:
      journald:
        unit: kubelet.service
    multiline:
      firstline: /^\w[0-1]\d[0-3]\d/
    sourcetype: kube:kubelet
    timestampExtraction:
      format: '%m%d %H:%M:%S.%N'
      regexp: \w(?<time>[0-1]\d[0-3]\d [^\s]*)
nodeSelector:
  beta.kubernetes.io/os: linux
podAnnotations: null
podSecurityPolicy:
  apiGroup: policy
  apparmor_security: false
  create: true
priorityClassName: null
rbac:
  create: true
  openshiftPrivilegedSccBinding: false
resources:
  requests:
    cpu: 100m
    memory: 200Mi
secret:
  create: true
  name: null
sendAllMetadata: false
serviceAccount:
  create: true
  name: null
sourcetypePrefix: kube
splunk:
  hec:
    caFile: null
    clientCert: null
    clientKey: null
    host: 10.*.*.*
    indexName: kubernetes
    insecureSSL: true
    port: 443
    protocol: null
    token: 67C*********A4**
  ingest_api:
    debugIngestAPI: null
    eventsEndpoint: null
    ingestAPIHost: null
    ingestAuthHost: null
    serviceClientIdentifier: null
    serviceClientSecretKey: null
    tenant: null
    tokenEndpoint: null
tolerations:
- effect: NoSchedule
  key: ""
  operator: Exists
Log snippet for reference:
2022-08-01 10:43:57 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
2022-08-01 10:43:57 +0000 [info]: gem 'fluentd' version '1.14.2'
2022-08-01 10:43:57 +0000 [info]: gem 'fluent-plugin-concat' version '2.4.0'
2022-08-01 10:43:57 +0000 [info]: gem 'fluent-plugin-jq' version '0.5.1'
2022-08-01 10:43:57 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.9.2'
2022-08-01 10:43:57 +0000 [info]: gem 'fluent-plugin-prometheus' version '2.0.1'
2022-08-01 10:43:57 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2022-08-01 10:43:57 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.2.8'
2022-08-01 10:43:57 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2022-08-01 10:43:57 +0000 [debug]: Kubernetes URL is not set - inspecting environ
2022-08-01 10:43:57 +0000 [debug]: Kubernetes URL is now 'https://172.16.128.1:443/api'
2022-08-01 10:43:57 +0000 [debug]: Found directory with secrets: /var/run/secrets/kubernetes.io/serviceaccount
2022-08-01 10:43:57 +0000 [debug]: Found CA certificate: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
2022-08-01 10:43:57 +0000 [debug]: Found pod token: /var/run/secrets/kubernetes.io/serviceaccount/token
2022-08-01 10:43:57 +0000 [debug]: Creating K8S client
2022-08-01 10:43:58 +0000 [debug]: No fluent logger for internal event
2022-08-01 10:43:58 +0000 [info]: using configuration file: <ROOT>
  <system>
    log_level debug
    root_dir "/tmp/fluentd"
  </system>
  <source>
    @id containers.log
    @type tail
    @label @CONCAT
    tag "tail.containers.*"
    path "/var/log/containers/*.log"
    pos_file "/var/log/splunk-fluentd-containers.log.pos"
    path_key "source"
    read_from_head true
    refresh_interval 60
    <parse>
      @type "regexp"
      expression /^(?<time>[^\s]+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
      time_format "%Y-%m-%dT%H:%M:%S.%N%:z"
      time_key "time"
      time_type string
      localtime false
      unmatched_lines
    </parse>
  </source>
  <source>
    @id tail.file.kube-audit
    @type tail
    @label @CONCAT
    tag "tail.file.kube:apiserver-audit"
    path "/var/log/kube-apiserver-audit.log"
    pos_file "/var/log/splunk-fluentd-kube-audit.pos"
    read_from_head true
    path_key "source"
    <parse>
      @type "regexp"
      expression /^(?<log>.*)$/
      time_key "time"
      time_type string
      time_format "%Y-%m-%dT%H:%M:%SZ"
      unmatched_lines
    </parse>
  </source>
  <source>
    @id journald-docker
    @type systemd
    @label @CONCAT
    tag "journald.kube:docker"
    path "/run/log/journal"
    matches [{"_SYSTEMD_UNIT":"docker.service"}]
    read_from_head true
    <storage>
      @type "local"
      persistent true
      path "/var/log/splunkd-fluentd-journald-docker.pos.json"
    </storage>
    <entry>
      field_map {"MESSAGE":"log","_SYSTEMD_UNIT":"source"}
      field_map_strict true
    </entry>
  </source>
  <source>
    @id journald-kubelet
    @type systemd
    @label @CONCAT
    tag "journald.kube:kubelet"
    path "/run/log/journal"
    matches [{"_SYSTEMD_UNIT":"kubelet.service"}]
    read_from_head true
    <storage>
      @type "local"
      persistent true
      path "/var/log/splunkd-fluentd-journald-kubelet.pos.json"
    </storage>
    <entry>
      field_map {"MESSAGE":"log","_SYSTEMD_UNIT":"source"}
      field_map_strict true
    </entry>
  </source>
  <source>
    @id fluentd-monitor-agent
    @type monitor_agent
    @label @SPLUNK
    tag "monitor_agent"
  </source>
  <label @CONCAT>
    <filter tail.containers.var.log.containers.**>
      @type concat
      key "log"
      partial_key "logtag"
      partial_value "P"
      separator ""
      timeout_label "@SPLUNK"
    </filter>
    <filter tail.containers.var.log.containers.dns-controller*dns-controller*.log>
      @type concat
      key "log"
      timeout_label "@SPLUNK"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-dns*sidecar*.log>
      @type concat
      key "log"
      timeout_label "@SPLUNK"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-dns*dnsmasq*.log>
      @type concat
      key "log"
      timeout_label "@SPLUNK"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-apiserver*kube-apiserver*.log>
      @type concat
      key "log"
      timeout_label "@SPLUNK"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-controller-manager*kube-controller-manager*.log>
      @type concat
      key "log"
      timeout_label "@SPLUNK"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-dns-autoscaler*autoscaler*.log>
      @type concat
      key "log"
      timeout_label "@SPLUNK"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-proxy*kube-proxy*.log>
      @type concat
      key "log"
      timeout_label "@SPLUNK"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-scheduler*kube-scheduler*.log>
      @type concat
      key "log"
      timeout_label "@SPLUNK"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-dns*kubedns*.log>
      @type concat
      key "log"
      timeout_label "@SPLUNK"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter journald.kube:kubelet>
      @type concat
      key "log"
      timeout_label "@SPLUNK"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
    </filter>
    <match **>
      @type relabel
      @label @SPLUNK
    </match>
  </label>
  <label @SPLUNK>
    <filter tail.containers.**>
      @type grep
      <exclude>
        key "log"
        pattern \A\z
      </exclude>
    </filter>
    <filter tail.containers.**>
      @type kubernetes_metadata
      annotation_match [".*"]
      de_dot false
      watch true
      cache_ttl 3600
    </filter>
    <filter tail.containers.**>
      @type record_transformer
      enable_ruby
      <record>
        sourcetype ${record.dig("kubernetes", "annotations", "splunk.com/sourcetype") ? record.dig("kubernetes", "annotations", "splunk.com/sourcetype") : "kube:container:"+record.dig("kubernetes","container_name")}
        container_name ${record.dig("kubernetes","container_name")}
        namespace ${record.dig("kubernetes","namespace_name")}
        pod ${record.dig("kubernetes","pod_name")}
        container_id ${record.dig("docker","container_id")}
        pod_uid ${record.dig("kubernetes","pod_id")}
        container_image ${record.dig("kubernetes","container_image")}
        cluster_name sbx-car1
        splunk_index ${record.dig("kubernetes", "annotations", "splunk.com/index") ? record.dig("kubernetes", "annotations", "splunk.com/index") : record.dig("kubernetes", "namespace_annotations", "splunk.com/index") ? (record["kubernetes"]["namespace_annotations"]["splunk.com/index"]) : ("kubernetes")}
        label_app ${record.dig("kubernetes","labels","app")}
        label_k8s-app ${record.dig("kubernetes","labels","k8s-app")}
        label_release ${record.dig("kubernetes","labels","release")}
        exclude_list ${record.dig("kubernetes", "annotations", "splunk.com/exclude") ? record.dig("kubernetes", "annotations", "splunk.com/exclude") : record.dig("kubernetes", "namespace_annotations", "splunk.com/exclude") ? (record["kubernetes"]["namespace_annotations"]["splunk.com/exclude"]) : ("false")}
      </record>
    </filter>
    <filter tail.containers.**>
      @type grep
      <exclude>
        key "exclude_list"
        pattern /^true$/
      </exclude>
    </filter>
    <filter tail.containers.var.log.pods.**>
      @type jq_transformer
      jq ".record | . + (.source | capture(\"/var/log/pods/(?<pod_uid>[^/]+)/(?<container_name>[^/]+)/(?<container_retry>[0-9]+).log\")) | .sourcetype = (\"kube:container:\" + .container_name) | .splunk_index = \"kubernetes\""
    </filter>
    <filter journald.**>
      @type jq_transformer
      jq ".record.source = \"/run/log/journal/\" + .record.source | .record.sourcetype = (.tag | ltrimstr(\"journald.\")) | .record.cluster_name = \"sbx-car1\" | .record.splunk_index = \"kubernetes\" |.record"
    </filter>
    <filter tail.file.**>
      @type jq_transformer
      jq ".record.sourcetype = (.tag | ltrimstr(\"tail.file.\")) | .record.cluster_name = \"sbx-car1\" | .record.splunk_index = \"kubernetes\" | .record"
    </filter>
    <filter monitor_agent>
      @type jq_transformer
      jq ".record.source = \"namespace:splunk/pod:cio-splunk-splunk-kubernetes-logging-f9gzz\" | .record.sourcetype = \"fluentd:monitor-agent\" | .record.cluster_name = \"sbx-car1\" | .record.splunk_index = \"kubernetes\" | .record"
    </filter>
    <match **>
      @type splunk_hec
      protocol https
      hec_host "10.2.200.55"
      hec_port 443
      hec_token "67C2E9C9-D7ED-40AD-824D-A4D623A7A491"
      index_key "splunk_index"
      insecure_ssl true
      host "k8s-sbx-car1-c1"
      source_key "source"
      sourcetype_key "sourcetype"
      app_name "splunk-kubernetes-logging"
      app_version "1.4.10"
      <fields>
        container_retry
        pod_uid
        pod
        container_name
        namespace
        container_id
        cluster_name
        label_app
        label_k8s-app
        label_release
      </fields>
      <buffer>
        @type "memory"
        chunk_limit_records 100000
        chunk_limit_size 20m
        flush_interval 5s
        flush_thread_count 1
        overflow_action block
        retry_max_times 5
        retry_type periodic
        total_limit_size 600m
      </buffer>
      <format monitor_agent>
        @type "json"
      </format>
      <format>
        @type "single_value"
        message_key "log"
        add_newline false
      </format>
    </match>
  </label>
  <source>
    @type prometheus
  </source>
  <source>
    @type forward
  </source>
  <source>
    @type prometheus_monitor
    <labels>
      host ${hostname}
    </labels>
  </source>
  <source>
    @type prometheus_output_monitor
    <labels>
      host ${hostname}
    </labels>
  </source>
</ROOT>
2022-08-01 10:43:58 +0000 [info]: starting fluentd-1.14.2 pid=1 ruby="2.7.4"
2022-08-01 10:43:58 +0000 [info]: spawn command to main: cmdline=["/usr/bin/ruby", "-r/usr/local/share/gems/gems/bundler-2.2.30/lib/bundler/setup", "-Eascii-8bit:ascii-8bit", "/usr/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "--under-supervisor"]
2022-08-01 10:43:58 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.**" type="concat"
2022-08-01 10:43:58 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.dns-controller*dns-controller*.log" type="concat"
2022-08-01 10:43:58 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-dns*sidecar*.log" type="concat"
2022-08-01 10:43:58 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-dns*dnsmasq*.log" type="concat"
2022-08-01 10:43:58 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-apiserver*kube-apiserver*.log" type="concat"
2022-08-01 10:43:58 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-controller-manager*kube-controller-manager*.log" type="concat"
2022-08-01 10:43:58 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-dns-autoscaler*autoscaler*.log" type="concat"
2022-08-01 10:43:58 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-proxy*kube-proxy*.log" type="concat"
2022-08-01 10:43:58 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-scheduler*kube-scheduler*.log" type="concat"
2022-08-01 10:43:58 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-dns*kubedns*.log" type="concat"
2022-08-01 10:43:58 +0000 [info]: adding filter in @CONCAT pattern="journald.kube:kubelet" type="concat"
2022-08-01 10:43:58 +0000 [info]: adding match in @CONCAT pattern="**" type="relabel"
2022-08-01 10:43:58 +0000 [info]: adding filter in @SPLUNK pattern="tail.containers.**" type="grep"
2022-08-01 10:43:58 +0000 [info]: adding filter in @SPLUNK pattern="tail.containers.**" type="kubernetes_metadata"
2022-08-01 10:43:59 +0000 [debug]: #0 Kubernetes URL is not set - inspecting environ
2022-08-01 10:43:59 +0000 [debug]: #0 Kubernetes URL is now 'https://172.16.128.1:443/api'
2022-08-01 10:43:59 +0000 [debug]: #0 Found directory with secrets: /var/run/secrets/kubernetes.io/serviceaccount
2022-08-01 10:43:59 +0000 [debug]: #0 Found CA certificate: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
2022-08-01 10:43:59 +0000 [debug]: #0 Found pod token: /var/run/secrets/kubernetes.io/serviceaccount/token
2022-08-01 10:43:59 +0000 [debug]: #0 Creating K8S client
2022-08-01 10:43:59 +0000 [info]: adding filter in @SPLUNK pattern="tail.containers.**" type="record_transformer"
2022-08-01 10:43:59 +0000 [info]: adding filter in @SPLUNK pattern="tail.containers.**" type="grep"
2022-08-01 10:43:59 +0000 [info]: adding filter in @SPLUNK pattern="tail.containers.var.log.pods.**" type="jq_transformer"
2022-08-01 10:43:59 +0000 [info]: adding filter in @SPLUNK pattern="journald.**" type="jq_transformer"
2022-08-01 10:43:59 +0000 [info]: adding filter in @SPLUNK pattern="tail.file.**" type="jq_transformer"
2022-08-01 10:43:59 +0000 [info]: adding filter in @SPLUNK pattern="monitor_agent" type="jq_transformer"
2022-08-01 10:43:59 +0000 [info]: adding match in @SPLUNK pattern="**" type="splunk_hec"
2022-08-01 10:43:59 +0000 [info]: adding source type="tail"
2022-08-01 10:43:59 +0000 [info]: adding source type="tail"
2022-08-01 10:43:59 +0000 [info]: adding source type="systemd"
2022-08-01 10:43:59 +0000 [info]: adding source type="systemd"
2022-08-01 10:43:59 +0000 [info]: adding source type="monitor_agent"
2022-08-01 10:43:59 +0000 [info]: adding source type="prometheus"
2022-08-01 10:43:59 +0000 [info]: adding source type="forward"
2022-08-01 10:43:59 +0000 [info]: adding source type="prometheus_monitor"
2022-08-01 10:43:59 +0000 [info]: adding source type="prometheus_output_monitor"
2022-08-01 10:43:59 +0000 [debug]: #0 No fluent logger for internal event
2022-08-01 10:43:59 +0000 [info]: #0 starting fluentd worker pid=22 ppid=1 worker=0
2022-08-01 10:43:59 +0000 [debug]: #0 buffer started instance=51100 stage_size=0 queue_size=0
2022-08-01 10:43:59 +0000 [info]: #0 listening port port=24224 bind="0.0.0.0"
2022-08-01 10:43:59 +0000 [debug]: #0 listening prometheus http server on http:://0.0.0.0:24231//metrics for worker0
2022-08-01 10:43:59 +0000 [debug]: #0 enqueue_thread actually running
2022-08-01 10:43:59 +0000 [debug]: #0 flush_thread actually running
2022-08-01 10:43:59 +0000 [debug]: #0 Start webrick HTTP server listening
2022-08-01 10:43:59 +0000 [debug]: #0 [fluentd-monitor-agent] listening monitoring http server on http://0.0.0.0:24220/api/plugins for worker0
2022-08-01 10:43:59 +0000 [debug]: #0 [fluentd-monitor-agent] Start webrick HTTP server listening
2022-08-01 10:43:59 +0000 [debug]: #0 [fluentd-monitor-agent] tag parameter is specified. Emit plugins info to 'monitor_agent'
2022-08-01 10:43:59 +0000 [debug]: #0 [containers.log] Remove unwatched line from pos_file: /var/log/containers/cio-splunk-splunk-kubernetes-logging-pwqnw_splunk_splunk-fluentd-k8s-logs-1ece2a4268d802ebc0999eadf038f9946d859584087a8dab101b6b5356d6aa80.log ffffffffffffffff 0000000008be2ee7
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/trident-csi-m7whb_trident_driver-registrar-3f9d62c329c7509007ab9d6dfaff1a5da29595fc5d74c3cb6a21b7e6b31e8b7b.log
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/calico-node-q5jxk_kube-system_upgrade-ipam-998ac3d176c280e638ee537c084404103c62ea046c495aa9c90e7ae2d029c94a.log
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/calico-node-q5jxk_kube-system_install-cni-f157aa443a3b629d509144450eeb59f0a183c439364e5359dab07d27ba8ede9a.log
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/calico-node-q5jxk_kube-system_flexvol-driver-5c4cd20efae4044e45f6bd78690439b119bea27eecf9f68870793cef05c7829a.log
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/calico-node-q5jxk_kube-system_calico-node-e79f0ee390df649bc99cfe3c970265ef159132ea78199a0ba5016b3434192c8e.log
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/kube-proxy-jbv76_kube-system_kube-proxy-b5970479965d4b0fc58c51cadaec8dcee7cf6e74e44cb79c29ddcc7a12b7fcdb.log
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/kube-scheduler-k8s-sbx-car1-c1_kube-system_kube-scheduler-6a8f3ac35efd6029558e400e3ec88daffcbf7126dfa1e090fc0f88ca56ff0579.log
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/kube-controller-manager-k8s-sbx-car1-c1_kube-system_kube-controller-manager-27ba1277b48aa50ff169ee3bf9e5cf1d5b5a33ee93e4ebc67c83953e3181edc7.log
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/etcd-k8s-sbx-car1-c1_kube-system_etcd-af8cfde999d9e9be3e962c4946e0c6cbef411c2cce68f33eeea307139adfd463.log
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/kube-scheduler-k8s-sbx-car1-c1_kube-system_kube-scheduler-61e496054f472abc61a063df5846d4d25fd2ceca7a8a2be444b9d5f22d042135.log
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/kube-apiserver-k8s-sbx-car1-c1_kube-system_kube-apiserver-369d9c2a0a270a1f4371ac50080a495ef5aedf11e3f9b6c768ad5ae96f6dfc5f.log
2022-08-01 10:43:59 +0000 [info]: #0 disable filter chain optimization because [Fluent::Plugin::ConcatFilter, Fluent::Plugin::ConcatFilter] uses `#filter_stream` method.
2022-08-01 10:43:59 +0000 [info]: #0 disable filter chain optimization because [Fluent::Plugin::KubernetesMetadataFilter, Fluent::Plugin::RecordTransformerFilter] uses `#filter_stream` method.
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/cio-datadog-rxntc_datadog_init-volume-b9c9694d20622ac5a37a4054decd33f80a5b8d58543c58aa8c7d7310562e6d8c.log
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/cio-datadog-rxntc_datadog_init-config-3286568169b362c05c9ee7b0afe4f15624e48da0ba9cfdce7028e0f310e1d5b6.log
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/cio-datadog-rxntc_datadog_agent-55997cde2056621c412d8620ebdc5773d13ea0ccd0b79dd084c928d871cf129f.log
2022-08-01 10:43:59 +0000 [info]: #0 disable filter chain optimization because [Fluent::Plugin::KubernetesMetadataFilter, Fluent::Plugin::RecordTransformerFilter] uses `#filter_stream` method.
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/cio-datadog-rxntc_datadog_trace-agent-10bef90ea0ba42f0f484a42ef8b436c3f3335e354999fd30e42472fa8c8507a9.log
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/cio-datadog-rxntc_datadog_process-agent-60151969619dcea36059d3b2ef84c701f187e0ba30f85af1a7307eea90b4755e.log
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/trident-csi-m7whb_trident_trident-main-afdd4513f62f9f95231bf8fb563837dbb874ae9bd084aa2e3a9809353527f466.log
2022-08-01 10:43:59 +0000 [info]: #0 [containers.log] following tail of /var/log/containers/cio-splunk-splunk-kubernetes-logging-f9gzz_splunk_splunk-fluentd-k8s-logs-e5a84b09ffb70583183de0fe1acf4bf3b53b5c8864f98cf2d98f1c8597f5f172.log
2022-08-01 10:43:59 +0000 [info]: #0 disable filter chain optimization because [Fluent::Plugin::KubernetesMetadataFilter, Fluent::Plugin::RecordTransformerFilter] uses `#filter_stream` method.
2022-08-01 10:43:59 +0000 [info]: #0 fluentd worker is now running worker=0
2022-08-01 10:44:04 +0000 [debug]: #0 [Sending] Chunk: 5e52bad6d496701cb4e675a537059318(351991B).
2022-08-01 10:44:04 +0000 [debug]: #0 [Response] Chunk: 5e52bad6d496701cb4e675a537059318 Size: 351991 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.025378243
2022-08-01 10:44:09 +0000 [debug]: #0 [Sending] Chunk: 5e52badb90944cb5a22c1d7e32a67b00(22543B).
2022-08-01 10:44:09 +0000 [debug]: #0 [Response] Chunk: 5e52badb90944cb5a22c1d7e32a67b00 Size: 22543 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.003792928
2022-08-01 10:44:14 +0000 [debug]: #0 [Sending] Chunk: 5e52bae055cfba1a8f74c1e866588124(13859B).
2022-08-01 10:44:14 +0000 [debug]: #0 [Response] Chunk: 5e52bae055cfba1a8f74c1e866588124 Size: 13859 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.002925154
2022-08-01 10:44:19 +0000 [debug]: #0 [Sending] Chunk: 5e52bae51a4ffc6b5d1bbdc7f471e570(15976B).
2022-08-01 10:44:19 +0000 [debug]: #0 [Response] Chunk: 5e52bae51a4ffc6b5d1bbdc7f471e570 Size: 15976 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.002914354
2022-08-01 10:44:24 +0000 [debug]: #0 [Sending] Chunk: 5e52bae9df0f6806e62ff3c9c0f6fe07(24826B).
2022-08-01 10:44:24 +0000 [debug]: #0 [Response] Chunk: 5e52bae9df0f6806e62ff3c9c0f6fe07 Size: 24826 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.003297083
2022-08-01 10:44:24 +0000 [info]: #0 disable filter chain optimization because [Fluent::Plugin::KubernetesMetadataFilter, Fluent::Plugin::RecordTransformerFilter] uses `#filter_stream` method.
2022-08-01 10:44:28 +0000 [info]: #0 stats - namespace_cache_size: 18, pod_cache_size: 9, namespace_cache_api_updates: 4, pod_cache_api_updates: 4, id_cache_miss: 4, namespace_cache_host_updates: 18, pod_cache_host_updates: 9
2022-08-01 10:44:29 +0000 [debug]: #0 [Sending] Chunk: 5e52baeea37cb9e2f745fd29c9c7e485(14789B).
2022-08-01 10:44:29 +0000 [debug]: #0 [Response] Chunk: 5e52baeea37cb9e2f745fd29c9c7e485 Size: 14789 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.004289732
2022-08-01 10:44:34 +0000 [debug]: #0 [Sending] Chunk: 5e52baf368dc6f67fdd3da3de3c578f0(52042B).
2022-08-01 10:44:34 +0000 [debug]: #0 [Response] Chunk: 5e52baf368dc6f67fdd3da3de3c578f0 Size: 52042 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.003285215
2022-08-01 10:44:39 +0000 [debug]: #0 [Sending] Chunk: 5e52baf82cdbe7cd7bf948b989e8da62(25395B).
2022-08-01 10:44:39 +0000 [debug]: #0 [Response] Chunk: 5e52baf82cdbe7cd7bf948b989e8da62 Size: 25395 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.002491412
2022-08-01 10:44:44 +0000 [debug]: #0 [Sending] Chunk: 5e52bafcf20257a475fff88bb88618ee(15243B).
2022-08-01 10:44:44 +0000 [debug]: #0 [Response] Chunk: 5e52bafcf20257a475fff88bb88618ee Size: 15243 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.002218426
2022-08-01 10:44:49 +0000 [debug]: #0 [Sending] Chunk: 5e52bb01b66c970937b1a7951739dd16(14249B).
2022-08-01 10:44:49 +0000 [debug]: #0 [Response] Chunk: 5e52bb01b66c970937b1a7951739dd16 Size: 14249 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.002409298
2022-08-01 10:44:54 +0000 [debug]: #0 [Sending] Chunk: 5e52bb067b3c4f96530497baa6697683(28723B).
2022-08-01 10:44:54 +0000 [debug]: #0 [Response] Chunk: 5e52bb067b3c4f96530497baa6697683 Size: 28723 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.002565564
2022-08-01 10:44:58 +0000 [info]: #0 stats - namespace_cache_size: 18, pod_cache_size: 9, namespace_cache_api_updates: 4, pod_cache_api_updates: 4, id_cache_miss: 4, namespace_cache_host_updates: 18, pod_cache_host_updates: 9
2022-08-01 10:44:59 +0000 [debug]: #0 [Sending] Chunk: 5e52bb0b3fa9326a41596a2670999237(14703B).
2022-08-01 10:44:59 +0000 [debug]: #0 [Response] Chunk: 5e52bb0b3fa9326a41596a2670999237 Size: 14703 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.002377513
2022-08-01 10:45:04 +0000 [debug]: #0 [Sending] Chunk: 5e52bb1003d82e36db3d4865b7f831fd(35217B).
2022-08-01 10:45:04 +0000 [debug]: #0 [Response] Chunk: 5e52bb1003d82e36db3d4865b7f831fd Size: 35217 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.004899007
2022-08-01 10:45:09 +0000 [debug]: #0 [Sending] Chunk: 5e52bb14c917e2a6c181ec6ba697bd74(22928B).
2022-08-01 10:45:09 +0000 [debug]: #0 [Response] Chunk: 5e52bb14c917e2a6c181ec6ba697bd74 Size: 22928 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.002561714
2022-08-01 10:45:14 +0000 [debug]: #0 [Sending] Chunk: 5e52bb198e54659a447e4443fb88d75d(13863B).
2022-08-01 10:45:14 +0000 [debug]: #0 [Response] Chunk: 5e52bb198e54659a447e4443fb88d75d Size: 13863 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.002151599
2022-08-01 10:45:14 +0000 [info]: #0 disable filter chain optimization because [Fluent::Plugin::KubernetesMetadataFilter, Fluent::Plugin::RecordTransformerFilter] uses `#filter_stream` method.
2022-08-01 10:45:19 +0000 [debug]: #0 [Sending] Chunk: 5e52bb1e52869bb42fba9d32709040a8(15774B).
2022-08-01 10:45:19 +0000 [debug]: #0 [Response] Chunk: 5e52bb1e52869bb42fba9d32709040a8 Size: 15774 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.002921517
2022-08-01 10:45:24 +0000 [debug]: #0 [Sending] Chunk: 5e52bb2317a71dbfcd021c72b2bf2151(27417B).
2022-08-01 10:45:24 +0000 [debug]: #0 [Response] Chunk: 5e52bb2317a71dbfcd021c72b2bf2151 Size: 27417 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.007044653
2022-08-01 10:45:28 +0000 [info]: #0 stats - namespace_cache_size: 18, pod_cache_size: 9, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5, namespace_cache_host_updates: 18, pod_cache_host_updates: 9
2022-08-01 10:45:29 +0000 [debug]: #0 [Sending] Chunk: 5e52bb27ebec5efc6d065d24bd336bb0(20683B).
2022-08-01 10:45:29 +0000 [debug]: #0 [Response] Chunk: 5e52bb27ebec5efc6d065d24bd336bb0 Size: 20683 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.006932221
2022-08-01 10:45:34 +0000 [debug]: #0 [Sending] Chunk: 5e52bb2ca9e23bc76ab970cff1229d36(43305B).
2022-08-01 10:45:34 +0000 [debug]: #0 [Response] Chunk: 5e52bb2ca9e23bc76ab970cff1229d36 Size: 43305 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.007475059
2022-08-01 10:45:39 +0000 [debug]: #0 [Sending] Chunk: 5e52bb317371e08886f90018af63d153(31611B).
2022-08-01 10:45:39 +0000 [debug]: #0 [Response] Chunk: 5e52bb317371e08886f90018af63d153 Size: 31611 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.007972494
2022-08-01 10:45:44 +0000 [debug]: #0 [Sending] Chunk: 5e52bb363a26057a2de0cdab6779dc3b(11407B).
2022-08-01 10:45:44 +0000 [debug]: #0 [Response] Chunk: 5e52bb363a26057a2de0cdab6779dc3b Size: 11407 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.003493736
2022-08-01 10:45:49 +0000 [debug]: #0 [Sending] Chunk: 5e52bb3af7c2a20f8458f908c655480f(16707B).
2022-08-01 10:45:49 +0000 [debug]: #0 [Response] Chunk: 5e52bb3af7c2a20f8458f908c655480f Size: 16707 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.007868955
2022-08-01 10:45:54 +0000 [debug]: #0 [Sending] Chunk: 5e52bb3fc3a551fe31c5fd287ec5a47c(19230B).
2022-08-01 10:45:54 +0000 [debug]: #0 [Response] Chunk: 5e52bb3fc3a551fe31c5fd287ec5a47c Size: 19230 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.008825512
2022-08-01 10:45:58 +0000 [info]: #0 stats - namespace_cache_size: 18, pod_cache_size: 9, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5, namespace_cache_host_updates: 18, pod_cache_host_updates: 9
2022-08-01 10:45:59 +0000 [debug]: #0 [Sending] Chunk: 5e52bb448831aecc3d811553ca8e5b2a(38051B).
2022-08-01 10:45:59 +0000 [debug]: #0 [Response] Chunk: 5e52bb448831aecc3d811553ca8e5b2a Size: 38051 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.003466014
2022-08-01 10:46:04 +0000 [debug]: #0 [Sending] Chunk: 5e52bb4945f7240bfa9cd2557872fbeb(11104B).
2022-08-01 10:46:04 +0000 [debug]: #0 [Response] Chunk: 5e52bb4945f7240bfa9cd2557872fbeb Size: 11104 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.008013682
2022-08-01 10:46:09 +0000 [debug]: #0 [Sending] Chunk: 5e52bb4e11bc352a671e63277f67c1a7(27140B).
2022-08-01 10:46:09 +0000 [debug]: #0 [Response] Chunk: 5e52bb4e11bc352a671e63277f67c1a7 Size: 27140 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.00748863
2022-08-01 10:46:14 +0000 [debug]: #0 [Sending] Chunk: 5e52bb52d65299fc9986bb16900a4ea1(12797B).
2022-08-01 10:46:14 +0000 [debug]: #0 [Response] Chunk: 5e52bb52d65299fc9986bb16900a4ea1 Size: 12797 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.002128075
2022-08-01 10:46:19 +0000 [debug]: #0 [Sending] Chunk: 5e52bb579402eed46e0ad04216aec464(16704B).
2022-08-01 10:46:19 +0000 [debug]: #0 [Response] Chunk: 5e52bb579402eed46e0ad04216aec464 Size: 16704 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.00843603
2022-08-01 10:46:24 +0000 [debug]: #0 [Sending] Chunk: 5e52bb5c5fcb0e36a9f78418a381b2f5(19230B).
2022-08-01 10:46:24 +0000 [debug]: #0 [Response] Chunk: 5e52bb5c5fcb0e36a9f78418a381b2f5 Size: 19230 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.007306458
2022-08-01 10:46:28 +0000 [info]: #0 stats - namespace_cache_size: 18, pod_cache_size: 9, namespace_cache_api_updates: 5, pod_cache_api_updates: 5, id_cache_miss: 5, namespace_cache_host_updates: 18, pod_cache_host_updates: 9
2022-08-01 10:46:29 +0000 [debug]: #0 [Sending] Chunk: 5e52bb61248cdf7dfd38b032ff7af21c(22066B).
2022-08-01 10:46:29 +0000 [debug]: #0 [Response] Chunk: 5e52bb61248cdf7dfd38b032ff7af21c Size: 22066 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.002285078
2022-08-01 10:46:34 +0000 [info]: #0 disable filter chain optimization because [Fluent::Plugin::KubernetesMetadataFilter, Fluent::Plugin::RecordTransformerFilter] uses `#filter_stream` method.
2022-08-01 10:46:34 +0000 [debug]: #0 [Sending] Chunk: 5e52bb65e2251721b9feaeb10028db7c(41968B).
2022-08-01 10:46:34 +0000 [debug]: #0 [Response] Chunk: 5e52bb65e2251721b9feaeb10028db7c Size: 41968 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.002736453
2022-08-01 10:46:39 +0000 [debug]: #0 [Sending] Chunk: 5e52bb6aa4230e794a004f3c8b16febb(35294B).
2022-08-01 10:46:39 +0000 [debug]: #0 [Response] Chunk: 5e52bb6aa4230e794a004f3c8b16febb Size: 35294 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.010067323
2022-08-01 10:46:44 +0000 [debug]: #0 [Sending] Chunk: 5e52bb6f7294ad57e8465e3e40433d30(14182B).
2022-08-01 10:46:44 +0000 [debug]: #0 [Response] Chunk: 5e52bb6f7294ad57e8465e3e40433d30 Size: 14182 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.001939222
2022-08-01 10:46:49 +0000 [debug]: #0 [Sending] Chunk: 5e52bb743061b58297963aadac1fe777(18076B).
2022-08-01 10:46:49 +0000 [debug]: #0 [Response] Chunk: 5e52bb743061b58297963aadac1fe777 Size: 18076 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.009182324
2022-08-01 10:46:54 +0000 [debug]: #0 [Sending] Chunk: 5e52bb78fbf2c1e0229789e7e72717a5(17843B).
2022-08-01 10:46:54 +0000 [debug]: #0 [Response] Chunk: 5e52bb78fbf2c1e0229789e7e72717a5 Size: 17843 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.009768623
2022-08-01 10:46:58 +0000 [info]: #0 stats - namespace_cache_size: 18, pod_cache_size: 9, namespace_cache_api_updates: 6, pod_cache_api_updates: 6, id_cache_miss: 6, namespace_cache_host_updates: 18, pod_cache_host_updates: 9
2022-08-01 10:46:59 +0000 [debug]: #0 [Sending] Chunk: 5e52bb7dc0ae7c1fc3911e49729fdf7c(37430B).
2022-08-01 10:46:59 +0000 [debug]: #0 [Response] Chunk: 5e52bb7dc0ae7c1fc3911e49729fdf7c Size: 37430 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.005342832
2022-08-01 10:47:04 +0000 [debug]: #0 [Sending] Chunk: 5e52bb827e80f06181479f0b2f95deed(12477B).
2022-08-01 10:47:04 +0000 [debug]: #0 [Response] Chunk: 5e52bb827e80f06181479f0b2f95deed Size: 12477 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.008025782
2022-08-01 10:47:09 +0000 [debug]: #0 [Sending] Chunk: 5e52bb874a0f897f3014237cef7ad391(31297B).
2022-08-01 10:47:09 +0000 [debug]: #0 [Response] Chunk: 5e52bb874a0f897f3014237cef7ad391 Size: 31297 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.009660581
2022-08-01 10:47:14 +0000 [debug]: #0 [Sending] Chunk: 5e52bb8c0e9d3ceee4910ce6d21fb254(11405B).
2022-08-01 10:47:14 +0000 [debug]: #0 [Response] Chunk: 5e52bb8c0e9d3ceee4910ce6d21fb254 Size: 11405 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.001933759
2022-08-01 10:47:19 +0000 [debug]: #0 [Sending] Chunk: 5e52bb90cc7429ebf2751ca73d5e780b(16706B).
2022-08-01 10:47:19 +0000 [debug]: #0 [Response] Chunk: 5e52bb90cc7429ebf2751ca73d5e780b Size: 16706 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.008612022
2022-08-01 10:47:24 +0000 [debug]: #0 [Sending] Chunk: 5e52bb95983700163d0e819a70acee09(18516B).
2022-08-01 10:47:24 +0000 [debug]: #0 [Response] Chunk: 5e52bb95983700163d0e819a70acee09 Size: 18516 Response: #<Net::HTTPOK 200 OK readbody=true> Duration: 0.00764794
2022-08-01 10:47:28 +0000 [info]: #0 stats - namespace_cache_size: 18, pod_cache_size: 9, namespace_cache_api_updates: 6, pod_cache_api_updates: 6, id_cache_miss: 6, namespace_cache_host_updates: 18, pod_cache_host_updates: 9
Environment:
- Kubernetes version (use kubectl version): 1.23.9
- Ruby version (use ruby --version): ruby 2.7.4p191 (2021-07-07 revision a21a3b7d23) [x86_64-linux]
- OS (e.g. cat /etc/os-release): "Red Hat Enterprise Linux 8.4"
- Splunk version: 8.1.4
- Splunk Connect for Kubernetes helm chart version: 1.4.10
- Others:
Comparing https://github.com/splunk/splunk-connect-for-kubernetes/compare/1.4.9...1.4.10 I see the fluent-hec version is upgraded from 1.2.7 to 1.2.8. Comparing https://github.com/splunk/fluent-plugin-splunk-hec/compare/1.2.7...1.2.8 I see fluentd is upgraded from 1.13.2 to 1.14.2. I also tested by using the 1.2.7 image tag on this helm chart, and that seems to work as expected.
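A sketch of that rollback test, for anyone who wants to reproduce it - keeping chart 1.4.10 but pinning the image back to 1.2.7 via the chart's existing image.tag value:

```bash
# Hypothetical rollback test: same chart version, older fluentd-hec image.
# image.tag is the value shown in the dump above (1.2.8 by default on 1.4.10).
helm upgrade --install my-splunk splunk/helm-chart/splunk-connect-for-kubernetes/charts/splunk-kubernetes-logging \
  --namespace splunk \
  --set image.tag=1.2.7 \
  --reuse-values
```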
It's very strange. The logs suggest that SCK is able to send the logs successfully. Have you checked the splunkd internal logs? Also, can you try with the latest version (v1.4.15)?
I tried with the latest version and it's the same behavior. I see it only works up to 1.4.9. I don't have access to the splunkd internal logs on the server, but I'll try to get them.
Forgot to mention that I do see monitoring events getting pushed - but everything else (container logs, kubelet logs) is missing.
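One quick way to rule out the HEC endpoint itself is to send a test event directly, outside of fluentd (a sketch; substitute your real HEC host, port, and token):

```bash
# A {"text":"Success","code":0} response confirms the token, index,
# and TLS settings are fine, which would point the problem back at SCK.
curl -k "https://<hec-host>:<hec-port>/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "HEC connectivity test", "index": "kubernetes", "sourcetype": "manual-test"}'
```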

Hi @srikiz, any update on this? You are able to receive monitoring events, which suggests that there is no issue with fluentd or the splunk_hec plugin. Also, the logs are not indicating any issue. I would suggest doing a clean installation with default settings and starting from there.
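For reference, a clean default install amounts to something like this (a sketch, not verified against this environment; the host and token placeholders are assumptions):

```bash
# Hypothetical clean reinstall with chart defaults; only the HEC endpoint
# and token are set, everything else falls back to the chart's values.yaml.
helm -n splunk uninstall my-splunk
helm -n splunk install my-splunk \
  splunk/helm-chart/splunk-connect-for-kubernetes/charts/splunk-kubernetes-logging \
  --set splunk.hec.host=<hec-host> \
  --set splunk.hec.token=<hec-token>
```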
I have the same behaviour.
Hi @harshit-splunk - I haven't had a chance to test this again. But based on my previous test, I was able to receive monitoring events and also the journald logs; container logs, however, are definitely missing. I will take another look at it sometime this week.
Hi, @srikiz! Take a look at the time field attached to the event. In my case, after the upgrade I found the time field was off by two hours - the events were being indexed, but with a skewed timestamp they fell outside the default search time window, so they looked missing.
Enabling localTime worked for me:
localTime: true
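
A minimal values sketch for context (assuming your chart version exposes this flag under containers:, as suggested by the generated parse config's localtime setting - check your chart's values.yaml for the exact location):

```yaml
# Hypothetical placement: have fluentd's tail parser interpret container
# log timestamps as local time instead of UTC, removing the offset.
containers:
  logFormatType: cri
  localTime: true
```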
Thanks @ansilva1 - that worked for me as well! I am closing this ticket.