No logs received from other components once they have been redeployed
Describe the bug: Fluentd runs fine and logs everything at startup, but when other containers get new deployments, their logs stop showing up in Elasticsearch. Once we restart fluentd, they show up again.
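A check that might narrow this down (a sketch, not a confirmed procedure: the label selector and the pos file path are assumptions based on this chart's defaults, so adjust them to your release):

```shell
# Pick one fluentd pod from the DaemonSet (label selector is an assumption;
# check "kubectl get pods --show-labels" for your release).
NS=ops-fluentd-elasticsearch
POD=$(kubectl get pods -n "$NS" -l app.kubernetes.io/name=fluentd-elasticsearch \
  -o jsonpath='{.items[0].metadata.name}')

# After redeploying another component, see whether the new pod's log file
# ever made it into fluentd's position file (path taken from the chart's
# default containers input config; yours may differ).
kubectl exec -n "$NS" "$POD" -- grep '<new-pod-name>' /var/log/es-containers.log.pos
```

If the new file never appears in the pos file, the tail input is the problem; if it does appear, the chunks are likely stuck between buffer and Elasticsearch.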
Version of Helm and Kubernetes:
Helm Version:
```shell
$ helm version
version.BuildInfo{Version:"v3.4.2", GitCommit:"23dd3af5e19a02d4f4baa5b2f242645a1a3af629", GitTreeState:"clean", GoVersion:"go1.14.13"}
```
Kubernetes Version:
AWS EKS cluster; nodes are AWS managed node groups, all defaults.
```shell
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.6-eks-49a6c0", GitCommit:"49a6c0bf091506e7bafcdb1b142351b69363355a", GitTreeState:"clean", BuildDate:"2020-12-23T22:10:21Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
```
Which version of the chart: 11.9.0
What happened: once another deployment cycles, i.e. a new version of it is rolled out, its logs stop showing up.
What you expected to happen: we can redeploy other tools as often as we like without losing logs.
How to reproduce it (as minimally and precisely as possible): this is a pretty minimal setup. We have a basic wrapper chart that depends on version 11.9.0 of this chart; nothing else is installed by it. Other components in the cluster are also installed via Helm charts.
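A minimal way to trigger it (a sketch with placeholder names; the index prefix and host come from the values below):

```shell
# Restart any other workload so its pods get fresh container log files.
kubectl rollout restart deployment/<some-app> -n <its-namespace>

# Then look for new documents from the restarted pod; in this setup
# nothing shows up until fluentd itself is restarted.
curl -s -u '<user>:<password>' \
  "http://elasticnode.internal:80/eksstaging-*/_search?size=1&sort=@timestamp:desc&q=kubernetes.pod_name:<new-pod-name>"
```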
values.yaml (only put values which differ from the defaults)
```yaml
fluentd-elasticsearch:
  image:
    pullPolicy: IfNotPresent
  # Specify where fluentd can find logs
  hostLogDir:
    varLog: /var/log
    dockerContainers: /var/lib/docker/containers
    libSystemdDir: /usr/lib64
  elasticsearch:
    hosts:
      - elasticnode.internal:80
    scheme: 'http'
    ssl_version: TLSv1_2
    auth:
      enabled: true
      user: "usernam"
      password: "password"
    logstash:
      enabled: true
      prefix: "eksstaging"
    buffer:
      enabled: true
      # ref: https://docs.fluentd.org/configuration/buffer-section#chunk-keys
      chunkKeys: ""
      type: "file"
      path: "/var/log/fluentd-buffers/kubernetes.system.buffer"
      flushMode: "interval"
      retryType: "exponential_backoff"
      flushThreadCount: 2
      flushInterval: "5s"
      retryForever: true
      retryMaxInterval: 30
      chunkLimitSize: "256M"
      queueLimitLength: 20
      overflowAction: "block"
  # If you want to change args of the fluentd process;
  # for example you can add -vv to launch with trace log
  fluentdArgs: "--no-supervisor -q"
```
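One thing that might be worth ruling out (an assumption, not a confirmed diagnosis): chunkLimitSize "256M" is well above Elasticsearch's default http.max_content_length of 100MB. A chunk that grows past that limit gets its bulk request rejected, and with retryForever: true plus overflowAction: "block", fluentd would keep retrying the oversized chunk while blocking new input, which could look like logs silently stopping. A quick experiment:

```shell
# Hypothetical experiment: shrink the chunk size well below
# Elasticsearch's default 100MB request limit and see whether
# redeployed components keep logging.
helm upgrade --install ops-fluentd-elasticsearch . \
  --namespace ops-fluentd-elasticsearch -f acc-values.yaml \
  --set fluentd-elasticsearch.elasticsearch.buffer.chunkLimitSize=8M
```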
Install command:

```shell
helm upgrade --install ops-fluentd-elasticsearch . --namespace ops-fluentd-elasticsearch -f acc-values.yaml
```
Chart.yaml:

```yaml
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: fluentd-elasticsearch
version: 11.9.0
dependencies:
  - name: fluentd-elasticsearch
    version: 11.9.0
    repository: "@kokuwa"
    condition:
    tags:
```
Anything else we need to know:
We have the exact same problem in our team.
@monotek, how can we help debug this?
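A sketch of the kind of output that would probably help (reusing the $NS and $POD variables from the first snippet; the buffer path comes from the values above):

```shell
# Recent fluentd logs - note that fluentdArgs above includes -q (quiet),
# so removing it, or adding -vv as the values.yaml comment suggests,
# should surface retry/error messages.
kubectl logs -n "$NS" "$POD" --tail=200

# On-disk buffer state: a queue that keeps growing suggests fluentd is
# stuck retrying a chunk rather than having lost the file tails.
kubectl exec -n "$NS" "$POD" -- ls -lh /var/log/fluentd-buffers/
```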