Results: 11 comments of Junli Wang

We are also facing the same issue, even after upgrading to the latest 3.8.8. I saw the commits from Aug 2 that may have some potential fixes to reduce the memory footprint, but they are...

yes, we build the source code from v3.8.8 using Rust 1.68, use the ubi8.7 base image, and run it on k8s 1.25 on IBM Cloud, with the same CPU and memory...

captured a few lines before the OOM happened; this is from a dev cluster.
```
2023-09-22T18:40:41.625183Z INFO metrics: {"fs":{"events":816548,"creates":21,"deletes":28,"writes":816499,"lines":310,"bytes":29985,"files_tracked":282},"memory":{"active":541609984,"allocated":526899256,"resident":550993920},"ingest":{"requests":45,"requests_size":206933,"rate_limits":0,"retries":0,"retries_success":0,"retries_failure":0,"requests_duration":1275.252,"requests_timed_out":0,"requests_failed":0,"requests_succeeded":45},"k8s":{"lines":0,"creates":369,"deletes":1,"events":370},"journald":{"lines":0,"bytes":0},"retry":{"pending":0,"storage_used":0}}
2023-09-22T18:41:41.626438Z INFO metrics: {"fs":{"events":816973,"creates":21,"deletes":28,"writes":816924,"lines":310,"bytes":29985,"files_tracked":282},"memory":{"active":541601792,"allocated":526890968,"resident":550985728},"ingest":{"requests":45,"requests_size":206933,"rate_limits":0,"retries":0,"retries_success":0,"retries_failure":0,"requests_duration":1275.252,"requests_timed_out":0,"requests_failed":0,"requests_succeeded":45},"k8s":{"lines":0,"creates":369,"deletes":1,"events":370},"journald":{"lines":0,"bytes":0},"retry":{"pending":0,"storage_used":0}}
2023-09-22T18:42:41.627303Z INFO metrics: {"fs":{"events":817315,"creates":21,"deletes":28,"writes":817266,"lines":310,"bytes":29985,"files_tracked":282},"memory":{"active":541609984,"allocated":526899256,"resident":550993920},"ingest":{"requests":45,"requests_size":206933,"rate_limits":0,"retries":0,"retries_success":0,"retries_failure":0,"requests_duration":1275.252,"requests_timed_out":0,"requests_failed":0,"requests_succeeded":45},"k8s":{"lines":0,"creates":369,"deletes":1,"events":370},"journald":{"lines":0,"bytes":0},"retry":{"pending":0,"storage_used":0}}
2023-09-22T18:43:41.628544Z INFO...
```

metric logs after the OOM and restart
```
2023-09-22T18:53:41.641727Z INFO metrics: {"fs":{"events":821128,"creates":21,"deletes":28,"writes":821079,"lines":310,"bytes":29985,"files_tracked":338},"memory":{"active":543903744,"allocated":528960328,"resident":553312256},"ingest":{"requests":45,"requests_size":206933,"rate_limits":0,"retries":0,"retries_success":0,"retries_failure":0,"requests_duration":1275.252,"requests_timed_out":0,"requests_failed":0,"requests_succeeded":45},"k8s":{"lines":0,"creates":369,"deletes":1,"events":370},"journald":{"lines":0,"bytes":0},"retry":{"pending":0,"storage_used":0}}
2023-09-22T18:54:41.643316Z INFO metrics: {"fs":{"events":821420,"creates":21,"deletes":28,"writes":821371,"lines":310,"bytes":29985,"files_tracked":338},"memory":{"active":543903744,"allocated":528960328,"resident":553312256},"ingest":{"requests":45,"requests_size":206933,"rate_limits":0,"retries":0,"retries_success":0,"retries_failure":0,"requests_duration":1275.252,"requests_timed_out":0,"requests_failed":0,"requests_succeeded":45},"k8s":{"lines":0,"creates":369,"deletes":1,"events":370},"journald":{"lines":0,"bytes":0},"retry":{"pending":0,"storage_used":0}}
2023-09-22T18:55:41.644821Z INFO metrics: {"fs":{"events":821754,"creates":21,"deletes":28,"writes":821705,"lines":310,"bytes":29985,"files_tracked":338},"memory":{"active":543903744,"allocated":528960328,"resident":553312256},"ingest":{"requests":45,"requests_size":206933,"rate_limits":0,"retries":0,"retries_success":0,"retries_failure":0,"requests_duration":1275.252,"requests_timed_out":0,"requests_failed":0,"requests_succeeded":45},"k8s":{"lines":0,"creates":369,"deletes":1,"events":370},"journald":{"lines":0,"bytes":0},"retry":{"pending":0,"storage_used":0}}
2023-09-22T18:56:41.646640Z INFO metrics: {"fs":{"events":822037,"creates":21,"deletes":28,"writes":821988,"lines":310,"bytes":29985,"files_tracked":338},"memory":{"active":543895552,"allocated":528952040,"resident":553304064},"ingest":{"requests":45,"requests_size":206933,"rate_limits":0,"retries":0,"retries_success":0,"retries_failure":0,"requests_duration":1275.252,"requests_timed_out":0,"requests_failed":0,"requests_succeeded":45},"k8s":{"lines":0,"creates":369,"deletes":1,"events":370},"journald":{"lines":0,"bytes":0},"retry":{"pending":0,"storage_used":0}}
2023-09-22T18:57:41.648169Z INFO metrics: {"fs":{"events":822326,"creates":21,"deletes":28,"writes":822277,"lines":310,"bytes":29985,"files_tracked":338},"memory":{"active":543903744,"allocated":528960328,"resident":553312256},"ingest":{"requests":45,"requests_size":206933,"rate_limits":0,"retries":0,"retries_success":0,"retries_failure":0,"requests_duration":1275.252,"requests_timed_out":0,"requests_failed":0,"requests_succeeded":45},"k8s":{"lines":0,"creates":369,"deletes":1,"events":370},"journald":{"lines":0,"bytes":0},"retry":{"pending":0,"storage_used":0}}
2023-09-22T18:58:41.648995Z INFO metrics:...
```

I'm not seeing any config in the log, just the lines below about watching or ignoring files based on the env `LOGDNA_EXCLUSION_RULES`. This piece sits between the two metrics pieces above. I...

found it after another restart
```
2023-09-22T20:24:59.064833Z INFO logdna_agent: running version: 3.8.8
2023-09-22T20:24:59.066821Z INFO logdna_agent: Uid: 5000 5000 5000 5000
2023-09-22T20:24:59.066847Z INFO logdna_agent: Gid: 5000 5000 5000 5000
2023-09-22T20:24:59.066917Z INFO...
```

I tried `MZ_CACHE_CLEAR_INTERVAL`, `CACHE_CLEAR_INTERVAL`, `LOGDNA_MZ_CACHE_CLEAR_INTERVAL`, and `LOGDNA_CACHE_CLEAR_INTERVAL`; none of them works. The startup log still prints the default 21600. Not sure if it is configurable: https://github.com/logdna/logdna-agent-v2#configuration

`LOGDNA_CLEAR_CACHE_INTERVAL` works; found it in this commit: https://github.com/logdna/logdna-agent-v2/commit/0b0cb1d58068e84cfd284720ac2f8130db428fb4
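For anyone else hitting this, a minimal sketch of setting it on the agent DaemonSet (assuming the usual env-based configuration; the container name, image tag, and the 3600 value are placeholders for illustration, not recommendations):
```
# Excerpt of a DaemonSet spec (placeholders, only the env entry matters here)
spec:
  template:
    spec:
      containers:
        - name: logdna-agent
          image: logdna/logdna-agent:3.8.8
          env:
            # Override the default cache clear interval of 21600 printed at startup
            - name: LOGDNA_CLEAR_CACHE_INTERVAL
              value: "3600"
```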

The broker has a per-listener config that forces clients to re-authenticate: https://kafka.apache.org/documentation/#brokerconfigs_connections.max.reauth.ms By default it is off.
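For example, a sketch of what enabling it in `server.properties` could look like (the 1-hour value and the listener/mechanism names are just for illustration):
```
# server.properties
# Force SASL clients to re-authenticate every hour; the value is in milliseconds,
# and the default 0 means re-authentication is disabled.
connections.max.reauth.ms=3600000
# It can also be scoped with a listener and SASL mechanism prefix, e.g.:
# listener.name.sasl_ssl.oauthbearer.connections.max.reauth.ms=3600000
```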

As a user, I was confused about how to attach an access tag to an instance; I thought `ibm_resource_access_tag` would work the same way as `ibm_resource_tag`, but actually `ibm_resource_access_tag` does...
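For illustration, a hypothetical sketch of how the two resources could be combined, based on my reading of the provider docs (the CRN and tag name are placeholders):
```
# Hypothetical sketch; attribute names follow my reading of the IBM provider docs,
# and the CRN / tag name are placeholders.

# Creates the access tag definition in the account.
resource "ibm_resource_access_tag" "env" {
  name = "env:dev"
}

# Attaches the access tag to a resource instance.
resource "ibm_resource_tag" "attach_env" {
  resource_id = "crn:v1:bluemix:public:...:instance::"  # placeholder CRN
  tags        = [ibm_resource_access_tag.env.name]
  tag_type    = "access"
}
```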