
metadata_download.max_mb config not working

Open s7an-it opened this issue 1 year ago • 2 comments

Describe the bug

When checking the logs of the daemonset I see "giving up on read, more than 100 MB of data" for k8s_replicaset_handler_state, even though metadata_download.max_mb is set to 200 in the configmap/init YAMLs:

    k8s_handler (k8s_replicaset_handler_state::collect_data()[https://x.x.x.x] an error occurred while receiving data from k8s_replicaset_handler_state, m_blocking_socket=1, m_watching=0, Socket handler (k8s_replicaset_handler_state): read more than 100 MB of data from https://x.x.x.x/apis/apps/v1/replicasets?pretty=false (104858058 bytes, 104263 reads). Giving up
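For reference, the byte count in the log sits just past the 100 MiB mark, which suggests the reader is still enforcing a default 100 MB cap rather than the configured 200. A quick arithmetic check:

```python
# The socket handler gave up at the byte count reported in the log.
reported_bytes = 104858058

# 100 MiB expressed in bytes (the apparent default cap).
default_cap = 100 * 1024 * 1024

print(default_cap)                  # 104857600
print(reported_bytes > default_cap) # True: it stopped just past 100 MiB
```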

How to reproduce it

Deployed with Helm chart 3.1.3; the chart was rendered and deployed as plain YAML by ArgoCD, on EKS version 1.22. The configmap is seen to pick up the change, but the log message persists, even after the daemonset pod was restarted. values.yaml setup:

    falcosidekick:
      enabled: true
      config:
        fission.function: "falco-pod-delete"
      webui:
        enabled: true
    falco:
      json_output: true

    # Container orchestrator metadata fetching params
    metadata_download:
      # -- Max allowed response size (in Mb) when fetching metadata from Kubernetes.
      max_mb: 200
      # -- Sleep time (in μs) for each download chunk when fetching metadata from Kubernetes.
      chunk_wait_us: 1000
      # -- Watch frequency (in seconds) when fetching metadata from Kubernetes.
      watch_freq_sec: 1
    driver:
      kind: ebpf

Expected behaviour

Modifying this value via Helm and the configmap should raise the limit in the log message to 200 MB.
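For reference, a minimal values.yaml override that should carry the setting through. This is a sketch, assuming the chart copies keys under falco: verbatim into falco.yaml (the rendered configmap later in this report suggests it does):

```yaml
falco:
  metadata_download:
    max_mb: 200
```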

Screenshots

Environment

  • Falco version:

        falco --version
        Sat Apr 1 23:20:09 2023: Falco version: 0.34.1 (x86_64)
        Sat Apr 1 23:20:09 2023: Falco initialized with configuration file: /etc/falco/falco.yaml
        {"default_driver_version":"4.0.0+driver","driver_api_version":"3.0.0","driver_schema_version":"2.0.0","engine_version":"16","falco_version":"0.34.1","libs_version":"0.10.4","plugin_api_version":"2.0.0"}

  • System info:

        falco --support | jq .system_info
        /bin/sh: 5: jq: not found
        Sat Apr 1 23:20:40 2023: Falco version: 0.34.1 (x86_64)
        Sat Apr 1 23:20:40 2023: Falco initialized with configuration file: /etc/falco/falco.yaml
        Sat Apr 1 23:20:40 2023: Loading rules from file /etc/falco/falco_rules.yaml
  • Cloud provider or hardware configuration: AWS/EKS
  • OS:

cat /etc/os-release

    PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
    NAME="Debian GNU/Linux"
    VERSION_ID="11"
    VERSION="11 (bullseye)"
    VERSION_CODENAME=bullseye
    ID=debian
    HOME_URL="https://www.debian.org/"
    SUPPORT_URL="https://www.debian.org/support"
    BUG_REPORT_URL="https://bugs.debian.org/"

  • Kernel:

uname -a

Linux falco-vkb7s 5.4.231-137.341.amzn2.x86_64 #1 SMP Tue Feb 14 21:50:55 UTC 2023 x86_64 GNU/Linux

  • Installation method: EKS->Helm->ArgoCD

Additional context: it can be seen from falco.yaml that the change from the configmap is loaded but has no effect (or perhaps only the logger doesn't reflect it and I actually went over 200 MB; I will check this):

cat /etc/falco/falco.yaml

    buffered_outputs: false
    file_output:
      enabled: false
      filename: ./events.txt
      keep_alive: false
    grpc:
      bind_address: unix:///run/falco/falco.sock
      enabled: false
      threadiness: 0
    grpc_output:
      enabled: false
    http_output:
      enabled: true
      url: http://falco-falcosidekick:2801
      user_agent: falcosecurity/falco
    json_include_output_property: true
    json_include_tags_property: true
    json_output: true
    libs_logger:
      enabled: false
      severity: debug
    load_plugins: []
    log_level: info
    log_stderr: true
    log_syslog: true
    metadata_download:
      chunk_wait_us: 1000
      max_mb: 200
      watch_freq_sec: 1
    modern_bpf:
      cpus_for_each_syscall_buffer: 2
    output_timeout: 2000
    outputs:
      max_burst: 1000
      rate: 1
    plugins:
    - init_config: null
      library_path: libk8saudit.so
      name: k8saudit
      open_params: http://:9765/k8s-audit
    - library_path: libcloudtrail.so
      name: cloudtrail
    - init_config: ""
      library_path: libjson.so
      name: json
    priority: debug
    program_output:
      enabled: false
      keep_alive: false
      program: 'jq ''{text: .output}'' | curl -d @- -X POST https://hooks.slack.com/services/XXX'
    rules_file:
    - /etc/falco/falco_rules.yaml
    - /etc/falco/falco_rules.local.yaml
    - /etc/falco/rules.d
    stdout_output:
      enabled: true
    syscall_buf_size_preset: 4
    syscall_event_drops:
      actions:
      - log
      - alert
      max_burst: 1
      rate: 0.03333
      simulate_drops: false
      threshold: 0.1
    syscall_event_timeouts:
      max_consecutives: 1000
    syslog_output:
      enabled: true
    time_format_iso_8601: false
    watch_config_files: true
    webserver:
      enabled: true
      k8s_healthz_endpoint: /healthz
      listen_port: 8765
      ssl_certificate: /etc/falco/falco.pem
      ssl_enabled: false

s7an-it avatar Apr 01 '23 23:04 s7an-it

After doing more checks I see some missing links: even setting the Helm override with max_mb set to, for example, 1023 still results in this message. I tried to follow the code changes on both ends and I think I failed to find the mapping. https://github.com/falcosecurity/libs/blob/master/userspace/libsinsp/socket_handler.h @ldegio @zuc, any idea?
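To illustrate the suspected failure mode: the pattern appears to be a chunked read that gives up once a byte cap is exceeded, and the symptom here is consistent with the cap staying at a hard-coded default instead of being wired through from the config. A hypothetical sketch (all names are illustrative, not from the Falco codebase):

```python
# Hypothetical model of a capped chunked read: accumulate data and give up
# once more than data_max_b bytes have been received.
DEFAULT_DATA_MAX_B = 100 * 1024 * 1024  # assumed default cap, 100 MiB

def read_capped(chunks, data_max_b=DEFAULT_DATA_MAX_B):
    """Join chunks, raising once the running total exceeds data_max_b."""
    total = 0
    buf = []
    for chunk in chunks:
        buf.append(chunk)
        total += len(chunk)
        if total > data_max_b:
            raise RuntimeError(
                f"read more than {data_max_b // (1024 * 1024)} MB of data "
                f"({total} bytes). Giving up")
    return b"".join(buf)

# If the configured max_mb is never passed down, callers hit the default cap:
try:
    read_capped([b"x" * (1 << 20)] * 101)  # 101 MiB in 1 MiB chunks
except RuntimeError as e:
    print(e)  # read more than 100 MB of data (105906176 bytes). Giving up

# Passing the configured value through raises the cap as expected:
data = read_capped([b"x" * (1 << 20)] * 101, data_max_b=200 * 1024 * 1024)
print(len(data))  # 105906176
```

The bug report behaves like the first call: the config value is rendered into falco.yaml, but the reader never receives it.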

s7an-it avatar Apr 02 '23 00:04 s7an-it

Issues go stale after 90d of inactivity.

Mark the issue as fresh with /remove-lifecycle stale.

Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Provide feedback via https://github.com/falcosecurity/community.

/lifecycle stale

poiana avatar Jul 01 '23 01:07 poiana

Stale issues rot after 30d of inactivity.

Mark the issue as fresh with /remove-lifecycle rotten.

Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Provide feedback via https://github.com/falcosecurity/community.

/lifecycle rotten

poiana avatar Jul 31 '23 01:07 poiana

Rotten issues close after 30d of inactivity.

Reopen the issue with /reopen.

Mark the issue as fresh with /remove-lifecycle rotten.

Provide feedback via https://github.com/falcosecurity/community. /close

poiana avatar Aug 30 '23 01:08 poiana

@poiana: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue with /reopen.

Mark the issue as fresh with /remove-lifecycle rotten.

Provide feedback via https://github.com/falcosecurity/community. /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

poiana avatar Aug 30 '23 01:08 poiana