
The notifications-controller logging level remains at "info" for Argo CD, although the argocd-cmd-params-cm ConfigMap is configured with "error".

Open ferchdav opened this issue 1 year ago • 12 comments


Could you please provide a fix?

We want to avoid the large volume of logs being stored, which incurs significant cost.

Here are the details:

- Doc link: https://argo-cd.readthedocs.io/en/stable/operator-manual/argocd-cmd-params-cm-yaml/

  • argocd-cmd-params-cm configuration (Argo CD Notifications Controller properties):

    notificationscontroller.log.level: "error"
    notificationscontroller.log.format: "json"
    notificationscontroller.selfservice.enabled: "false"

- Log example:

  {
    "insertId": "yl1x08uioffimr03",
    "jsonPayload": {
      "msg": "Trigger on-sync-status-unknown result: [{[0].6SzWb05EK-0v90hwjyytTbN7S6A [app-sync-status-unknown] false}]",
      "level": "info",
      "resource": "argocd/xxxxxx"
    },
    "resource": {
      "type": "k8s_container",
      "labels": {
        "cluster_name": "cluster_name",
        "namespace_name": "argocd",
        "project_id": "npe",
        "location": "us-east3",
        "container_name": "notifications-controller",
        "pod_name": "argocd-notifications-controller-7kjsdhfkjshdf-sdfsdf4f"
      }
    },
    "timestamp": "2024-07-xx",
    "severity": "INFO",
    "labels": {
      "k8s-pod/pod-template-hash": "45sdfsdfsdf",
      ....
    },
    "logName": "projects/npe/logs/stderr",
    "receiveTimestamp": "2024-07-xx"
  }

- Notifications chart code which I think causes the issue: the container args fall back to global.logging.level when notifications.logLevel is unset:

  • --loglevel={{ default .Values.global.logging.level .Values.notifications.logLevel }} https://github.com/argoproj/argo-helm/blob/main/charts/argo-cd/templates/argocd-notifications/deployment.yaml#L66

  and the global logging level defaults to the fixed value "info", which determines the final result of the issue. https://github.com/argoproj/argo-helm/blob/main/charts/argo-cd/values.yaml#L71

- Slack message: https://cloud-native.slack.com/archives/C01UKS2NKK3/p1721760075212969
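To illustrate the fallback described above: Helm's default function returns its second argument unless that argument is empty, in which case it returns the first. A minimal sketch of how the chart's --loglevel flag gets rendered (my own illustration in Python, not chart code; the values mirror the chart's documented defaults):

```python
def helm_default(default_value, value):
    # Mimics Helm's `default` template function for strings:
    # returns `value` unless it is empty, else `default_value`.
    return value if value else default_value

global_logging_level = "info"   # global.logging.level default in values.yaml
notifications_log_level = ""    # notifications.logLevel is unset by default

# With notifications.logLevel unset, the rendered flag is --loglevel=info,
# regardless of what argocd-cmd-params-cm says.
print(f"--loglevel={helm_default(global_logging_level, notifications_log_level)}")  # --loglevel=info

# Setting notifications.logLevel in the chart values changes the rendered flag.
print(f"--loglevel={helm_default(global_logging_level, 'error')}")  # --loglevel=error
```

This suggests that, with default chart values, the rendered flag rather than the ConfigMap determines the effective level.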

ferchdav avatar Jul 27 '24 14:07 ferchdav

@ferchdav I can't reproduce this issue. Can you tell me how to reproduce it in detail?

juwon8891 avatar Jul 27 '24 17:07 juwon8891

Hello @juwon8891

Yes, here are the steps:

  1. Get ArgoCD installed on GCP GKE

  2. According to the documentation, configure the argocd-cmd-params-cm ConfigMap with the following data:

  • Doc link: https://argo-cd.readthedocs.io/en/stable/operator-manual/argocd-cmd-params-cm-yaml/

Argo CD Notifications Controller Properties:

notificationscontroller.log.level: "error"
notificationscontroller.log.format: "json"
notificationscontroller.selfservice.enabled: "false"

  • Config Map:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: argocd
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
    app.kubernetes.io/version: v2.11.4
    helm.sh/chart: argo-cd-6.9.0
data:
  applicationsetcontroller.log.format: json
  applicationsetcontroller.log.level: error
  applicationsetcontroller.policy: sync
  controller.log.format: json
  controller.log.level: error
  notificationscontroller.log.format: json
  notificationscontroller.log.level: error
  notificationscontroller.selfservice.enabled: "false"
  reposerver.log.format: json
  reposerver.log.level: error
  server.log.format: json
  server.log.level: error

  3. Restart the Argo CD components:

     kubectl rollout restart sts/argocd-application-controller -n argocd
     kubectl rollout restart deploy/argocd-applicationset-controller -n argocd
     kubectl rollout restart deploy/argocd-dex-server -n argocd
     kubectl rollout restart deploy/argocd-notifications-controller -n argocd
     kubectl rollout restart deploy/argocd-redis-ha-haproxy -n argocd
     kubectl rollout restart sts/argocd-redis-ha-server -n argocd
     kubectl rollout restart deploy/argocd-repo-server -n argocd
     kubectl rollout restart deploy/argocd-server -n argocd

  4. Review the GCP logs, where you will see entries like the one below, logged at INFO level due to the default settings:

{
  "insertId": "yl1x08uioffimr03",
  "jsonPayload": {
    "msg": "Trigger on-sync-status-unknown result: [{[0].6SzWb05EK-0v90hwjyytTbN7S6A [app-sync-status-unknown] false}]",
    "level": "info",
    "resource": "argocd/xxxxxx"
  },
  "resource": {
    "type": "k8s_container",
    "labels": {
      "cluster_name": "cluster_name",
      "namespace_name": "argocd",
      "project_id": "npe",
      "location": "us-east3",
      "container_name": "notifications-controller",
      "pod_name": "argocd-notifications-controller-7kjsdhfkjshdf-sdfsdf4f"
    }
  },
  "timestamp": "2024-07-xx",
  "severity": "INFO",
  "labels": {
    "k8s-pod/pod-template-hash": "45sdfsdfsdf",
    ....
  },
  "logName": "projects/npe/logs/stderr",
  "receiveTimestamp": "2024-07-xx"
}

I think the current Notifications code here is causing the issue:

The container args fall back to global.logging.level when notifications.logLevel is unset:

--loglevel={{ default .Values.global.logging.level .Values.notifications.logLevel }} https://github.com/argoproj/argo-helm/blob/main/charts/argo-cd/templates/argocd-notifications/deployment.yaml#L66

and the global logging level defaults to the fixed value "info", which determines the final result of the issue. https://github.com/argoproj/argo-helm/blob/main/charts/argo-cd/values.yaml#L71
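If the chart fallback above is indeed the cause, one possible workaround (a sketch, untested here; notifications.logFormat is assumed to exist in the chart analogously to notifications.logLevel) is to set the per-component values explicitly so the rendered --loglevel flag no longer falls back to global.logging.level:

```yaml
# Helm values override for the argo-cd chart (hypothetical sketch)
notifications:
  logLevel: error
  logFormat: json
```

Alternatively, setting global.logging.level itself changes the fallback for all components at once.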

ferchdav avatar Jul 29 '24 17:07 ferchdav

Sorry, I can't reproduce this issue. What environment does this message appear in? Trigger on-sync-status-unknown result: [{[0].6SzWb05EK-0v90hwjyytTbN7S6A [app-sync-status-unknown]

juwon8891 avatar Jul 29 '24 18:07 juwon8891

@juwon8891 Have you set up your Argo CD on GCP GKE? If so, you will see that despite the argocd-cmd-params-cm ConfigMap being configured with error, all INFO logs (the default value in https://github.com/argoproj/argo-helm/blob/main/charts/argo-cd/values.yaml#L71) from the notifications-controller are being saved to the GCP Logging console, causing a lot of expense because of unnecessary information. I hope you have all the information you need now; please tell me if you don't.

ferchdav avatar Jul 29 '24 21:07 ferchdav

@juwon8891 PFA logging_notification_controller.png with details. You will see that the notifications-controller is sending logs with level error, debug, or info despite error being set in the ConfigMap.

The logging levels configured for the rest of the Argo CD components are working OK.

ferchdav avatar Jul 29 '24 21:07 ferchdav

(screenshot) I still can't find the log in the GKE environment. How about actually checking the container log with the kubectl command?

juwon8891 avatar Jul 29 '24 22:07 juwon8891

@juwon8891 It seems you are getting no logs in GCP from your Argo CD instance. Try either of these two query filters:

  1. resource.labels.namespace_name="argocd" labels.k8s-pod/app_kubernetes_io/name="argocd-notifications-controller"

or just:

  2. resource.labels.namespace_name="argocd"

and let me know please.

ferchdav avatar Jul 30 '24 16:07 ferchdav

(screenshot) I can still only get the "error" log. If possible, could you check the container log?

juwon8891 avatar Jul 30 '24 17:07 juwon8891

@juwon8891

Yes, here are the container logs from one of the pods, where you can see levels info, warning, etc.

Screenshot 2024-07-30 at 17 00 07

and here is the ConfigMap configuration:

Screenshot 2024-07-30 at 17 00 26

ferchdav avatar Jul 30 '24 21:07 ferchdav

Did you restart the notification controller pod after you set up the configmap?

juwon8891 avatar Aug 02 '24 14:08 juwon8891

Hello @juwon8891 Yes, I've restarted all components.

ferchdav avatar Aug 06 '24 02:08 ferchdav

Had a similar, maybe the same, issue: (screenshot) a text payload is treated as error, as mentioned before; that's standard behaviour. Changing the format to JSON makes Google Cloud Monitoring pick up the actual level.

I have added patch:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
data:
  applicationsetcontroller.log.format: "json"
  controller.log.format: "json"
  notificationscontroller.log.format: "json"
  reposerver.log.format: "json"
  server.log.format: "json"

forced recreation of all pods:

kubectl delete pods --all -n argocd

and it worked. (screenshot)

ignotas avatar Oct 13 '24 08:10 ignotas

Thanks! Checking asap


ferchdav avatar Oct 21 '24 17:10 ferchdav

@ferchdav, is this resolved?

andrii-korotkov-verkada avatar Nov 11 '24 05:11 andrii-korotkov-verkada

Hi All, I am experiencing the same issue on Argo CD 2.12.4. Setting notificationscontroller.log.level and notificationscontroller.log.format in the ConfigMap does not change the default values (info, text). Do I have to remove these args from the argocd-notifications-controller Deployment?

  • '--loglevel=info'
  • '--logformat=text'

Currently my pod is starting like this:

  /usr/local/bin/argocd-notifications

  • '--metrics-port=9001'
  • '--loglevel=info'
  • '--logformat=text'
  • '--namespace=test'
  • '--argocd-repo-server=argocd-repo-server:8081'
  • '--secret-name=argocd-notifications-secret'

DimitarKapashikov avatar Jan 22 '25 13:01 DimitarKapashikov

I wound up having to do this globally. I know it's not ideal for folks who want varying log levels between components, but after restarting the argocd deployments, this works for me now.

global:
  domain: {}
  logging:
    format: text
    level: error

shermanericts avatar Jan 22 '25 18:01 shermanericts

I have this exact issue: Argo CD on GKE polluting the logs with severity: "ERROR" when the textPayload says level=info. Any idea how to solve this? Has anyone managed to find a clean fix? @ignotas? @juwon8891?

LeoAnt02 avatar Feb 21 '25 03:02 LeoAnt02

somewhat related:

  • I set notificationscontroller.log.level: "debug"
  • restarted the argocd-notifications-controller deployment
  • and I only see info logs

fredleger avatar Apr 25 '25 10:04 fredleger

Hi All, I am experiencing the same issue on Argo CD 2.12.4. Setting notificationscontroller.log.level and notificationscontroller.log.format in the ConfigMap does not change the default values (info, text). Do I have to remove these args from the argocd-notifications-controller Deployment:

  • '--loglevel=info'
  • '--logformat=text'

Currently my pod is starting like this:

  /usr/local/bin/argocd-notifications

  • '--metrics-port=9001'
  • '--loglevel=info'
  • '--logformat=text'
  • '--namespace=test'
  • '--argocd-repo-server=argocd-repo-server:8081'
  • '--secret-name=argocd-notifications-secret'

Yup I confirm this happens.

I have this in my argocd-cmd-params-cm:

  notificationscontroller.log.format: json                                                                        
  notificationscontroller.log.level: info   

However my pod continues to start up with:

    Args:                                                                                                         
      /usr/local/bin/argocd-notifications                                                                         
      --metrics-port=9001                                                                                         
      --loglevel=info                                                                                             
      --logformat=text                                                                                            
      --namespace=argocd-system                                                                                   
      --argocd-repo-server=argocd-repo-server:8081                                                                
      --secret-name=argocd-notifications-secret  

Maybe similar issue to https://github.com/argoproj/argo-cd/pull/10513?
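This would be consistent with the usual flag/env precedence: Argo CD components generally read the env var sourced from argocd-cmd-params-cm only as the flag's default, so an explicitly rendered --loglevel argument wins over the ConfigMap. A minimal sketch of that precedence (my own illustration, not Argo CD code; the env var name follows the install manifests' convention):

```python
def resolve_loglevel(cli_args, env, fallback="info"):
    # Explicit CLI flag > ConfigMap-sourced env var > built-in default.
    for arg in cli_args:
        if arg.startswith("--loglevel="):
            return arg.split("=", 1)[1]
    return env.get("ARGOCD_NOTIFICATIONS_CONTROLLER_LOGLEVEL", fallback)

# The chart renders an explicit flag, so the ConfigMap env var is ignored:
print(resolve_loglevel(["--loglevel=info"],
                       {"ARGOCD_NOTIFICATIONS_CONTROLLER_LOGLEVEL": "error"}))  # info

# Without the explicit flag, the ConfigMap value would take effect:
print(resolve_loglevel([],
                       {"ARGOCD_NOTIFICATIONS_CONTROLLER_LOGLEVEL": "error"}))  # error
```

If so, removing the rendered --loglevel/--logformat args (or setting the corresponding chart values) would let the ConfigMap take effect.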

jeremych1000 avatar Jun 05 '25 14:06 jeremych1000

Adding "controller.log.level: error" to the argocd-cmd-params-cm ConfigMap fixed it for me.

jorgyp avatar Jun 06 '25 01:06 jorgyp

@jorgyp This didn't work for me. I still get logs from argocd-repo-server with severity ERROR even though they are info logs. Were there any other configs you changed?

I tried all of the above in this thread.

MitchDart avatar Jul 11 '25 07:07 MitchDart

Hello everyone, we are hitting this issue also on the application controller. In the argocd-cmd-params-cm ConfigMap the logs are configured with these settings:

controller.log.format: json                                                                                                                                                                                                     
controller.log.level: warn

The StatefulSet is configured via the Helm chart with these env variables:

- name: ARGOCD_APPLICATION_CONTROLLER_LOGFORMAT
  valueFrom:
    configMapKeyRef:
      key: controller.log.format
      name: argocd-cmd-params-cm
      optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_LOGLEVEL
  valueFrom:
    configMapKeyRef:
      key: controller.log.level
      name: argocd-cmd-params-cm
      optional: true

These environment variables are loaded by Argo CD in this part of the code, so it should work. The log format is respected, but the level is not.

Thanks

ricardojdsilva87 avatar Jul 28 '25 10:07 ricardojdsilva87

We are seeing the same behaviour: the value is correctly set from the Helm values into the argocd-cmd-params-cm ConfigMap

controller.log.format: json
controller.log.level: error

but the controller keeps outputting info statements, e.g.:

{"dry-run":"none","level":"info","manager":"argocd-controller","msg":"Applying resource .... in cluster: https://......eks.amazonaws.com, namespace: ....","serverSideApply":false,"serverSideDiff":true,"time":"2025-11-06T12:20:33Z"}
{"level":"info","msg":"Warning: would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (container \"....\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container \"...\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or container \"...\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container \"...\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")","time":"2025-11-06T12:20:32Z"}

Can this be caused by 3rd-party libraries like helm and kubectl that are used? If so, can we propagate the configured log level to the respective client libraries?

platformoperationstkp avatar Nov 06 '25 12:11 platformoperationstkp

For me, on Argo CD Helm chart 7.9.1, the effect was the same: all other pods changed the log level and format when the change was applied, but I had to manually restart the notifications-controller pod for the configuration to take effect. So for others it might be the same: the pod does not restart automatically due to the lack of a config checksum in its annotations. The values file change was as follows:

global:
  logging:
    format: json
    level: warn

martivo avatar Dec 10 '25 10:12 martivo