
How to disable the access log?

Open • mindw opened this issue 5 years ago • 14 comments

Environment
Installation method: kubectl apply
Kubernetes version: 1.14
Dashboard version: 2.0.0-b4
Operating system: Linux
Steps to reproduce

Run the sidecar inside the dashboard pod:

        args:
          - --metric-resolution=30s
          - --log-level=warn
        ports:
        - containerPort: 8000
          name: http
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: http
          initialDelaySeconds: 30
          timeoutSeconds: 30

Get logs: kubectl -n kube-system logs svc/kubernetes-dashboard -c dashboard-metrics-scraper

Observed result
10.0.7.85 - - [20/Sep/2019:23:47:15 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.13"
10.0.7.85 - - [20/Sep/2019:23:47:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.13"
127.0.0.1 - - [20/Sep/2019:23:47:31 +0000] "GET /healthz?timeout=32s HTTP/1.1" 200 25 "" "dashboard/v0.0.0 (linux/amd64) kubernetes/$Format"
Expected result

An empty log except for warning messages.

Comments
  • Looks like the sidecar uses the gorilla/handlers combined logging handler (CombinedLoggingHandler). Couldn't find a way to disable it (see the sketch below).
  • That format is inconsistent with the rest of the k8s components, which use klog, and it makes writing a parser harder than it should be. https://github.com/kubernetes/kubernetes/issues/61006 https://github.com/kubernetes/klog
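For reference, a minimal sketch of how gorilla/handlers is typically wired (this is not the scraper's actual code). The log destination is fixed when the handler is constructed, so silencing it, e.g. by passing io.Discard instead of os.Stdout, would need a patch or an upstream flag:

package main

import (
	"io"
	"log"
	"net/http"

	"github.com/gorilla/handlers"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	// CombinedLoggingHandler writes one Apache combined-format line per
	// request to the io.Writer given here; the scraper presumably passes
	// os.Stdout. Passing io.Discard instead suppresses the access log.
	logged := handlers.CombinedLoggingHandler(io.Discard, mux)

	log.Fatal(http.ListenAndServe(":8000", logged))
}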

mindw avatar Sep 22 '19 08:09 mindw

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Dec 21 '19 09:12 fejta-bot

/remove-lifecycle stale

mindw avatar Dec 24 '19 16:12 mindw

I am experiencing the same thing with the kube-probe logs for liveness and readiness probes.

danksim avatar Feb 12 '20 22:02 danksim

We're seeing the same thing in our logs. The probe polls the root endpoint (and receives a redirect) at a rather rapid interval, yielding quite a few log lines (thousands per hour).

10.12.28.58 - - [19/Mar/2020:13:34:02 +0000] "GET / HTTP/1.1" 302 138 "-" "kube-probe/1.14+"
10.12.28.58 - - [19/Mar/2020:13:34:03 +0000] "GET /expired HTTP/1.1" 200 43714 "http://10.12.7.211:80/" "kube-probe/1.14+"

MarcelTon avatar Mar 20 '20 12:03 MarcelTon

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Jun 18 '20 12:06 fejta-bot

/remove-lifecycle stale

mindw avatar Jun 18 '20 13:06 mindw

/lifecycle frozen

maciaszczykm avatar Jun 18 '20 13:06 maciaszczykm

Hi people, has anyone found a solution to disable this INFO log? Maybe some flag or config on the kubelet?

GrigorievNick avatar Dec 11 '21 15:12 GrigorievNick

Disable access_log in your nginx config

server {
   ...
   access_log off;
   ...
}

sxwebdev avatar Mar 15 '22 08:03 sxwebdev

Is there any update on this?

How are others tackling this?

nikhilagrawal577 avatar Apr 19 '22 15:04 nikhilagrawal577

We dropped the dashboard from the cluster. It was too difficult to pass security review 🙁

mindw avatar Apr 19 '22 15:04 mindw

Anyone solve this?

tooptoop4 avatar Feb 10 '23 03:02 tooptoop4

Disable access_log in your nginx config

server {
   ...
   access_log off;
   ...
}

Thanks! Any ideas on where in the pod's filesystem this is? Then we could override it with a replacement from a ConfigMap etc. ;-)

mikementzmaersk avatar Jun 02 '23 15:06 mikementzmaersk


@mikementzmaersk, @sxwebdev's comment seems to be a red herring, as it refers to the nginx access log and the scraper doesn't use nginx. If you're looking to filter out the logs, then adding custom rules to your k8s log collector (e.g. fluentd/fluent-bit/beat) would be one way to do it.
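For example, a minimal fluent-bit rule that drops the probe lines. This is a sketch: it assumes container logs are tailed with a kube.* tag and that the raw line lands in the log field, which is the usual setup but may differ in your pipeline:

# Drop access-log lines generated by kubelet liveness/readiness probes.
[FILTER]
    Name    grep
    Match   kube.*
    Exclude log kube-probe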

mindw avatar Jun 05 '23 10:06 mindw