
Reported memory usage doubled on minikube with Docker driver

DieterDP-ng opened this issue 2 years ago • 10 comments

Describe the bug

The memory reported by Lens for pods or namespaces (and maybe nodes as well) is double the actual value on minikube with the Docker driver.

I didn't examine the Lens source, but issue 5660 mentions that these values come from the cAdvisor container_memory_working_set_bytes metric. It looks like cAdvisor metrics are duplicated when using the Docker driver.

For example: http://prometheus.minikube.test/api/v1/query?query=container_memory_working_set_bytes{pod%3D%22promtail-minikube-p568t%22} returns:

{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "container_memory_working_set_bytes",
          "container": "POD",
          "id": "/docker/85b8af02515cb3e49ca58e7f9d5f59e038a1cb400bec20870842393ee6fe39c4/kubepods/besteffort/podb8b24008-42a6-4456-9cbd-002f0399b169/e9083fade7222b6a3fdd744bbd65af0192143ca825605572180d8490565d7313",
          "image": "k8s.gcr.io/pause:3.6",
          "instance": "minikube",
          "job": "kubernetes-nodes-cadvisor",
          "name": "k8s_POD_promtail-minikube-p568t_minikube-shared_b8b24008-42a6-4456-9cbd-002f0399b169_2",
          "namespace": "minikube-shared",
          "pod": "promtail-minikube-p568t"
        },
        "value": [
          1662460342.559,
          "344064"
        ]
      },
      {
        "metric": {
          "__name__": "container_memory_working_set_bytes",
          "container": "POD",
          "id": "/kubepods/besteffort/podb8b24008-42a6-4456-9cbd-002f0399b169/e9083fade7222b6a3fdd744bbd65af0192143ca825605572180d8490565d7313",
          "image": "k8s.gcr.io/pause:3.6",
          "instance": "minikube",
          "job": "kubernetes-nodes-cadvisor",
          "name": "k8s_POD_promtail-minikube-p568t_minikube-shared_b8b24008-42a6-4456-9cbd-002f0399b169_2",
          "namespace": "minikube-shared",
          "pod": "promtail-minikube-p568t"
        },
        "value": [
          1662460342.559,
          "344064"
        ]
      },
      {
        "metric": {
          "__name__": "container_memory_working_set_bytes",
          "container": "promtail",
          "id": "/docker/85b8af02515cb3e49ca58e7f9d5f59e038a1cb400bec20870842393ee6fe39c4/kubepods/besteffort/podb8b24008-42a6-4456-9cbd-002f0399b169/bcea5ce36f51412aec7df474c621ef957517f9ec2dcb6723c433fc1128fa5c03",
          "image": "sha256:297a6d3c3fa22e3ff144be351c67b135e7bc43cc126fab504b2e8d932d32d523",
          "instance": "minikube",
          "job": "kubernetes-nodes-cadvisor",
          "name": "k8s_promtail_promtail-minikube-p568t_minikube-shared_b8b24008-42a6-4456-9cbd-002f0399b169_2",
          "namespace": "minikube-shared",
          "pod": "promtail-minikube-p568t"
        },
        "value": [
          1662460342.559,
          "88150016"
        ]
      },
      {
        "metric": {
          "__name__": "container_memory_working_set_bytes",
          "container": "promtail",
          "id": "/kubepods/besteffort/podb8b24008-42a6-4456-9cbd-002f0399b169/bcea5ce36f51412aec7df474c621ef957517f9ec2dcb6723c433fc1128fa5c03",
          "image": "sha256:297a6d3c3fa22e3ff144be351c67b135e7bc43cc126fab504b2e8d932d32d523",
          "instance": "minikube",
          "job": "kubernetes-nodes-cadvisor",
          "name": "k8s_promtail_promtail-minikube-p568t_minikube-shared_b8b24008-42a6-4456-9cbd-002f0399b169_2",
          "namespace": "minikube-shared",
          "pod": "promtail-minikube-p568t"
        },
        "value": [
          1662460342.559,
          "88305664"
        ]
      },
      {
        "metric": {
          "__name__": "container_memory_working_set_bytes",
          "id": "/kubepods/besteffort/podb8b24008-42a6-4456-9cbd-002f0399b169",
          "instance": "minikube",
          "job": "kubernetes-nodes-cadvisor",
          "namespace": "minikube-shared",
          "pod": "promtail-minikube-p568t"
        },
        "value": [
          1662460342.559,
          "88580096"
        ]
      }
    ]
  }
}

Note that entry 1 and entry 2 report the same value: they describe the same (pause) container, once under the /docker/... cgroup prefix and once directly under /kubepods/.... Entry 3 and entry 4 are the same duplication for the promtail container, with nearly identical values.

In our Grafana dashboards, we use an additional filter id=~"/kubepods.+" for this case.
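
To illustrate, here is a minimal sketch of how that filter changes the result for the example pod above. The label values are taken from the query output; the container!="" and container!="POD" matchers are added here only to keep the comparison to per-container series, and the exact aggregation Lens performs may differ.

# Naive per-container sum: both the /docker/... and the /kubepods/... cgroup series are
# counted, giving roughly twice the real working set (~176 MB here instead of ~88 MB).
sum(container_memory_working_set_bytes{pod="promtail-minikube-p568t", namespace="minikube-shared", container!="", container!="POD"})

# Same sum restricted to the /kubepods hierarchy: each container is counted once (~88 MB).
sum(container_memory_working_set_bytes{pod="promtail-minikube-p568t", namespace="minikube-shared", container!="", container!="POD", id=~"/kubepods.+"})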

To Reproduce
Steps to reproduce the behavior:

  1. Start minikube with the Docker driver, e.g.: minikube start --vm-driver=docker --cni=calico --kubernetes-version=1.23.10
  2. Install any pod.
  3. Look at the memory of that pod in Lens; it will be double what is reported by kubectl top pod <podname> (see the query sketched below these steps).
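
One quick way to confirm the duplication (a sketch, using the promtail pod from the example above; any pod name works) is to count the series per container:

# On an affected Docker-driver cluster this returns 2 for the pause ("POD") container and
# 2 for each application container, plus one pod-level series with an empty container label.
count by (container) (container_memory_working_set_bytes{pod="promtail-minikube-p568t", namespace="minikube-shared"})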

Expected behavior
I expect Lens to report correct memory usage.

Screenshots
[screenshot]

Environment (please complete the following information):

  • Lens Version: Lens-6.0.1-latest.20220810.2
  • OS: Ubuntu 20.04.5 LTS
  • Installation method (e.g. snap or AppImage in Linux): Installed the .deb package from https://k8slens.dev/

DieterDP-ng avatar Sep 06 '22 10:09 DieterDP-ng


same doubling in EKS (docker://20.10.17)

matti avatar Oct 31 '22 16:10 matti

Lens 6.4.0: similar doubling on AWS EKS with containerd, so I feel it's not only related to the Docker driver. The pod memory doubles while the container memory metric is correct. [screenshot] But the memory metric worked fine in Lens 6.2.5 on the same cluster.

kyleli666 avatar Mar 02 '23 01:03 kyleli666

I think that the issue is with the query from here:

https://github.com/lensapp/lens/blob/fef94430649885d5368c2a41353012294c6f746b/packages/core/src/main/prometheus/operator-provider.injectable.ts.ts#L80

When you query container_memory_working_set_bytes{pod=~"$POD", namespace="default"} you'll actually get nContainers + 1 series. Each container has a metric with its corresponding container label, plus there is one extra series with container="" which seems to show the usage for the entire pod:

[screenshot]

So the query in operator-provider.injectable.ts.ts should be:

sum(container_memory_working_set_bytes{pod=~"${opts.pods}", namespace="${opts.namespace}", container!=""}) by (${opts.selector})
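
The extra series can be inspected in isolation with something like the query below (the pod and namespace values are the same placeholders as above). Per the observation above, it returns one pod-level value that roughly equals the sum of the per-container values, which is why including it in the sum doubles the result.

# Pod-level series only: container="" seems to hold the usage of the whole pod,
# so including it in the per-container sum double-counts every container.
container_memory_working_set_bytes{pod=~"$POD", namespace="default", container=""}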

SleepWalker avatar Apr 03 '23 18:04 SleepWalker

The problem also occurs with Kubernetes clusters on AKS or on-premises, both on the latest Lens version (2023.4.141316) and on the latest OpenLens version (6.4.15). By contrast, I tried an older version of Lens yesterday (5.3.4-latest.20220120.1) and there are no duplicated metrics.

MariusRenta avatar May 03 '23 06:05 MariusRenta

A similar problem also exists with CPU metrics. Is there any announcement of when this will be fixed?

serhii-satanenko avatar Aug 24 '23 13:08 serhii-satanenko

Yesterday I noticed the same behavior on several of my clusters.

I'm using OpenLens 6.4.15 and my clusters are all on AKS spanning from versions 1.21 to 1.26 and all of them had the same issue: Pod memory usage was double the actual container usage (or the usage I got when trying kubectl top).

My colleague had an older version of regular Lens (I believe it was from 2022 but can't remember the number exactly) and he didn't have this issue.

I'm subscribing to this issue to keep an eye out for any fixes.

Bemesko avatar Aug 31 '23 11:08 Bemesko

seeing this too on k3s v1.27.4 clusters 😕

viceice avatar Sep 01 '23 09:09 viceice

This might be valid for some Prometheus setups, but in our case it shows double CPU in the pod view.

https://github.com/lensapp/lens/blob/f1a960fd785b62a118acd8b1525d879f39917e21/packages/technical-features/prometheus/src/operator-provider.injectable.ts.ts#L83

[screenshot]
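
If the CPU path suffers from the same extra container="" series, the analogous fix would presumably be the same container!="" matcher. A hedged sketch follows; the actual metric name and rate window Lens uses are not shown in this thread, so container_cpu_usage_seconds_total and [1m] are assumptions:

# Hypothetical CPU counterpart of the memory fix above: exclude the pod-level
# container="" series so each container's CPU is only counted once.
sum(rate(container_cpu_usage_seconds_total{pod=~"${opts.pods}", namespace="${opts.namespace}", container!=""}[1m])) by (${opts.selector})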

Alegrowin avatar Oct 06 '23 17:10 Alegrowin

I'm also seeing this problem. Memory and CPU metrics are doubled when a non-Lens metrics provider is selected in the Lens settings. My cluster has prometheus-community/kube-prometheus-stack installed via its Helm chart.

LinkUpAlex avatar Dec 01 '23 09:12 LinkUpAlex

Same here - k3s running on Ubuntu 22.10 with prometheus-community/kube-prometheus-stack Helm chart

nagyben avatar Feb 09 '24 18:02 nagyben