Inaccurate Memory Usage Display in Pod List

Open smuu opened this issue 2 years ago • 5 comments

Describe the bug
Memory usage shows exorbitant numbers in the pod list but shows correct numbers in the graph inside the detailed view of the pod. CPU usage seems to be correct.

We installed the Prometheus operator using the community Helm chart. Lens auto-discovers the Prometheus service correctly (checked in settings). We also checked that there is only one Prometheus installation and no leftovers from previous installations. Recently we installed the Prometheus adapter to enable HPA, but as far as I understand the Lens code, it retrieves the metrics from Prometheus to show them in the UI.

kubectl top returns the correct values.
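
A quick way to see what the metrics API actually returns is to print the raw quantity strings and compare them with kubectl top. A minimal sketch, assuming the Metrics helper from the @kubernetes/client-node package (this is not the code path Lens itself uses, and the namespace is just an example):

```typescript
// Hypothetical comparison script (not part of Lens): print the raw memory
// quantity strings returned by the metrics.k8s.io API so they can be compared
// with `kubectl top pod`. Assumes the @kubernetes/client-node package.
import { KubeConfig, Metrics } from "@kubernetes/client-node";

async function dumpPodMemory(namespace = "default"): Promise<void> {
  const kc = new KubeConfig();
  kc.loadFromDefault();

  const metricsClient = new Metrics(kc);
  const podMetrics = await metricsClient.getPodMetrics(namespace);

  for (const pod of podMetrics.items) {
    for (const container of pod.containers) {
      // usage.memory is a Kubernetes quantity string, e.g. "123456Ki" or "7403Mi";
      // whatever consumes it has to honour the unit suffix.
      console.log(`${pod.metadata.name}/${container.name}: ${container.usage.memory}`);
    }
  }
}

dumpPodMemory().catch(console.error);
```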

To Reproduce
Steps to reproduce the behavior:

  1. Go to Workloads -> Pods
  2. Look at the memory column

Expected behavior
Correct memory usage is shown in the pod list.

Screenshots
(Screenshot from 2023-10-11 10-23-29 and Screenshot from 2023-10-11 10-23-48 attached)

Environment (please complete the following information):

  • Lens Version: 2023.9.191233-latest
  • OS: Linux (Ubuntu)
  • Installation method: https://docs.k8slens.dev/getting-started/install-lens/#debian

(Screenshot from 2023-10-11 10-24-20 attached)

Logs: (can be provided if needed)

Kubeconfig: (can be provided if needed)

Additional context (can be provided if needed)

smuu · Oct 11 '23 08:10

Can confirm this issue. It seems to be consistently one binary unit higher than the real value: in your case the usage is 7.1 GiB but it shows as 7.1 TiB in the overview. When I look at the kubectl top pod values, they are correct, so the metrics delivered by Prometheus aren't the problem; the problem is how Lens interprets them.

2martens · Nov 18 '23 13:11

I have just run into this as well. We have an old cluster with a metrics-server deployment and Lens appears to show the values correctly there, so I think there must be something subtly different about the Prometheus adapter's responses on /apis/metrics.k8s.io/*.

Edit: I think I might have found the issue: with the Prometheus adapter, the unit of the returned memory quantities seems to change, but it stays within the spec of what kubectl expects:

When the metric looks wrong: (screenshot)

When it looks correct: (screenshot)

Metrics-server seems to report in Ki at all times. Pods with multiple containers seem to be reported incorrectly as well (in the graph too). Sigh.
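
For reference, the unit suffix is part of the Kubernetes quantity format, and the adapter reporting Mi where metrics-server reports Ki is allowed; the consumer has to normalize whatever it gets. A minimal sketch of that normalization (not Lens's actual code, just an illustration of the arithmetic):

```typescript
// Minimal sketch (not Lens's actual code) of turning a Kubernetes quantity
// string into bytes. Both binary (Ki, Mi, Gi, ...) and decimal (k, M, G, ...)
// suffixes are legal, so a consumer has to honour whichever one is returned;
// assuming a fixed unit is exactly how a GiB value ends up rendered as TiB.
const BINARY: Record<string, number> = {
  Ki: 2 ** 10, Mi: 2 ** 20, Gi: 2 ** 30, Ti: 2 ** 40, Pi: 2 ** 50, Ei: 2 ** 60,
};
const DECIMAL: Record<string, number> = {
  m: 1e-3, k: 1e3, M: 1e6, G: 1e9, T: 1e12, P: 1e15, E: 1e18,
};

function quantityToBytes(quantity: string): number {
  const match = quantity.match(/^([0-9.]+)([A-Za-z]*)$/);
  if (!match) throw new Error(`Unparsable quantity: ${quantity}`);
  const [, value, suffix] = match;
  const scale = suffix === "" ? 1 : BINARY[suffix] ?? DECIMAL[suffix];
  if (scale === undefined) throw new Error(`Unknown suffix: ${suffix}`);
  return parseFloat(value) * scale;
}

// The adapter and metrics-server can describe the same usage differently:
console.log(quantityToBytes("7623884Ki")); // ≈ 7.81e9 bytes (~7.3 GiB)
console.log(quantityToBytes("7445Mi"));    // ≈ 7.81e9 bytes (~7.3 GiB)
```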

gunzy83 · Nov 29 '23 03:11

I also think that Lens is not correctly adding up the total memory of all containers in a pod. The numbers fluctuate on multi-container pods :(

gunzy83 · Nov 30 '23 23:11

Yes, for a pod with multiple containers I have observed the following issues:

  • resource requests and limits are displayed for every container as if they were the values set on the first container in the pod's container list, even though they are drastically different
  • the total RAM usage of a pod does not match the sum of its containers' usage (it is an order of magnitude greater) in the pod detail view, although it looks correct in the pod list view (see the sketch below)

This is using the Bitnami kube-prometheus installation of the Prometheus stack, node exporters, etc.: https://artifacthub.io/packages/helm/bitnami/kube-prometheus
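
To illustrate the summation point from the list above: each container's quantity needs to be normalized to bytes before the per-pod total is computed, otherwise mixed units within one pod produce exactly the kind of fluctuating totals described here. A self-contained sketch with made-up names (not the Lens implementation):

```typescript
// Illustrative only: sum per-container memory usage into a per-pod total.
// The shape mirrors the containers[].usage.memory field of a metrics.k8s.io
// PodMetrics item; type and function names are made up for this sketch.
const UNIT_SCALE: Record<string, number> = {
  "": 1, Ki: 2 ** 10, Mi: 2 ** 20, Gi: 2 ** 30, Ti: 2 ** 40,
  k: 1e3, M: 1e6, G: 1e9, T: 1e12,
};

function toBytes(quantity: string): number {
  const match = quantity.match(/^([0-9.]+)([A-Za-z]*)$/);
  if (!match || UNIT_SCALE[match[2]] === undefined) {
    throw new Error(`Unparsable quantity: ${quantity}`);
  }
  return parseFloat(match[1]) * UNIT_SCALE[match[2]];
}

interface ContainerMetrics {
  name: string;
  usage: { memory: string };
}

// Normalize every container to bytes first, then add them up.
function podMemoryBytes(containers: ContainerMetrics[]): number {
  return containers.reduce((sum, c) => sum + toBytes(c.usage.memory), 0);
}

// Two containers reported in different units still sum correctly:
const containers: ContainerMetrics[] = [
  { name: "app", usage: { memory: "512Mi" } },
  { name: "sidecar", usage: { memory: "65536Ki" } },
];
console.log(podMemoryBytes(containers)); // 603979776 bytes = 576 MiB
```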

Tomasz-Kluczkowski · Feb 03 '24 10:02