Inaccurate Memory Usage Display in Pod List
Describe the bug
Memory usage shows wildly inflated numbers in the pod list, but the graph inside the pod's detail view shows the correct values. CPU usage appears to be correct.
We installed the Prometheus operator using the community Helm chart. Lens auto-discovers the Prometheus service correctly (checked in the settings). We also verified that there is only one Prometheus installation and no leftovers from previous installations. Recently we installed the Prometheus adapter to enable the use of HPA, but as far as I understand the Lens code, it retrieves the metrics from Prometheus to show them in the UI.
kubectl top returns the correct values.
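To narrow down where the wrong numbers come from, it can help to dump the raw quantity strings that the metrics.k8s.io endpoint returns and compare them with what kubectl top and the Lens pod list show. A minimal sketch of such a check (a hypothetical debug script, not Lens code; it assumes `kubectl proxy` is running on its default port 8001):

```ts
// Hypothetical debug helper (not part of Lens): print the raw memory quantity
// strings returned by the metrics.k8s.io API so they can be compared with
// kubectl top and the values rendered in the Lens pod list.
interface PodMetrics {
  metadata: { name: string; namespace: string };
  containers: { name: string; usage: { cpu: string; memory: string } }[];
}

async function dumpPodMemory(namespace: string): Promise<void> {
  // kubectl proxy (default port 8001) handles authentication for us.
  const url = `http://127.0.0.1:8001/apis/metrics.k8s.io/v1beta1/namespaces/${namespace}/pods`;
  const res = await fetch(url);
  const body = (await res.json()) as { items: PodMetrics[] };

  for (const pod of body.items) {
    for (const c of pod.containers) {
      // The raw string may use any valid quantity suffix (Ki, Mi, Gi, ...).
      console.log(`${pod.metadata.name}/${c.name}: memory=${c.usage.memory}`);
    }
  }
}

dumpPodMemory("default").catch(console.error);
```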
To Reproduce
Steps to reproduce the behavior:
- Go to Workloads -> Pods
- Look at the memory column
Expected behavior
Correct memory usage is shown in the pod list.
Screenshots
Environment:
- Lens Version: 2023.9.191233-latest
- OS: Linux (Ubuntu)
- Installation method: https://docs.k8slens.dev/getting-started/install-lens/#debian
Logs: (can be provided if needed)
Kubeconfig: (can be provided if needed)
Additional context: (can be provided if needed)
Can confirm this issue. It seems to be consistently one binary unit of memory higher than the real value: in your case the pod uses 7.1 GiB but the overview shows 7.1 TiB. When I look at the kubectl top pod values, they are correct, so the metrics delivered by Prometheus aren't the problem; the issue is in Lens's interpretation of them.
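A clean 1024× error with the mantissa intact is exactly what an off-by-one index into a table of binary units would produce. A minimal illustration of that failure mode (illustration only, not Lens's actual formatting code):

```ts
// Illustration only (not Lens source): an off-by-one in a unit table turns
// 7.1 GiB into "7.1 TiB" while keeping the number in front of the unit correct.
const UNITS = ["B", "KiB", "MiB", "GiB", "TiB", "PiB"];

function formatBytes(bytes: number, unitOffset = 0): string {
  let value = bytes;
  let i = 0;
  while (value >= 1024 && i < UNITS.length - 1) {
    value /= 1024;
    i++;
  }
  // A correct formatter uses UNITS[i]; a bogus +1 offset reproduces the
  // reported symptom: the number is right, the unit is one step too big.
  return `${value.toFixed(1)} ${UNITS[Math.min(i + unitOffset, UNITS.length - 1)]}`;
}

const bytes = 7.1 * 1024 ** 3;      // ~7.1 GiB of real usage
console.log(formatBytes(bytes));    // "7.1 GiB" (correct)
console.log(formatBytes(bytes, 1)); // "7.1 TiB" (matches the bug)
```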
I have just run into this also. We have an old cluster with a metrics-server deployment, and Lens appears to show the values correctly there, so I think there must be something subtly different about the Prometheus adapter's responses on /apis/metrics.k8s.io/*.
Edit: I think I might have found the issue: with the Prometheus adapter, the unit of the returned memory quantity seems to change, though it stays within the spec of what kubectl expects:
When the metric looks wrong:
When it looks correct:
Metrics-server seems to report in Ki at all times. Pods with multiple containers seem to be reported incorrectly as well (in the graph too). Sigh.
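If a consumer assumes the quantity is always in Ki instead of parsing the suffix, values returned in Mi or Gi get inflated by a factor of 1024 or more, which would match the symptom. A sketch of suffix-aware normalization, assuming quantity strings such as "7623360Ki" or "7444Mi" (only a subset of the Kubernetes quantity suffixes is handled here):

```ts
// Sketch (not Lens source): normalize a Kubernetes memory quantity to bytes
// instead of assuming a fixed suffix such as Ki.
const SUFFIX_FACTORS: Record<string, number> = {
  "": 1,
  Ki: 1024,
  Mi: 1024 ** 2,
  Gi: 1024 ** 3,
  Ti: 1024 ** 4,
  k: 1e3,
  M: 1e6,
  G: 1e9,
  T: 1e12,
};

function quantityToBytes(quantity: string): number {
  const match = /^([0-9.]+)([A-Za-z]*)$/.exec(quantity.trim());
  if (!match) throw new Error(`unparseable quantity: ${quantity}`);
  const [, num, suffix] = match;
  const factor = SUFFIX_FACTORS[suffix];
  if (factor === undefined) throw new Error(`unknown suffix: ${suffix}`);
  return parseFloat(num) * factor;
}

// Both strings describe roughly the same amount of memory; treating the
// second one as if it were Ki would inflate it by a factor of 1024.
console.log(quantityToBytes("7623360Ki")); // ≈ 7.8e9 bytes
console.log(quantityToBytes("7444Mi"));    // ≈ 7.8e9 bytes
```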
I also think that Lens is not adding up the total memory of the containers in a pod correctly. The numbers fluctuate for multi-container pods :(
Yes, for a pod with multiple containers I have observed the following issues:
- resource requests and limits displayed for each container are actually the values set for the first container in the pod's container list, even though the containers' values are drastically different
- the total RAM usage of the pod in the detail view does not match the sum of its containers' usage (it is an order of magnitude greater), while in the pod list view it looks correct (see the summing sketch below)
This is using the Bitnami kube-prometheus installation of the Prometheus stack, node exporters, etc.: https://artifacthub.io/packages/helm/bitnami/kube-prometheus
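Whatever the data source, the pod-level figure should be the sum of the per-container quantities from the same sample, not the first container's value. A self-contained sketch under that assumption (the PodMetrics shape mirrors /apis/metrics.k8s.io/v1beta1; the parser handles only binary suffixes):

```ts
// Sketch (not Lens source): a pod's memory usage should be the sum of its
// containers' usage from the same PodMetrics sample.
interface ContainerMetrics {
  name: string;
  usage: { cpu: string; memory: string };
}
interface PodMetrics {
  metadata: { name: string };
  containers: ContainerMetrics[];
}

// Minimal binary-suffix parser, enough for metrics.k8s.io style quantities.
function memBytes(quantity: string): number {
  const m = /^([0-9.]+)(Ki|Mi|Gi|Ti)?$/.exec(quantity);
  if (!m) throw new Error(`unparseable quantity: ${quantity}`);
  const factors = { Ki: 1024, Mi: 1024 ** 2, Gi: 1024 ** 3, Ti: 1024 ** 4 };
  return parseFloat(m[1]) * (m[2] ? factors[m[2] as keyof typeof factors] : 1);
}

function podMemoryBytes(pod: PodMetrics): number {
  return pod.containers.reduce((sum, c) => sum + memBytes(c.usage.memory), 0);
}

const example: PodMetrics = {
  metadata: { name: "web-0" },
  containers: [
    { name: "app", usage: { cpu: "250m", memory: "512Mi" } },
    { name: "sidecar", usage: { cpu: "10m", memory: "64Mi" } },
  ],
};
console.log(podMemoryBytes(example)); // 603979776 bytes (576 MiB), not 512 MiB
```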