k9s warns on low memory, but memory is not low; it is just cached
K9s's warning banner flags that I am using 70% memory or more.
But if I look at the pod consumption, it only adds up to about one third of the available RAM. The rest of the memory is cached/buffered.
I think K9s should account for how basic Linux memory management works and not warn newbies that "RAM is running out", because measured that way it will ALWAYS be running out.
https://www.linuxatemyram.com
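For anyone new to this, the gap is easy to demonstrate with a minimal Go sketch (not k9s code, just the standard /proc/meminfo fields) comparing the naive "total minus free" figure with the kernel's own MemAvailable estimate of what is actually usable:

```go
// Minimal sketch: why "used = total - free" overstates memory pressure
// on Linux. MemAvailable already discounts reclaimable page cache and
// buffers, so it is the better basis for a "memory is low" warning.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readMeminfo returns the fields of /proc/meminfo as a map of values in kB.
func readMeminfo() (map[string]uint64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return nil, err
	}
	defer f.Close()

	fields := map[string]uint64{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Lines look like: "MemAvailable:   12345678 kB"
		parts := strings.Fields(sc.Text())
		if len(parts) < 2 {
			continue
		}
		key := strings.TrimSuffix(parts[0], ":")
		if v, err := strconv.ParseUint(parts[1], 10, 64); err == nil {
			fields[key] = v
		}
	}
	return fields, sc.Err()
}

func main() {
	m, err := readMeminfo()
	if err != nil {
		panic(err)
	}
	total := m["MemTotal"]
	naive := total - m["MemFree"]      // counts cache/buffers as "used"
	actual := total - m["MemAvailable"] // only what is truly unreclaimable
	fmt.Printf("naive used: %d%%  actual used: %d%%\n",
		naive*100/total, actual*100/total)
}
```

On a long-running box the first number creeps toward 100% no matter what, while the second stays close to the sum of what processes really hold.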
Yes, I've seen this problem. After a while, the used memory can go over 100%:
In this case, the warning about critical memory is no longer shown...
Anyway, here's the RSS memory usage of the same cluster reported by Azure:
We can see that the RSS usage is about 30%, well below the 104% reported by k9s.
Could you fix the MEM display to show RSS usage?
All the best
I have also noticed discrepancies in memory usage metrics for Azure clusters while using K9s. Specifically, the reported memory usage appears to be higher than the total available memory.
Example:
- K9s Node Memory Reading: 5747M used out of 4543M available.
- Pod Memory Sum: 2956M when summing up individual pod memory usage on that node.
- Azure Dashboard Reading: 3.3GB usage.
Given these numbers, it's unclear whether Azure is providing incorrect metrics, or whether K9s needs additional configuration or updates to interpret them correctly on Azure clusters.
Versions:
- K9s: v0.27.4
- Kubernetes: v1.25.11
Has there been any progress on this issue?
Just noticed the same. The code responsible for fetching and calculating the values is here:
https://github.com/derailed/k9s/blob/702f6f01b2144b973ab2c4c1a2e6faddc8aef7a0/internal/client/metrics.go#L62-L84
I might be wrong, but according to the documentation, the call to Memory() returns "the Memory limit if specified". Shouldn't the calculation instead be based on the node's Capacity and Allocatable fields (see NodeStatus)?
Can anyone confirm this?
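If that reading is right, the fix would look roughly like the sketch below. This is my assumption about the intent, not the actual k9s code: divide the node's reported usage by Status.Allocatable instead of by a sum of container limits.

```go
// Hypothetical sketch of the suggested change (not k9s source): compute
// the node MEM percentage against Status.Allocatable from the core API,
// using the usage reported by the metrics API.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	mv1beta1 "k8s.io/metrics/pkg/apis/metrics/v1beta1"
)

// memPercent returns memory usage as a percentage of the node's
// allocatable memory (what the scheduler can actually hand out).
func memPercent(node v1.Node, mx mv1beta1.NodeMetrics) float64 {
	alloc := node.Status.Allocatable.Memory().Value() // bytes
	used := mx.Usage.Memory().Value()                 // bytes
	if alloc == 0 {
		return 0
	}
	return float64(used) / float64(alloc) * 100
}

func main() {
	// In a real program node and mx would come from client-go and the
	// metrics.k8s.io API; zero values here just to show the calculation.
	var node v1.Node
	var mx mv1beta1.NodeMetrics
	fmt.Printf("MEM %% = %.1f\n", memPercent(node, mx))
}
```

Dividing by Allocatable would at least make the percentage impossible to exceed 100% for normal workloads, unlike a limits-based denominator.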
I think there should at least be a switch to use RSS for both the totals and the per-pod values. I've found that many people do not understand the difference.
I use this tool for clusters the same way I use htop for processes when testing something on a self-hosted cluster, and I very often have to explain why these figures differ from container_memory_rss in Prometheus.
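For context, my understanding is that the metrics API k9s consumes reports the working set (container_memory_working_set_bytes), which includes active page cache, which is exactly why it sits above RSS. The RSS figure itself comes from the cgroup memory controller; here is a hedged sketch of reading it directly. The keys (total_rss for cgroup v1, anon for cgroup v2) are what I believe cAdvisor maps to container_memory_rss:

```go
// Sketch: read a container's RSS straight from the cgroup memory
// controller, the same source container_memory_rss is derived from.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// rssBytes parses a memory.stat file and returns the value for key in
// bytes. Use "total_rss" for cgroup v1 or "anon" for cgroup v2.
func rssBytes(path, key string) (uint64, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line) // e.g. "total_rss 123456789"
		if len(fields) == 2 && fields[0] == key {
			return strconv.ParseUint(fields[1], 10, 64)
		}
	}
	return 0, fmt.Errorf("%s not found in %s", key, path)
}

func main() {
	// Root cgroup v1 path shown; a container's stat file lives under its
	// pod's cgroup directory.
	rss, err := rssBytes("/sys/fs/cgroup/memory/memory.stat", "total_rss")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("rss: %d bytes\n", rss)
}
```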