kube-capacity

incorrect data

Status: Open · christiancadieux opened this issue 3 years ago · 5 comments

In some cases, the memory values for a node will not include the 'Mi' suffix:

NODE             CPU REQUESTS     CPU LIMITS         MEMORY REQUESTS         MEMORY LIMITS
10.145.197.168   42125m (75%)     148700m (265%)     221838Mi (82%)          416923Mi (154%)
10.145.197.169   45325m (80%)     121200m (216%)     62346Mi (23%)           180263Mi (66%)
10.145.197.170   14425m (25%)     37700m (67%)       45346Mi (16%)           100345Mi (37%)
162.150.14.214   13790m (24%)     45700m (81%)       39411368960000m (29%)   106336625408000m (78%)
162.150.14.215   13790m (24%)     39700m (70%)       38874498048000m (28%)   90767368960000m (67%)
162.150.14.216   16790m (29%)     42700m (76%)       46390690816000m (34%)   98283561728000m (72%)
162.150.14.217   12490m (22%)     39200m (70%)       38606062592000m (28%)   91841110784000m (68%)

In these cases, the report is wrong. The logic needs to change here: https://github.com/robscott/kube-capacity/blob/master/pkg/capacity/resources.go#L356
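
As a sanity check on the numbers above, the oversized values are the same quantities rendered in milli-units of bytes instead of Mi. A minimal sketch using the apimachinery resource package (nothing kube-capacity-specific is assumed here) shows the conversion:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// One of the suspicious values from the node report above.
	// The "m" suffix means milli, so this is 39,411,368,960,000 milli-bytes.
	q := resource.MustParse("39411368960000m")

	// Value() rounds up to whole bytes; dividing by 1024*1024 gives Mi.
	fmt.Printf("%d bytes = about %d Mi\n", q.Value(), q.Value()/(1024*1024))
	// Prints: 39411368960 bytes = about 37585 Mi
}

So the underlying quantities look plausible; only the unit chosen for display is wrong.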

For example, use more specific variants of requestString and limitString so the code does not fall back on the wrong unit: add requestStringM() and limitStringM() that only convert memory units, to avoid the problem:

func (tp *tablePrinter) printClusterLine() {
	tp.printLine(&tableLine{
		node:           "*",
		namespace:      "*",
		pod:            "*",
		container:      "*",
		cpuRequests:    tp.cm.cpu.requestString(tp.availableFormat),
		cpuLimits:      tp.cm.cpu.limitString(tp.availableFormat),
		cpuUtil:        tp.cm.cpu.utilString(tp.availableFormat),
		memoryRequests: tp.cm.memory.requestStringM(tp.availableFormat),
		memoryLimits:   tp.cm.memory.limitStringM(tp.availableFormat),
		memoryUtil:     tp.cm.memory.utilString(tp.availableFormat),
		podCount:       tp.cm.podCount.podCountString(),
	})
}

func (tp *tablePrinter) printNodeLine(nodeName string, nm *nodeMetric) {
	tp.printLine(&tableLine{
		node:           nodeName,
		namespace:      "*",
		pod:            "*",
		container:      "*",
		cpuRequests:    nm.cpu.requestString(tp.availableFormat),
		cpuLimits:      nm.cpu.limitString(tp.availableFormat),
		cpuUtil:        nm.cpu.utilString(tp.availableFormat),
		memoryRequests: nm.memory.requestStringM(tp.availableFormat),
		memoryLimits:   nm.memory.limitStringM(tp.availableFormat),
		memoryUtil:     nm.memory.utilString(tp.availableFormat),
		podCount:       nm.podCount.podCountString(),
	})
}
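
A minimal sketch of what those memory-only helpers could look like, assuming a resourceMetric that holds request, limit, and allocatable as resource.Quantity values (the struct and field names below are illustrative stand-ins, not the actual kube-capacity code, and availableFormat is ignored for brevity):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// Illustrative stand-in for the internal metric type; the real field
// names in kube-capacity may differ.
type resourceMetric struct {
	request     resource.Quantity
	limit       resource.Quantity
	allocatable resource.Quantity
}

// formatMemoryMi always renders memory in Mi, regardless of how the
// quantity was originally specified (Mi, M, plain bytes, ...).
func formatMemoryMi(q resource.Quantity) string {
	return fmt.Sprintf("%dMi", q.Value()/(1024*1024))
}

// percentOf returns part as an integer percentage of total.
func percentOf(part, total resource.Quantity) int64 {
	if total.Value() == 0 {
		return 0
	}
	return part.Value() * 100 / total.Value()
}

// requestStringM / limitStringM: memory-only variants as proposed above.
// availableFormat is accepted to match the call sites but not used here.
func (rm *resourceMetric) requestStringM(availableFormat bool) string {
	return fmt.Sprintf("%s (%d%%)", formatMemoryMi(rm.request), percentOf(rm.request, rm.allocatable))
}

func (rm *resourceMetric) limitStringM(availableFormat bool) string {
	return fmt.Sprintf("%s (%d%%)", formatMemoryMi(rm.limit), percentOf(rm.limit, rm.allocatable))
}

func main() {
	rm := &resourceMetric{
		request:     resource.MustParse("500M"), // decimal megabytes, the problem case
		limit:       resource.MustParse("500M"),
		allocatable: resource.MustParse("16Gi"),
	}
	fmt.Println(rm.requestStringM(false), rm.limitStringM(false))
	// Prints: 476Mi (2%) 476Mi (2%)
}

Because the value is always converted through Value()/(1024*1024), a request entered as 500M is reported as 476Mi, and the bare m suffix can no longer appear in the memory columns.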

christiancadieux · Jan 23 '22

Hey @christiancadieux, thanks for reporting this! I'm not sure how soon I'll be able to fix this, but very open to PRs.

robscott · May 13 '22

Same here. I will try to fix this.

cloud-66 · Jul 14 '22

Thanks @cloud-66!

robscott · Jul 14 '22

This seems to happen when the deployment's requests/limits have been specified using M rather than Mi. For example, we have a deployment where it has been entered as:

        resources:
          requests:
            cpu: 500m
            memory: 500M
          limits:
            cpu: 500m
            memory: 500M

Then kubectl resource-capacity --pods displays that deployment erroneously compared to all the others:

NODE                                NAMESPACE                POD                                                               CPU REQUESTS   CPU LIMITS       MEMORY REQUESTS      MEMORY LIMITS

aks-core-35064155-vmss000000        aqua                     aqua-sec-enforcer-fsdev-aks-foresight-muse2-ds-kkg7r              500m (12%)     500m (12%)       500000000000m (3%)   500000000000m (3%)
aks-core-35064155-vmss000000        kube-system              azuredefender-collector-ds-n2kwh                                  60m (1%)       210m (5%)        64Mi (0%)            128Mi (1%)
aks-core-35064155-vmss000000        kube-system              azuredefender-publisher-ds-9jz77                                  30m (0%)       60m (1%)         32Mi (0%)            200Mi (1%)
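
For reference, 500M and 500Mi are different quantities, and 500000000000m is exactly that 500M request expressed in milli-units of bytes. A quick check with the apimachinery resource package (independent of kube-capacity) confirms it:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	decimal := resource.MustParse("500M")        // 500 * 1000^2 = 500,000,000 bytes
	binary := resource.MustParse("500Mi")        // 500 * 1024^2 = 524,288,000 bytes
	milli := resource.MustParse("500000000000m") // the value shown in the report

	fmt.Println(decimal.Value(), binary.Value(), milli.Value())
	// Prints: 500000000 524288000 500000000
}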

edrandall · Oct 11 '22

@christiancadieux @edrandall You should try again; this issue was fixed by #71.

cloud-66 · Feb 06 '23