Lior Franko

Results 42 comments of Lior Franko

Sure. So the question is how `node_cpu_hourly_cost` and `node_ram_hourly_cost` are calculated.

Both nodes have the same `kube_node_status_capacity_cpu_cores`, `kube_node_status_capacity_memory_bytes`, `kube_node_status_allocatable_memory_bytes`, and `kube_node_status_allocatable_cpu_cores`.
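One way to surface the discrepancy is to compare each node's cost against the average for its instance type (a sketch; the `instance_type` label is an assumption and depends on how your OpenCost deployment labels these metrics):

```promql
# Ratio of each node's hourly CPU cost to the average for its instance type.
# A healthy cluster should show ~1 everywhere; outliers stand out immediately.
node_cpu_hourly_cost
  / on (instance_type) group_left
  avg by (instance_type) (node_cpu_hourly_cost)
```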

And I do see many "outlier detected" logs:

```
2023-12-05T20:06:35.669815581Z WRN RAM cost outlier detected; skipping data point.
2023-12-05T20:06:35.669852271Z WRN CPU cost outlier detected; skipping data point.
2023-12-05T20:06:35.669862292Z WRN RAM...
```

I don't think that's my case. We do use Spot instances, but if that were the cause, I wouldn't be seeing the same instance type in the same AZ...

Hi @sachin-rafay

1. Were both instances spot instances but showed different prices? - Yes.
2. Did you check logs from the start in your inaccurate node? - Yes, it happens all...

I've tested the PR https://github.com/opencost/opencost/pull/2386 on multiple instance types, and it does solve the issue. Thanks @sachin-rafay

I'm having the same issue. I'll update you if I figure out the root cause.

Two things need to be changed:

1. Drop the following metrics from the opencost scrape:
   - kube_deployment_spec_replicas
   - kube_deployment_status_replicas_available
   - kube_job_status_failed
   - kube_namespace_labels
   - kube_node_labels
   - kube_node_status_allocatable
   - kube_node_status_capacity
   - ...
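If you scrape OpenCost with a Prometheus Operator ServiceMonitor, the drop can be expressed as a `metricRelabelings` rule (a sketch; the regex covers only the metrics listed above, and the original list is truncated, so extend it as needed):

```yaml
metricRelabelings:
  # Drop kube-state-metrics series re-exported by opencost to avoid duplicates
  - action: drop
    sourceLabels: [__name__]
    regex: kube_deployment_spec_replicas|kube_deployment_status_replicas_available|kube_job_status_failed|kube_namespace_labels|kube_node_labels|kube_node_status_allocatable|kube_node_status_capacity
```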

I think you can add relabel logic when scraping the opencost metrics.

You can add the following syntax:

```yaml
relabelings:
  - action: replace
    sourceLabels:
    targetLabel:
```

This is something I'm already doing:

```yaml
relabelings:
  - action: replace
    sourceLabels:
      - __meta_kubernetes_pod_node_name
    targetLabel: ...
```
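Filled out, such a rule might look like the following (a sketch; the target label name `node` is hypothetical, since the original comment is truncated before naming it):

```yaml
relabelings:
  # Copy the Kubernetes node name from service-discovery metadata
  # onto every scraped series as a "node" label (hypothetical name).
  - action: replace
    sourceLabels:
      - __meta_kubernetes_pod_node_name
    targetLabel: node
```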