kube-state-metrics
Missing replacement config for VPA collector in CRM
What would you like to be added:
I would like a complete replacement example for the VPA CRM configuration here, instead of a simple example defining how to create VPA annotation metrics, since the most useful metrics (basically any of the resource ones) are complex to define for people starting out with CRM.
Why is this needed:
The current documentation is not really usable. It more or less feels like users have to do the replacement themselves and are forced to learn how CRM works and dig into the KSM code, when sufficient documentation could help users get started faster.
Describe the solution you'd like
Have a full example of a CRM config that provides the same VPA metrics as before. Some metrics might change (because the VPA collector does some calculation to change the unit, which is currently not supported by CRM), but at least having the same information (label set and values) would help people upgrade to 2.9.0.
Additional context: coming from https://github.com/kubernetes/kube-state-metrics/issues/1718#issuecomment-1497213264
/triage accepted
Hi @QuentinBisson, I would like to work on this issue. I am new to the Kubernetes repos; can you point me to where I can get started on this issue?
@samyakjain10 I would first check the existing VPA metrics and read about the CRM exporter to try and reproduce the metrics :)
Do you have any update on this? Can you provide at least one useful example? I tried to read the doc and did a few tests, but so far I couldn't get any metric. I can get maps from this path, but then it seems impossible to get anything deeper:
type: Gauge
gauge:
  path: [status, recommendation, containerRecommendations]
I tried with labelsFromPath: containerName and valueFrom, but I only got nil or a kube-state-metrics crash/panic.
@agarbato,
I'm a novice with CustomResourceStateMetrics but this worked for me:
metrics:
  - name: "cpu_recommendations"
    each:
      type: Gauge
      gauge:
        path: [status, recommendation, containerRecommendations, "0", target, cpu]
        labelsFromPath:
          containerName: [status, recommendation, containerRecommendations, "0", containerName]
If you have multiple containers in a pod you will need to duplicate the above for each container and increment the index in the path; you can use the same name for the metric, as in the sketch below. I've stuck with this for now until CEL is implemented, since wildcards aren't supported.
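For example, a sketch of the duplicated entry for a hypothetical second container at index "1", reusing the same metric name:

metrics:
  - name: "cpu_recommendations"
    each:
      type: Gauge
      gauge:
        path: [status, recommendation, containerRecommendations, "1", target, cpu]
        labelsFromPath:
          containerName: [status, recommendation, containerRecommendations, "1", containerName]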
@benhodgkiss thank you, that works. Hope it will be useful also to other people.
@benhodgkiss unfortunately this does not work all the time. I get errors when the CPU target recommendation is above 1000m:
E0620 09:39:22.228995 1 registry_factory.go:649] "kube_customresource_vpa_cpu_recommendations" err="[status,recommendation,containerRecommendations,0,target,cpu]: []: strconv.ParseFloat: parsing "3136m": invalid syntax"
Did you face the same error?
@agarbato,
I'm not seeing this error and I do have containers with targets over 1000m.
Are you running the latest version of KSM?
I had one cluster with an older version of KSM. The problem was fixed once I updated to the latest version; quantities and percentages were not supported in older versions. https://github.com/kubernetes/kube-state-metrics/pull/1989
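For illustration (sample values hypothetical): with quantity parsing in place, a CPU recommendation stored as "3136m" is normalized to cores and exported as a float instead of failing in strconv.ParseFloat:

kube_customresource_vpa_cpu_recommendations{containerName="app"} 3.136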
I am trying to resolve this one too. This is what I got so far (I am using kube-prometheus-stack to deploy it):
- name: "containerrecommendations_target"
help: "VPA container recommendations for memory."
each:
type: Gauge
gauge:
path:
- status
- recommendation
- containerRecommendations
- "0"
- target
- memory
labelsFromPath:
container:
- status
- recommendation
- containerRecommendations
- "0"
- containerName
commonLabels:
resource: "memory"
unit: "byte"
- name: "containerrecommendations_target"
help: "VPA container recommendations for cpu."
each:
type: Gauge
gauge:
path:
- status
- recommendation
- containerRecommendations
- "0"
- target
- cpu
labelsFromPath:
container:
- status
- recommendation
- containerRecommendations
- "0"
- containerName
commonLabels:
resource: "cpu"
unit: "core"
Result:
But it still has some issues: it only gets the first entry of the container list when using "0", and I couldn't find a way to do a "for each". For example, given this status:
status:
  conditions:
    - lastTransitionTime: "2022-06-13T13:52:07Z"
      status: "True"
      type: RecommendationProvided
  recommendation:
    containerRecommendations:
      - containerName: grafana
        lowerBound:
          cpu: 10m
          memory: "144644419"
        target:
          cpu: 11m
          memory: "163378051"
        uncappedTarget:
          cpu: 11m
          memory: "163378051"
        upperBound:
          cpu: 11m
          memory: "164136870"
      - containerName: grafana-sc-dashboard
        lowerBound:
          cpu: 10m
          memory: "109813731"
        target:
          cpu: 11m
          memory: "126805489"
        uncappedTarget:
          cpu: 11m
          memory: "126805489"
        upperBound:
          cpu: 11m
          memory: "127394175"
      - containerName: grafana-sc-datasources
        lowerBound:
          cpu: 10m
          memory: "93632226"
        target:
          cpu: 11m
          memory: "109814751"
        uncappedTarget:
          cpu: 11m
          memory: "109814751"
        upperBound:
          cpu: 11m
          memory: "110324558"
Also, the pod label value is pod="kube-prometheus-kube-state-metrics-7cc55b859f-zznmn" and not the VPA's target pod; since there is no pod label on the VPA object, it's hard to define.
Update:
Can confirm that the following works fine in the prometheus-community/kube-prometheus-stack chart :)
kube-state-metrics:
  customResourceState:
    enabled: true
    config:
      spec:
        resources:
          - groupVersionKind:
              group: autoscaling.k8s.io
              kind: "VerticalPodAutoscaler"
              version: "v1"
            labelsFromPath:
              verticalpodautoscaler:
                - metadata
                - name
              namespace:
                - metadata
                - namespace
              target_api_version:
                - apiVersion
              target_kind:
                - spec
                - targetRef
                - kind
              target_name:
                - spec
                - targetRef
                - name
            metricNamePrefix: kube_customresource_vpa_containerrecommendations
            metrics:
              - name: "target"
                help: "VPA container recommendations for memory."
                commonLabels:
                  resource: "memory"
                  unit: "byte"
                each:
                  type: Gauge
                  gauge:
                    path:
                      - status
                      - recommendation
                      - containerRecommendations
                    valueFrom:
                      - target
                      - memory
                    labelsFromPath:
                      container:
                        - containerName
              - name: "target"
                help: "VPA container recommendations for cpu."
                commonLabels:
                  resource: "cpu"
                  unit: "core"
                each:
                  type: Gauge
                  gauge:
                    path:
                      - status
                      - recommendation
                      - containerRecommendations
                    valueFrom:
                      - target
                      - cpu
                    labelsFromPath:
                      container:
                        - containerName
  rbac:
    extraRules:
      - apiGroups:
          - customresourcedefinitions.apiextensions.k8s.io
        resources:
          - customresourcedefinitions
        verbs:
          - list
          - watch
      - apiGroups:
          - autoscaling.k8s.io
        resources:
          - verticalpodautoscalers
        verbs:
          - list
          - watch
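For reference, metricNamePrefix replaces the default kube_customresource prefix, so both entries above are exported under a single family, kube_customresource_vpa_containerrecommendations_target. An illustrative sample, with labels abridged and values taken from the grafana VPA status earlier in this thread:

kube_customresource_vpa_containerrecommendations_target{container="grafana", resource="cpu", unit="core"} 0.011
kube_customresource_vpa_containerrecommendations_target{container="grafana", resource="memory", unit="byte"} 1.63378051e+08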
@Mahagon
Unfortunately, I don't think it's possible to get the target for all containers in a pod without duplicating the config, as I mentioned in my first comment, until CEL is added.
Also, you can't get the pod name, but you shouldn't need to: the target or VPA name should be enough, since you will know which pods it is targeting (see the query sketch below).
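As a sketch (assuming the metric name from the config above), you can key queries off the target workload instead of a pod label:

# Memory targets per workload and container; no pod label needed
sum by (namespace, target_name, container) (
  kube_customresource_vpa_containerrecommendations_target{resource="memory", unit="byte"}
)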
I am on the latest KSM and struggling to get this to work... every config I try for this ends up with "got nil while resolving path" errors and crashes the pod. My config at this point is the following:
kind: CustomResourceStateMetrics
spec:
  resources:
    - groupVersionKind:
        group: autoscaling.k8s.io
        kind: VerticalPodAutoscaler
        version: v1
      labelsFromPath:
        namespace:
          - metadata
          - namespace
        target_api_version:
          - apiVersion
        target_kind:
          - spec
          - targetRef
          - kind
        target_name:
          - spec
          - targetRef
          - name
        verticalpodautoscaler:
          - metadata
          - name
      metrics:
        - commonLabels:
            resource: memory
            unit: byte
          each:
            gauge:
              path:
                - status
                - recommendation
                - containerRecommendations
                - "0"
                - target
                - memory
            type: Gauge
          help: VPA target container memory
          labelsFromPath:
            container:
              - status
              - recommendation
              - containerRecommendations
              - "0"
              - containerName
          name: vpa_target_memory
        - commonLabels:
            resource: cpu
            unit: core
          each:
            gauge:
              path:
                - status
                - recommendation
                - containerRecommendations
                - "0"
                - target
                - cpu
            type: Gauge
          help: VPA target container CPU
          labelsFromPath:
            container:
              - status
              - recommendation
              - containerRecommendations
              - "0"
              - containerName
          name: vpa_target_cpu
And then I get a long error log like this:
I0720 21:49:10.389700 1 custom_resource_metrics.go:79] "Custom resource state added metrics" familyNames=[kube_customresource_vpa_target_memory kube_customresource_vpa_target_cpu]
I0720 21:49:17.088128 1 builder.go:275] "Active resources" activeStoreNames="certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments,autoscaling.k8s.io/v1, Resource=verticalpodautoscalers"
E0720 21:49:20.287534 1 registry_factory.go:662] "kube_customresource_vpa_target_memory" err="[status,recommendation,containerRecommendations,0,target,memory]: got nil while resolving path"
E0720 21:49:20.287607 1 registry_factory.go:662] "kube_customresource_vpa_target_cpu" err="[status,recommendation,containerRecommendations,0,target,cpu]: got nil while resolving path"
...
I0720 21:49:25.786452 1 trace.go:236] Trace[1141997465]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169 (20-Jul-2023 21:49:10.391) (total time: 15295ms):
Trace[1141997465]: ---"Objects listed" error:<nil> 8497ms (21:49:18.889)
Trace[1141997465]: ---"SyncWith done" 6797ms (21:49:25.686)
Trace[1141997465]: [15.295061939s] [15.295061939s] END
I0720 21:49:26.291420 1 trace.go:236] Trace[1688298006]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169 (20-Jul-2023 21:49:10.391) (total time: 15899ms):
Trace[1688298006]: ---"Objects listed" error:<nil> 11797ms (21:49:22.189)
Trace[1688298006]: ---"SyncWith done" 4101ms (21:49:26.291)
Trace[1688298006]: [15.89981501s] [15.89981501s] END
I0720 21:49:34.087523 1 trace.go:236] Trace[356674503]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169 (20-Jul-2023 21:49:10.391) (total time: 23695ms):
Trace[356674503]: ---"Objects listed" error:<nil> 23395ms (21:49:33.787)
Trace[356674503]: [23.695587667s] [23.695587667s] END
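For what it's worth, one condition that can produce "got nil while resolving path" (an assumption on my side, not confirmed in this thread) is a VPA object whose recommender has not written a recommendation yet, so the path under status is simply absent:

# Hypothetical VPA with no recommendation yet:
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: example-vpa  # hypothetical
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example
status: {}  # [status, recommendation, containerRecommendations, ...] resolves to nil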
@jeisen perhaps you forgot the required RBAC rules?
Here are my custom YAML configs for the kube-prometheus-stack Helm chart, passed down to the kube-state-metrics sub-chart:
## Configuration for kube-state-metrics subchart
##
kube-state-metrics:
  rbac:
    extraRules:
      - apiGroups: ["autoscaling.k8s.io"]
        resources: ["verticalpodautoscalers"]
        verbs: ["list", "watch"]
  prometheus:
    monitor:
      enabled: true
  # https://github.com/kubernetes/kube-state-metrics/blob/main/docs/customresourcestate-metrics.md#verticalpodautoscaler
  # https://github.com/kubernetes/kube-state-metrics/issues/2041#issuecomment-1614327806
  customResourceState:
    enabled: true
    config:
      kind: CustomResourceStateMetrics
      spec:
        resources:
          - groupVersionKind:
              group: autoscaling.k8s.io
              kind: "VerticalPodAutoscaler"
              version: "v1"
            labelsFromPath:
              verticalpodautoscaler: [metadata, name]
              namespace: [metadata, namespace]
              target_api_version: [apiVersion]
              target_kind: [spec, targetRef, kind]
              target_name: [spec, targetRef, name]
            metrics:
              - name: "vpa_containerrecommendations_target"
                help: "VPA container recommendations for memory."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations, "0", target, memory]
                    labelsFromPath:
                      container: [status, recommendation, containerRecommendations, "0", containerName]
                commonLabels:
                  resource: "memory"
                  unit: "byte"
              - name: "vpa_containerrecommendations_target"
                help: "VPA container recommendations for cpu."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations, "0", target, cpu]
                    labelsFromPath:
                      container: [status, recommendation, containerRecommendations, "0", containerName]
                commonLabels:
                  resource: "cpu"
                  unit: "core"
  selfMonitor:
    enabled: true
With that config I get a new metric, kube_customresource_vpa_containerrecommendations_target, with the container name, namespace, resource, unit, etc. as labels.
Hints on my versions:
- kube-state-metrics version 2.9.2
- kube-prometheus-stack chart version 48.1.2
@sherifkayad Hmm, I'm not using the operator, but otherwise my config is the same. Maybe that's the missing piece...
Using valueFrom works for me to get metrics for all containers:
- name: "verticalpodautoscaler_status_recommendation_containerrecommendations_target_memory"
each:
type: Gauge
gauge:
path: [status, recommendation, containerRecommendations]
valueFrom: [target, memory]
labelsFromPath:
container: [containerName]
commonLabels:
resource: "memory"
unit: "byte"
Result:
kube_customresource_verticalpodautoscaler_status_recommendation_containerrecommendations_target_memory{container="busybox",customresource_group="autoscaling.k8s.io",customresource_kind="VerticalPodAutoscaler",customresource_version="v1",namespace="default",target_api_version="apps/v1",target_kind="Deployment",target_name="nginx-deployment",verticalpodautoscaler="test-vpa"} 1.31072e+08
kube_customresource_verticalpodautoscaler_status_recommendation_containerrecommendations_target_memory{container="nginx1",customresource_group="autoscaling.k8s.io",customresource_kind="VerticalPodAutoscaler",customresource_version="v1",namespace="default",target_api_version="apps/v1",target_kind="Deployment",target_name="nginx-deployment",verticalpodautoscaler="test-vpa"} 1.31072e+08
I can confirm the valueFrom approach above is working for me. I just had to fix the commonLabels indentation:
- name: "vpa_containerrecommendations_target"
help: "VPA container recommendations for memory."
each:
type: Gauge
gauge:
path: [status, recommendation, containerRecommendations]
valueFrom: [target, memory]
labelsFromPath:
container: [containerName]
commonLabels:
resource: "memory"
unit: "byte"
Thanks!
Hi, just wanted to say thanks to all for this thread, it helped me a lot 🙇♂️ Just remember to upgrade the registry.k8s.io/kube-state-metrics/kube-state-metrics version to above v2.9.0 to be able to collect the CPU values 👍
@wcgomes that works like a charm for me as well! Thanks a lot
Sorry for the inconvenience caused by that breaking change.
Based on all the improvements that have been made through this issue, would anyone be interested in updating the doc at https://github.com/kubernetes/kube-state-metrics/blob/main/docs/customresourcestate-metrics.md#verticalpodautoscaler to help others?
/remove-kind feature
/kind documentation
@dgrisonnet I can happily submit a PR as soon as tomorrow to address that. No problems 👍
Awesome! Thank you :)
/assign @sherifkayad
@dgrisonnet PR submitted and linked to the issue
Since the upgrade I was missing the metrics for the lowerBound, upperBound and uncappedTarget VPA recommendations. I configured the kube-state-metrics subchart as follows to get them back.
## Configuration for kube-state-metrics subchart
##
kube-state-metrics:
  rbac:
    extraRules:
      - apiGroups: ["autoscaling.k8s.io"]
        resources: ["verticalpodautoscalers"]
        verbs: ["list", "watch"]
  prometheus:
    monitor:
      enabled: true
  customResourceState:
    enabled: true
    config:
      kind: CustomResourceStateMetrics
      spec:
        resources:
          - groupVersionKind:
              group: autoscaling.k8s.io
              kind: "VerticalPodAutoscaler"
              version: "v1"
            labelsFromPath:
              verticalpodautoscaler: [metadata, name]
              namespace: [metadata, namespace]
              target_api_version: [apiVersion]
              target_kind: [spec, targetRef, kind]
              target_name: [spec, targetRef, name]
            metrics:
              - name: "vpa_containerrecommendations_target"
                help: "VPA container recommendations for memory."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [target, memory]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "memory"
                  unit: "byte"
              - name: "vpa_containerrecommendations_lowerbound"
                help: "VPA container recommendations for memory."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [lowerBound, memory]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "memory"
                  unit: "byte"
              - name: "vpa_containerrecommendations_uncappedtarget"
                help: "VPA container recommendations for memory."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [uncappedTarget, memory]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "memory"
                  unit: "byte"
              - name: "vpa_containerrecommendations_upperbound"
                help: "VPA container recommendations for memory."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [upperBound, memory]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "memory"
                  unit: "byte"
              - name: "vpa_containerrecommendations_target"
                help: "VPA container recommendations for cpu."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [target, cpu]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "cpu"
                  unit: "core"
              - name: "vpa_containerrecommendations_lowerbound"
                help: "VPA container recommendations for cpu."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [lowerBound, cpu]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "cpu"
                  unit: "core"
              - name: "vpa_containerrecommendations_uncappedtarget"
                help: "VPA container recommendations for cpu."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [uncappedTarget, cpu]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "cpu"
                  unit: "core"
              - name: "vpa_containerrecommendations_upperbound"
                help: "VPA container recommendations for cpu."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [upperBound, cpu]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "cpu"
                  unit: "core"
  selfMonitor:
    enabled: true
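With all four bounds exported you can, for example, chart how close the target sits to the upper bound; a PromQL sketch assuming the metric names above:

# Ratio of target to upper bound per container (1.0 = target at the cap)
kube_customresource_vpa_containerrecommendations_target
  / kube_customresource_vpa_containerrecommendations_upperbound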
@drisbee thanks, it works!
@drisbee I have installed VPA as follows:
kind: Release
metadata:
  name: vpa
  namespace: upbound-system
spec:
  deletionPolicy: Delete
  forProvider:
    chart:
      name: vpa
      repository: https://charts.fairwinds.com/stable
      version: 3.0.2
    namespace: vpa
    skipCreateNamespace: false
    wait: true
    skipCRDs: false
    values:
      updater:
        enabled: false
      admissionController:
        enabled: false
      recommender:
        enabled: true
        extraArgs:
          prometheus-address: |
            http://kube-prometheus-stack-prometheus.monitoring:9090/
          storage: prometheus
      podMonitor:
        enabled: true
        labels:
          release: kube-prometheus-stack
  providerConfigRef:
    name: helm-provider
With the following enabled in the kube-prometheus-stack:
## Component scraping kube state metrics
##
kubeStateMetrics:
  enabled: true

## Configuration for kube-state-metrics subchart
##
kube-state-metrics:
  namespaceOverride: ""
  rbac:
    create: true
    extraRules:
      - apiGroups: ["autoscaling.k8s.io"]
        resources: ["verticalpodautoscalers"]
        verbs: ["list", "watch"]
  prometheus:
    monitor:
      enabled: true
      releaseLabel: true
      ## Scrape interval. If not set, the Prometheus default scrape interval is used.
      ##
      interval: ""
      ## SampleLimit defines per-scrape limit on number of scraped samples that will be accepted.
      ##
      sampleLimit: 0
      ## TargetLimit defines a limit on the number of scraped targets that will be accepted.
      ##
      targetLimit: 0
      ## Per-scrape limit on number of labels that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer.
      ##
      labelLimit: 0
      ## Per-scrape limit on length of labels name that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer.
      ##
      labelNameLengthLimit: 0
      ## Per-scrape limit on length of labels value that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer.
      ##
      labelValueLengthLimit: 0
      ## Scrape Timeout. If not set, the Prometheus default scrape timeout is used.
      ##
      scrapeTimeout: ""
      ## proxyUrl: URL of a proxy that should be used for scraping.
      ##
      proxyUrl: ""
      # Keep labels from scraped data, overriding server-side labels
      ##
      honorLabels: true
      ## MetricRelabelConfigs to apply to samples after scraping, but before ingestion.
      ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#relabelconfig
      ##
      metricRelabelings: []
      # - action: keep
      #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
      #   sourceLabels: [__name__]
      ## RelabelConfigs to apply to samples before scraping
      ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#relabelconfig
      ##
      relabelings: []
      # - sourceLabels: [__meta_kubernetes_pod_node_name]
      #   separator: ;
      #   regex: ^(.*)$
      #   targetLabel: nodename
      #   replacement: $1
      #   action: replace
  customResourceState:
    enabled: true
    config:
      kind: CustomResourceStateMetrics
      spec:
        resources:
          - groupVersionKind:
              group: autoscaling.k8s.io
              kind: "VerticalPodAutoscaler"
              version: "v1"
            labelsFromPath:
              verticalpodautoscaler: [metadata, name]
              namespace: [metadata, namespace]
              target_api_version: [apiVersion]
              target_kind: [spec, targetRef, kind]
              target_name: [spec, targetRef, name]
            metrics:
              - name: "vpa_containerrecommendations_target"
                help: "VPA container recommendations for memory."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [target, memory]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "memory"
                  unit: "byte"
              - name: "vpa_containerrecommendations_lowerbound"
                help: "VPA container recommendations for memory."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [lowerBound, memory]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "memory"
                  unit: "byte"
              - name: "vpa_containerrecommendations_uncappedtarget"
                help: "VPA container recommendations for memory."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [uncappedTarget, memory]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "memory"
                  unit: "byte"
              - name: "vpa_containerrecommendations_upperbound"
                help: "VPA container recommendations for memory."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [upperBound, memory]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "memory"
                  unit: "byte"
              - name: "vpa_containerrecommendations_target"
                help: "VPA container recommendations for cpu."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [target, cpu]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "cpu"
                  unit: "core"
              - name: "vpa_containerrecommendations_lowerbound"
                help: "VPA container recommendations for cpu."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [lowerBound, cpu]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "cpu"
                  unit: "core"
              - name: "vpa_containerrecommendations_uncappedtarget"
                help: "VPA container recommendations for cpu."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [uncappedTarget, cpu]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "cpu"
                  unit: "core"
              - name: "vpa_containerrecommendations_upperbound"
                help: "VPA container recommendations for cpu."
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [upperBound, cpu]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: "cpu"
                  unit: "core"
  selfMonitor:
    enabled: true
Which Grafana dashboard are you using to view these metrics?
https://grafana.com/grafana/dashboards/14588-vpa-recommendations/
https://grafana.com/grafana/dashboards/16294-vpa-recommendations/
I am following this link https://medium.com/linkbynet/request-limits-recommendations-using-vpa-goldilocks-and-grafana-2239b19bfd1
I'm using this one: https://grafana.com/grafana/dashboards/14588-vpa-recommendations/
@drisbee That one I authored. Let me know if you need anything with it.
@sherifkayad I imported the dashboard but I do not see any data. I only see the below metrics in Prometheus.
@linuxbsdfreak The names of the metrics you have are different from the ones configured in the dashboard. The dashboard expects, for example, a metric named kube_customresource_vpa_containerrecommendations_target, among others that you can see in the dashboard; see the naming sketch below.
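As a sketch of how the names line up: with no metricNamePrefix set, the exported family is the default kube_customresource prefix plus the configured metric name, so an entry like the ones earlier in this thread produces exactly the name the dashboard expects:

metrics:
  - name: "vpa_containerrecommendations_target"
    # Exported as: kube_customresource_vpa_containerrecommendations_target
    each:
      type: Gauge
      gauge:
        path: [status, recommendation, containerRecommendations]
        valueFrom: [target, memory]
        labelsFromPath:
          container: [containerName]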