feat: Add metric selector label to kube_horizontalpodautoscaler_(spec/status)_target_metric
What this PR does / why we need it:
Currently, the metrics kube_horizontalpodautoscaler_spec_target_metric and kube_horizontalpodautoscaler_status_target_metric do not expose the metric selector field. If an HPA object has two metrics with the same name but different selectors, only one of them shows up in the results.
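For illustration, with an HPA that lists http_requests_per_minute twice (selectors url=/api/v1/blog and url=/api/v1/car), the current series collide because nothing carries the selector. The HPA name, namespace, and target values below are made up, and some labels are omitted:

```
# Both spec entries produce the same label set, so only one of them survives:
kube_horizontalpodautoscaler_spec_target_metric{namespace="default",horizontalpodautoscaler="blog-api",metric_name="http_requests_per_minute"} 100
kube_horizontalpodautoscaler_spec_target_metric{namespace="default",horizontalpodautoscaler="blog-api",metric_name="http_requests_per_minute"} 500
```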
How does this change affect the cardinality of KSM (increases, decreases or does not change cardinality): Increases
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
/assign
/triage accepted
If an HPA object has two metrics with the same name but different selectors
QQ: are these two metrics the same or different? I'm thinking about whether we need this label or not.
For example, an HPA can autoscale based on the number of HTTP requests. If you want to be more precise, for example by setting different thresholds for requests with url=/api/v1/blog and url=/api/v1/car, you usually would not define differently named metrics for them, but use the same metric with different labels (e.g. http_requests_per_minute{method="get",url="/api/v1/blog"} and http_requests_per_minute{method="get",url="/api/v1/car"}).
Another example is the task production and consumption model. When a message queue holds multiple types of tasks, we usually use labels to distinguish the task types rather than differently named metrics (e.g. task_queue_pending_total{taskType="write"} and task_queue_pending_total{taskType="read"}); see the sketch below.
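To make the second example concrete, here is a hypothetical autoscaling/v2 manifest (the workload name, thresholds, and the taskType label key are illustrative, not taken from this PR) in which the two metrics entries differ only by their selector:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: task-worker            # illustrative name
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: task-worker
  minReplicas: 2
  maxReplicas: 20
  metrics:
    # Same metric name twice, distinguished only by the selector.
    - type: External
      external:
        metric:
          name: task_queue_pending_total
          selector:
            matchLabels:
              taskType: write
        target:
          type: AverageValue
          averageValue: "30"
    - type: External
      external:
        metric:
          name: task_queue_pending_total
          selector:
            matchLabels:
              taskType: read
        target:
          type: AverageValue
          averageValue: "100"
```

Without a selector label on kube_horizontalpodautoscaler_spec_target_metric / kube_horizontalpodautoscaler_status_target_metric, these two entries cannot be told apart in the KSM output.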
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: whitebear009. Once this PR has been reviewed and has the lgtm label, please ask for approval from dgrisonnet. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
Hi, what do you think about it? @CatherineF-dev @dgrisonnet
@dgrisonnet Hello, how can I move this PR forward?
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.