[AutoScaling] Kubernetes Ingress Object Metrics with Community NGINX Ingress Controller
What would you like to be added:
A custom metrics API implementation that captures the request count per unit time of a Kubernetes Ingress resource object when using the community NGINX Ingress Controller, so that it can be used as a metric for Horizontal Pod Autoscaling (HPA).
The ultimate goal is to be able to add a requests-per-unit-time metric to a Horizontal Pod Autoscaler, as described in the walkthrough, when using the community NGINX Ingress Controller implementation:
```yaml
type: Object
object:
  metric:
    name: requests-per-second
  describedObject:
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    name: main-route
  target:
    type: Value
    value: 2k
```
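For context, a minimal sketch of how that block might sit inside a complete autoscaling/v2 HorizontalPodAutoscaler manifest is shown below. The HPA name, namespace, target Deployment, and replica bounds are illustrative assumptions, and the metric only resolves once some custom metrics API actually serves requests-per-second for Ingress objects:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: main-route-hpa            # hypothetical name
  namespace: default              # hypothetical namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: main-route-backend      # hypothetical Deployment behind the Ingress
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Object
      object:
        metric:
          name: requests-per-second
        describedObject:
          apiVersion: networking.k8s.io/v1   # v1beta1 in the older walkthrough
          kind: Ingress
          name: main-route
        target:
          type: Value
          value: 2k
```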
An example similar to the suggested solution can be found in the Skipper collector of kube-metrics-adapter. That particular solution works when using the Skipper Ingress Controller implementation.
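As an interim workaround (not the requested built-in support), the controller already exports a `nginx_ingress_controller_requests` counter to Prometheus, so prometheus-adapter can be configured to expose a per-Ingress request rate through the custom metrics API. A rough sketch of such an adapter rule follows; the label names depend on your scrape configuration (some setups see `exported_namespace`/`exported_service` instead), so treat the overrides and the 1m rate window as assumptions:

```yaml
# prometheus-adapter rule (adapter config) that turns the controller's
# request counter into a requests-per-second metric on Ingress objects.
rules:
  - seriesQuery: 'nginx_ingress_controller_requests{namespace!="",ingress!=""}'
    resources:
      overrides:
        # Map the series labels onto Kubernetes resources.
        namespace: {resource: "namespace"}
        ingress: {resource: "ingress"}
    name:
      matches: "^(.*)_requests$"
      as: "requests-per-second"
    metricsQuery: 'round(sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>), 0.001)'
```

The HPA object metric from the snippet above could then target this adapter-served metric.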
Why is this needed:
- Requests per unit time is considered a highly accurate metric for HPA, especially when working with container-based deployments of language runtimes that involve garbage collection. Please see this #sig-autoscaling Slack channel discussion for details on this topic.
- The community NGINX Ingress Controller is one of the most widely used implementations.
Notes:
- Original requests were made in the #sig-autoscaling and #ingress-nginx-users Slack channels.
Suggested Assignees:
@rikatz @strongjz
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
IMO this is a valid requirement which needs addressing.
Folks, any update on this?
Hi @chirangaalwis
My head is a bit buried under other stuff and I couldn't look into this.
If possible, can you provide an implementation PR for us? We can help you with it. Otherwise, you will rely on the availability of some of us, and speaking for myself, I'm really in a rush.
Anyway, I will add this to the v1.2.0 milestone and we'll see what we can do, ok?
We currently have HPA- and KEDA-related configurations, so this issue is mainly about adding some related metrics, right?
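For illustration, one way the existing KEDA configurations could approximate this today is a ScaledObject with a Prometheus trigger over the controller's request counter. This is a hedged sketch, and the Deployment name, Prometheus address, query, and threshold are all assumptions:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: main-route-scaler                 # hypothetical name
spec:
  scaleTargetRef:
    name: main-route-backend              # hypothetical Deployment behind the Ingress
  minReplicaCount: 2
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090   # hypothetical address
        query: sum(rate(nginx_ingress_controller_requests{ingress="main-route"}[1m]))
        threshold: "2000"                 # scale out above ~2k requests/s
```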
> Hi @chirangaalwis
> My head is a bit buried under other stuff and I couldn't look into this.
> If possible, can you provide an implementation PR for us? We can help you with it. Otherwise, you will rely on the availability of some of us, and speaking for myself, I'm really in a rush.
> Anyway, I will add this to the v1.2.0 milestone and we'll see what we can do, ok?
@rikatz ack.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
/triage accepted /priority longterm-important
@iamNoah1: The label(s) `priority/longterm-important` cannot be applied, because the repository doesn't have them.
In response to this:
> /triage accepted /priority longterm-important
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/priority important-longterm
> We currently have HPA- and KEDA-related configurations, so this issue is mainly about adding some related metrics, right?
Does anyone know if this may help? :D
/help
@iamNoah1: This request has been marked as needing help from a contributor.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the `/remove-help` command.
In response to this:
> /help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Any updates on this? While reading the walkthrough I thought this was supported out of the box; what I have learned now is that it is not the case :(
/lifecycle stale
This functionality would be so useful... because configuring HPA using CPU/memory is not accurate: sometimes there are enough resources, but the worker_connections limit is reached :(
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with `/triage accepted` (org members only)
- Close this issue with `/close`
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted