
[AutoScaling] Kubernetes Ingress Object Metrics with Community NGINX Ingress Controller

Open chirangaalwis opened this issue 3 years ago • 21 comments

What would you like to be added:

A custom metrics API implementation that captures the request count per unit time of a Kubernetes Ingress object when using the community NGINX Ingress Controller, so that it can be used as a metric for Horizontal Pod Autoscaling (HPA).

The ultimate goal is to be able to add a requests-per-unit-time metric to a Horizontal Pod Autoscaler, as described in the walkthrough, when using the community NGINX Ingress Controller implementation.

```yaml
type: Object
object:
  metric:
    name: requests-per-second
  describedObject:
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    name: main-route
  target:
    type: Value
    value: 2k
```
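For context, the metric snippet above would slot into a complete autoscaling/v2 HPA manifest along these lines. This is a sketch only: the Deployment and HPA names are hypothetical, and the `requests-per-second` metric for an Ingress object is exactly what this issue asks ingress-nginx to provide.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # hypothetical workload
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      metric:
        name: requests-per-second
      describedObject:
        apiVersion: networking.k8s.io/v1beta1   # networking.k8s.io/v1 on current clusters
        kind: Ingress
        name: main-route
      target:
        type: Value
        value: 2k         # scale out when the Ingress exceeds ~2000 req/s
```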

An example similar to the suggested solution is the Skipper collector in kube-metrics-adapter; that particular solution works when using the Skipper Ingress Controller implementation.
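For ingress-nginx specifically, a similar effect can be achieved today by pairing the controller's existing Prometheus metrics with the Prometheus Adapter: the controller already exports a per-Ingress request counter, `nginx_ingress_controller_requests`. A hedged sketch of an adapter rule that surfaces it through the custom metrics API as an Ingress object metric (the rate window and metric naming are illustrative choices, not part of this issue's proposal):

```yaml
rules:
# Turn the cumulative per-Ingress request counter into a rate,
# attached to the Ingress object so an HPA Object metric can target it.
- seriesQuery: 'nginx_ingress_controller_requests{namespace!="",ingress!=""}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      ingress: {group: "networking.k8s.io", resource: "ingress"}
  name:
    matches: "^(.*)_requests$"
    as: "${1}_requests_per_second"   # exposed as nginx_ingress_controller_requests_per_second
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
```

With a rule like this in place, the HPA's `describedObject` Ingress reference would resolve against the adapter-provided metric rather than requiring a new metrics API inside the controller itself.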

Why is this needed:

  • Requests per unit time is considered a highly accurate metric for HPA, especially when working with container-based deployments of language runtimes that involve garbage collection. Please see this #sig-autoscaling Slack channel discussion for details about this topic.

  • The community NGINX Ingress Controller is one of the most widely used Ingress Controller implementations.

Notes:

Suggested Assignees:

@rikatz @strongjz

chirangaalwis avatar Jun 25 '21 04:06 chirangaalwis

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 23 '21 05:09 k8s-triage-robot

/remove-lifecycle stale

chirangaalwis avatar Sep 23 '21 09:09 chirangaalwis

IMO this is a valid requirement that needs addressing.

chirangaalwis avatar Sep 23 '21 09:09 chirangaalwis

Folks, any update on this?

chirangaalwis avatar Oct 07 '21 06:10 chirangaalwis

Hi @chirangaalwis

I've had my head buried in other things and haven't been able to look into this.

If possible, could you provide an implementation PR for us? We can help you with it. Otherwise, you will be relying on the availability of some of us, and speaking for myself, I'm really in a rush.

Anyway, I will add this to the v1.2.0 milestone and we'll see what we can do, ok?

rikatz avatar Oct 07 '21 12:10 rikatz

We currently have HPA- and KEDA-related configurations, so this issue is mainly asking to add some related metrics, right?

tao12345666333 avatar Oct 07 '21 13:10 tao12345666333
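Building on the KEDA suggestion above: if the controller's metrics already land in Prometheus, a KEDA ScaledObject with a `prometheus` trigger could cover this use case without a custom metrics API. A hedged sketch, with hypothetical names, a placeholder Prometheus address, and an illustrative 2m rate window:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler     # hypothetical name
spec:
  scaleTargetRef:
    name: my-app          # hypothetical Deployment
  minReplicaCount: 2
  maxReplicaCount: 10
  triggers:
  - type: prometheus
    metadata:
      # Adjust to wherever Prometheus is reachable in your cluster.
      serverAddress: http://prometheus-server.monitoring.svc:9090
      # Requests per second hitting the main-route Ingress.
      query: sum(rate(nginx_ingress_controller_requests{ingress="main-route"}[2m]))
      threshold: "2000"   # comparable to the 2k target in the issue body
```

The trade-off versus the requested feature: this ties scaling to a PromQL query rather than to a first-class Ingress object metric, so it requires a running Prometheus and KEDA installation.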

> Hi @chirangaalwis
>
> I've had my head buried in other things and haven't been able to look into this.
>
> If possible, could you provide an implementation PR for us? We can help you with it. Otherwise, you will be relying on the availability of some of us, and speaking for myself, I'm really in a rush.
>
> Anyway, I will add this to the v1.2.0 milestone and we'll see what we can do, ok?

@rikatz ack.

chirangaalwis avatar Oct 07 '21 14:10 chirangaalwis


/lifecycle stale

k8s-triage-robot avatar Jan 05 '22 14:01 k8s-triage-robot

/remove-lifecycle stale

chirangaalwis avatar Jan 05 '22 20:01 chirangaalwis


/lifecycle stale

k8s-triage-robot avatar Apr 05 '22 21:04 k8s-triage-robot

/remove-lifecycle stale

chirangaalwis avatar Apr 06 '22 04:04 chirangaalwis

/triage accepted /priority longterm-important

iamNoah1 avatar Apr 12 '22 09:04 iamNoah1

@iamNoah1: The label(s) priority/longterm-important cannot be applied, because the repository doesn't have them.

In response to this:

/triage accepted /priority longterm-important

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Apr 12 '22 09:04 k8s-ci-robot

/priority important-longterm

iamNoah1 avatar Apr 12 '22 09:04 iamNoah1

> We currently have HPA- and KEDA-related configurations, so this issue is mainly asking to add some related metrics, right?

Anyone knows if this may help? :D

rikatz avatar Apr 12 '22 16:04 rikatz

/help

iamNoah1 avatar Jun 14 '22 16:06 iamNoah1

@iamNoah1: This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.

In response to this:

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jun 14 '22 16:06 k8s-ci-robot

Any updates on this? While reading the walkthrough I thought this was supported out of the box, but I have now learned that is not the case :(

23henne avatar Jul 19 '22 08:07 23henne


/lifecycle stale

k8s-triage-robot avatar Oct 17 '22 09:10 k8s-triage-robot

This functionality would be so useful, because configuring HPA on CPU/memory is not accurate: sometimes there are plenty of resources left, but the worker_connections limit is already reached :(

luarx avatar Apr 10 '23 18:04 luarx

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

k8s-triage-robot avatar Apr 09 '24 19:04 k8s-triage-robot