Performance tests

Open damemi opened this issue 3 years ago • 4 comments

We currently have very little data on how the Descheduler performs at scale. Issues like https://github.com/kubernetes-sigs/descheduler/issues/774 highlight this lack of information.

It would be good to provide benchmarks and, in particular, to identify changes or new strategies that cause memory or performance degradation, using a set of perf-scale tests similar to what the scheduler offers. To start, these could measure descheduling throughput and memory consumption end-to-end as well as for individual strategies.
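
For illustration, a minimal sketch of what such an end-to-end benchmark could look like with Go's built-in testing.B. The runStrategy helper is hypothetical (a stand-in for running one descheduling pass against a fake clientset pre-populated with pods); b.ReportAllocs and b.ReportMetric are standard library calls.

```go
package descheduler_test

import (
	"fmt"
	"testing"
)

// runStrategy is a hypothetical stand-in for one descheduling pass
// over a fake cluster holding numPods pods. A real benchmark would
// build that state with a fake clientset, invoke the strategy under
// test, and return the number of evictions it made.
func runStrategy(numPods int) int {
	evicted := 0
	for i := 0; i < numPods; i++ {
		if i%10 == 0 { // placeholder eviction decision
			evicted++
		}
	}
	return evicted
}

func BenchmarkDeschedulingThroughput(b *testing.B) {
	for _, numPods := range []int{1000, 5000, 10000} {
		b.Run(fmt.Sprintf("pods-%d", numPods), func(b *testing.B) {
			b.ReportAllocs() // surfaces B/op and allocs/op for memory regressions
			evicted := 0
			for i := 0; i < b.N; i++ {
				evicted += runStrategy(numPods)
			}
			// custom metric: evictions per benchmark iteration
			b.ReportMetric(float64(evicted)/float64(b.N), "evictions/op")
		})
	}
}
```

Running a suite like this with `go test -bench=. -benchmem` on every PR would make throughput and allocation regressions visible per cluster size.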

damemi avatar Apr 07 '22 14:04 damemi

/cc

JaneLiuL avatar Apr 08 '22 00:04 JaneLiuL

@damemi, I'm wondering if this could be broken into parts, with tests based either on strategies or on different criteria like the type of cluster (kind, or a managed K8s cluster from any cloud provider).

Could https://github.com/kubernetes/perf-tests be used as a starting reference?

pravarag avatar Apr 08 '22 00:04 pravarag

@pravarag yeah, I think ideally this would at least be broken into tests for individual strategies (so we can identify which strategies suffer the most).
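
As a sketch of that per-strategy split, a table-driven benchmark could run each strategy against the same fixture. The strategy names below are real descheduler strategies, but simulatePass is a hypothetical stand-in; a real test would wire each name to its actual strategy implementation.

```go
package descheduler_test

import "testing"

// simulatePass is a hypothetical stand-in for one pass of a given
// strategy over a fixed fake cluster of numPods pods.
func simulatePass(numPods int) {
	for i := 0; i < numPods; i++ {
		_ = i % 10 // placeholder per-pod decision work
	}
}

func BenchmarkPerStrategy(b *testing.B) {
	strategies := map[string]func(int){
		"RemoveDuplicates":                simulatePass,
		"LowNodeUtilization":              simulatePass,
		"RemovePodsViolatingNodeAffinity": simulatePass,
	}
	const numPods = 10000
	for name, run := range strategies {
		b.Run(name, func(b *testing.B) {
			b.ReportAllocs()
			for i := 0; i < b.N; i++ {
				run(numPods)
			}
		})
	}
}
```

Reporting results per strategy this way would show directly which strategies dominate CPU or memory at a given cluster size.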

damemi avatar May 03 '22 18:05 damemi

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 01 '22 18:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 31 '22 18:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Sep 30 '22 19:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 30 '22 19:09 k8s-ci-robot