Mike Dame
> > is there a reason leader election can't be enabled with dryRun?
> >
> > In cases when I want to run multiple instances of the descheduler all with --dry-run=true....
This might be difficult because the `KubeSchedulerConfiguration` as passed to the scheduler isn't really a standard API object in the cluster (it can be mounted as a configmap, or a...
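For context, a minimal sketch of that ConfigMap pattern (names and values here are illustrative, not taken from the project's manifests): the policy lives in a ConfigMap and is handed to the process as a plain file via `--policy-config-file`.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: descheduler-policy   # illustrative name
  namespace: kube-system
data:
  policy.yaml: |
    apiVersion: "descheduler/v1alpha1"
    kind: "DeschedulerPolicy"
    strategies:
      "RemoveDuplicates":
        enabled: true
```

From the process's point of view the configuration is just a mounted file, not an object the API server stores and validates as a first-class type.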
/remove-lifecycle stale
@a7i I think that's a fine suggestion, but we should document that it should be used with a matching default topology spread constraint (like you used in your example). Otherwise,...
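To make the pairing concrete, a sketch of the two sides (all values illustrative): the descheduler strategy, plus a matching cluster-level default in the scheduler's `PodTopologySpread` plugin args.

```yaml
# Descheduler side: evict pods violating their topology spread constraints
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingTopologySpreadConstraint":
    enabled: true
---
# Scheduler side: a matching default constraint for pods without their own
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - pluginConfig:
      - name: PodTopologySpread
        args:
          defaultingType: List   # required so the custom defaults below are used
          defaultConstraints:
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone
              whenUnsatisfiable: DoNotSchedule
```

(Note that the strategy only considers hard `DoNotSchedule` constraints unless soft constraints are explicitly included.)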
> @damemi, I'm thinking if this could be broken into parts with tests either based on `strategies` or different criteria like types of cluster (KIND or managed K8s cluster by...
@matti just to clarify, @JaneLiuL is saying that you should run it as a Deployment (not a Job or CronJob). The Deployment has a descheduling interval flag that keeps a...
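For anyone finding this later, a trimmed sketch of that Deployment (image tag, interval, and names are placeholders, and the RBAC/service account setup is omitted):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: descheduler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: descheduler
  template:
    metadata:
      labels:
        app: descheduler
    spec:
      serviceAccountName: descheduler-sa   # assumes RBAC is set up separately
      containers:
        - name: descheduler
          image: registry.k8s.io/descheduler/descheduler:v0.27.1   # placeholder tag
          command: ["/bin/descheduler"]
          args:
            - --policy-config-file=/policy/policy.yaml
            - --descheduling-interval=5m   # keeps the process running; re-runs every 5m
          volumeMounts:
            - name: policy-volume
              mountPath: /policy
      volumes:
        - name: policy-volume
          configMap:
            name: descheduler-policy
```

With `--descheduling-interval` set, the pod stays up and re-runs its strategies on that cadence instead of exiting after a single pass the way a Job would.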
If you have it running as a deployment, that should give us an idea of the long-running usage, yeah. 10s is a pretty short cycle length, especially for a large...
FYI I opened https://github.com/kubernetes-sigs/descheduler/issues/782 to track an effort to add performance tests so we can work on things like this.
Would this need to be an entirely new strategy? Or is it just a status phase we can parse and check in [PodLifetime](https://github.com/kubernetes-sigs/descheduler#podlifetime) or [RemoveFailedPods](https://github.com/kubernetes-sigs/descheduler#removefailedpods)?
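Both of those strategies already take status-based filters, so for comparison, a sketch of their policy shape (param names as documented in the README; the specific values are illustrative):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "PodLifeTime":
    enabled: true
    params:
      podLifeTime:
        maxPodLifeTimeSeconds: 86400
        podStatusPhases:
          - "Pending"            # matches on pod phase, not conditions
  "RemoveFailedPods":
    enabled: true
    params:
      failedPods:
        reasons:
          - "NodeAffinity"       # illustrative reason string
        minPodLifetimeSeconds: 3600
```

The question is whether the phase in question maps onto `podStatusPhases` or the `reasons` list, or needs something new.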
@a7i so this is only reported through the conditions? We could probably do some logic to check that it's the most recent condition. But I think that could be enough...
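To illustrate what that check would look at: pod conditions each carry a `lastTransitionTime`, so a sketch of the status shape being parsed (the condition type and timestamps are made up):

```yaml
status:
  phase: Running
  conditions:
    - type: Ready
      status: "True"
      lastTransitionTime: "2022-06-01T10:00:00Z"
    - type: ExampleFailureCondition   # hypothetical condition type
      status: "True"
      lastTransitionTime: "2022-06-01T12:00:00Z"   # newest entry
```

Comparing `lastTransitionTime` across entries would identify the latest condition, since the slice itself isn't guaranteed to be in chronological order.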