
[VPA] support dynamic named target reference

Open mcanevet opened this issue 1 year ago • 5 comments

Which component are you using?:

Vertical Pod Autoscaler

Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:

Sometimes the controller managing the set of pods that the autoscaler should control is dynamically named. This is the case, for example, with Crossplane providers: a controller of kind Provider.pkg.crossplane.io creates a Deployment with a generated name, for example upbound-provider-family-aws-6e68a8d74a6f.

Describe the solution you'd like.:

A possible solution would be to allow wildcards in the targetRef name, or to allow selecting the target with a label selector.
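For illustration, a wildcard-based target could look roughly like the sketch below. Note that this is purely hypothetical: the current VPA targetRef only accepts an exact name, so neither the wildcard nor any selector field exists today.

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: provider-family-aws-vpa
spec:
  # Hypothetical: match any Deployment whose name starts with this prefix.
  # The current VPA API only accepts an exact name here.
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: "upbound-provider-family-aws-*"
  updatePolicy:
    updateMode: "Auto"
```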

Describe any alternative solutions you've considered.:

I tried using a Deployment as the targetRef with a wildcard in the name, but it does not work. I also tried targeting the Provider.pkg.crossplane.io resource directly, but that does not work either.

Additional context.:

mcanevet avatar Dec 18 '23 09:12 mcanevet

Another use case is rook-ceph. It creates many mon and osd Deployments with numbers and letters trailing the deployment name. It would be good to catch all OSDs named "rook-ceph-osd-*". I just came looking to see if this was possible for this situation.

2fst4u avatar Jan 04 '24 21:01 2fst4u

Hey @mcanevet and @2fst4u, thanks for bringing this up! As I'm not very familiar with the two use-cases you're describing, I hope you can help me understand a bit more about them. Currently, my understanding is that you have one (or even many?) Deployments created by a controller, with names that you don't know beforehand and that could probably even change over the lifetime of the component? Most likely these generated Deployments are owned by some other k8s resource, probably a custom resource that the controller watches? Is there a 1:1 relationship between the custom resource and the generated Deployment, or could one custom resource result in more than one Deployment?

I'm guessing that if you have more than one of these controller-owned Deployments, each of them would need its own VPA, as they could see very different load. If that's the case, a wildcard that catches more than one of these Deployments would not yield the desired result – recommendations are created per VPA object. If the recommendations are independent, we also need multiple VPA objects.

If a 1:1 mapping between the custom resource and the generated Deployment exists and the custom resource is implemented with support for VPA, it should be possible to point your VPA at the custom resource instead.
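For example, assuming the custom resource exposed a working /scale subresource, the VPA could target it directly (the resource names below are placeholders based on the example above):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: provider-family-aws-vpa
spec:
  # Point the VPA at the owning custom resource instead of the generated Deployment.
  # This only works if the custom resource serves the /scale subresource, because
  # the VPA reads the label selector exposed there to find the pods.
  targetRef:
    apiVersion: pkg.crossplane.io/v1
    kind: Provider
    name: upbound-provider-family-aws
  updatePolicy:
    updateMode: "Auto"
```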

Does this help for your use-cases?

voelzmo avatar Jan 11 '24 10:01 voelzmo

@voelzmo I think pointing the VPA at the custom resource should work, but as it does not, I guess Provider.pkg.crossplane.io does not implement the /scale subresource.

mcanevet avatar Jan 11 '24 15:01 mcanevet

Indeed: https://github.com/crossplane/crossplane/blob/c388baa88eaf2efe59be1638f7be5d775cdf3bff/cluster/crds/pkg.crossplane.io_providers.yaml#L197-L198
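For reference, exposing the scale subresource on a CRD looks roughly like the generic sketch below (this is not the actual Crossplane CRD; the replicas and selector paths depend on fields the resource would have to provide, which is exactly what Provider.pkg.crossplane.io is missing):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: providers.pkg.crossplane.io
spec:
  group: pkg.crossplane.io
  names:
    kind: Provider
    plural: providers
  scope: Cluster
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
      subresources:
        status: {}
        # The scale subresource is what the VPA relies on: it reads the label
        # selector published here to discover the pods owned by this resource.
        scale:
          specReplicasPath: .spec.replicas
          statusReplicasPath: .status.replicas
          labelSelectorPath: .status.selector
```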

mcanevet avatar Jan 12 '24 07:01 mcanevet

@voelzmo it looks like enabling the scale subresource is not possible in the context of Crossplane: https://github.com/crossplane/crossplane/issues/5230#issuecomment-1888706646

mcanevet avatar Jan 12 '24 09:01 mcanevet

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 11 '24 09:04 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar May 11 '24 09:05 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Jun 10 '24 10:06 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Jun 10 '24 10:06 k8s-ci-robot

/reopen

mcanevet avatar Jun 10 '24 14:06 mcanevet

@mcanevet: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Jun 10 '24 14:06 k8s-ci-robot

/remove-lifecycle rotten

mcanevet avatar Jun 10 '24 14:06 mcanevet

/area vertical-pod-autoscaler

adrianmoisey avatar Jul 08 '24 18:07 adrianmoisey