
How to set up a different VPA for each namespace in a single Kubernetes cluster?

gaganso opened this issue 3 years ago

I would like to bring up a separate VPA, with different parameter values, for each namespace in a single Kubernetes cluster (e.g., one recommender with cpu-histogram-decay-half-life=2h and another with cpu-histogram-decay-half-life=6h). I followed this pull request by @povilasv but couldn't get it working. If anyone could point me to a list of steps or documentation, that would be helpful.
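For concreteness, here is a rough sketch (untested) of the kind of per-namespace setup I am after. The namespace "team-a", the image tag, and the labels are illustrative; it assumes the recommender supports the --vpa-object-namespace flag from the pull request above, plus the usual --cpu-histogram-decay-half-life flag:

    # Sketch: one recommender Deployment per namespace, each with its own
    # decay half-life. Repeat with different args for every namespace.
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: vpa-recommender
      namespace: team-a
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: vpa-recommender
      template:
        metadata:
          labels:
            app: vpa-recommender
        spec:
          serviceAccountName: vpa-recommender
          containers:
            - name: recommender
              image: k8s.gcr.io/autoscaling/vpa-recommender:0.10.0
              args:
                - --vpa-object-namespace=team-a        # watch only this namespace
                - --cpu-histogram-decay-half-life=2h   # e.g. 6h in the other copy
    EOF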

I tried the steps below for each i-th deployment (a rough reconstruction of the commands follows the list):

  1) export NAMESPACE=deployment_i
  2) Changed the namespace from kube-system to deployment_i in the manifest files for the admission-controller, recommender, and updater under /autoscaler/vertical-pod-autoscaler/deploy
  3) Modified vpa-rbac.yaml to create the access-control objects
  4) Ran ./hack/vpa-up.sh
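Untested reconstruction; "team-a" stands in for each per-deployment namespace, and the file names follow the repo's deploy/ layout:

    export NAMESPACE=team-a
    cd autoscaler/vertical-pod-autoscaler

    # step 2: point the component manifests at the new namespace
    sed -i 's/namespace: kube-system/namespace: team-a/' \
      deploy/admission-controller-deployment.yaml \
      deploy/recommender-deployment.yaml \
      deploy/updater-deployment.yaml

    # step 3: adjust the RBAC objects the same way
    sed -i 's/namespace: kube-system/namespace: team-a/' deploy/vpa-rbac.yaml

    # step 4: bring everything up
    ./hack/vpa-up.sh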

I see the error message below in the recommender's logs:

E0524 08:26:07.272573 1 cluster_feeder.go:441] Cannot get ContainerMetricsSnapshot from MetricsClient. Reason: pods.metrics.k8s.io is forbidden: User "system:serviceaccount:test:vpa-recommender" cannot list resource "pods" in API group "metrics.k8s.io" in the namespace "test": RBAC: clusterrole.rbac.authorization.k8s.io "test:metrics-reader" not found
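Based on that message, the per-namespace service account seems to be missing a ClusterRole for the metrics API. A minimal sketch of what I believe is needed (untested; the names mirror the error message, and "test" is the per-VPA namespace):

    # The recommender's service account needs cluster-wide read access to
    # metrics.k8s.io, granted via a ClusterRole and a ClusterRoleBinding.
    kubectl apply -f - <<'EOF'
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: test:metrics-reader
    rules:
      - apiGroups: ["metrics.k8s.io"]
        resources: ["pods", "nodes"]
        verbs: ["get", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: test:metrics-reader
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: test:metrics-reader
    subjects:
      - kind: ServiceAccount
        name: vpa-recommender
        namespace: test
    EOF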

The admission-controller fails to start with the error below:

MountVolume.SetUp failed for volume "tls-certs" : secret "vpa-tls-certs" not found

I see that the issue is with the certificate, and I probably have to modify ./pkg/admission-controller/gencerts.sh to generate a separate certificate for each namespace.
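A rough sketch of what I expect that to look like (untested; it assumes gencerts.sh emits the usual caCert.pem, serverCert.pem, and serverKey.pem files, and "test" is the target namespace):

    # Generate the webhook certs, then create the vpa-tls-certs secret in the
    # target namespace instead of kube-system. The script's hardcoded
    # kube-system namespace (and the webhook service name baked into the
    # cert) would likely need editing first.
    cd autoscaler/vertical-pod-autoscaler/pkg/admission-controller
    ./gencerts.sh

    kubectl -n test create secret generic vpa-tls-certs \
      --from-file=caCert.pem \
      --from-file=serverCert.pem \
      --from-file=serverKey.pem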

Please let me know if the steps to do this are documented somewhere.

gaganso commented May 24 '22 08:05

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented Aug 29 '22 14:08

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented Sep 28 '22 14:09

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot commented Oct 28 '22 15:10

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

[the triage bot's /close not-planned comment, quoted verbatim above]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot commented Oct 28 '22 15:10

Sorry, are there any plans?

jakirpatel commented Jan 30 '24 02:01