
Failed to watch *v1beta1.PodDisruptionBudget

Open apryiomka opened this issue 2 years ago • 8 comments

Which component are you using?:

cluster-autoscaler, deployed on an EKS Kubernetes 1.27 cluster from https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

What version of the component are you using?: 1.27.2

What environment is this in?: AWS

Getting this error from the cluster-autoscaler deployment log:

I0823 18:17:31.027517       1 reflector.go:255] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309
E0823 18:17:31.029219       1 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource

To the best of my knowledge, the policy/v1beta1 PodDisruptionBudget API was deprecated in favor of policy/v1 and removed entirely in Kubernetes 1.25, so a 1.27 API server no longer serves it at all. Why is the 1.27 autoscaler trying to watch v1beta1?

Another error observed is

I0823 18:27:31.272850       1 reflector.go:255] Listing and watching *v1beta1.CSIStorageCapacity from k8s.io/client-go/informers/factory.go:134
E0823 18:27:31.274527       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource

Same issue: it should use the storage.k8s.io/v1 API, not v1beta1, since the v1beta1 CSIStorageCapacity API was removed in Kubernetes 1.27. A quick way to confirm what the server actually serves is sketched below.
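
For anyone who wants to verify this, here is a minimal client-go sketch (not autoscaler code; the kubeconfig path and the group/version list are just illustrative) that asks the API server which of these group/versions it serves. On a 1.27 cluster, both v1beta1 entries should come back as not served:

```go
// Illustrative check: ask the API server which group/versions it serves.
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (assumption: running
	// outside the cluster with a valid ~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}

	// For each group/version the reflector might try, report whether the
	// server serves it. On Kubernetes 1.27+ the v1beta1 entries should fail.
	for _, gv := range []string{"policy/v1", "policy/v1beta1", "storage.k8s.io/v1", "storage.k8s.io/v1beta1"} {
		if _, err := dc.ServerResourcesForGroupVersion(gv); err != nil {
			fmt.Printf("%s: not served (%v)\n", gv, err)
		} else {
			fmt.Printf("%s: served\n", gv)
		}
	}
}
```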

apryiomka avatar Aug 23 '23 18:08 apryiomka

I can confirm the error on AWS EKS as well.

adrianmiron avatar Sep 08 '23 06:09 adrianmiron

I have this error on AWS EKS as well. It looks like the autoscaler is internally hardcoded to look for PDBs at the v1beta1 version, but the policy API served by the control plane is v1. Is there perhaps a way to install v1beta1 as a workaround?

kyland-holmes avatar Sep 11 '23 21:09 kyland-holmes

This appears to have been addressed in 1.25 via https://github.com/kubernetes/autoscaler/pull/4990. You might try confirming the actually deployed version in the CA pod log. We (I work with @kyland-holmes) found we had been deploying 1.17 (via Helm); after updating our charts so that 1.27.2 got deployed, the issue was no longer present.
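
For reference, the fixed releases consume PDBs through the policy/v1 informer. Here is a minimal client-go sketch of that pattern (an illustration only, not the autoscaler's actual code in listers.go):

```go
// Illustrative sketch: list PodDisruptionBudgets via the policy/v1 informer,
// the group/version that cluster-autoscaler >= 1.25 watches.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The reflector started by this factory watches policy/v1, so it will
	// not hit "the server could not find the requested resource" on 1.25+.
	factory := informers.NewSharedInformerFactory(clientset, time.Hour)
	pdbLister := factory.Policy().V1().PodDisruptionBudgets().Lister()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	pdbs, err := pdbLister.List(labels.Everything())
	if err != nil {
		panic(err)
	}
	for _, pdb := range pdbs {
		fmt.Printf("%s/%s\n", pdb.Namespace, pdb.Name)
	}
}
```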

An interesting piece of the charts in this area is https://github.com/kubernetes/autoscaler/blob/8e984d7c1a87740f43aa03b3cac00b2247c9e37c/charts/cluster-autoscaler/templates/_helpers.tpl#L78-L88, which was introduced by #4888.

kevin-bates avatar Sep 12 '23 16:09 kevin-bates

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 28 '24 05:01 k8s-triage-robot

/remove-lifecycle stale

seifrajhi avatar Feb 01 '24 13:02 seifrajhi

I'm facing the same issue on EKS v1.25 with chart prometheus-operator v45.4.0.

seifrajhi avatar Feb 01 '24 13:02 seifrajhi

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jun 19 '24 17:06 k8s-triage-robot

Facing the same issue with the CSIStorageCapacity and PodDisruptionBudget APIs. EKS 1.29, Helm chart cluster-autoscaler-9.35.0 (app version 1.29.0).

CSIStorageCapacity:

k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource

PodDisruptionBudget:

k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource

lheringer-bt avatar Jun 25 '24 10:06 lheringer-bt

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jul 25 '24 11:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Aug 24 '24 12:08 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Aug 24 '24 12:08 k8s-ci-robot