client-go
Conflict running on different cluster versions: failed to list *v2beta2.HorizontalPodAutoscaler: the server could not find the requested resource
Hello all,
I'm not sure if this is the right place, and maybe there is a simple solution, but I have been struggling with this use case, so I'll post it here and hope someone can point me in the right direction.
I have an operator (based on Knative eventing) that needs to run on two different versions of Kubernetes (1.19 and 1.27). Those operators create HPA resources.
These are the `autoscaling` API versions available on each cluster:

1.19:
```
autoscaling.k8s.io/v1
autoscaling.k8s.io/v1beta2
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
```

1.27:
```
autoscaling.k8s.io/v1
autoscaling.k8s.io/v1beta2
autoscaling/v1
autoscaling/v2
```
Until now we only had 1.19, and creating HPA `v2beta2` worked fine, but as you can see the new cluster doesn't serve `autoscaling/v2beta2`, so we need to migrate to either `v1` or `v2`. `v1` is not an option, as it doesn't support scaling on memory, and `v2` is not available on the 1.19 cluster, so first I tried code similar to this:
```go
import (
	// ...
	autoscalingv2listers "k8s.io/client-go/listers/autoscaling/v2"
	autoscalingv2beta2listers "k8s.io/client-go/listers/autoscaling/v2beta2"
	// hpainformerv2 / hpainformerv2beta2 used below are the corresponding
	// Knative injection informer packages (imports elided here).
)

type Reconciler struct {
	// ...
	// Only one of these is populated, depending on which autoscaling API
	// version the cluster serves.
	hpaListerv2beta2 autoscalingv2beta2listers.HorizontalPodAutoscalerLister
	hpaListerv2      autoscalingv2listers.HorizontalPodAutoscalerLister
}

// ...

var hpaListerV2 autoscalingv2listers.HorizontalPodAutoscalerLister
var hpaListerV2beta2 autoscalingv2beta2listers.HorizontalPodAutoscalerLister

// Pick the lister for whichever API version the cluster actually serves.
if shared.IsApiVersionSupported(clientSet, "autoscaling", "v2") {
	hpaListerV2 = hpainformerv2.Get(ctx).Lister()
} else {
	hpaListerV2beta2 = hpainformerv2beta2.Get(ctx).Lister()
}

reconciler := &Reconciler{
	hpaListerv2beta2: hpaListerV2beta2,
	hpaListerv2:      hpaListerV2,
}
```
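(`shared.IsApiVersionSupported` is our own helper; its exact implementation isn't shown here, but a minimal sketch of such a check, using the discovery client, would look like this:)

```go
import "k8s.io/client-go/kubernetes"

// Sketch of an API-version check via the discovery client; the real
// shared.IsApiVersionSupported helper is not shown in this issue.
func IsApiVersionSupported(clientSet kubernetes.Interface, group, version string) bool {
	groups, err := clientSet.Discovery().ServerGroups()
	if err != nil {
		return false // on discovery errors, treat the version as unsupported
	}
	for _, g := range groups.Groups {
		if g.Name != group {
			continue
		}
		for _, v := range g.Versions {
			if v.Version == version {
				return true
			}
		}
	}
	return false
}
```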
EDIT: I have also added the same conditional when registering the event handler:
```go
if shared.IsApiVersionSupported(clientSet, "autoscaling", "v2") {
	hpainformerv2.Get(ctx).Informer().AddEventHandler(cache.FilteringResourceEventHandler{
		FilterFunc: controller.FilterControllerGK(eventingv1.Kind("Broker")),
		Handler:    controller.HandleAll(impl.EnqueueControllerOf),
	})
} else {
	hpainformerv2beta2.Get(ctx).Informer().AddEventHandler(cache.FilteringResourceEventHandler{
		FilterFunc: controller.FilterControllerGK(eventingv1.Kind("Broker")),
		Handler:    controller.HandleAll(impl.EnqueueControllerOf),
	})
}
```
It doesn't work as expected: even though I have the conditional, the controller still tries to watch the resource that is not available on the cluster, throwing these errors:
```
W0627 10:03:58.137362 1 reflector.go:533] knative.dev/pkg/controller/controller.go:732: failed to list *v2beta2.HorizontalPodAutoscaler: the server could not find the requested resource
E0627 10:03:58.137387 1 reflector.go:148] knative.dev/pkg/controller/controller.go:732: Failed to watch *v2beta2.HorizontalPodAutoscaler: failed to list *v2beta2.HorizontalPodAutoscaler: the server could not find the requested resource
error: http2: client connection lost
```
If we look at the line mentioned there, `controller.go:732`, it is the line that calls the `Run` method of the informers:
```go
func StartInformers(stopCh <-chan struct{}, informers ...Informer) error {
	for _, informer := range informers {
		informer := informer
		go informer.Run(stopCh) // Here
	}

	for i, informer := range informers {
		if ok := cache.WaitForCacheSync(stopCh, informer.HasSynced); !ok {
			return fmt.Errorf("failed to wait for cache at index %d to sync", i)
		}
	}
	return nil
}
```
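As far as I can tell, the reason is that Knative's injection framework registers every informer package at import time via `init()`, so `StartInformers` receives both HPA informers no matter what my runtime conditional does. A toy illustration of that import-side-effect pattern (simplified; not the actual `knative.dev/pkg/injection` code):

```go
package main

import "fmt"

// registry stands in for the injection framework's informer list: each
// injection informer package appends itself from init(), purely as a side
// effect of being imported, before any of my conditional setup code runs.
var registry []string

func init() { registry = append(registry, "*v2.HorizontalPodAutoscaler") }
func init() { registry = append(registry, "*v2beta2.HorizontalPodAutoscaler") }

func main() {
	// StartInformers then runs everything in the registry, which is why the
	// v2beta2 informer is started on the 1.27 cluster even though my code
	// never calls Get(ctx) on it there.
	for _, name := range registry {
		fmt.Println("starting informer for", name)
	}
}
```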
I have tried using an interface, and also generics, but didn't get far; I always ended up hitting some sort of limitation (see the sketch below). When I commented out the version not available on the cluster, I was able to run it successfully, but of course that is not ideal.
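To give an idea of the kind of limitation I mean (this is a guess at the underlying problem, not my exact attempt): the two generated listers can never satisfy one common interface, because their `List` methods return different concrete slice types:

```go
import (
	autoscalingv2 "k8s.io/api/autoscaling/v2"
	"k8s.io/apimachinery/pkg/labels"
)

// A single interface method can only name one return type, so at most one of
// the two generated listers can implement it.
type hpaLister interface {
	List(selector labels.Selector) ([]*autoscalingv2.HorizontalPodAutoscaler, error)
}

// autoscalingv2listers.HorizontalPodAutoscalerLister satisfies this, but the
// v2beta2 lister returns []*autoscalingv2beta2.HorizontalPodAutoscaler, so a
// value of that type can never be assigned to hpaLister.
```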
Any idea how to achieve this?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@vedant15188: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
> /reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.