bug: cache-server misses the scoped client
Describe the bug
The ApiExtensionsClusterClient in the cache server should use scoped clients instead of NewClusterForConfig.
Steps To Reproduce
I tried to wire in a context-based client, as advised in https://github.com/kcp-dev/kcp/pull/1815#discussion_r954151311, by using SetMultiClusterRoundTripper along with SetCluster.
So basically I did apiextensionsclient.NewForConfig(kcpclienthelper.SetMultiClusterRoundTripper(kcpclienthelper.SetCluster(rest.CopyConfig(cfg), logicalcluster.Wildcard))).
Then, when I tried to create a CRD with a context that had a logical cluster name set, the request failed because the path was incorrect.
The path was set to /clusters/*/clusters/system:system-crds/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apiresourceschemas.apis.kcp.dev
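For completeness, a minimal sketch of the wiring (not the exact code; import paths and the exact logicalcluster helpers are my best guess from the packages referenced in this thread, and cfg / crd are placeholders):

```go
// Sketch of the reproduction: a cluster is baked into the rest.Config via
// SetCluster *and* another one is put on the request context via
// logicalcluster.WithContext, which is what yields the doubled
// /clusters/*/clusters/system:system-crds/... path above.
package repro

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"

	kcpclienthelper "github.com/kcp-dev/apimachinery/pkg/client"
	"github.com/kcp-dev/logicalcluster" // module version may differ from what kcp pins
)

func reproduce(cfg *rest.Config, crd *apiextensionsv1.CustomResourceDefinition) error {
	// Wildcard cluster set on the config + cluster-aware round tripper.
	scopedCfg := kcpclienthelper.SetMultiClusterRoundTripper(
		kcpclienthelper.SetCluster(rest.CopyConfig(cfg), logicalcluster.Wildcard))
	client, err := apiextensionsclient.NewForConfig(scopedCfg)
	if err != nil {
		return err
	}

	// A second cluster then comes in via the context, so the round tripper
	// prefixes /clusters/system:system-crds on top of the /clusters/* that is
	// already present in the path.
	ctx := logicalcluster.WithContext(context.Background(), logicalcluster.New("system:system-crds"))
	_, err = client.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{})
	return err
}
```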
Expected Behaviour
Either something is broken or I don't know how to use a context-based client. I'd expect the path to be correctly set to the cluster from the context.
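For example, with the context scoped to system:system-crds I would expect the request to go to something like /clusters/system:system-crds/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apiresourceschemas.apis.kcp.dev rather than the doubled path above.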
Additional Context
No response
/cc @varshaprasad96
I'm happy to prepare a PR if you could provide some input here. Thanks!
@p0lyn0mial The process I usually follow to scope clients is:
Option 1:
- Wrap an existing config with the cluster round tripper, i.e. using https://pkg.go.dev/github.com/kcp-dev/apimachinery/pkg/client#SetMultiClusterRoundTripper
- Pass a scoped context while making client calls (logicalcluster.WithContext(ctx, someCluster))
Option 2:
- Set the cluster in the rest config directly, i.e. with https://github.com/kcp-dev/apimachinery/blob/dbb759406933f20051f134e5d2cd740bcda53900/pkg/client/cluster_config.go#L43. If we do this, we don't need to pass a scoped context or even use the cluster-aware round tripper.
Based on the URL, it looks like both options are being applied at once. If the rest config is modified directly, we shouldn't be passing a scoped context.
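To make the difference concrete, a rough sketch of the two options used separately (cfg is the base config, someCluster the target logical cluster, and "widgets.example.dev" a made-up CRD name; import paths follow the links above and may need adjusting):

```go
// Sketch of the two options, kept separate rather than combined.
package scoping

import (
	"context"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"

	kcpclienthelper "github.com/kcp-dev/apimachinery/pkg/client"
	"github.com/kcp-dev/logicalcluster" // module version may differ
)

// option1: cluster-aware round tripper; the cluster is supplied per request
// through a scoped context.
func option1(cfg *rest.Config, someCluster logicalcluster.Name) error {
	client, err := apiextensionsclient.NewForConfig(
		kcpclienthelper.SetMultiClusterRoundTripper(rest.CopyConfig(cfg)))
	if err != nil {
		return err
	}
	ctx := logicalcluster.WithContext(context.Background(), someCluster)
	_, err = client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, "widgets.example.dev", metav1.GetOptions{})
	return err
}

// option2: the cluster is baked into the rest.Config; no scoped context and no
// cluster-aware round tripper required.
func option2(cfg *rest.Config, someCluster logicalcluster.Name) error {
	client, err := apiextensionsclient.NewForConfig(
		kcpclienthelper.SetCluster(rest.CopyConfig(cfg), someCluster))
	if err != nil {
		return err
	}
	_, err = client.ApiextensionsV1().CustomResourceDefinitions().Get(context.Background(), "widgets.example.dev", metav1.GetOptions{})
	return err
}
```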
@varshaprasad96 yes, it looks like I applied both options at the same time.
Would you accept a PR that would allow for overwriting the cluster set by Option 2 when a cluster is also set in the context (Option 1)?
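Roughly, the behaviour I have in mind is something like the following (purely illustrative, not an existing kcp API; clusterFromRequest is a made-up placeholder for however the cluster set by logicalcluster.WithContext would be read back from the request):

```go
// Illustrative only: a hypothetical round tripper that lets a cluster carried
// by the request context (Option 1) override the /clusters/<name> prefix that
// SetCluster (Option 2) already baked into the path, instead of prepending a
// second /clusters/... segment.
package override

import (
	"net/http"
	"regexp"
)

// clustersPrefix matches the "/clusters/<name>" prefix that SetCluster puts on
// every request path (cf. the doubled path in the bug report).
var clustersPrefix = regexp.MustCompile(`^/clusters/[^/]+`)

type overrideRoundTripper struct {
	delegate http.RoundTripper
	// clusterFromRequest is a placeholder for a lookup of the cluster that was
	// stored on the request context; not a real kcp/apimachinery function.
	clusterFromRequest func(*http.Request) (string, bool)
}

func (rt *overrideRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	if cluster, ok := rt.clusterFromRequest(req); ok {
		req = req.Clone(req.Context())
		if clustersPrefix.MatchString(req.URL.Path) {
			// Replace the cluster set on the config with the one from the context.
			req.URL.Path = clustersPrefix.ReplaceAllString(req.URL.Path, "/clusters/"+cluster)
		} else {
			req.URL.Path = "/clusters/" + cluster + req.URL.Path
		}
	}
	return rt.delegate.RoundTrip(req)
}
```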
/cc @stevekuznetsov
Hm, I thought both at once should have worked. In any case, I think for now we are pausing this, so let's reconsider it when we have more clarity about how we're moving forward.
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kcp-ci-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.