bug: kcp apiserver does not serve openapi specifications
Describe the bug
The /clusters/foo/openapi endpoints of the kcp apiserver return only the Kubernetes OpenAPI spec; kcp's own generated definitions (e.g. ClusterWorkspace) are not served.
Steps To Reproduce
kubectl get --raw /openapi/v2 | grep ClusterWorkspace
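For a programmatic check, here is a rough Go equivalent of the kubectl command above (an illustrative sketch, not part of the kcp codebase; it assumes the current kubeconfig points at the workspace under test, e.g. via the /clusters/foo path):

```go
package main

import (
	"context"
	"fmt"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; it is assumed to point at the target workspace.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the aggregated OpenAPI v2 document and look for a kcp-specific type.
	raw, err := client.Discovery().RESTClient().Get().AbsPath("/openapi/v2").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println("spec contains ClusterWorkspace:", strings.Contains(string(raw), "ClusterWorkspace"))
}
```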
Expected Behaviour
We generate the definitions, so I imagine we want to serve them? Serving them would also be nice for downstream users who want to generate clients in non-Go SDKs.
Additional Context
No response
We need to make sure we do this properly so we don't overload the server with too much OpenAPI generation across potentially hundreds or thousands of workspaces.
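Purely as an illustration of how that cost could be bounded (a sketch with hypothetical names, not kcp's implementation): per-workspace specs could be built lazily on first request and cached, so idle workspaces never trigger OpenAPI generation.

```go
package openapi

import (
	"sync"

	"k8s.io/kube-openapi/pkg/validation/spec"
)

// workspaceSpecCache lazily builds and memoizes one OpenAPI document per
// logical cluster, so generation only happens for workspaces whose
// /openapi endpoint is actually hit.
type workspaceSpecCache struct {
	mu    sync.Mutex
	specs map[string]*spec.Swagger                    // keyed by logical cluster name
	build func(cluster string) (*spec.Swagger, error) // expensive; invoked at most once per workspace
}

func (c *workspaceSpecCache) Get(cluster string) (*spec.Swagger, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if s, ok := c.specs[cluster]; ok {
		return s, nil
	}
	s, err := c.build(cluster)
	if err != nil {
		return nil, err
	}
	c.specs[cluster] = s
	return s, nil
}
```

Invalidation (e.g. when the bound APIs in a workspace change) would still be needed on top of something like this.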
In addition to dynamically determining this based on bound APIs within a workspace, I think we can also get a lot of value today from having (semi-static) published definitions for the core kcp APIs so that devs could explore the API surface and e.g. generate clients for non-Go SDKs.
This should also fix #1449.
Adding some more detail:
We are supposed to be including the built-in k8s types in our OpenAPI doc, but this is not happening at the moment because the MiniAggregatorConfig stores a pointer to a GenericConfig that is shared by both the mini aggregator and the k8s control plane apiserver:
https://github.com/kcp-dev/kcp/blob/e40bffc1a67e56d780bc97b1ced45d0c1d7c786e/pkg/server/config.go#L520
This is a problem because in the mini aggregator code, we tell it to skip OpenAPI installation so we can install our own aggregated handler:
https://github.com/kcp-dev/kubernetes/blob/6ebb9f064d8cd524bafba7154d5e47619303be0a/pkg/genericcontrolplane/aggregator/aggregator.go#L103-L107
Unfortunately, because it is the same GenericConfig, this ends up disabling OpenAPI installation for the k8s control plane apiserver as well.
I was able to "fix" this by making a shallow copy of the GenericConfig and storing that in the MiniAggregatorConfig. It feels more like a band-aid, though, and I'm wondering if we could do something better.
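For illustration, a minimal sketch of the aliasing problem and the shallow-copy band-aid (the struct and field names here are made up for the example; SkipOpenAPIInstallation is the real flag on the generic apiserver Config):

```go
package example

import (
	genericapiserver "k8s.io/apiserver/pkg/server"
)

// miniAggregatorConfig stands in for kcp's MiniAggregatorConfig; the field
// layout is hypothetical.
type miniAggregatorConfig struct {
	Generic *genericapiserver.Config
}

// sharedPointer shows the problem: both servers are built from the same
// *Config, so asking the mini aggregator to skip OpenAPI also disables it
// for the k8s control plane apiserver.
func sharedPointer(generic *genericapiserver.Config) *miniAggregatorConfig {
	mini := &miniAggregatorConfig{Generic: generic}
	mini.Generic.SkipOpenAPIInstallation = true // mutates the shared config
	return mini
}

// shallowCopy is the band-aid: copying isolates the flag, but any pointers
// nested inside Config are still shared between the two servers.
func shallowCopy(generic *genericapiserver.Config) *miniAggregatorConfig {
	copied := *generic
	copied.SkipOpenAPIInstallation = true
	return &miniAggregatorConfig{Generic: &copied}
}
```

A more structural fix would presumably avoid handing the same Config pointer to both servers in the first place, which is the "something better" being asked about here.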
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kcp-ci-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.