bug: it is not possible to create CRDs through virtual workspaces
Describe the bug
It is not possible to deploy CRDs through a virtual workspace.
I’ve added permission claims for CRDs in my APIExport and accepted them in the APIBinding of the corresponding workspace. However, when I try to apply a CRD, I get the errors below (with and without --validate=false):
❯ kubectl apply -f https://raw.githubusercontent.com/crossplane-contrib/provider-aws/refs/heads/master/package/crds/acm.aws.crossplane.io_certificates.yaml
error: error validating "https://raw.githubusercontent.com/crossplane-contrib/provider-aws/refs/heads/master/package/crds/acm.aws.crossplane.io_certificates.yaml": error validating data: failed to download openapi: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
❯ kubectl apply -f https://raw.githubusercontent.com/crossplane-contrib/provider-aws/refs/heads/master/package/crds/acm.aws.crossplane.io_certificates.yaml --validate=false
The CustomResourceDefinition "certificates.acm.aws.crossplane.io" is invalid: spec.validation.openAPIV3Schema.type: Required value: must not be empty at the root
The same command works fine against a regular workspace:
❯ kubectl apply -f https://raw.githubusercontent.com/crossplane-contrib/provider-aws/refs/heads/master/package/crds/acm.aws.crossplane.io_certificates.yaml
customresourcedefinition.apiextensions.k8s.io/certificates.acm.aws.crossplane.io created
Slack thread: https://kubernetes.slack.com/archives/C021U8WSAFK/p1745918809839519
Steps To Reproduce
- kcp start
- Install an API and export it with permission claims for CRDs:
kubectl apply -f https://raw.githubusercontent.com/kcp-dev/contrib/refs/heads/main/20250401-kubecon-london/workshop/02-explore-workspaces/apis/apiresourceschema.yaml
kubectl apply -f - <<EOM
apiVersion: apis.kcp.io/v1alpha1
kind: APIExport
metadata:
  name: cowboys
spec:
  latestResourceSchemas:
  - today.cowboys.wildwest.dev
  permissionClaims:
  - group: ""
    resource: configmaps
    all: true
  - group: "apiextensions.k8s.io"
    resource: customresourcedefinitions
    all: true
EOM
- Create a child workspace and bind it to the APIExport:
kubectl create workspace ws1 --enter
kubectl kcp bind apiexport root:cowboys --name cowboys --accept-permission-claim configmaps.core,customresourcedefinitions.apiextensions.k8s.io
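If the kcp kubectl plugin is unavailable, the bind step above can be approximated by applying an APIBinding manifest directly. This is a sketch assuming the apis.kcp.io/v1alpha1 schema as shipped with kcp v0.27; field names may differ in other versions:

```yaml
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: cowboys
spec:
  reference:
    export:
      path: root        # workspace path that holds the APIExport
      name: cowboys     # APIExport name
  permissionClaims:     # accepting the claims declared by the export
  - group: ""
    resource: configmaps
    all: true
    state: Accepted
  - group: "apiextensions.k8s.io"
    resource: customresourcedefinitions
    all: true
    state: Accepted
```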
- Form a server URL from the virtual workspace URL and the logical cluster name, then try to apply a CRD:
$ kubectl -s "https://127.0.0.1:6443/services/apiexport/root/cowboys/clusters/22tadf4mx3os05qi" apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/customresourcedefinition/shirt-resource-definition.yaml
Error from server (BadRequest): error when creating "https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/customresourcedefinition/shirt-resource-definition.yaml": CustomResourceDefinition in version "v1" cannot be handled as a CustomResourceDefinition: strict decoding error: unknown field "spec.versions[0].schema.openAPIV3Schema.properties", unknown field "spec.versions[0].schema.openAPIV3Schema.type"
$ kubectl -s "https://127.0.0.1:6443/services/apiexport/root/cowboys/clusters/22tadf4mx3os05qi" apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/customresourcedefinition/shirt-resource-definition.yaml --validate=false
The CustomResourceDefinition "shirts.stable.example.com" is invalid:
* spec.validation.openAPIV3Schema.type: Required value: must not be empty at the root
* spec.selectableFields[0].jsonPath: Invalid value: ".spec.color": is an invalid path: does not refer to a valid field
* spec.selectableFields[1].jsonPath: Invalid value: ".spec.size": is an invalid path: does not refer to a valid field
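For reference, the per-cluster server URL used above is assembled from the kcp front-proxy address, the APIExport's workspace path and name, and the consuming workspace's logical cluster ID. A minimal sketch using the values from this repro (the logical cluster ID 22tadf4mx3os05qi is specific to one environment and must be substituted):

```shell
# Assemble the APIExport virtual workspace URL for a single logical cluster.
KCP_SERVER="https://127.0.0.1:6443"   # kcp front-proxy address (environment-specific)
EXPORT_PATH="root"                    # workspace path that holds the APIExport
EXPORT_NAME="cowboys"                 # APIExport name
CLUSTER_ID="22tadf4mx3os05qi"         # logical cluster of the consuming workspace
URL="${KCP_SERVER}/services/apiexport/${EXPORT_PATH}/${EXPORT_NAME}/clusters/${CLUSTER_ID}"
echo "${URL}"
```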
Expected Behaviour
It should be possible to create CRDs through the virtual workspace URL. This is particularly important for building a multicluster controller with multicluster-runtime and the multicluster-provider against kcp.
Additional Context
kcp version v1.31.6+kcp-v0.27.1
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
Apologies this hasn't been triaged. Let me add it to the next milestone to make sure we fix it.
Can you assign this to me?
/assign @olamilekan000