kcp
`kubectl apply` and `kubectl create` on APIExport virtual API server do not work
Describe the bug
Executing the kubectl apply or kubectl create commands against an APIExport virtual workspace URL fails.
To Reproduce
Steps to reproduce the behavior:
$ kubectl kcp workspace create producer --enter
$ kubectl apply -f test/e2e/virtual/apiexport/apiresourceschema_cowboys.yaml
$ cat <<EOF | kubectl apply -f -
apiVersion: apis.kcp.dev/v1alpha1
kind: APIExport
metadata:
  name: cowboy
spec:
  latestResourceSchemas:
  - today.cowboys.wildwest.dev
EOF
$ kubectl kcp workspace ..
$ kubectl kcp workspace create consumer --enter
$ cat <<EOF | kubectl apply -f -
apiVersion: apis.kcp.dev/v1alpha1
kind: APIBinding
metadata:
  name: cowboy
spec:
  reference:
    workspace:
      path: root:default:producer
      exportName: cowboy
EOF
$ cat <<EOF | kubectl --server https://<ADDRESS:PORT>/services/apiexport/root:default:producer/cowboy/clusters/root:default:consumer/ apply -f -
apiVersion: wildwest.dev/v1alpha1
kind: Cowboy
metadata:
  name: cowboy
EOF
Error from server (NotFound): the server could not find the requested resource
The following trace is printed with the --v=9 option:
I0705 15:43:56.770274 86811 loader.go:372] Config loaded from file: .kcp/admin.kubeconfig
I0705 15:43:56.771325 86811 round_trippers.go:466] curl -v -XGET -H "Accept: application/[email protected]+protobuf" -H "User-Agent: kubectl/v1.23.4 (darwin/amd64) kubernetes/e6c093d" -H "Authorization: Bearer <masked>" 'https://192.168.0.24:6443/services/apiexport/root:default:producer/cowboy/clusters/root:default:consumer/openapi/v2?timeout=32s'
I0705 15:43:56.771865 86811 round_trippers.go:510] HTTP Trace: Dial to tcp:192.168.0.24:6443 succeed
I0705 15:43:56.776751 86811 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 3 ms ServerProcessing 1 ms Duration 5 ms
I0705 15:43:56.776766 86811 round_trippers.go:577] Response Headers:
I0705 15:43:56.776772 86811 round_trippers.go:580] Audit-Id: b40dfda6-e7ca-466e-a729-509e1a9e80e1
I0705 15:43:56.776778 86811 round_trippers.go:580] Cache-Control: no-cache, private
I0705 15:43:56.776784 86811 round_trippers.go:580] Content-Type: text/plain; charset=utf-8
I0705 15:43:56.776789 86811 round_trippers.go:580] X-Content-Type-Options: nosniff
I0705 15:43:56.776795 86811 round_trippers.go:580] Content-Length: 19
I0705 15:43:56.776800 86811 round_trippers.go:580] Date: Tue, 05 Jul 2022 13:43:56 GMT
I0705 15:43:56.813359 86811 request.go:1181] Response Body: 404 page not found
I0705 15:43:56.833071 86811 round_trippers.go:466] curl -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.23.4 (darwin/amd64) kubernetes/e6c093d" -H "Authorization: Bearer <masked>" 'https://192.168.0.24:6443/services/apiexport/root:default:producer/cowboy/clusters/root:default:consumer/swagger-2.0.0.pb-v1?timeout=32s'
I0705 15:43:56.833961 86811 round_trippers.go:570] HTTP Statistics: GetConnection 0 ms ServerProcessing 0 ms Duration 0 ms
I0705 15:43:56.833973 86811 round_trippers.go:577] Response Headers:
I0705 15:43:56.833978 86811 round_trippers.go:580] Content-Type: text/plain; charset=utf-8
I0705 15:43:56.833981 86811 round_trippers.go:580] X-Content-Type-Options: nosniff
I0705 15:43:56.833985 86811 round_trippers.go:580] Content-Length: 19
I0705 15:43:56.833988 86811 round_trippers.go:580] Date: Tue, 05 Jul 2022 13:43:56 GMT
I0705 15:43:56.833991 86811 round_trippers.go:580] Audit-Id: ed499e42-ad20-454b-a7ae-27d4bae16286
I0705 15:43:56.833995 86811 round_trippers.go:580] Cache-Control: no-cache, private
I0705 15:43:56.869847 86811 request.go:1181] Response Body: 404 page not found
I0705 15:43:56.907010 86811 helpers.go:219] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "the server could not find the requested resource",
  "reason": "NotFound",
  "details": {
    "causes": [
      {
        "reason": "UnexpectedServerResponse",
        "message": "404 page not found"
      }
    ]
  },
  "code": 404
}]
F0705 15:43:56.907575 86811 helpers.go:118] Error from server (NotFound): the server could not find the requested resource
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0x1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1038 +0x8a
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x3c83d20, 0x3, 0x0, 0xc000297b20, 0x2, {0x31f3e5e, 0x10}, 0xc000580000, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:987 +0x5fd
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0xc00004b130, 0x4e, 0x0, {0x0, 0x0}, 0x27f0950, {0xc000189880, 0x1, 0x1})
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x1ae
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1518
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal({0xc00004b130, 0x4e}, 0xc00041e640)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:96 +0xc5
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr({0x2beaa80, 0xc00041e640}, 0x2a78e80)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:191 +0x7d7
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:118
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/apply.NewCmdApply.func1(0xc000bc3b80, {0xc0004a4910, 0x0, 0x5})
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/apply/apply.go:188 +0x6d
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000bc3b80, {0xc0004a48c0, 0x5, 0x5})
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:860 +0x5f8
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000019400)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:974 +0x3bc
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/vendor/k8s.io/component-base/cli.run(0xc000019400)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/cli/run.go:146 +0x325
k8s.io/kubernetes/vendor/k8s.io/component-base/cli.RunNoErrOutput(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/cli/run.go:84
main.main()
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:30 +0x1e
Expected behavior
The kubectl apply command should work.
Additional context
The kubectl get command works:
$ cat <<EOF | kubectl apply -f -
apiVersion: wildwest.dev/v1alpha1
kind: Cowboy
metadata:
  name: cowboy
EOF
cowboy.wildwest.dev/cowboy created
$ kubectl --server https://<ADDRESS:PORT>/services/apiexport/root:default:producer/cowboy/clusters/root:default:consumer/ get cowboy cowboy -o yaml
apiVersion: wildwest.dev/v1alpha1
kind: Cowboy
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"wildwest.dev/v1alpha1","kind":"Cowboy","metadata":{"annotations":{},"name":"cowboy","namespace":"default"}}
  clusterName: root:default:consumer
  creationTimestamp: "2022-07-05T13:45:38Z"
  generation: 1
  name: cowboy
  namespace: default
  resourceVersion: "3128"
  uid: 7151720b-0883-49a9-925b-622ef4397b98
The issue also affects the kubectl create command.
OpenAPI on virtual workspaces is not supported for now, so this is known not to work.
Using kubectl apply --validate=false as suggested by @davidfestal makes it work.
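Concretely, the workaround amounts to skipping the client-side schema check. For example, with the same placeholder server address as above and the Cowboy manifest from the reproduction steps saved as a hypothetical cowboy.yaml:
$ kubectl --server https://<ADDRESS:PORT>/services/apiexport/root:default:producer/cowboy/clusters/root:default:consumer/ apply --validate=false -f cowboy.yaml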
A short-term option might be to improve the error message, if possible, to something like operation not supported instead of Error from server (NotFound): the server could not find the requested resource.
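For context, here is a toy sketch (plain Python, not kcp or kubectl code; all names are illustrative) of the client-side flow that produces this error: with validation enabled, kubectl apply first fetches the OpenAPI schema, and the virtual workspace's 404 on that fetch surfaces as a generic NotFound before the object is ever sent; --validate=false skips the fetch entirely.

```python
# Toy model of kubectl apply's client-side validation step.
# Not real kubectl code; names and return values are illustrative only.

def fetch_openapi(openapi_supported: bool) -> dict:
    """Stand-in for GET .../openapi/v2 on the virtual workspace."""
    if not openapi_supported:
        # The virtual workspace answers "404 page not found".
        raise LookupError("404 page not found")
    return {"definitions": {}}

def apply(validate: bool, openapi_supported: bool) -> str:
    """Stand-in for `kubectl apply`, with or without --validate=false."""
    if validate:
        try:
            fetch_openapi(openapi_supported)
        except LookupError:
            # The 404 surfaces as a generic NotFound error before
            # the manifest is ever sent to the API server.
            return "Error from server (NotFound)"
    # With validation skipped, the request reaches the API and succeeds.
    return "created"

print(apply(validate=True, openapi_supported=False))
print(apply(validate=False, openapi_supported=False))
```

This also shows why kubectl get is unaffected: only the apply/create path consults the OpenAPI endpoint up front.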
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kcp-ci-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.