bug: Proxy APIExport identity validation is incorrect.
Describe the bug
When creating a "proxy" APIExport (root to a custom workspace), I made a mistake with the bootstrapper and ended up with the APIExport below:
```
[mjudeikis@unknown2 faros]$ k get apiexport compute.faros.sh -o yaml
apiVersion: apis.kcp.io/v1alpha1
kind: APIExport
metadata:
  annotations:
    kcp.io/cluster: nu66to0aor944bum
    kcp.io/path: root:faros:service:controllers
  creationTimestamp: "2023-03-24T09:46:07Z"
  generation: 8
  name: compute.faros.sh
  resourceVersion: "8573"
  uid: fc7c3bae-63bf-42da-acab-aab8782357ef
spec:
  identity:
    secretRef:
      name: compute.faros.sh
      namespace: kcp-system
  permissionClaims:
  - all: true
    group: workload.kcp.io
    identityHash: random-prefix-97d7c56385241358fbd0c4d0e461e15ba1449b1a7fbbb88112cd094e10eb2eb4-random-string-suffix
    resource: synctargets
status:
  conditions:
  - lastTransitionTime: "2023-03-24T09:46:07Z"
    status: "True"
    type: IdentityValid
  - lastTransitionTime: "2023-03-24T09:46:07Z"
    status: "True"
    type: VirtualWorkspaceURLsReady
  identityHash: 2baf09807a4c861b04815ed6cd2f773ad1c819901fe21133f3c5d80489fce3a2
  virtualWorkspaces:
  - url: https://kcp.dev.faros.sh:443/services/apiexport/nu66to0aor944bum/compute.faros.sh
```
And I still got a valid status, even though the permission claim's identityHash is clearly not valid.
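If I understand it correctly, an identity hash is the hex-encoded SHA-256 of the identity secret's key material, so a legitimate value is always 64 lowercase hex characters; the claim value above cannot possibly have that shape. A minimal sketch of that assumption (the helper name is mine, not kcp's):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// identityHash mirrors (as far as I understand the kcp behaviour) how an
// APIExport identity hash is derived: the hex-encoded SHA-256 of the
// identity secret's key material. The result is always 64 lowercase hex
// characters.
func identityHash(identityKey []byte) string {
	sum := sha256.Sum256(identityKey)
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(identityHash([]byte("example-identity-key")))
	// Always 64 hex chars, so the "random-prefix-...-random-string-suffix"
	// value in the permission claim above can never be a real identity hash.
}
```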
Steps To Reproduce
- Create an APIExport for a 3rd-party resource.
- Add a wrong identityHash to one of its permission claims.
- Observe that the APIExport still reports IdentityValid: "True".
Expected Behaviour
identityHash validation should fail and the condition should report the claim as invalid.
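Even without resolving which APIExport actually serves the claimed resource, a shape check on the claim's identityHash would already catch this case. A hypothetical sketch of the kind of validation I would expect (the function name and placement are my assumption, not the actual kcp admission code):

```go
package main

import (
	"fmt"
	"regexp"
)

// sha256HexRE matches a hex-encoded SHA-256 digest, which is the only shape a
// permission-claim identityHash can legitimately have.
var sha256HexRE = regexp.MustCompile(`^[a-f0-9]{64}$`)

// validateClaimIdentityHash is a hypothetical check of the kind I'd expect
// APIExport admission (or the status controller) to run for each permission
// claim before setting IdentityValid to "True".
func validateClaimIdentityHash(hash string) error {
	if hash == "" {
		// Claims on core-group resources carry no identity hash.
		return nil
	}
	if !sha256HexRE.MatchString(hash) {
		return fmt.Errorf("identityHash %q is not a hex-encoded SHA-256 digest", hash)
	}
	// A complete check would additionally verify that the hash matches the
	// identity of the APIExport actually serving the claimed group/resource.
	return nil
}

func main() {
	// The value from the report above fails the shape check.
	fmt.Println(validateClaimIdentityHash("random-prefix-97d7c56385241358fbd0c4d0e461e15ba1449b1a7fbbb88112cd094e10eb2eb4-random-string-suffix"))
}
```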
Additional Context
No response
Is this a duplicate of #2152?
Need to read the code to understand this better :/ it might be similar or the same, it's not clear at first glance. Will try to clarify a bit later.
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kcp-ci-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.