
bug: APIExport permission claims are not respected

Open · lionelvillard opened this issue 2 years ago · 4 comments

Describe the bug

APIExport permission claims are not respected.

Steps To Reproduce

  • In one terminal, start kcp (kcp start)
  • Open a new terminal
  • Run export KUBECONFIG=$(pwd)/.kcp/admin.kubeconfig
  • Create the provider workspace: kubectl kcp ws create provider --enter
  • Create an APIExport claiming namespaces:
cat <<EOF | kubectl apply -f -
apiVersion: apis.kcp.io/v1alpha1
kind: APIExport
metadata:
  name: example
spec:
  latestResourceSchemas: []
  permissionClaims:
    - resource: namespaces
      all: true
EOF
  • Go to your workspace: kubectl ws
  • Bind the example APIExport: kubectl kcp bind apiexport root:provider:example (this creates an APIBinding; note that the claim is never accepted, see the sketch after these steps)
  • Get your workspace id: export MYWS=$(kubectl get namespace default -ojsonpath="{.metadata.annotations['kcp\.io/cluster']}")
  • Go back to the provider workspace: kubectl ws root:provider
  • Grant the provider's default service account access to the APIExport view:
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: default
rules:
  - apiGroups:
      - apis.kcp.io
    resources:
      - apiexports/content
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: default
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
EOF
  • Get the APIExport view URL: export VWURL=$(kubectl get apiexport example -ojsonpath='{.status.virtualWorkspaces[0].url}')
  • Get the provider token: export PROVIDER_TOKEN=$(kubectl get secret default-token-hl48b -ojsonpath='{.data.token}' | base64 -D)
  • List your workspace's namespaces via the view:
kubectl --server=$VWURL/clusters/* --token=$PROVIDER_TOKEN get namespaces
No resources found
  • Create a new namespace:
kubectl --server=$VWURL/clusters/$MYWS --token=$PROVIDER_TOKEN create namespace boo
namespace/boo created
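
Worth noting: nothing in the steps above ever accepts the claim on the consumer side. Acceptance is normally expressed on the APIBinding in the consumer workspace, roughly like this (a sketch from memory of the apis.kcp.io/v1alpha1 types; the exact shape of spec.reference and the state field may differ between kcp releases):

cat <<EOF | kubectl apply -f -
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: example
spec:
  reference:
    export:
      path: root:provider
      name: example
  permissionClaims:
    - resource: namespaces
      all: true
      state: Accepted
EOF

Because the reproduction never does this, the create in the last step succeeding is the surprising part.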

Expected Behaviour

I would expect a 503 (or 404) for both operations.

Additional Context

No response

lionelvillard · Feb 20 '23

Looks like TestAPIExportPermissionClaims in test/e2e/virtual/apiexport/virtualworkspace_test.go does not fully handle these cases. First, it claims:

t.Logf("Verify that we get empty lists for all claimed resources (other than apibindings) because the claims have not been accepted yet")

Realistically, shouldn't this be a 403? We need to indicate to the provider somehow that no claims have been accepted, although it's not clear to me that this would work with a *-cluster request unless all claims had not been accepted. Likely we need to consider whether it's really valid and useful to allow unaccepted claims at all and, if so, how we want to expose this to the APIExport provider.

Second, it does not attempt any mutating requests before the claims are accepted. I think it will be more appropriate (or the only appropriate option, once we have scoped claims?) to dynamically derive the set of a) resources and b) verbs on those resources that our forwarding storage allows from the accepted claims. That would make discovery per-cluster, though, and it's unclear whether we can support that or whether clients would know what to do with it.
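
As a side note, kubectl hides the raw status codes; to see what the view actually returns, one can hit it directly with curl (reusing $VWURL, $MYWS, and $PROVIDER_TOKEN from the reproduction above; -k skips TLS verification and is only appropriate against a local dev kcp, and boo2 is just an arbitrary test namespace name):

# wildcard list: currently an empty 200; the question above is whether it should be a 403
curl -sk -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer $PROVIDER_TOKEN" \
  "$VWURL/clusters/*/api/v1/namespaces"

# scoped create: currently succeeds even though the claim was never accepted
curl -sk -o /dev/null -w '%{http_code}\n' \
  -X POST -H "Content-Type: application/json" \
  -H "Authorization: Bearer $PROVIDER_TOKEN" \
  -d '{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"boo2"}}' \
  "$VWURL/clusters/$MYWS/api/v1/namespaces"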

stevekuznetsov · Feb 20 '23

The namespace creation looks odd; I would expect it to be a 403 (if I'm not missing anything).

The * list is exactly as expected. It is not filtered by bound workspaces, but has to answer uniformly and continuously, i.e. an empty list with a 200.

sttts · Feb 21 '23

The prerequisite for fixing this is to introduce admission in the virtual apiserver view, which is done in https://github.com/kcp-dev/kcp/pull/2456; see https://github.com/kcp-dev/kcp/pull/2456#discussion_r1120612736.

Once admission in the virtual apiserver view lands, checks can be implemented there to handle the case described here.

s-urbaniak · Mar 21 '23

Issues go stale after 90d of inactivity. After a further 30 days, they will turn rotten. Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kcp-ci-bot · Apr 28 '24

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kcp-ci-bot · May 28 '24

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kcp-ci-bot · Jun 27 '24

@kcp-ci-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

kcp-ci-bot · Jun 27 '24