cluster-api-provider-nested

Need Resource Syncing Policy

weiling61 opened this issue 3 years ago • 4 comments

User Story

As an operator, I would like to control which resources need to be synced to a tenant virtual cluster, for security and billing purposes.

Detailed Description

Issue and Requirement

  • Not all super cluster resources need to be synced to every tenant virtual cluster. Some tenants may be allowed to use only a limited set of resource types in their cluster, for example, only a subset of storage classes or runtime classes.
  • A per-tenant syncing policy needs to be provisioned before a tenant cluster is created, and the resource syncer needs to read the policy and perform upward or downward syncing for the selected resources.

Policy Provision

Proposal 1: Use Configmap for Resource Syncing Policy

  1. The per-tenant syncing policy consists of an allowed-resource list for the tenant.
  2. Store the tenant syncing policy in a ConfigMap for each virtual cluster.
  3. Deploy the ConfigMap in the virtual cluster control plane.
  4. The resource syncer reads the tenant's ConfigMap and performs syncing accordingly, for example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: vc-sample-1
  namespace: default-532c0e-vc-sample-1
data:
  <api-group>.allowed: |
    <resource kind>=<resource-instance-name> 
  v1.allowed: |
    runtimeclass=microvm
    runtimeclass=kata
    storageclass=local-storage
  scheduling.k8s.io.allowed: |
    priorityclasses=p1
    priorityclasses=p2
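
A minimal sketch, in Go, of how the syncer could turn the "<api-group>.allowed" entries from the ConfigMap above into a lookup table. The function and type names are illustrative assumptions, not the actual syncer API.

// Illustrative only: parse the per-tenant allow list from ConfigMap data.
package main

import (
    "fmt"
    "strings"
)

// AllowList is keyed by "<api-group>/<resource kind>/<resource-instance-name>".
type AllowList map[string]bool

func parseAllowList(data map[string]string) AllowList {
    allowed := AllowList{}
    for key, value := range data {
        if !strings.HasSuffix(key, ".allowed") {
            continue
        }
        group := strings.TrimSuffix(key, ".allowed")
        for _, line := range strings.Split(value, "\n") {
            line = strings.TrimSpace(line)
            if line == "" {
                continue
            }
            parts := strings.SplitN(line, "=", 2)
            if len(parts) != 2 {
                continue
            }
            allowed[group+"/"+parts[0]+"/"+parts[1]] = true
        }
    }
    return allowed
}

func main() {
    // Mirrors part of the vc-sample-1 ConfigMap data above.
    data := map[string]string{
        "v1.allowed": "runtimeclass=microvm\nruntimeclass=kata\nstorageclass=local-storage\n",
    }
    allowed := parseAllowList(data)
    fmt.Println(allowed["v1/storageclass/local-storage"]) // true
}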

Proposal 2: Create CRD for Resource Syncing Policy

  1. Create a new CRD for the resource syncing policy.
  2. The super cluster admin creates a CR for each tenant to provision the tenant-specific sync policy.
  3. The CR is deployed in the virtual cluster control plane.
  4. The syncer reads the CR and performs the syncing based on the tenant's policies.
type SyncerPolicy struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Rules []SyncRule `json:"rules"`
}

type SyncRule struct {
    // Verbs is a list of verbs that apply to ALL the resources contained in this rule. '*' represents all verbs.
    // Currently only Deny or Allow are used.
    Verbs []string `json:"verbs" protobuf:"bytes,1,rep,name=verbs"`

    // APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of
    // the enumerated resources in any API group will be allowed.
    // +optional
    APIGroups []string `json:"apiGroups,omitempty" protobuf:"bytes,2,rep,name=apiGroups"`

    // Resources is a list of resources this rule applies to. '*' represents all resources.
    // +optional
    Resources []string `json:"resources,omitempty" protobuf:"bytes,3,rep,name=resources"`
}
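
A sketch of one possible way the syncer could evaluate these rules. The issue only defines the types, so the Allow/Deny semantics here (a matching Deny wins, '*' as a wildcard for groups and resources) are assumptions; the local SyncRule mirrors the struct above, without tags, to keep the sketch self-contained.

// Illustrative only: evaluate SyncRules for a given API group and resource.
package main

import "fmt"

type SyncRule struct {
    Verbs     []string
    APIGroups []string
    Resources []string
}

// contains reports whether s appears in list.
func contains(list []string, s string) bool {
    for _, v := range list {
        if v == s {
            return true
        }
    }
    return false
}

// matchesWildcard treats "*" as matching everything.
func matchesWildcard(list []string, s string) bool {
    return contains(list, "*") || contains(list, s)
}

// syncAllowed walks the rules: any matching Deny rule blocks syncing,
// otherwise a matching Allow rule permits it.
func syncAllowed(rules []SyncRule, apiGroup, resource string) bool {
    permit := false
    for _, r := range rules {
        if !matchesWildcard(r.APIGroups, apiGroup) || !matchesWildcard(r.Resources, resource) {
            continue
        }
        if contains(r.Verbs, "Deny") {
            return false
        }
        if contains(r.Verbs, "Allow") {
            permit = true
        }
    }
    return permit
}

func main() {
    rules := []SyncRule{
        {Verbs: []string{"Allow"}, APIGroups: []string{"storage.k8s.io"}, Resources: []string{"storageclasses"}},
    }
    fmt.Println(syncAllowed(rules, "storage.k8s.io", "storageclasses"))     // true
    fmt.Println(syncAllowed(rules, "scheduling.k8s.io", "priorityclasses")) // false
}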

Policy Handling

Option 1: Direct Access

By using the same name as the virtual cluster, the resource syncer can access the policy (in the form of a CR or ConfigMap) directly. The ConfigMap or SyncerPolicy needs to be provisioned in the same namespace as the VirtualCluster.
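
With the naming convention from the vc-sample-1 example above (policy ConfigMap named after the VirtualCluster, in its control-plane namespace), direct access from the syncer could look like the sketch below; the function name is illustrative.

// Illustrative only: fetch the per-tenant policy ConfigMap by convention.
package syncer

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// getSyncPolicyConfigMap looks up the policy ConfigMap that shares the
// VirtualCluster's name in its control-plane namespace
// (e.g. "vc-sample-1" in "default-532c0e-vc-sample-1").
func getSyncPolicyConfigMap(ctx context.Context, client kubernetes.Interface, vcNamespace, vcName string) (*corev1.ConfigMap, error) {
    return client.CoreV1().ConfigMaps(vcNamespace).Get(ctx, vcName, metav1.GetOptions{})
}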

Option 2: Bind Policy to Virtualcluster CR

In this approach, a new attribute, ClusterSyncPolicy, will be added to VirtualClusterSpec to specify a predefined SyncerPolicy name or ConfigMap name. The ConfigMap or SyncerPolicy needs to be provisioned in the same namespace as the VirtualCluster.

type VirtualClusterSpec struct {
    ...
    ClusterSyncPolicy string
    ...
}

A cache will be created in each virtual cluster domain to facilitate policy access from the syncer.
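
A minimal sketch of the shape that per-virtual-cluster cache could take. In practice the syncer would populate it from an informer on the ConfigMap or SyncerPolicy CR; this only shows safe concurrent access keyed by "<vc-namespace>/<vc-name>", and the names are assumptions.

// Illustrative only: a thread-safe per-virtual-cluster policy cache.
package syncer

import "sync"

// AllowList mirrors the illustrative lookup table from the Proposal 1 sketch.
type AllowList map[string]bool

type PolicyCache struct {
    mu       sync.RWMutex
    policies map[string]AllowList // key: "<vc-namespace>/<vc-name>"
}

func NewPolicyCache() *PolicyCache {
    return &PolicyCache{policies: map[string]AllowList{}}
}

func (c *PolicyCache) Set(vcKey string, allow AllowList) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.policies[vcKey] = allow
}

func (c *PolicyCache) Get(vcKey string) (AllowList, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()
    allow, ok := c.policies[vcKey]
    return allow, ok
}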

weiling61 avatar Mar 25 '22 04:03 weiling61

@christopherhein @Fei-Guo

weiling61 avatar Mar 25 '22 04:03 weiling61

I am not fully convinced of the need to make the syncing of super cluster resources per-tenant. We can discuss it in the community meeting.

Fei-Guo avatar Mar 25 '22 23:03 Fei-Guo

An update: we discussed this a handful of weeks ago, and the consensus was that this makes sense when you want to expose "platform/super cluster" features to specific tenants.

Implementation-wise:

  • Labels on resources that explicitly allow this type of control, e.g. tenancy.x-k8s.io/public.clusters: <vcname>,<vcname> (see the sketch after this list)
  • Update the VC syncer to support loading configs from a file, then mutating this to allow specific resources
  • A CRD for the syncer that would allow each cluster to have additional tooling
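
A minimal sketch of the label-based option above, assuming the label value is a comma-separated list of virtual cluster names; the label key comes from the comment, the parsing details are assumptions.

// Illustrative only: decide whether a labeled super cluster resource
// should be synced to a given virtual cluster.
package syncer

import "strings"

const publicClustersLabel = "tenancy.x-k8s.io/public.clusters"

// syncedToVC reports whether the resource's labels opt it in for vcName.
func syncedToVC(labels map[string]string, vcName string) bool {
    value, ok := labels[publicClustersLabel]
    if !ok {
        return false
    }
    for _, name := range strings.Split(value, ",") {
        if strings.TrimSpace(name) == vcName {
            return true
        }
    }
    return false
}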

christopherhein avatar May 02 '22 18:05 christopherhein

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 31 '22 18:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 30 '22 19:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Sep 29 '22 20:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 29 '22 20:09 k8s-ci-robot