cluster-api-provider-nested
Need Resource Syncing Policy
User Story
As an operator, I would like to control which resources need to be synced to a tenant's virtual cluster, for security and billing purposes.
Detailed Description
Issue and Requirement
- Not all super cluster resources need to be synced to every tenant virtual cluster. Some tenants may be allowed to use only a limited set of resources in their cluster, for example, only a subset of storage classes or runtime classes.
- A per-tenant syncing policy needs to be provisioned before a tenant cluster is created, and the resource syncer needs to read the policy and perform upward or downward syncing for the selected resources.
Policy Provision
Proposal 1: Use Configmap for Resource Syncing Policy
- The per-tenant syncing policy consists of an allowed resource list for the tenant
- Store the tenant syncing policy in a ConfigMap for each virtual cluster
- Deploy the ConfigMap in the virtual cluster control plane.
- The resource syncer reads the tenant's ConfigMap and performs syncing accordingly.
apiVersion: v1
kind: ConfigMap
metadata:
  name: vc-sample-1
  namespace: default-532c0e-vc-sample-1
data:
  # key format: <api-group>.allowed, each line: <resource kind>=<resource-instance-name>
  v1.allowed: |
    runtimeclass=microvm
    runtimeclass=kata
    storageclass=local-storage
  scheduling.k8s.io.allowed: |
    priorityclasses=p1
    priorityclasses=p2
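For illustration only, a minimal sketch of how a syncer could parse such a ConfigMap into a per-API-group allow list. The package name, the ParseAllowedResources helper, and the AllowedResources type are assumptions for this sketch, not existing syncer APIs.

```go
package policy

import (
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// AllowedResources maps apiGroup -> resource kind -> set of allowed instance names.
// Hypothetical type used only for this sketch.
type AllowedResources map[string]map[string]map[string]bool

// ParseAllowedResources builds an allow list from ConfigMap keys of the form
// "<api-group>.allowed", whose values hold "<resource kind>=<resource-instance-name>" lines.
func ParseAllowedResources(cm *corev1.ConfigMap) AllowedResources {
	allowed := AllowedResources{}
	for key, value := range cm.Data {
		group := strings.TrimSuffix(key, ".allowed")
		if group == key {
			continue // not an allow-list key
		}
		for _, line := range strings.Split(value, "\n") {
			line = strings.TrimSpace(line)
			if line == "" {
				continue
			}
			parts := strings.SplitN(line, "=", 2)
			if len(parts) != 2 {
				continue // skip malformed entries
			}
			kind, name := parts[0], parts[1]
			if allowed[group] == nil {
				allowed[group] = map[string]map[string]bool{}
			}
			if allowed[group][kind] == nil {
				allowed[group][kind] = map[string]bool{}
			}
			allowed[group][kind][name] = true
		}
	}
	return allowed
}
```

With the example ConfigMap above, allowed["scheduling.k8s.io"]["priorityclasses"] would contain p1 and p2.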
Proposal 2: Create a CRD for Resource Syncing Policy
- Create a new CRD for the resource syncing policy
- The super cluster admin needs to create a CR for each tenant to provision that tenant's specific sync policy
- The CR is deployed in the virtual cluster control plane.
- The syncer reads the CR and performs syncing based on the tenant's policies.
type SyncerPolicy struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    // Rules is the list of sync rules applied to this tenant.
    Rules []SyncRule `json:"rules"`
}

type SyncRule struct {
    // Verbs is a list of verbs that apply to ALL the resources contained in this rule. '*' represents all verbs.
    // Currently only Deny or Allow are used.
    Verbs []string `json:"verbs" protobuf:"bytes,1,rep,name=verbs"`
    // APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of
    // the enumerated resources in any API group will be allowed.
    // +optional
    APIGroups []string `json:"apiGroups,omitempty" protobuf:"bytes,2,rep,name=apiGroups"`
    // Resources is a list of resources this rule applies to. '*' represents all resources.
    // +optional
    Resources []string `json:"resources,omitempty" protobuf:"bytes,3,rep,name=resources"`
}
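To make the rule semantics concrete, here is a hedged sketch of how a syncer might evaluate the SyncRule type above for a given resource; ruleAllows and contains are hypothetical helpers, not part of the proposal.

```go
// contains reports whether the list includes the target or the '*' wildcard.
func contains(list []string, target string) bool {
	for _, item := range list {
		if item == "*" || item == target {
			return true
		}
	}
	return false
}

// ruleAllows checks whether a SyncRule permits syncing a resource of the given
// API group. Only the Allow/Deny verbs mentioned above are considered.
func ruleAllows(rule SyncRule, apiGroup, resource string) bool {
	return contains(rule.Verbs, "Allow") &&
		contains(rule.APIGroups, apiGroup) &&
		contains(rule.Resources, resource)
}
```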
Policy Handling
Option 1: Direct Access
By using the same name as the virtual cluster, the resource syncer can access the policy (in the form of a CR or ConfigMap) directly. The ConfigMap or SyncerPolicy needs to be provisioned in the same namespace as the VirtualCluster.
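As a rough sketch of this direct-access path, assuming the syncer holds a standard client-go clientset; the helper name is hypothetical.

```go
package policy

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getPolicyConfigMap fetches the per-tenant policy ConfigMap that shares the
// VirtualCluster's name and lives in the VirtualCluster's namespace.
func getPolicyConfigMap(ctx context.Context, client kubernetes.Interface, vcNamespace, vcName string) (*corev1.ConfigMap, error) {
	return client.CoreV1().ConfigMaps(vcNamespace).Get(ctx, vcName, metav1.GetOptions{})
}
```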
Option 2: Bind Policy to Virtualcluster CR
In this approach, a new attribute, ClusterSyncPolicy, will be added to VirtualClusterSpec to specify a predefined SyncerPolicy or ConfigMap name. The ConfigMap or SyncerPolicy needs to be provisioned in the same namespace as the VirtualCluster.
type VirtualClusterSpec struct {
    ...
    ClusterSyncPolicy string
    ...
}
A cache will be created in each virtual cluster domain to facilitate policy access from the syncer.
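A minimal sketch of such a cache, assuming it is keyed by VirtualCluster name and populated by the syncer's informers; the type and method names are assumptions.

```go
// policyCache is an illustrative per-virtual-cluster policy cache (requires "sync").
// A real syncer would populate it from informer events rather than ad-hoc writes.
type policyCache struct {
	mu       sync.RWMutex
	policies map[string]*SyncerPolicy // keyed by VirtualCluster name
}

func (c *policyCache) Get(vcName string) (*SyncerPolicy, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	p, ok := c.policies[vcName]
	return p, ok
}

func (c *policyCache) Set(vcName string, p *SyncerPolicy) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.policies == nil {
		c.policies = map[string]*SyncerPolicy{}
	}
	c.policies[vcName] = p
}
```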
@christopherhein @Fei-Guo
I am not fully convinced of the need to make syncing of super cluster resources per-tenant. We can discuss it in the community meeting.
An update: we discussed this a handful of weeks ago, and the consensus was that this makes sense in the case where you want to expose "platform/super cluster" features to specific tenants.
Implementation-wise:
- Labels on resources that explicitly allow this type of control, e.g. tenancy.x-k8s.io/public.clusters: <vcname>,<vcname> (see the sketch after this list)
- Update the VC syncer to support loading configs from a file, then mutating this to allow specific resources
- A CRD for the syncer that would allow each cluster to have additional tooling.
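A rough sketch of the label-based option, checking whether a super cluster object is exposed to a given virtual cluster; only the tenancy.x-k8s.io/public.clusters label comes from the comment above, the rest is assumed.

```go
package policy

import (
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const publicClustersLabel = "tenancy.x-k8s.io/public.clusters"

// publicToCluster reports whether the object's comma-separated
// tenancy.x-k8s.io/public.clusters label lists the given virtual cluster.
func publicToCluster(obj metav1.Object, vcName string) bool {
	value, ok := obj.GetLabels()[publicClustersLabel]
	if !ok {
		return false
	}
	for _, name := range strings.Split(value, ",") {
		if strings.TrimSpace(name) == vcName {
			return true
		}
	}
	return false
}
```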
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.