Clarify namespace sameness control via optional policy
After many discussions with customers and implementors, I think we need to clarify that implementations should have an axis of freedom around implementation-defined control of sameness.
E.g. "Sameness applies to
This should be ALLOWED but not REQUIRED.
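For illustration only, here is a minimal Go sketch of the kind of implementation-defined scoping this would allow. The names (SamenessScope, SamenessPolicy, AppliesTo) are hypothetical and not a proposed API shape; they are just a way to make "sameness applies to these namespaces only in these clusters" concrete.

```go
// Hypothetical sketch only: none of these types exist in the MCS API.
package main

import (
	"fmt"
	"slices"
)

// SamenessScope is a hypothetical policy entry: the listed namespaces are
// treated as "the same" only across the listed member clusters.
type SamenessScope struct {
	Namespaces []string
	Clusters   []string
}

// SamenessPolicy is a hypothetical, implementation-defined policy; any
// (namespace, cluster) pair not matched by a scope gets no sameness guarantee.
type SamenessPolicy struct {
	Scopes []SamenessScope
}

// AppliesTo reports whether the policy grants namespace sameness for the
// given namespace in the given cluster.
func (p SamenessPolicy) AppliesTo(namespace, cluster string) bool {
	for _, s := range p.Scopes {
		if slices.Contains(s.Namespaces, namespace) && slices.Contains(s.Clusters, cluster) {
			return true
		}
	}
	return false
}

func main() {
	// "prod" is the same only in us-east and us-west; "staging" only in dev-1.
	policy := SamenessPolicy{Scopes: []SamenessScope{
		{Namespaces: []string{"prod"}, Clusters: []string{"us-east", "us-west"}},
		{Namespaces: []string{"staging"}, Clusters: []string{"dev-1"}},
	}}
	fmt.Println(policy.AppliesTo("prod", "us-east"))  // true
	fmt.Println(policy.AppliesTo("prod", "dev-1"))    // false
	fmt.Println(policy.AppliesTo("staging", "dev-1")) // true
}
```

The point is the axis of freedom itself, not this particular shape: an implementation could express the same scoping as a CRD, a flag, or external configuration.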
/sig multi-mcluster
@thockin: The label(s) sig/multi-mcluster cannot be applied, because the repository doesn't have them.
In response to this:
After many discussions with customers and implementors, I think we need to clarify that implementations should have an axis of freedom around implementation-defined control of sameness.
E.g. "Sameness applies to
only in , and to in ." This should be ALLOWED but not REQUIRED.
/sig multi-mcluster
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: thockin
To complete the pull request process, please assign jeremyot after the PR has been reviewed.
You can assign the PR to them by writing /assign @jeremyot in a comment when ready.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
/sig multi-cluster /assign lauralorenz
@thockin: The label(s) sig/multi-cluster cannot be applied, because the repository doesn't have them.
In response to this:
/sig multi-cluster /assign lauralorenz
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: thockin
Once this PR has been reviewed and has the lgtm label, please assign jeremyot for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
@thockin Sorry for taking so long to follow up on this - I'd like to share a bit more context motivating my concerns.
Very much agreed that enforcing "perfect" sameness or disjointedness can be quite difficult or impractical in many contexts. The concern I have about this proposal comes from the perspective of MCS API consumers, particularly service meshes or third-party cluster management projects. Allowing MCS API providers to loosen these guarantees in an implementation-specific manner, without introducing a standard API to express where and how namespace sameness does or does not apply, would make it significantly more difficult to build additional functionality on top of the MCS primitives or to offer a multi-cloud MCS API implementation.
This is definitely a real adoption challenge, but I think proposing a concrete experimental API to address it would be a better alternative than removing the guarantee entirely. Would this perhaps be a topic worth adding to the agenda for a future SIG-Multicluster meeting?
Hi @mikemorris,
the concern I have about this proposal is coming from the perspective of MCS API consumers, particularly service meshes or third-party cluster management projects
ACK this. The problem I see here is very similar to the "inventory" problem - defining a control-plane API in terms of Kubernetes resources implies a cluster somewhere, which a) is a lot of overhead; b) has all the downsides of a cluster (including SPOF).
We can't assume that there's a single API endpoint for all clusters (could be a regional replica) or that the "localcluster" is the source of truth or that it is not. All of those are viable models with real tradeoffs.
So given a (mostly hypothetical, sadly) project that wants to do something interesting across a clusterset, what API would satisfy it? What API would satisfy them all?
I've mostly seen it going the other way - some "bridge actor" which is aware of the clusterset source-of-truth extracts the requisite information and pre-cooks it into something the (somewhat less hypothetical) project can consume. This is complicated and won't scale to hundreds of such projects, but it does let them all do their own thing, without us defining the constraints of how they work or what they are allowed to know. I am not sure this is ideal in the long term, but I am afraid we don't know enough (yet?) to do better.
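To make the "bridge actor" pattern concrete, here is a small, entirely hypothetical Go sketch (every name is made up, and the clusterset source of truth is faked as an in-memory list): the bridge reads the membership from wherever the real source of truth lives and pre-cooks it into the shape one downstream consumer expects.

```go
// Hypothetical sketch only: a "bridge actor" that knows the clusterset
// source of truth and pre-cooks it for one downstream consumer.
package main

import (
	"encoding/json"
	"fmt"
)

// Member is the bridge's view of one cluster in the clusterset, as learned
// from whatever the real source of truth is (a registry, a cloud API, ...).
type Member struct {
	Name     string
	Endpoint string
}

// ConsumerClusterConfig is the (made-up) shape one downstream project wants
// its cluster inventory in.
type ConsumerClusterConfig struct {
	ClusterName string `json:"clusterName"`
	APIServer   string `json:"apiServer"`
}

// precook translates the clusterset view into the consumer's format; a real
// bridge would also handle credentials, drift, and membership changes.
func precook(members []Member) []ConsumerClusterConfig {
	out := make([]ConsumerClusterConfig, 0, len(members))
	for _, m := range members {
		out = append(out, ConsumerClusterConfig{ClusterName: m.Name, APIServer: m.Endpoint})
	}
	return out
}

func main() {
	// Faked source of truth; in practice this comes from the clusterset's
	// actual registry, wherever that lives.
	members := []Member{
		{Name: "us-east", Endpoint: "https://east.example.com:6443"},
		{Name: "us-west", Endpoint: "https://west.example.com:6443"},
	}
	cooked, err := json.MarshalIndent(precook(members), "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(cooked))
}
```

In practice each consumer wants a different shape for this inventory, which is why the bridge approach works per-project but won't scale to hundreds of them.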
How many such projects exist and have their own notion of config? For example:
https://argo-cd.readthedocs.io/en/stable/user-guide/commands/argocd_cluster_add/
https://ranchermanager.docs.rancher.com/v2.5/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters
https://istio.io/latest/docs/setup/install/multicluster/multi-primary/
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale