
Clarify namespace sameness control via optional policy

Open thockin opened this issue 2 years ago • 18 comments

After many discussions with customers and implementors, I think we need to clarify that implementations should have an axis of freedom around implementation-defined control of sameness.

E.g. "Sameness applies to only in , and to in ."

This should be ALLOWED but not REQUIRED.
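To make the proposal concrete, here is a purely hypothetical sketch of how an implementation-defined sameness policy might be evaluated. The `NamespaceSamenessPolicy` type and all of its fields are illustrative assumptions, not part of the MCS API or any real implementation:

```go
package main

import "fmt"

// NamespaceSamenessPolicy is a hypothetical, implementation-defined
// policy: sameness applies only to the listed namespaces, and only
// within the listed member clusters. Empty list = applies to all.
type NamespaceSamenessPolicy struct {
	Namespaces []string // namespaces sameness applies to
	Clusters   []string // member clusters it applies in
}

// matches reports whether s is covered by list; an empty list
// is treated as "matches everything".
func matches(list []string, s string) bool {
	if len(list) == 0 {
		return true
	}
	for _, v := range list {
		if v == s {
			return true
		}
	}
	return false
}

// SamenessApplies reports whether the policy treats the given
// namespace as "the same" in the given cluster.
func (p NamespaceSamenessPolicy) SamenessApplies(ns, cluster string) bool {
	return matches(p.Namespaces, ns) && matches(p.Clusters, cluster)
}

func main() {
	// "Sameness applies to the prod namespace only, in clusters a and b."
	p := NamespaceSamenessPolicy{
		Namespaces: []string{"prod"},
		Clusters:   []string{"cluster-a", "cluster-b"},
	}
	fmt.Println(p.SamenessApplies("prod", "cluster-a")) // true
	fmt.Println(p.SamenessApplies("dev", "cluster-a"))  // false
}
```

The point of the sketch is only that such a policy is an axis of freedom an implementation may expose; the proposal is that this be allowed, not that any particular policy shape be standardized.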

/sig multi-mcluster

thockin avatar Jul 19 '22 03:07 thockin

@thockin: The label(s) sig/multi-mcluster cannot be applied, because the repository doesn't have them.

In response to this:

After many discussions with customers and implementors, I think we need to clarify that implementations should have an axis of freedom around implementation-defined control of sameness.

E.g. "Sameness applies to only in , and to in ."

This should be ALLOWED but not REQUIRED.

/sig multi-mcluster

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jul 19 '22 03:07 k8s-ci-robot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: thockin To complete the pull request process, please assign jeremyot after the PR has been reviewed. You can assign the PR to them by writing /assign @jeremyot in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment Approvers can cancel approval by writing /approve cancel in a comment

k8s-ci-robot avatar Jul 19 '22 03:07 k8s-ci-robot

/sig multi-cluster /assign lauralorenz

thockin avatar Jul 19 '22 03:07 thockin

@thockin: The label(s) sig/multi-cluster cannot be applied, because the repository doesn't have them.

In response to this:

/sig multi-cluster /assign lauralorenz

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jul 19 '22 03:07 k8s-ci-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 17 '22 04:10 k8s-triage-robot

/remove-lifecycle stale

thockin avatar Oct 23 '22 03:10 thockin

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 21 '23 03:01 k8s-triage-robot

/remove-lifecycle stale

thockin avatar Jan 21 '23 03:01 thockin

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: thockin Once this PR has been reviewed and has the lgtm label, please assign jeremyot for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment Approvers can cancel approval by writing /approve cancel in a comment

k8s-ci-robot avatar Apr 22 '23 21:04 k8s-ci-robot

@thockin Sorry for taking so long to follow up on this - I'd like to share a bit more context motivating my concerns.

Very much agreed that enforcing "perfect" sameness or disjointedness can be difficult or impractical in many contexts. The concern I have about this proposal comes from the perspective of MCS API consumers, particularly service meshes and third-party cluster management projects. Allowing MCS API providers to loosen these guarantees in an implementation-specific manner, without introducing a standard API to express where and how namespace sameness does or does not apply, would make it significantly more difficult to build additional functionality on top of the MCS primitives, or to offer a multi-cloud MCS API implementation.

This is definitely a real adoption challenge, but I think proposing a concrete experimental API to address it would be a better alternative than removing this guarantee entirely. Would this perhaps be a topic worth adding to the agenda for a future SIG-Multicluster meeting?

mikemorris avatar Jul 05 '23 16:07 mikemorris

Hi @mikemorris,

the concern I have about this proposal is coming from the perspective of MCS API consumers, particularly service meshes or third-party cluster management projects

ACK this. The problem I see here is very similar to the "inventory" problem - defining a control-plane API in terms of Kubernetes resources implies a cluster somewhere, which a) is a lot of overhead; b) has all the downsides of a cluster (including SPOF).

We can't assume that there's a single API endpoint for all clusters (could be a regional replica) or that the "localcluster" is the source of truth or that it is not. All of those are viable models with real tradeoffs.

So given a (mostly hypothetical, sadly) project that wants to do something interesting across a clusterset, what API would satisfy it? What API would satisfy them all?

I've mostly seen it going the other way - some "bridge actor" which is aware of the clusterset source-of-truth extracts the requisite information and pre-cooks it into something the (somewhat less hypothetical) project can consume. This is complicated and won't scale to hundreds of such projects, but it does let them all do their own thing, without us defining the constraints of how they work or what they are allowed to know. I am not sure this is ideal in the long term, but I am afraid we don't know enough (yet?) to do better.
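A minimal sketch of the "bridge actor" pattern described above, with every type and function name invented for illustration (`Cluster`, `DownstreamEntry`, `Bridge` do not correspond to any real project's API): the actor reads the clusterset source of truth and pre-cooks it into a downstream project's own registration format.

```go
package main

import "fmt"

// Cluster is what a hypothetical clusterset source of truth knows
// about a member cluster.
type Cluster struct {
	Name     string
	Endpoint string
}

// DownstreamEntry is a stand-in for a downstream project's own
// cluster-registration format (e.g. the sort of record that
// `argocd cluster add` or an Istio multi-primary setup maintains).
type DownstreamEntry struct {
	Server string
	Label  string
}

// Bridge translates the clusterset inventory into the downstream
// format. A real bridge actor would also watch for membership
// changes and reconcile; this only shows the translation step.
func Bridge(clusters []Cluster) []DownstreamEntry {
	out := make([]DownstreamEntry, 0, len(clusters))
	for _, c := range clusters {
		out = append(out, DownstreamEntry{Server: c.Endpoint, Label: c.Name})
	}
	return out
}

func main() {
	src := []Cluster{{Name: "cluster-a", Endpoint: "https://a.example:6443"}}
	fmt.Println(Bridge(src))
}
```

The scaling problem noted above is visible even in the sketch: each downstream project needs its own `DownstreamEntry` shape and its own bridge, so the translation work multiplies with the number of consuming projects.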

How many such projects exist and have their own notion of config? For example:

https://argo-cd.readthedocs.io/en/stable/user-guide/commands/argocd_cluster_add/

https://ranchermanager.docs.rancher.com/v2.5/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters

https://istio.io/latest/docs/setup/install/multicluster/multi-primary/

thockin avatar Jul 17 '23 19:07 thockin

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 20 '24 14:01 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Feb 19 '24 14:02 k8s-triage-robot

/remove-lifecycle rotten

thockin avatar Feb 20 '24 16:02 thockin

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 20 '24 17:05 k8s-triage-robot

/remove-lifecycle stale

thockin avatar May 20 '24 17:05 thockin