Support for the "same" team across different orgs
What would you like to be added:
In GitHub (at least GitHub Enterprise), Teams cannot be cross-org. I would like to support cross-org "Teams".
Why is this needed:
To "get around" the problem, we just replicate the same teams across different orgs. It would be nice to reference a Team
, with like teamRef
or something, that is specified at the Orgs
level so they can be reused.
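To make the ask concrete, here is a minimal sketch (in Python, dumping YAML) of the shape such a shared definition could take. The teamRefs field and the overall layout are hypothetical illustrations of the request, not existing peribolos config:

```python
# Hypothetical sketch only: the "teamRefs" field and this top-level layout are
# not part of the existing peribolos config; they illustrate the shape of the ask.
import yaml  # PyYAML

# A single team definition, declared once at the "orgs" level.
shared_teams = {
    "sig-foo-leads": {  # hypothetical team name
        "description": "Leads for SIG Foo",
        "maintainers": ["alice"],
        "members": ["bob", "carol"],
    },
}

# Each org would then reference the shared definition instead of re-declaring it.
org_config = {
    "orgs": {
        "kubernetes": {"teamRefs": ["sig-foo-leads"]},
        "kubernetes-sigs": {"teamRefs": ["sig-foo-leads"]},
    },
}

print(yaml.safe_dump({"teams": shared_teams, **org_config}, sort_keys=False))
```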
Is this with peribolos?
yeah
/cc @fejta @cblecker @nikhita
/sig contributor-experience
A pattern we have used in the past is to make the configs used directly by the tool verbose and low level, and then use something else to generate those configs. Baking this directly into the tool may make it too clever.
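A minimal sketch of that generator pattern, assuming the shared team definitions live in a standalone shared-teams.yaml and each org keeps a peribolos-style config at config/<org>/org.yaml (both paths are assumptions for illustration, not necessarily the real repo layout):

```python
# Sketch of a config generator: expand one shared team definition into the
# verbose per-org config that the tool (peribolos) actually consumes.
# File paths and layout are assumptions for illustration.
import copy
import yaml  # PyYAML

with open("shared-teams.yaml") as f:       # hypothetical shared definition file
    shared = yaml.safe_load(f)             # e.g. {"sig-foo-leads": {...}, ...}

target_orgs = ["kubernetes", "kubernetes-sigs"]  # orgs that should all carry the teams

for org in target_orgs:
    path = f"config/{org}/org.yaml"        # hypothetical per-org config path
    with open(path) as f:
        org_config = yaml.safe_load(f)

    # Replicate every shared team into this org's config, overwriting any
    # previously generated copy so the generator stays idempotent.
    org_config.setdefault("teams", {}).update(copy.deepcopy(shared))

    with open(path, "w") as f:
        yaml.safe_dump(org_config, f, sort_keys=False)
```

Either way, the tool itself would keep consuming only the verbose per-org configs it understands today.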
Somewhat implied by this is the idea that org membership should be cross-org as well. It's unclear whether it would be better to have a "global" overlay, or to treat one org as a source of truth to be mirrored (ref: https://github.com/kubernetes/org/issues/966).
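For comparison, a rough sketch of the source-of-truth option, where one org's teams section is mirrored verbatim into the other orgs' configs (same hypothetical paths as above):

```python
# Sketch of the "one org as source of truth" option: copy the authoritative
# org's teams into every mirrored org's config. Paths are assumptions.
import copy
import yaml  # PyYAML

SOURCE = "config/kubernetes/org.yaml"          # hypothetical source-of-truth config
MIRRORS = ["config/kubernetes-sigs/org.yaml"]  # hypothetical mirrored org configs

with open(SOURCE) as f:
    source_teams = yaml.safe_load(f).get("teams", {})

for path in MIRRORS:
    with open(path) as f:
        cfg = yaml.safe_load(f)
    cfg["teams"] = copy.deepcopy(source_teams)  # mirror teams (and members) verbatim
    with open(path, "w") as f:
        yaml.safe_dump(cfg, f, sort_keys=False)
```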
the Xzibit strategy? 😄 (I put a tool around your tool to make a new tool)
I agree with the "make the tool too clever" thing--the tough part is where to draw the line... We already assume GitHub, so it seems like we should have the same limitations/constraints as GitHub, acknowledging that they will change (we have already been "bitten" by their change to protected branches that match a given pattern stepping on our branches protected via regex).
I do like your idea of generating the config though... 🤔
/area github-management
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen /remove-lifecycle rotten ref: https://github.com/kubernetes/org/issues/966#issuecomment-793159840
I'm going to move this to kubernetes/org; if there are technical implementation details that need to find their way into this repo, we can discuss them in follow-up issues or PRs.
@spiffxp: Reopened this issue.
In response to this:
/reopen /remove-lifecycle rotten ref: https://github.com/kubernetes/org/issues/966#issuecomment-793159840
I'm going to move this to kubernetes/org; if there are technical implementation details that need to find their way into this repo, we can discuss them in follow-up issues or PRs.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
/priority awaiting-more-evidence
I'm bumping this down to lowest priority because it's not clear to me that this is a problem actively plaguing the kubernetes community today. I would love to hear about use cases this is preventing.
/remove-priority important-longterm