Use peribolos to manage team repo permissions
Fixes https://github.com/kubernetes/org/issues/2465
This PR starts using peribolos to manage team repo permissions. Note - peribolos will fail if the repo doesn't exist, so we need to ensure that the repo exists before specifying it in the config. We could probably write a test to query the GitHub API to validate this, but I'd like to cover this in a follow-up.
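(For that follow-up, the repo-existence check could be as simple as the sketch below. This is only an illustration assuming the google/go-github client; `loadTeamRepos` is a made-up helper standing in for parsing the teams.yaml files.)

```go
// Sketch of the possible follow-up test: confirm that every repo referenced
// in the team configs actually exists, by querying the GitHub API.
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/google/go-github/v53/github"
)

// loadTeamRepos is a hypothetical helper that would parse the teams.yaml
// files and return org name -> referenced repo names.
func loadTeamRepos() map[string][]string {
	return map[string][]string{"kubernetes": {"community", "test-infra"}}
}

func main() {
	ctx := context.Background()
	client := github.NewClient(nil) // unauthenticated; fine for a rate-limited check

	exit := 0
	for org, repos := range loadTeamRepos() {
		for _, repo := range repos {
			if _, _, err := client.Repositories.Get(ctx, org, repo); err != nil {
				fmt.Fprintf(os.Stderr, "repo %s/%s is in the config but could not be fetched: %v\n", org, repo, err)
				exit = 1
			}
		}
	}
	os.Exit(exit)
}
```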
Specifically, it includes the following changes:
- Add a new binary in `cmd/restrictions` to define restrictions on the team repo config, so that a team in a sub-folder can't arbitrarily give itself kubernetes/kubernetes admin access. The restrictions config is similar to slack-infra's and can be extended in the future to define further restrictions.
- Add a verify script for restrictions. Note: I've not added this to bazel directly because I want to be able to run this script without bazel. This also means that `verify-all.sh` now contains code to run all `hack/verify-*` scripts.
- Update all org configs to dump team repo permissions.
- There are a few teams that use the `triage` and `maintain` permissions. Peribolos was recently updated to support these (https://github.com/kubernetes/test-infra/pull/21526, thanks @MadhavJivrajani!). To pick up this feature plus some critical fixes related to team permissions (https://github.com/kubernetes/test-infra/pull/19338), this PR also bumps test-infra to the latest master (https://github.com/kubernetes/test-infra/commit/01a7a103473042944f5edb60f0cf4a98be9c60cc). I also copied over the replace statements from test-infra's go.mod on master. Most of the diff is generated bazel changes due to this bump.
- Add a `config/restrictions.yaml` to restrict which repos teams can specify; a sketch of the format follows below. The GitHub Management team will be able to approve changes to this file.
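To make the idea concrete, here is a rough sketch of what `config/restrictions.yaml` could look like. The schema shown is a guess based on the description above (field names like `path` and `allowedRepos` are illustrative, not the final format):

```yaml
# Hypothetical sketch of config/restrictions.yaml -- the real schema may differ.
# The intent: team configs under a given path may only reference the repos
# allowed for that path, so a sub-folder can't grant itself access elsewhere.
restrictions:
  - path: config/kubernetes/sig-contributor-experience/*
    allowedRepos:
      - community
  - path: config/kubernetes/sig-testing/*
    allowedRepos:
      - publishing-bot
      - test-infra
```

With a config like this, the `kubernetes: admin` and `kubernetes: write` entries in the example diff below would both be rejected.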
/cc @spiffxp @cblecker @mrbobbytables /assign @spiffxp
/hold for review
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: nikhita
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~OWNERS~~ [nikhita]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
It'd be nice to nuke some bazel-related code here :(
Example
When this diff is applied:
```diff
diff --git a/config/kubernetes/sig-contributor-experience/teams.yaml b/config/kubernetes/sig-contributor-experience/teams.yaml
index df118895..b77ed564 100644
--- a/config/kubernetes/sig-contributor-experience/teams.yaml
+++ b/config/kubernetes/sig-contributor-experience/teams.yaml
@@ -15,6 +15,7 @@ teams:
     privacy: closed
     repos:
       community: admin
+      kubernetes: admin
   community-maintainers:
     description: ""
     maintainers:
diff --git a/config/kubernetes/sig-testing/teams.yaml b/config/kubernetes/sig-testing/teams.yaml
index 9d0f0187..ec930ed2 100644
--- a/config/kubernetes/sig-testing/teams.yaml
+++ b/config/kubernetes/sig-testing/teams.yaml
@@ -75,6 +75,7 @@ teams:
     repos:
       publishing-bot: admin
       test-infra: admin
+      kubernetes: write
   test-infra-maintainers:
     description: write access to test-infra
     maintainers:
```
We get this error:
```
INFO[0000] Validating restrictions for kubernetes org
ERRO[0000] "kubernetes/sig-contributor-experience/teams.yaml": cannot define repo "kubernetes" for team "community-admins"
ERRO[0000] "kubernetes/sig-testing/teams.yaml": cannot define repo "kubernetes" for team "test-infra-admins"
INFO[0000] Validating restrictions for kubernetes-sigs org
INFO[0000] Validating restrictions for kubernetes-csi org
INFO[0000] Validating restrictions for kubernetes-incubator org
INFO[0000] Validating restrictions for kubernetes-retired org
INFO[0000] Validating restrictions for kubernetes-client org
```
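For the curious, the check that produces this output could be structured roughly as follows. This is a sketch, not the actual `cmd/restrictions` code; `Restriction` and `teamFile` are made-up names, and logrus is assumed only because it matches the INFO/ERRO log format above.

```go
// Sketch of a restriction check over team config files.
package main

import "github.com/sirupsen/logrus"

// Restriction lists the repos that team files under a path may reference.
type Restriction struct {
	AllowedRepos map[string]bool
}

// teamFile is a parsed teams.yaml: team name -> repos it grants access to.
type teamFile struct {
	Path  string
	Teams map[string][]string
}

// validate logs an error for every repo a team defines outside the allowed set.
func validate(org string, files []teamFile, r Restriction) bool {
	logrus.Infof("Validating restrictions for %s org", org)
	ok := true
	for _, f := range files {
		for team, repos := range f.Teams {
			for _, repo := range repos {
				if !r.AllowedRepos[repo] {
					logrus.Errorf("%q: cannot define repo %q for team %q", f.Path, repo, team)
					ok = false
				}
			}
		}
	}
	return ok
}

func main() {
	files := []teamFile{{
		Path:  "kubernetes/sig-testing/teams.yaml",
		Teams: map[string][]string{"test-infra-admins": {"test-infra", "kubernetes"}},
	}}
	r := Restriction{AllowedRepos: map[string]bool{"test-infra": true, "publishing-bot": true}}
	validate("kubernetes", files, r)
}
```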
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Update - had a chat with @MadhavJivrajani, he is going to continue the work for this PR. Thanks, Madhav! :)
I've rebased the PR:
- Added repo permissions for the teams newly created since this PR was opened
- Updated `restrictions.yaml` to reflect the newly added repos
In addition to the rebase, I've also made a few (minor) changes:
- Until now, the code detected violations of the restriction rules and logged them, but the program exited with a 0 exit code and `hack/verify-restrictions.sh` would still pass. If I understand correctly, we want restriction violations to be blocking (and if a restriction rule is to be modified, then gh admins will have to review that); a minimal sketch of the new behavior follows below. If I've misunderstood something here, please lmk, happy to revert the change.
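(A minimal sketch of what "blocking" means here, with hypothetical names; the point is only that the process now exits non-zero so `hack/verify-restrictions.sh` fails:)

```go
// Sketch: previously violations were logged but the program exited 0;
// now any violation produces a non-zero exit code.
package main

import (
	"os"

	"github.com/sirupsen/logrus"
)

// checkRestrictions is a hypothetical stand-in for the real validation.
func checkRestrictions() []string { return nil }

func main() {
	violations := checkRestrictions()
	for _, v := range violations {
		logrus.Error(v)
	}
	if len(violations) > 0 {
		os.Exit(1) // make hack/verify-restrictions.sh fail
	}
}
```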
/cc @nikhita
@MadhavJivrajani: GitHub didn't allow me to request PR reviews from the following users: nikhita.
Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@nikhita: PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle stale
/unassign spiffxp /uncc spiffxp
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/approve (for Windows) /remove-sig windows
@nikhita, I've rebased the PR & made updates similar to what's listed in https://github.com/kubernetes/org/pull/2614#issuecomment-975207117. PTAL. Thank you!
Hello @liggitt @cblecker, I rebased and made a few updates to the PR last week (see https://github.com/kubernetes/org/pull/2614#issuecomment-1431739327).
Could you please do another round of review? Thank you so much!
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I will rebase the PR this week.
LGTM.
Since I had technically authored the original code, I'd like one of @MadhavJivrajani or @palnabarun to approve here.
Summing up the recent changes after the last PR review:
- All org config files updated to add repo permissions for the teams created since this PR was opened; `restrictions.yaml` updated to reflect the same.
- Go version & package dependency bumps in `go.mod`. Major changes, among others:
  - Go version bumped to 1.20
  - `k8s.io/test-infra` bumped to `v0.0.0-20230726003218-c95b43963de2`, pointing to c95b43963de20d3ce1c9fa600d2b859809c3259d (latest head of the master branch as of July 26, 2023, at the time of the last commit)
  - k8s.io staging repos pinned to Kubernetes `v1.27.4` in the replace directive section (see the excerpt below)
- Removed comment lines referencing use of the `update-deps.sh` script, as per the comment above.
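For reference, the resulting `go.mod` would look roughly like this (an illustrative, abridged excerpt; the real file pins many more k8s.io staging modules):

```
go 1.20

require k8s.io/test-infra v0.0.0-20230726003218-c95b43963de2

replace (
	// staging repos pinned to the Kubernetes v1.27.4 release
	k8s.io/api => k8s.io/api v0.27.4
	k8s.io/apimachinery => k8s.io/apimachinery v0.27.4
	k8s.io/client-go => k8s.io/client-go v0.27.4
)
```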
/hold for further acks
Thank you all for working on this and pushing this ahead! ❤️
/lgtm /approve
/hold cancel
/pony party
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: MadhavJivrajani, marosset, nikhita, palnabarun
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~OWNERS~~ [MadhavJivrajani,nikhita,palnabarun]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment