Validate sigs.yaml resources created by other tools
The dream: updating sigs.yaml would result in the automatic creation/deletion of resources managed by other tooling, e.g.
- GitHub labels
  - tool: label_sync
  - repo: kubernetes/test-infra
- GitHub teams (peribolos)
  - tool: peribolos
  - repo: kubernetes/org
- Slack channels (tempelis)
  - tool: tempelis
  - repo: kubernetes/community
The reality: just because something is in sigs.yaml doesn't make it so. Humans need to remember to make PRs to other repos.
As a start, it would be helpful to have a periodic prowjob that can report on which of these resources exist in sigs.yaml but are missing in reality. It could:
- read latest sigs.yaml
- compare labels against https://github.com/kubernetes/test-infra/blob/master/label_sync/labels.yaml
- compare teams against contents of https://github.com/kubernetes/org/tree/master/config
- compare slack channels against contents of https://github.com/kubernetes/community/tree/master/communication/slack-config
- fail with output of which resources are missing
- send e-mail alert to contribex via testgrid
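For illustration, a minimal sketch of the first comparison (sig labels declared in sigs.yaml vs. label_sync's labels.yaml) might look like the following. The struct fields and the sig/<label> naming convention are assumptions based on the public YAML files, not code taken from the real tools, and a real check would also need to cover wg/, ug/, and committee labels plus per-repo label sections:

```go
// Sketch only: compare sig labels declared in sigs.yaml against the labels
// label_sync knows about, print any that are missing, and exit non-zero so a
// periodic prowjob run would fail. Field names below are assumptions based on
// the public YAML files and may need adjusting.
package main

import (
	"fmt"
	"os"

	"sigs.k8s.io/yaml"
)

// sigsFile is a minimal view of sigs.yaml: only the fields this check needs.
type sigsFile struct {
	Sigs []struct {
		Name  string `json:"name"`
		Label string `json:"label"`
	} `json:"sigs"`
}

// labelsFile is a minimal view of label_sync's labels.yaml default section.
type labelsFile struct {
	Default struct {
		Labels []struct {
			Name string `json:"name"`
		} `json:"labels"`
	} `json:"default"`
}

func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: checklabels <sigs.yaml> <labels.yaml>")
		os.Exit(2)
	}
	var sigs sigsFile
	var labels labelsFile
	mustUnmarshal(os.Args[1], &sigs)
	mustUnmarshal(os.Args[2], &labels)

	// Index every label that label_sync manages in its default section.
	known := map[string]bool{}
	for _, l := range labels.Default.Labels {
		known[l.Name] = true
	}

	// Assume the GitHub label for a sig is "sig/<label>"; wg/, ug/, and
	// committee entries would need the same treatment in a real check.
	missing := 0
	for _, s := range sigs.Sigs {
		want := "sig/" + s.Label
		if !known[want] {
			fmt.Printf("MISSING: %s (declared by %q in sigs.yaml)\n", want, s.Name)
			missing++
		}
	}
	if missing > 0 {
		os.Exit(1) // non-zero exit so the prowjob turns red
	}
}

func mustUnmarshal(path string, out interface{}) {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	if err := yaml.Unmarshal(data, out); err != nil {
		panic(err)
	}
}
```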
/kind feature
/sig contributor-experience
I filed this because I noticed the wg/reliability label was missing (original PR to create: https://github.com/kubernetes/community/pull/5127)
One other item I would like to see is automatic updating of the leads mailing list and subsequent sig-foo-leads lists.
/help I'm willing to help provide reviews and guidance on how to write a prowjob that does this
My suggestion would be to extend the generator app to optionally take paths to local copies of the files/directories mentioned above, and add a flag/verb for validation against those paths if provided
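As a rough sketch of that flag/verb idea, the wiring could look something like the following; the flag names and the validate helper are purely illustrative and not part of the actual generator:

```go
// Hypothetical flag wiring for an opt-in validation mode; nothing here is
// taken from the real generator code.
package main

import (
	"flag"
	"log"
)

func main() {
	// Optional paths to local copies of the other tools' config; validation
	// only runs when at least one of them is provided.
	labelsPath := flag.String("labels-path", "", "path to a local copy of label_sync/labels.yaml")
	orgConfigDir := flag.String("org-config-dir", "", "path to a local clone of kubernetes/org config/")
	slackConfigDir := flag.String("slack-config-dir", "", "path to communication/slack-config/")
	flag.Parse()

	if *labelsPath == "" && *orgConfigDir == "" && *slackConfigDir == "" {
		return // no paths given: behave exactly as the generator does today
	}
	if err := validate(*labelsPath, *orgConfigDir, *slackConfigDir); err != nil {
		log.Fatalf("sigs.yaml validation failed: %v", err)
	}
}

// validate is a placeholder for the comparisons described in the issue body.
func validate(labelsPath, orgConfigDir, slackConfigDir string) error {
	// ...compare sigs.yaml entries against whichever sources were provided...
	return nil
}
```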
FYI @nikhita if you have other suggestions
@spiffxp: This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/help I'm willing to help provide reviews and guidance on how to write a prowjob that does this
My suggestion would be to extend the generator app to optionally take paths to local copies of the files/directories mentioned above, and add a flag/verb for validation against those paths if provided
FYI @nikhita if you have other suggestions
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
One other item I would like to see is automatic updating of the leads mailing list and subsequent sig-foo-leads lists.
This is going to require that email addresses find their way into sigs.yaml.
And movement of the sig-foo-leads lists to kubernetes.io groups. I would like to see group approval sharded (same model as kubernetes/test-infra job configs and kubernetes/org team configs) before we lean too heavily on groups. One difference: creation/deletion of groups should require root approval (but membership additions can be fully delegated)
Another good fit for this: validate that the OWNERS files listed for subprojects actually exist (ref: https://github.com/kubernetes/community/issues/4125)
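A hedged sketch of that OWNERS-existence check, assuming purely for illustration that the owners entries in sigs.yaml are raw.githubusercontent.com URLs and that the referenced repos have been pre-cloned under a single local root:

```go
// Sketch only: verify that OWNERS files referenced from sigs.yaml exist in
// pre-cloned repos. The URL layout and the <cloneRoot>/<org>/<repo> directory
// convention are assumptions for illustration.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// ownersURLToLocalPath maps, e.g.,
// https://raw.githubusercontent.com/kubernetes/kubernetes/master/OWNERS
// to <cloneRoot>/kubernetes/kubernetes/OWNERS.
func ownersURLToLocalPath(cloneRoot, ownersURL string) (string, bool) {
	const prefix = "https://raw.githubusercontent.com/"
	if !strings.HasPrefix(ownersURL, prefix) {
		return "", false
	}
	parts := strings.SplitN(strings.TrimPrefix(ownersURL, prefix), "/", 4) // org, repo, branch, path
	if len(parts) != 4 {
		return "", false
	}
	return filepath.Join(cloneRoot, parts[0], parts[1], parts[3]), true
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkowners <clone-root> <owners-url>...")
		os.Exit(2)
	}
	cloneRoot := os.Args[1]
	missing := 0
	for _, u := range os.Args[2:] { // OWNERS URLs extracted from sigs.yaml subprojects
		path, ok := ownersURLToLocalPath(cloneRoot, u)
		if !ok {
			fmt.Printf("UNRECOGNIZED: %s\n", u)
			missing++
			continue
		}
		if _, err := os.Stat(path); err != nil {
			fmt.Printf("MISSING: %s (expected at %s)\n", u, path)
			missing++
		}
	}
	if missing > 0 {
		os.Exit(1)
	}
}
```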
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Hey @spiffxp
Is this something I could pick up? I'm new to the community, happy to get my hands dirty with a bit of direction.
@bunniseng sure! If you comment /assign, the issue will get assigned to you.
Comparing the labels might be the easiest place to start, since it's a single file. If you can write something that does that comparison, outputs diffs, and exits non-zero if there are diffs... it's pretty straightforward to turn that into a prowjob.
The slack channels and teams are going to require walking either remote APIs (source of actual configuration), or directory trees (source of intended configuration). I would recommend vetting against source of intended configuration.
Personally I would implement the directory tree walking by pre-cloning those repos someplace, and passing the locations of those paths to the tool, rather than having the tool try to be smart and auto-clone the repos or walk the hierarchy remotely on its own.
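For example, a minimal sketch of that pre-clone-and-walk approach might look like this; the workspace paths are hypothetical, and parsing of the individual files is left as a placeholder:

```go
// Sketch only: the job pre-clones the relevant repos, then passes their local
// paths to the tool, which just walks directories; no remote API calls.
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"
)

// collectYAML returns every .yaml file under root, relative to root.
func collectYAML(root string) ([]string, error) {
	var files []string
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, walkErr error) error {
		if walkErr != nil {
			return walkErr
		}
		if !d.IsDir() && strings.HasSuffix(path, ".yaml") {
			if rel, err := filepath.Rel(root, path); err == nil {
				files = append(files, rel)
			}
		}
		return nil
	})
	return files, err
}

func main() {
	// e.g. /workspace/org/config and /workspace/community/communication/slack-config,
	// cloned by the prowjob (or a wrapper script) before the tool runs.
	for _, root := range os.Args[1:] {
		files, err := collectYAML(root)
		if err != nil {
			fmt.Fprintf(os.Stderr, "walking %s: %v\n", root, err)
			os.Exit(1)
		}
		for _, f := range files {
			fmt.Printf("%s: %s\n", root, f)
			// ...parse f and compare its teams/channels against sigs.yaml...
		}
	}
}
```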
@spiffxp can we have a program that runs at a periodic interval, compares the .yaml file against the label.md file to check for missing labels, and makes the needed change if the condition is satisfied?
Working with @AvineshTripathi on this :)
forgot about this ;) /assign
/assign
We have a couple of queries:
- Where will this job/code reside?
- How do we import constructs from the generator and label_sync tools? We were testing things out by building a local tool and needed structures to read the YAML files, like https://github.com/kubernetes/test-infra/blob/7daeb0b726c1124f5bd09cd9f7ba08174747db0b/label_sync/main.go#L85, but the tools don't seem to be importable.
cc @spiffxp
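On the importability point: label_sync builds as a standalone binary, so its types appear to live in package main and can't be imported directly. A common workaround is to declare a minimal local struct with just the fields you need, as in this sketch (field names are assumed from the public labels.yaml rather than imported, following the same pattern as the earlier label-comparison sketch):

```go
// A minimal local mirror of just enough of label_sync's labels.yaml schema;
// field names are assumed from the public file rather than imported.
package main

import (
	"fmt"
	"os"

	"sigs.k8s.io/yaml"
)

type labelsConfig struct {
	Default struct {
		Labels []struct {
			Name        string `json:"name"`
			Color       string `json:"color"`
			Description string `json:"description"`
		} `json:"labels"`
	} `json:"default"`
}

func main() {
	data, err := os.ReadFile(os.Args[1]) // path to a local labels.yaml
	if err != nil {
		panic(err)
	}
	var cfg labelsConfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("loaded %d default labels\n", len(cfg.Default.Labels))
}
```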
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
@nikhita Isn't this issue resolved with the addition of the maintainers tool, or are there still some features left to add to the tool before this issue can be resolved?
@AvineshTripathi the maintainers tool needs to be integrated into a prow job to inform when sigs.yaml gets out of date. @RaghavRoy145 is working on this.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.