test-infra
k8s-triage-robot should not be closing important bugs
We were just chatting with @liggitt about the reliability bar, and one of the action items (AIs) from the discussion was to ensure that important bugs aren't auto-closed.
Basically, any issue that is marked with `kind/bug` plus one of `priority/{important-soon,important-longterm,critical-urgent}` should not be marked stale/rotten or closed by k8s-triage-robot.
A couple of different jobs in this file have to be updated to do this: https://github.com/kubernetes/test-infra/blob/705997b53f349731aa03c355c50637af574a2917/config/jobs/kubernetes/sig-k8s-infra/trusted/sig-contribex-k8s-triage-robot.yaml#L133
Summarizing the later discussion, the action items are:
- [x] we want to proceed with not closing issues that are what's originally proposed + triage/accepted
- [ ] we want to ensure that SIGs are actually triaging issues
- [ ] we want to ensure that issues that don't have SIG assigned are also triaged
- [ ] we want to additionally provide some customization for timelines (e.g. enhancements repo wants more than 3 months for getting stale)
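The agreed rule in the first item can be sketched as a predicate over an issue's labels. The label names are the real ones from this thread; the function itself is purely illustrative (the actual bot works via GitHub search queries, not per-issue code):

```python
# Illustrative sketch of the protection rule, not the bot's actual code:
# the robot expresses this via GitHub search queries instead.
PROTECTED_PRIORITIES = {
    "priority/important-soon",
    "priority/important-longterm",
    "priority/critical-urgent",
}

def protected_from_autoclose(labels: set[str]) -> bool:
    """True if an issue should never be auto-marked rotten or closed."""
    return (
        "kind/bug" in labels
        and "triage/accepted" in labels
        and bool(labels & PROTECTED_PRIORITIES)
    )
```

For example, an issue labeled `kind/bug` + `triage/accepted` + `priority/important-soon` would be protected, while a `kind/bug` without `triage/accepted` still ages out as before.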
@kubernetes/sig-contributor-experience /help wanted
/sig contributor-experience
xref https://github.com/kubernetes/kubernetes/issues/103151
I agree with this scoped starting point. A confirmed bug that is marked important/critical should not be autoclosed.
I could actually see the stale/rotten labels being useful/interesting to indicate inactivity/neglect, but not auto-closing
+1 - didn't think about it but it makes perfect sense
Also thanks for cross-referencing. Adding some folks here explicitly then: @dims @BenTheElder @sftim @ehashman @spiffxp
for reference, here's a query of the closed lifecycle/rotten bugs with important or critical priority: https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+is%3Aclosed+label%3Alifecycle%2Frotten+label%3Akind%2Fbug+label%3Apriority%2Fimportant-longterm%2Cpriority%2Fimportant-soon%2Cpriority%2Fcritical-urgent+
Should we just go ahead and reopen them? [Not now, but after changing the bot]
The bot does search queries so changing them is trivial, but up to contributor experience to approve.
I think the problem is what counts as a "confirmed" bug.
E.g. you could just apply /lifecycle frozen to accepted bugs, but anyone can do this. The same is true for /kind bug and /priority critical-urgent, though.
I'd be happy to see accepted issues never closing, or rotting with a much longer interval (e.g. 12 months). We can still explicitly freeze key issues.
I agree that keeping their lifecycle state is useful knowledge, and I believe bugs labeled with triage/accepted not closing is an acceptable middle-ground. Only org members can use the command, so I think there is sufficient gating to prevent abuse.
IMO it's probably worth sending to the contribex/k-dev mailing lists and the community meeting this week for broader discussion.
.... or even adding it to the Community Meeting agenda.
We just discussed that during the community meeting.
The outcome was that:
- we want to proceed with not closing issues that are what's originally proposed + triage/accepted
- we want to ensure that SIGs are actually triaging issues
- we want to ensure that issues that don't have SIG assigned are also triaged
- we want to additionally provide some customization for timelines (e.g. enhancements repo wants more than 3 months for getting stale)
All of those are valid requests, but we shouldn't block the first item on the others.
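For the timeline-customization item, assuming the stale check ultimately boils down to an "updated before" cutoff date in the search query, a per-repo override could be as simple as the sketch below (the override table and repo names are hypothetical, not existing config):

```python
from datetime import date, timedelta

DEFAULT_STALE_DAYS = 90
# Hypothetical per-repo overrides; the enhancements value reflects the
# "more than 3 months" wish from the discussion above.
STALE_DAYS_OVERRIDES = {"kubernetes/enhancements": 180}

def stale_cutoff(repo: str, today: date) -> str:
    """Date (YYYY-MM-DD) before which a last update counts as stale."""
    days = STALE_DAYS_OVERRIDES.get(repo, DEFAULT_STALE_DAYS)
    return (today - timedelta(days=days)).isoformat()
```

E.g. `stale_cutoff("kubernetes/kubernetes", date(2022, 5, 1))` gives `"2022-01-31"`, while the enhancements repo gets a cutoff 180 days back instead.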
So here is the query (for the k/k example) of the issues that should NOT be closed:
https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+-label%3Alifecycle%2Ffrozen+label%3Alifecycle%2Frotten+label%3Akind%2Fbug+label%3Atriage%2Faccepted+label%3Apriority%2Fcritical-urgent%2Cpriority%2Fimportant-soon%2Cpriority%2Fimportant-longterm+
here is the query that is used currently: https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+-label%3Alifecycle%2Ffrozen+label%3Alifecycle%2Frotten
But I haven't yet figured out how to get the diff between those two. Any hints?
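One way to get that diff: both URLs are just GitHub search queries, so a small script against the GitHub search REST API (`GET /search/issues`) can fetch the issue numbers matched by each and subtract the sets. The query strings below are taken from the two URLs above; pagination is simplified, the token is optional, and the actual network calls are left commented out:

```python
import json
import os
import urllib.parse
import urllib.request

API = "https://api.github.com/search/issues"

def search_issue_numbers(query: str, max_pages: int = 10) -> set[int]:
    """Collect the issue numbers matched by a GitHub search query."""
    numbers: set[int] = set()
    for page in range(1, max_pages + 1):
        params = urllib.parse.urlencode(
            {"q": query, "per_page": 100, "page": page}
        )
        req = urllib.request.Request(f"{API}?{params}")
        token = os.environ.get("GITHUB_TOKEN")  # optional, raises rate limits
        if token:
            req.add_header("Authorization", f"token {token}")
        with urllib.request.urlopen(req) as resp:
            items = json.load(resp).get("items", [])
        numbers.update(item["number"] for item in items)
        if len(items) < 100:  # short page means we hit the last one
            break
    return numbers

# Current close query vs. the "should NOT be closed" query from above.
CURRENT_QUERY = (
    "repo:kubernetes/kubernetes is:issue "
    "-label:lifecycle/frozen label:lifecycle/rotten"
)
PROTECTED_QUERY = (
    CURRENT_QUERY
    + " label:kind/bug label:triage/accepted"
    + " label:priority/critical-urgent,priority/important-soon,"
      "priority/important-longterm"
)

# still_closable = (search_issue_numbers(CURRENT_QUERY)
#                   - search_issue_numbers(PROTECTED_QUERY))
```

Caveat: the search API caps results at 1000 per query, so for the full rotten set you'd need to slice the query (e.g. by date range) rather than rely on pagination alone.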
The configuration of the bot is here: https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/sig-k8s-infra/trusted/sig-contribex-k8s-triage-robot.yaml
The process of recording consensus and notifying folks is here: https://github.com/kubernetes/community/blob/master/sig-contributor-experience/charter.md#cross-cutting-and-externally-facing-processes
@cblecker - thanks; the announcement has been sent: https://groups.google.com/a/kubernetes.io/g/leads/c/PYjDxRh8ghQ
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
There are still things to do described in: https://github.com/kubernetes/test-infra/issues/25967#issuecomment-1105612832
Can you render that as a checklist in the original issue comment so that everyone knows what still needs to be done? Thanks!
Done
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
:eyes:
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale