Prow seems to assign Assignees who are not in OWNERS file
In https://github.com/opendatahub-io/notebooks/pull/587, after I added my LGTM review, openshift-ci (bot) came along and assigned adelton as an Assignee.
I'm not even listed in https://github.com/opendatahub-io/notebooks/blob/main/OWNERS, so this is quite surprising and does not seem to match the pull request process referenced at https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md#the-code-review-process.
Should the Prow behaviour (perhaps in https://github.com/kubernetes-sigs/prow/tree/main/pkg/plugins/assign) be modified, or should this be brought to https://github.com/kubernetes/community/ for clarification?
Right, Prow does that for approvers after they approve a PR. I am actually not sure why it does that :D I'm not sure how your Prow is configured, but it can be configured to treat GitHub approvals as Prow approvals, which I think is the case here: your review was a GitHub approval.
For completeness, here's our Prow configs for the repo.
First, https://github.com/openshift/release/tree/master/ci-operator/config/opendatahub-io/notebooks (that's just CI jobs, probably not relevant).
And second:
- global Prow config: https://github.com/openshift/release/blob/master/core-services/prow/02_config/_config.yaml
- repo-specific config: https://github.com/openshift/release/blob/master/core-services/prow/02_config/opendatahub-io/notebooks/_prowconfig.yaml
- plugin config: https://github.com/openshift/release/blob/master/core-services/prow/02_config/opendatahub-io/notebooks/_pluginconfig.yaml
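For illustration, here is a minimal sketch of the kind of plugin settings that govern this behaviour. These are hypothetical values, not the actual contents of the linked configs; the field names come from Prow's `lgtm` and `approve` plugin configuration:

```yaml
# Hypothetical excerpt of a Prow plugins.yaml for one repo (sketch only).
# review_acts_as_lgtm makes a GitHub "Approve" review count as /lgtm.
# ignore_review_state: false (the default) makes a GitHub approval
# count toward the approve plugin as well.
lgtm:
  - repos:
      - opendatahub-io/notebooks
    review_acts_as_lgtm: true
approve:
  - repos:
      - opendatahub-io/notebooks
    require_self_approval: false
    ignore_review_state: false
plugins:
  opendatahub-io/notebooks:
    plugins:
      - assign
      - lgtm
      - approve
```

With settings like these, a plain GitHub approval is treated as an approval by Prow, which would explain the assignment observed in the PR above.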
> I am actually not sure why it does that :D
It's just the Kubernetes workflow: assigning the people who take over the review/approval gives more visibility.
NOTE: anyone can /assign anyhow.
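For example, to change the assignment on a PR, anyone can leave a comment with Prow's assign-plugin commands (sketch; `adelton` here is just the user from this thread):

```
/assign @adelton
/unassign @adelton
```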
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.