Missing documentation on required tests
Looking at the architecture document https://docs.prow.k8s.io/docs/overview/architecture/, I do not understand what controls the contents of this block in a PR on GitHub.
What adds required tests? Those do not come from ProwJobs, right? I have a mysterious one (`pull-kubestellar-kubestellar-validate-prow-yaml`) that I do not know the origin of (and I cannot `/test` it).
The items shown as Required are configured through GitHub's branch protection concept. That can be configured through Prow's branchprotector component. The applied settings can either originate from the explicit config or be derived from ProwJob configuration (non-optional, unconditionally triggered jobs).
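For illustration, a presubmit of that second kind would be declared in the Prow job config roughly as below. This is a hedged sketch, not the actual kubestellar configuration: the repo and job name are taken from the question above, while the image and command are placeholders.

```yaml
presubmits:
  kubestellar/kubestellar:
  - name: pull-kubestellar-kubestellar-validate-prow-yaml
    always_run: true   # unconditionally triggered on every PR
    optional: false    # non-optional, so branchprotector can derive it as a required status check
    decorate: true
    spec:
      containers:
      - image: example.com/prow-yaml-validator:latest  # hypothetical image
        command:
        - validate-prow-yaml                           # hypothetical command
```

Because the job is both `always_run: true` and `optional: false`, branchprotector would include its context in the branch's required status checks without any explicit listing.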
@petr-muller: thank you for providing the information here. The issue is that this information needs to appear in the Prow documentation.
Two of the links I provided lead to Prow documentation, one leads to the documentation for the component that controls the GH settings you asked about. Obviously the docs can always get better.
The clue I was missing is that the block I was asking about is part of a broader GitHub concept called "status checks", which is populated not just by successfully configured Prow jobs but also by a setting in a branch protection rule. https://docs.prow.k8s.io/docs/components/optional/branchprotector/#policy-configuration explains how to configure Prow to maintain that branch protection rule setting. But I did not know about that setting, and I did not find it when I went looking for information about the status block in the PR, which I had assumed was populated only by Prow jobs. So I think what would be helpful is documentation that describes that status block and how Prow relates to it.
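Concretely, the branch protection rule that injects a required status check can be maintained by Prow via a `branch-protection` stanza in its config. The following is a sketch assuming the repo from above; the extra context name is purely illustrative:

```yaml
branch-protection:
  orgs:
    kubestellar:
      repos:
        kubestellar:
          branches:
            main:
              protect: true
              required_status_checks:
                contexts:                # contexts listed here are required in addition to
                - some-external-check    # those derived from non-optional, always-run ProwJobs
```

Contexts added this way show up as Required in the PR status block even though no ProwJob produces them, which is exactly the situation that confused me.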
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.