Allow `ok-to-test` label to approve GitHub workflow runs for new contributors
What would you like to be added:
The ability for Prow to approve GitHub workflow runs after the ok-to-test label has been added to a PR, via GitHub's API: https://docs.github.com/en/rest/reference/actions#approve-a-workflow-run-for-a-fork-pull-request
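For illustration, a minimal sketch of that approve call, assuming a plain net/http client and a token with write access to Actions; this is not Prow's GitHub client, just the raw documented endpoint:

```go
package main

import (
	"fmt"
	"net/http"
)

// approveWorkflowRun calls the documented endpoint
// POST /repos/{owner}/{repo}/actions/runs/{run_id}/approve.
func approveWorkflowRun(owner, repo string, runID int64, token string) error {
	url := fmt.Sprintf("https://api.github.com/repos/%s/%s/actions/runs/%d/approve",
		owner, repo, runID)
	req, err := http.NewRequest(http.MethodPost, url, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "token "+token)
	req.Header.Set("Accept", "application/vnd.github+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// The endpoint responds 201 Created on success.
	if resp.StatusCode/100 != 2 {
		return fmt.Errorf("approving run %d: unexpected status %s", runID, resp.Status)
	}
	return nil
}
```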
Why is this needed:
New contributors are blocked from running GitHub workflows even after a maintainer adds the ok-to-test label to a PR, and contributors shouldn't need to rely on GUI access to approve workflow runs. See knative/test-infra/issues/3057.
/sig contributor-experience
/sig k8s-infra
@stevekuznetsov this is more of a follow-up on https://github.com/kubernetes/test-infra/issues/21581 to add more features to Prow to integrate with GitHub Actions. Since you helped review that PR, your thoughts/suggestions would be valuable here. Thanks!
/area prow
It also looks like the approval needs to occur any time there's a new change (i.e., a pushed commit) on the PR.
@cjwagner @alvaroaleman, wdyt?
I'm not sure if I agree with the decision here, but it seems like that decision has already been made and this PR is a continuation, so it is probably fine.
I am primarily concerned about the UX consistency and about surprising behavior for users (both already mentioned).
Hehe, I would not call https://github.com/kubernetes/test-infra/issues/21581#issuecomment-811143968 a "decision" - at that time I was just offering an opinion. The code change landed in https://github.com/kubernetes/test-infra/pull/24726.
At this point (and with some experience of how Actions and Prow integrate) I'm not too fond of continuing the integration either. Our experience is that Actions and Prow sometimes have surprising and confusing behavior.
> Our experience is that Actions and Prow sometimes have surprising and confusing behavior.
What have you seen?
Conceptually I am fine with this, but I'm not sure how easy it is to implement. The approval seems straightforward, but before we can approve anything we have to find out the run_id. This also entails additional API calls, so we might want to feature-gate it.
The name shown in the UI for checkruns is a combination of the suite and the check, or something like that; I'm not sure we will be able to implement a /test $string_visible_in_github_ui for actions.
> What have you seen?
https://github.com/kubernetes/test-infra/issues/24921 was one thing. Another was confusion about how exactly the identifiers of GitHub Actions (and their sub-entities) work when shown in the GitHub UI, and how they relate to the identifiers you can use in the GitHub API. I remember there was some case where GitHub presents you an identifier that can be present in multiple workflows/actions (I'm not fluent in Actions terminology, sorry).
@petr-muller I'm not sure if the issue you are referring to is the lack of filtering GitHub runs by PR, which was the main issue to solve for https://github.com/kubernetes/test-infra/pull/24726. Fortunately, the head_sha field in the runs list data can be used to filter the runs, as in this initial approach done purely in GitHub Actions: https://github.com/shinigambit/serving/blob/main/.github/workflows/knative-checklabel.yaml. It's pretty much the same approach I used in Go for the Prow retest GitHub Actions feature, except that instead of listing status: failed runs it looks for status: action_required runs. That's just my two cents about the feasibility. I also think it could be done with even fewer calls using the GraphQL API instead of the REST one; that's something I can dig into for this feature, and the findings could probably be used to improve the retest feature as well.
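To make that concrete, here is a rough Go sketch of the discovery step under the same assumptions as the earlier snippet (plain net/http, plus the documented head_sha and status query parameters on the list-runs endpoint). Each run it returns could then be passed to an approve call like the one sketched above:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

type workflowRun struct {
	ID   int64  `json:"id"`
	Name string `json:"name"`
}

// pendingRuns lists workflow runs for a head SHA that are still waiting
// for approval, via GET /repos/{owner}/{repo}/actions/runs filtered by
// head_sha and status=action_required.
func pendingRuns(owner, repo, headSHA, token string) ([]workflowRun, error) {
	q := url.Values{
		"head_sha": {headSHA},
		"status":   {"action_required"},
	}
	u := fmt.Sprintf("https://api.github.com/repos/%s/%s/actions/runs?%s",
		owner, repo, q.Encode())
	req, err := http.NewRequest(http.MethodGet, u, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "token "+token)
	req.Header.Set("Accept", "application/vnd.github+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("listing runs: unexpected status %s", resp.Status)
	}
	var list struct {
		WorkflowRuns []workflowRun `json:"workflow_runs"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&list); err != nil {
		return nil, err
	}
	return list.WorkflowRuns, nil
}
```

This is essentially what the linked workflow does via actions/github-script, just written on the Prow side; pagination is omitted for brevity.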
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
I'm interested in this functionality. For repos with Helm charts in them there are often first-time contributor PRs that require the manual intervention of a repo owner to enable the GH Actions to run.
As an MVP solution, could both the /ok-to-test and /retest commands have repo-based opt-in behaviour to call the approval endpoint so the GH Actions can run? This could, if desired, then be extended so that additional code changes are automatically approved, in line with Prow's default behaviour.
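Purely as a sketch of what such an opt-in might look like, here is a hypothetical configuration struct in the style of Prow's plugin configs; none of these field names exist in Prow today:

```go
// Hypothetical illustration only: a repo-scoped opt-in for approving
// GitHub workflow runs on /ok-to-test and /retest. These field names
// are invented and do not exist in Prow's plugin configuration.
type GitHubWorkflowApprove struct {
	// Repos lists org/repo slugs that opt in to the behaviour.
	Repos []string `json:"repos,omitempty"`
	// ApproveNewCommits, if true, also re-approves runs when new
	// commits are pushed to an already-trusted PR, in line with how
	// Prow retriggers its own jobs for trusted PRs.
	ApproveNewCommits bool `json:"approve_new_commits,omitempty"`
}
```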
Just want to add: we also have this issue frequently in ClusterAPI and it would be nice to see it implemented. But of course, as usual, someone has to have the time to actually implement it... :)
Bumping this with another +1. We face this in Metal3 as well. Contributors who are not Metal3 org members are not getting actions run on their PRs, and the re-approval needed on further changes in a PR requires constant babysitting by the approvers.
@tuminoid we now use this action as a workaround: https://github.com/kubernetes-sigs/cluster-api/blob/main/.github/workflows/pr-gh-workflow-approve.yaml
I added this functionality to ExternalDNS & Metrics Server and it's been up and running for over a year now. There were initially a couple of issues but the current pattern (which seems to be the one used by cluster-api) has been very stable.
- https://github.com/kubernetes-sigs/metrics-server/blob/master/.github/workflows/gh-workflow-approve.yaml
- https://github.com/kubernetes-sigs/external-dns/blob/master/.github/workflows/gh-workflow-approve.yaml
Thanks @sbueringer and @stevehipwell! We added the same for one repo first here: https://github.com/metal3-io/ip-address-manager/pull/485, and if that works as expected, we'll roll it out to all our repos. Thanks for the workaround!
Migrated issue: https://github.com/kubernetes-sigs/prow/issues/194