test-infra
Run `hook` as a GitHub action
What would you like to be added: support for Hook to be run as a GitHub action
Why is this needed: CNCF projects need Prow features without the complexity of Prow
ii is doing some PoC'ing over here:
- https://github.com/cncf-infra/prow-github-action
- https://github.com/cncf-infra/mock-project-repo
/cc @hh @Riaankl @RobertKielty
/assign @BobyMCbobs @hh @Riaankl @RobertKielty
/sig k8s-infra
Update: this is wrong! Where does it belong? @hh
/unsig k8s-infra
/sig nil
@BobyMCbobs: The label(s) sig/nil cannot be applied, because the repository doesn't have them.
In response to this:
/sig nil
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Plan of action:
- build out an implementation of hook/server.go
- given /github/workflow/event.json and sufficient auth, create an HTTP server using httptest and call the endpoint
Thoughts @hh?
I like this
httptest would get the plumbing up the quickest I think.
Alternatively, we could create prow/cmd/pga as a rewrite of demuxEvent() that reads the payload from GITHUB_EVENT_PATH and directly switches on GITHUB_EVENT_NAME, calling the appropriate handleFOOEvent().
From https://docs.github.com/en/github-ae@latest/actions/learn-github-actions/environment-variables#default-environment-variables :
GITHUB_EVENT_NAME: The name of the event that triggered the workflow. For example, workflow_dispatch.
GITHUB_EVENT_PATH: The path to the file on the runner that contains the full event webhook payload. For example, /github/workflow/event.json.
Lift the case statement from demuxEvent() into prow/cmd/pga as the action: https://github.com/kubernetes/test-infra/blob/master/prow/hook/server.go#L91-L176
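A rough sketch of what that lifted switch could look like as a standalone entrypoint. The event names mirror GitHub's webhook events and the handler names echo prow/hook/server.go, but the handler bodies here are placeholders, not the real plugin dispatch.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// demuxEvent is a sketch of hook's demuxEvent() rewritten for a
// GitHub Action: switch directly on the event name instead of the
// X-GitHub-Event header of an incoming HTTP request.
func demuxEvent(eventName string, payload []byte) error {
	switch eventName {
	case "issues":
		return handleIssueEvent(payload)
	case "issue_comment":
		return handleIssueCommentEvent(payload)
	case "pull_request":
		return handlePullRequestEvent(payload)
	default:
		return fmt.Errorf("unsupported event: %q", eventName)
	}
}

// Placeholder handlers; the real ones live in prow/hook/server.go.
func handleIssueEvent(p []byte) error        { return logEvent("issues", p) }
func handleIssueCommentEvent(p []byte) error { return logEvent("issue_comment", p) }
func handlePullRequestEvent(p []byte) error  { return logEvent("pull_request", p) }

func logEvent(name string, payload []byte) error {
	var e map[string]any
	if err := json.Unmarshal(payload, &e); err != nil {
		return err
	}
	fmt.Printf("%s action=%v\n", name, e["action"])
	return nil
}

func main() {
	// The Actions runner writes the payload to GITHUB_EVENT_PATH.
	payload, err := os.ReadFile(os.Getenv("GITHUB_EVENT_PATH"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := demuxEvent(os.Getenv("GITHUB_EVENT_NAME"), payload); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```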
GITHUB_TOKEN permissions in pull requests
- From https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target
This event runs in the context of the base of the pull request, rather than in the context of the merge commit, as the pull_request event does. This prevents execution of unsafe code from the head of the pull request that could alter your repository or steal any secrets you use in your workflow. This event allows your workflow to do things like label or comment on pull requests from forks. Avoid using this event if you need to build or run code from the pull request.
Warning: For workflows that are triggered by the pull_request_target event, the GITHUB_TOKEN is granted read/write repository permission unless the permissions key is specified, and the workflow can access secrets, even when it is triggered from a fork.
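To stay on the safe side of that warning, a workflow could declare the permissions key explicitly instead of accepting the blanket read/write default. A hedged fragment (the trigger types and scopes here are illustrative, not a recommended policy):

```yaml
on:
  pull_request_target:
    types: [opened, synchronize, labeled]
permissions:
  contents: read        # never write repo contents from fork-triggered runs
  issues: write         # enough for label/comment style plugins
  pull-requests: write
```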
Startup Time
Can we streamline to quickly and directly call the event handlers rather than starting up a web server?
API Limits
- From https://docs.github.com/en/actions/learn-github-actions/usage-limits-billing-and-administration#usage-limits :
API requests - You can execute up to 1000 API requests in an hour across all actions within a repository. If exceeded, additional API calls will fail, which might cause jobs to fail.
/sig testing
Startup Time: Can we streamline to quickly and directly call the event handlers rather than starting up a web server?
Sounds good to me, I will take a look.
Had an offline chat with @hh, and posting what we had discussed here for visibility:
Overall I think this is an interesting idea, I’m not seeing any obvious blocker yet other than a few thoughts:
- ProwJob-related plugins will not be supported, and I guess that's expected, right?
[hh] No ProwJobs at this time. Long term, we might look into creating something that suggests PRs to add GitHub Actions workflows under .github/workflows/ to repos, similar to configured Prow job definitions.
- I’m not entirely sure how GitHub webhooks would work in GitHub Actions; if it’s free then it would be awesome.
[hh] It would be free. We are working with GitHub to allow all CNCF projects to join our GitHub enterprise account and get much higher free limits.
- In terms of configuration, would it be easier for the plugin config to be stored either in the repo, such as .prow/plugin.yaml, or as an action param?
[hh] I think we'd like it to have a working default config that could be overwritten with either of those options.
I'm excited that a project should only need to add a single .github/workflows/prow-hook-handler.yaml file to each repo, and no other configuration, to use most of the Prow plugins.
- https://github.com/kubernetes/test-infra/tree/master/prow/plugins
Particularly the ones with repo- and GitHub-only interactions (needing only GITHUB_TOKEN), not ProwJobs or external services (other secrets).
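As a sketch of what that single opt-in file could look like, assuming the cncf-infra/prow-github-action PoC linked above were published as a usable action (the ref, trigger list, and filename here are hypothetical):

```yaml
# .github/workflows/prow-hook-handler.yaml (hypothetical)
name: prow-hook-handler
on:
  issues:
    types: [opened, edited, labeled]
  issue_comment:
    types: [created]
  pull_request:
    types: [opened, synchronize]
jobs:
  hook:
    runs-on: ubuntu-latest
    steps:
      - uses: cncf-infra/prow-github-action@main   # placeholder ref
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```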
We'll have to see how quickly we approach the fixed limit of up to 1000 API requests in an hour across all actions within a repository, notify a project somehow when it gets close to or hits this limit, and find a path forward for projects as they grow.
/cc @onlydole @carolynvs @parispittman
Conversations about Community Infrastructure, including the desire for Prow beyond Kubernetes, from @carolynvs on the TAG Contributor Strategy Livestream: https://youtu.be/Ei2Q5q1DPPA?t=1070
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned