cherrypicker can't create cherry-pick if PR contains more than 300 files
What happened:
We recently noticed our Prow cherrypicker plugin silently failing to create a cherry-pick from one of our pull requests inside the KubeVirt org.
Partial error log (for brevity):
...
"component": "cherrypicker",
"error": "failed to get patch: status code 406 not one of [200], body: {\"message\":\"Sorry, the diff exceeded the maximum number of files (300). Consider using 'List pull requests files' API or locally cloning the repository instead.\",\"errors\":[{\"resource\":\"PullRequest\",\"field\":\"diff\",\"code\":\"too_large\"}],\"documentation_url\":\"https://docs.github.com/rest/pulls/pulls#list-pull-requests-files\"}",
...
The docs say the maximum for un-paginated access is 3000 files, whereas here the limit is evidently only 300.
I haven't looked at the code, but I suspect the authors deliberately chose not to handle the edge case of more than 3000 files, which should rarely happen, but did happen in our case.
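For illustration, a minimal sketch of how a caller could recognize this specific 406 response and fall back to the suggested alternatives (paginated file listing or a local clone). The type and function names (`ghError`, `diffTooLarge`) are hypothetical, not the cherrypicker plugin's actual code; only the JSON field names are taken from the error body above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ghError mirrors the shape of GitHub's error response body as seen
// in the log above (hypothetical struct, not the plugin's real code).
type ghError struct {
	Message string `json:"message"`
	Errors  []struct {
		Resource string `json:"resource"`
		Field    string `json:"field"`
		Code     string `json:"code"`
	} `json:"errors"`
}

// diffTooLarge reports whether a non-200 response body is GitHub's
// "diff exceeded the maximum number of files" error, signalling that
// the caller should fall back to locally cloning the repository.
func diffTooLarge(body []byte) bool {
	var e ghError
	if err := json.Unmarshal(body, &e); err != nil {
		return false
	}
	for _, inner := range e.Errors {
		if inner.Resource == "PullRequest" && inner.Field == "diff" && inner.Code == "too_large" {
			return true
		}
	}
	return false
}

func main() {
	// Body abbreviated from the error log above.
	body := []byte(`{"message":"Sorry, the diff exceeded the maximum number of files (300).","errors":[{"resource":"PullRequest","field":"diff","code":"too_large"}]}`)
	fmt.Println(diffTooLarge(body))
}
```

With such a check in place, the plugin could surface a clear error to the PR (or clone and cherry-pick locally) instead of failing silently.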
What you expected to happen:
We expected a cherry-pick to be created on the PR after issuing the /cherry-pick command.
How to reproduce it (as minimally and precisely as possible): n/a
Please provide links to example occurrences, if any: https://github.com/kubevirt/application-aware-quota/pull/33#issuecomment-2042092569
Anything else we need to know?: Apart from this issue, I've opened a discussion on GitHub to clarify whether this is an API bug or a docs bug.
/sig ?
@dhiller: The label(s) sig/? cannot be applied, because the repository doesn't have them.
In response to this:
/sig ?
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig testing
/area prow
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
We moved Prow out to its own repo earlier this year: https://sigs.k8s.io/prow
This sounds like an ongoing issue, but not one the Kubernetes project is experiencing (we don't use this plugin currently). Would you mind re-filing it there? Thanks.