Merging master in feature branch fails because of CLA
Because the bot hasn't signed the CLA, it's not currently possible to merge master back into the feature branch.
@BenTheElder
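A rough sketch of the blocked flow, with a placeholder branch name (not taken from the issue): merging master into the feature branch pulls in merge commits that were committed by the bot on master, and the CLA check on the resulting PR flags those commits as unsigned.

```sh
# Hedged sketch; "feature-x" is a placeholder feature branch name.
git checkout feature-x
git merge master
# Pushing this and opening a PR fails the CLA check: the PR's history now
# includes merge commits committed by the bot, which hasn't signed the CLA.
```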
@fejta how did we fix this before with k8s-ci-robot and test-infra? k8s-merge-robot needs the same fix.
/kind bug
/priority critical-urgent
@fejta @BenTheElder was this handled?
This was "handled" by having "feature branch managers" who just force-push over the branch, IIRC: https://github.com/kubernetes/test-infra/blob/1f481a3aff57abfb7346a4faeedcac4bb0744f5c/prow/plugins.yaml#L172-L174
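Roughly, that workaround amounts to something like the following (a hedged sketch; the branch name is a placeholder): a branch manager who has signed the CLA merges locally and pushes the result directly, skipping the PR and therefore the CLA check on the bot's commits.

```sh
# Hedged sketch of the "feature branch manager" force-push workaround.
git fetch origin
git checkout feature-x            # placeholder feature branch name
git merge origin/master           # resolve any conflicts locally
git push --force origin feature-x # no PR is opened, so no CLA check runs
```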
We have a similar problem in the kubernetes-client/python repo. The release process in client-python requires merging master into the release branch. The recently-enabled k8s-ci-bot in that repo blocks the release because it hasn't signed the CLA: https://github.com/kubernetes-client/python/pull/650
I suspect other kubernetes-client repos that recently enabled the bot (https://github.com/kubernetes/test-infra/pull/9122) experience a similar issue.
@BenTheElder @fejta Could you advise on how to proceed? :)
https://github.com/kubernetes/test-infra/issues/8241
Thanks for the reference @BenTheElder. I thought there were two bots (k8s-ci-robot, which does auto-merging, and CLAbot, which checks the CLA), and I was hoping our problem could be solved if k8s-ci-robot could somehow sign the CLA itself like a regular contributor.
I guess we have to force push like how the serverside-apply feature branch does until #8241 is rolled out.
cc @yliaog
I don't know what the correct answer for the k8s-ci-robot is there ... @spiffxp punting this up to steering :^)
@BenTheElder - /me channeling Aaron - "as long as you come to steering with a yay or nay proposal" :)
/milestone v1.13
cc @thockin at @spiffxp's suggestion ... should the merge robot have the CLA authorized on its account?
@BenTheElder Is this still an issue? I feel like you mentioned one of the bot accounts was now mysteriously passing the CLA check; I can't remember if it was related to this.
/remove-milestone
/milestone clear
right, our syntax is inconsistent
IIRC the bot now passes CLA, @fejta can confirm.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/close
per last comment from @BenTheElder
@dims: Closing this issue.
In response to this:
/close
per last comment from @BenTheElder
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
The CLA bot no longer (or perhaps never) passes the CLA check for PRs where it is the committer (e.g. https://github.com/kubernetes-client/java/pull/3345).
Is there a way to fix this globally with the EasyCLA infrastructure?
@brendandburns: Reopened this issue.
In response to this:
/reopen
The CLA bot no longer (or perhaps never) passes the CLA check for PRs where it is the committer (e.g. https://github.com/kubernetes-client/java/pull/3345).
Is there a way to fix this globally with the EasyCLA infrastructure?
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig testing
I read the error message a little more; I think the email address ([email protected]) needs to be added to the GitHub user for the robot (https://github.com/k8s-ci-robot), and then the EasyCLA check will pass (or at least get past the current error).
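One way to confirm what the check is comparing against (a sketch, assuming a local clone of an affected repo; the commit SHA is a placeholder):

```sh
# Print the committer identity recorded on the flagged merge commit, to
# compare against the emails registered on the k8s-ci-robot account.
git log -1 --format='committer: %cn <%ce>' <merge-commit-sha>
```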
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten