clee2000
@pytorchbot merge
@pytorchbot revert -m "broke slow tests in trunk ex https://ossci-raw-job-status.s3.amazonaws.com/log/8433956087" -c nosignal
@pytorchbot revert -m "broke lots of builds https://hud.pytorch.org/pytorch/pytorch/commit/7c31f6e67213cbe773b0e2556f880f6ce2449fc3 even though the pr was green" -c weird
@pytorchbot revert -m "windows build failure is real, https://github.com/pytorch/pytorch/actions/runs/8910674030/job/24470387612#step:11:11236 is the correct failure line, ignore the statement saying build passed, batch is errorcodes arent propagating again" -c ignoredsignal
> @clee2000 The workflow is kind of misleading. Would it be helpful to file an issue?

It should be fixed by https://github.com/pytorch/pytorch/pull/125306
I think it matched against https://github.com/pytorch/pytorch/actions/runs/8819380894/job/24214658473 which was recent and has a different error trace but the same test name. However, it doesn't show up on the main branch afaict....
@malfet: we should never treat build failures as flaky; flaky build failures should never be allowed to merge without -f. Ex: infra failure -> do not merge, regardless of whether the...
The intention of the third point *is* that devs should explicitly force merge the PR if they don't want to figure out some way to rerun the job.
I'm pretty sure this is suo's token in the github-status-test lambda. PATs have a rate limit of 5000/user/hr. They get refreshed every hour, so this problem will resolve itself and then...
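A minimal sketch of checking how much of a PAT's hourly quota is left, assuming the token is in a GITHUB_TOKEN env var and using a plain `requests` call against GitHub's public `GET /rate_limit` endpoint (which does not itself count against the quota):

```python
# Check how much of a PAT's 5000/hr core quota remains.
import os
import requests

resp = requests.get(
    "https://api.github.com/rate_limit",
    headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
    timeout=10,
)
resp.raise_for_status()
core = resp.json()["resources"]["core"]
print(f"remaining {core['remaining']}/{core['limit']}, resets at {core['reset']}")
```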
Added ability to use more tokens in https://github.com/pytorch/test-infra/pull/5033. Still need to find another token from a bot to add.

Another option:
* Swap from lambda to gha to take advantage...
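For the "more tokens" route, a rough sketch of spreading calls across a small pool of PATs round-robin so no single token burns through its 5000/hr limit; the GH_TOKEN_POOL env var and gh_get helper here are made up for illustration, not what the lambda actually does:

```python
# Rotate API calls across a pool of PATs so no single token hits its hourly limit.
import itertools
import os
import requests

TOKENS = [t for t in os.environ.get("GH_TOKEN_POOL", "").split(",") if t]
_token_cycle = itertools.cycle(TOKENS)

def gh_get(url: str) -> requests.Response:
    """Issue a GET using the next token in the pool."""
    token = next(_token_cycle)
    resp = requests.get(url, headers={"Authorization": f"token {token}"}, timeout=10)
    resp.raise_for_status()
    return resp
```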