Xu Zhao
> Reverted, let's continue discussion here how it might have happened, as pretty thorough performance analysis has been made in [pytorch/pytorch#81849](https://github.com/pytorch/pytorch/pull/81849) > > Had torchbench detected any cases where performance...
After the revert (https://github.com/pytorch/pytorch/commit/799620178cf), we observed a performance speedup (https://github.com/pytorch/benchmark/issues/1103), and we have cross-validated it in https://github.com/pytorch/benchmark/issues/1100, so I believe this is a true signal, not noise.
@mingfeima any progress on this issue?
> @mingfeima I see, I can try to profile it on our CI machine (Cascade Lake) and get back to you.
I tried to reproduce it manually on the CI machine but couldn't reproduce it either. It looks like the problem is in our automated workflow. I apologize that I should manually...
@mingfeima I looked into this, and it seems to be just because the test is flaky; it will take some time to fix, but I believe it is safe to treat this signal as...
Let's close this issue since this is a false signal. We can create another issue to address the performance test coverage for LayerNorm.
Can you please also remove the `not_implemented` field in pytorch_unet's metadata.yaml? This will enable the JIT unit test for this model. See https://github.com/pytorch/benchmark/blob/main/torchbenchmark/models/pytorch_unet/metadata.yaml#L7
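For context, the change being requested is just deleting one field from the model's metadata file. A hypothetical before/after sketch (the field name `not_implemented` comes from the comment above; the surrounding keys and values here are illustrative, not the real file contents):

```yaml
# torchbenchmark/models/pytorch_unet/metadata.yaml (illustrative sketch)

# before: the not_implemented entry tells the test harness to skip this test
not_implemented:
  - jit          # hypothetical entry marking the JIT path as unsupported

# after: remove the not_implemented field entirely,
# so the harness runs the JIT unit test for this model
```

The real schema is in the linked file; only the removal of `not_implemented` is what the comment asks for.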
@zzpmiracle The eval function code was copied directly from upstream: https://github.com/milesial/Pytorch-UNet/blob/master/evaluate.py, so the change in https://github.com/pytorch/benchmark/commit/5b07f357eaed06ccda9e7283f838c11228755229 will alter the original code's behavior. @davidberard98 do you know if this is the...
> @davidberard98 OK, got it, thanks! How about the other question, about changing the code to trace the whole evaluation model? I am trying to understand why this...