Jiewen Tan
> Thanks, Brian. Let me verify if it fixes the issue.

Okay, I verified that it fixes our autograd problem on https://github.com/pytorch/xla/pull/4485. Original error: ``` # RuntimeError: one of the...
> Great!
>
> > Detailed error:
>
> @alanwaketan just to confirm, are you saying that you're seeing an error? Or everything worked as expected

Updated the confusing comment....
@min-jean-cho Following up to see what the conclusion is here. Should we just update the test?
> Hi @alanwaketan, the issue is fixed by #92709. We can close the issue.

Sounds good. Let me rebase and double-check.
@min-jean-cho, verified it fixes the problem.
> @malfet XLA tests failed because their code needs to be modified separately in the PyTorch/XLA repo. After this PR gets merged, I will submit another PR there.

Can you make...
> To test that the xla change works, you need to push it to the xla repo so we can find it.

Oh, I forgot that the xla hash doesn't work for...
@cyyever You can use the hash from this PR: https://github.com/pytorch/xla/pull/6686. Thanks for your contributions, and sorry for the cumbersome process.
> I see, and the XLA process document needs to be modified to mention such third-party PR cases.

Good point. Let me add that paragraph.
@cyyever Let me know if it makes sense to you: https://github.com/pytorch/xla/pull/6687