Bhavya Bahl
The E2E tests have been failing for some time
I think that this should be covered as part of https://github.com/pytorch/xla/issues/9315
Thanks for taking this @bfolie
* Original PR: #7236
* Reason to backport: Include build dependencies for CUDA 12.4
* 2.4 backport PR link: #7244
* Original PR: #7219
* Reason to backport: Minor addition to Triton functionality to support CUDA plugins
* 2.4 backport PR link: #7303
Original PR
* #7617
* #7329

Backport PR
* #7616
* #7618

Reason: To fix CI
Risk: Low, since this doesn't change anything in the torch_xla library.
Original PR: #7640
Backport PR: #7684
Reason: To fix the upstream PyTorch build
Risk: Low, this doesn't change anything in the torch_xla library.
Is it possible to add a test for this? Looking at #9049 quickly, I didn't follow how calling `_get_xla_tensors_hlo` leads to an `ExecuteComputation` call on the PjRtClient. A sketch of what such a test could look like is below.
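To frame the request, here is a minimal sketch of a possible regression test using `torch_xla.debug.metrics`. It assumes the intended behavior is that dumping HLO should not execute the graph, and it assumes an execution counter named `ExecuteComputation` exists; the exact counter name may differ in practice.

```python
# Minimal test sketch (not the PR's actual test). Assumptions:
#  - an execution counter named "ExecuteComputation" is exposed via
#    torch_xla.debug.metrics (the real counter name may differ), and
#  - the intended behavior is that dumping HLO does not execute the graph.
import torch
import torch_xla
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met


def test_get_hlo_does_not_execute():
    device = xm.xla_device()
    # Build a pending lazy computation; nothing is executed yet.
    t = torch.ones(4, device=device) * 2

    met.clear_all()  # reset counters so only this call's effect is visible
    hlo = torch_xla._XLAC._get_xla_tensors_hlo([t])

    assert "HloModule" in hlo
    # counter_value returns None when the counter was never incremented.
    assert not met.counter_value("ExecuteComputation")
```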
Thanks for looking into this. It will be helpful for https://github.com/pytorch/xla/issues/9173
> 1. Can the ability to use a pytorch source checkout be removed entirely?

It's still easy to allow using a locally built wheel. I think that lazy_tensor_generator.py needs...