Pedro Goncalves Mokarzel
Currently, our sharding mechanism works by utilizing `xla::OpSharding` (first defined [here](https://github.com/pytorch/xla/blob/r2.7/torch_xla/distributed/spmd/xla_sharding.py#L561)). We then pass the `xla::OpSharding` directly to the XLA compiler, where the sharding is propagated until our compiler creates...
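For context, here is a minimal sketch of how a sharding annotation typically reaches the compiler through the SPMD Python API; the mesh shape and tensor are illustrative only, not taken from the discussion above.

```python
import numpy as np
import torch
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.distributed.spmd as xs

# Enable SPMD execution so sharding annotations are honored.
xr.use_spmd()

# Build a 1-D device mesh over all available devices.
num_devices = xr.global_runtime_device_count()
mesh = xs.Mesh(np.arange(num_devices), (num_devices,), ('data',))

# Annotate a tensor: the resulting xla::OpSharding is attached to the
# XLA tensor and later handed to the XLA compiler, as described above.
t = torch.randn(16, 4).to(xm.xla_device())
xs.mark_sharding(t, mesh, ('data', None))
```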
[PyTorch tests](https://github.com/pytorch/pytorch/actions/runs/16453921607/job/46507627977) seem to be failing because PyTorch/XLA depends on an older version of JAX. Specifically, they are seeing: ``` error: jaxlib 0.6.2 is installed but jaxlib=0.7.0 is required...
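As a quick diagnostic sketch (not part of the CI setup itself), the installed versions can be confirmed locally; the expected values are just the ones reported in the error above:

```python
# Print the installed jax/jaxlib versions to confirm the mismatch reported by CI
# (e.g. jaxlib 0.6.2 installed vs. jaxlib 0.7.0 required).
from importlib.metadata import version

for pkg in ("jax", "jaxlib"):
    print(pkg, version(pkg))
```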
Currently, we have not yet built the 2.9 wheel, which is causing torchprime tests to fail (see https://github.com/pytorch/xla/actions/runs/16177668026/job/45667831241 for an example). I believe https://github.com/pytorch/xla/pull/9461 should trigger the build necessary to...
I believe the issue from https://github.com/pytorch/xla/issues/9466 might be caused by a misconfiguration in the test setup. The current test is trying to use the 2.9.0 release version, which does not exist....
# [RFC] Controller for SPMD+MPMD

## Background

Work is currently underway to design a solution that makes `mark_sharding` trace the model before it is loaded onto devices (https://github.com/pytorch/xla/issues/9341)....
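As a rough illustration of the "trace the model before it is loaded onto devices" idea, here is an assumption-laden sketch using PyTorch's meta device; this is not the API proposed in the RFC, only one way to inspect model structure without allocating device memory:

```python
import torch
import torch.nn as nn

# Construct the model on the meta device: shapes and module structure exist,
# but no parameter storage is allocated on any real device.
with torch.device("meta"):
    model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))

# Sharding decisions (e.g. which dims of each parameter to partition) could be
# made from this metadata alone, before anything is materialized on XLA devices.
for name, param in model.named_parameters():
    print(name, tuple(param.shape), param.device)
```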