fhaolinaws
Yes, the API in [#9203](https://github.com/pytorch/xla/pull/9203) should be usable for this case.
@aws-rhsoln thanks for your input. For 1, a proposed solution is to simply do nothing in the [to_local](https://github.com/pytorch/pytorch/blob/main/torch/distributed/tensor/_api.py#L441) and [from_local](https://github.com/pytorch/pytorch/blob/main/torch/distributed/tensor/_api.py#L357) functions of DTensor, which are the functions that utilize...
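For illustration, here is a minimal sketch, assuming an `xla` device check inside these functions, of what "do nothing" could look like; the names and dispatch below are purely illustrative and not the actual `torch.distributed.tensor` implementation:

```python
# Illustrative sketch only; assumes an "xla" device-type check inside
# DTensor's from_local/to_local paths. Not the actual DTensor code.
import torch

def from_local(local_tensor: torch.Tensor, device_mesh=None, placements=None):
    if local_tensor.device.type == "xla":
        # On the XLA/SPMD backend the compiler owns the physical sharding,
        # so no per-rank shard wrapping is needed here.
        return local_tensor
    ...  # fall through to the existing DTensor construction path

def to_local(tensor: torch.Tensor):
    if tensor.device.type == "xla":
        # Likewise, there is no per-rank local shard to materialize.
        return tensor
    ...  # fall through to the existing local-shard extraction path
```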
@jeffhataws for your questions: 1. I'm not sure I understand your question. If you are asking whether we can support the XLA device in Torch native, like putting XLA handling...
To fit this proposal into the big picture, using the Torch XLA SPMD backend is one solution for enabling the DTensor API with XLA, in which the GSPMD sharding propagator and partitioner are...
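As a rough end-to-end sketch of that path (assuming the `torch_xla.distributed.spmd` API; exact names may vary between releases), the user only annotates logical shardings and GSPMD decides the physical partitioning at compile time:

```python
# Minimal Torch XLA SPMD sketch; API surface assumed from torch_xla docs.
import numpy as np
import torch
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.distributed.spmd as xs

xr.use_spmd()  # enable GSPMD execution mode

num_devices = xr.global_runtime_device_count()
# 1-D logical mesh over all devices, named "data".
mesh = xs.Mesh(np.arange(num_devices), (num_devices,), ("data",))

t = torch.randn(8, 128, device=xm.xla_device())
# Annotate the logical sharding; the GSPMD sharding propagator and
# partitioner then handle propagation and physical placement.
xs.mark_sharding(t, mesh, ("data", None))
```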